Merge remote-tracking branch 'upstream/main' into dev-1.25 (commit e5ea8833be)

@ -68,8 +68,7 @@ been deprecated. These removals have been superseded by newer, stable/generally
|
|||
## API removals, deprecations, and other changes for Kubernetes 1.24
|
||||
|
||||
* [Dynamic kubelet configuration](https://github.com/kubernetes/enhancements/issues/281): `DynamicKubeletConfig` is used to enable the dynamic configuration of the kubelet. The `DynamicKubeletConfig` flag was deprecated in Kubernetes 1.22. In v1.24, this feature gate will be removed from the kubelet. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/). Refer to the ["Dynamic kubelet config is removed" KEP](https://github.com/kubernetes/enhancements/issues/281) for more information.
|
||||
* [Dynamic log sanitization](https://github.com/kubernetes/kubernetes/pull/107207): The experimental dynamic log sanitization feature is deprecated and will be removed in v1.24. This feature introduced a logging filter that could be applied to all Kubernetes system components logs to prevent various types of sensitive information from leaking via logs. Refer to [KEP-1753: Kubernetes system components logs sanitization](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#deprecation) for more information and an [alternative approach](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#alternatives=).
|
||||
* In-tree provisioner to CSI driver migration: This applies to a number of in-tree plugins, including [Portworx](https://github.com/kubernetes/enhancements/issues/2589). Refer to the [In-tree Storage Plugin to CSI Migration Design Doc](https://git.k8s.io/design-proposals-archive/storage/csi-migration.md#background-and-motivations) for more information.
|
||||
* [Removing Dockershim from kubelet](https://github.com/kubernetes/enhancements/issues/2221): the Container Runtime Interface (CRI) for Docker (i.e. Dockershim) is currently a built-in container runtime in the kubelet code base. It was deprecated in v1.20. As of v1.24, the kubelet will no longer have dockershim. Check out this blog on [what you need to do be ready for v1.24](/blog/2022/03/31/ready-for-dockershim-removal/).
|
||||
* [Storage capacity tracking for pod scheduling](https://github.com/kubernetes/enhancements/issues/1472): The CSIStorageCapacity API supports exposing currently available storage capacity via CSIStorageCapacity objects and enhances scheduling of pods that use CSI volumes with late binding. In v1.24, the CSIStorageCapacity API will be stable. The API graduating to stable initiates the deprecation of the v1beta1 CSIStorageCapacity API. Refer to the [Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking) for more information.
|
||||
* [The `master` label is no longer present on kubeadm control plane nodes](https://github.com/kubernetes/kubernetes/pull/107533). For new clusters, the label 'node-role.kubernetes.io/master' will no longer be added to control plane nodes, only the label 'node-role.kubernetes.io/control-plane' will be added. For more information, refer to [KEP-2067: Rename the kubeadm "master" label and taint](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint).
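
To see where a cluster stands on this change, a quick check with standard label selectors is enough; this is a minimal sketch that assumes `kubectl` access to the cluster:

```shell
# Nodes still carrying the legacy label (no longer added on new v1.24 kubeadm clusters)
kubectl get nodes -l node-role.kubernetes.io/master

# Nodes carrying the replacement label
kubectl get nodes -l node-role.kubernetes.io/control-plane
```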
|
||||
|
|
|
@ -0,0 +1,356 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Stargazing, solutions and staycations: the Kubernetes 1.24 release interview"
|
||||
date: 2022-08-18
|
||||
---
|
||||
|
||||
**Author**: Craig Box (Google)
|
||||
|
||||
The Kubernetes project has participants from all around the globe. Some are friends, some are colleagues, and some are strangers. The one thing that unifies them, no matter their differences, are that they all have an interesting story. It is my pleasure to be the documentarian for the stories of the Kubernetes community in the weekly [Kubernetes Podcast from Google](https://kubernetespodcast.com/). With every new Kubernetes release comes an interview with the release team lead, telling the story of that release, but also their own personal story.
|
||||
|
||||
With 1.25 around the corner, [the tradition continues](https://www.google.com/search?q=%22release+interview%22+site%3Akubernetes.io%2Fblog) with a look back at the story of 1.24. That release was led by [James Laverack](https://twitter.com/jameslaverack) of Jetstack. [James was on the podcast](https://kubernetespodcast.com/episode/178-kubernetes-1.24/) in May, and while you can read his story below, if you can, please do listen to it in his own voice.
|
||||
|
||||
Make sure you [subscribe, wherever you get your podcasts](https://kubernetespodcast.com/subscribe/), so you hear all our stories from the cloud native community, including the story of 1.25 next week.
|
||||
|
||||
*This transcript has been lightly edited and condensed for clarity.*
|
||||
|
||||
---
|
||||
|
||||
**CRAIG BOX: Your journey to Kubernetes went through the financial technology (fintech) industry. Tell me a little bit about how you came to software?**
|
||||
|
||||
JAMES LAVERACK: I took a pretty traditional path to software engineering. I went through school and then I did a computer science degree at the University of Bristol, and then I just ended up taking a software engineer job from there. Somewhat rather by accident, I ended up doing fintech work, which is pretty interesting, pretty engaging.
|
||||
|
||||
But in my most recent fintech job before I joined [Jetstack](https://www.jetstack.io/), I ended up working on a software project. We needed Kubernetes to solve a technical problem. So we implemented Kubernetes, and as often happens, I ended up as the one person of a team that understood the infrastructure, while everyone else was doing all of the application development.
|
||||
|
||||
I ended up enjoying the infrastructure side so much that I decided to move and do that full time. So I looked around and I found Jetstack, whose offices were literally across the road. I could see them out of our office window. And so I decided to just hop across the road and join them, and do all of this Kubernetes stuff more.
|
||||
|
||||
**CRAIG BOX: What's the tech scene like in Bristol? You went there for school and never left?**
|
||||
|
||||
JAMES LAVERACK: Pretty much. It's happened to a lot of people I know and a lot of my friends: you go to university somewhere and you're just kind of stuck there forever, so to speak. It's known for being quite a hot area, in terms of that part of the UK. It has a lot of tech companies; obviously, it was a fintech company I worked at before. I think some larger companies have offices there. For "not London", it's not doing too bad, I don't think.
|
||||
|
||||
**CRAIG BOX: When you say hot, though, that's tech industry, not weather, I'm assuming.**
|
||||
|
||||
JAMES LAVERACK: Yeah, weather is the usual UK. It's kind of a nice overcast and rainy, which I quite like. I'm quite fond of it.
|
||||
|
||||
**CRAIG BOX: Public transport good?**
|
||||
|
||||
JAMES LAVERACK: Buses are all right. We've got a new bus installed recently, which everyone hated while it was being built. And now it's complete, everyone loves it. So, standard, I think.
|
||||
|
||||
**CRAIG BOX: That is the way. As someone who lived in London for a long time, it's very easy for me to say "well, London's kind of like Singapore. It's its own little city-state." But whenever we did go out to that part of the world, Bath especially, a very lovely town**
|
||||
|
||||
JAMES LAVERACK: Oh, Bath's lovely. I've been a couple of times.
|
||||
|
||||
**CRAIG BOX: Have you been to Box?**
|
||||
|
||||
JAMES LAVERACK: To where, sorry?
|
||||
|
||||
**CRAIG BOX: There's [a town called Box](https://en.wikipedia.org/wiki/Box,_Wiltshire) just outside Bath. I had my picture taken outside all the buildings. Proclaimed myself the mayor.**
|
||||
|
||||
JAMES LAVERACK: Oh, no, I don't think I have.
|
||||
|
||||
**CRAIG BOX: Well, look it up if you're ever in the region, everybody. Let's get back to Jetstack, though. They were across the road. Great company, the [two](https://www.jetstack.io/about/mattbarker/) [Matts](https://www.jetstack.io/about/mattbates/), the co-founders there. What was the interview process like for you?**
|
||||
|
||||
JAMES LAVERACK: It was pretty relaxed. One lunchtime, I just walked down the road and went to a coffee shop with Matt and we had this lovely conversation talking about my background and Jetstack and what I was looking to achieve in a new role and all this. And I'd applied to be a software engineer. And then, kind of at the end of it, he looked over at me and was like, "well, how about being a solutions engineer instead?" And I was like, what's that?
|
||||
|
||||
And he's like, "well, you know, it's just effectively being a software consultant. You go, you help companies implement and use Kubernetes, doing all that stuff you enjoy. But you do it full time." I was like, "well, maybe." And in the end he convinced me. I ended up joining as a solutions engineer with the idea that if I didn't like it, I could transfer to be a software engineer again.
|
||||
|
||||
Nearly three years later, I've never taken them up on the offer. I've just [stayed as a solutions engineer](https://www.jetstack.io/blog/life-as-a-solutions-engineer/) the entire time.
|
||||
|
||||
**CRAIG BOX: At the company you were working at, I guess you were effectively the consultant between the people writing the software and the deployment in Kubernetes. Did it make sense then for you to carry on in that role, as you moved to Jetstack?**
|
||||
|
||||
JAMES LAVERACK: I think so. I think it's something that I enjoyed. Not that I didn't enjoy writing software applications. I always enjoyed it, and we had a really interesting product and a really fun team. But I just found that more interesting. And it was becoming increasingly difficult to justify spending time on it when we had an application to write.
|
||||
|
||||
Which was just completely fine, and that made sense for the needs of the team at the time. But it's not what I wanted to do.
|
||||
|
||||
**CRAIG BOX: Do you think that talks to the split between Kubernetes being for developers or for operators? Do you think there's always going to be the need to have a different set of people who are maintaining the running infrastructure versus the people who are writing the code that run on it?**
|
||||
|
||||
JAMES LAVERACK: I think to some extent, yes, whether or not that's a separate platform team or whether or not that is because the people running it are consultants of some kind. Or whether or not this has been abstracted away from you in some of the more batteries-included versions of Kubernetes — some of the cloud-hosted ones, especially, somewhat remove that need. So I don't think it's absolutely necessary to employ a platform team. But I think someone needs to do it or you need to implicitly or explicitly pay for someone to do it in some way.
|
||||
|
||||
**CRAIG BOX: In the three years you have been at Jetstack now, how different are the jobs that you do for the customers? Is this just a case of learning one thing and rolling it out to multiple people, or is there always a different challenge with everyone you come across?**
|
||||
|
||||
JAMES LAVERACK: I think there's always a different challenge. My role has varied drastically. For example, a long time ago, I did an Istio install. But it was a relatively complicated, single mesh, multi-cluster install. And that was before multi-cluster support was really as readily available as it is now. Conversely, I've worked building custom orchestration platforms on top of Kubernetes for specific customer use cases.
|
||||
|
||||
It's all varied and every single customer engagement is different. That is an element I really like about the job, that variability in how things are and how things go.
|
||||
|
||||
**CRAIG BOX: When the platform catches up and does things like makes it easier to manage multi-cluster environments, do you go back to the customers and bring them up to date with the newest methods?**
|
||||
|
||||
JAMES LAVERACK: It depends. Most of our engagements are to solve a specific problem. And once we've solved that problem, they may have us back. But typically speaking, in my line of work, it's not an ongoing engagement. There are some within Jetstack that do that, but not so much in my team.
|
||||
|
||||
**CRAIG BOX: Your bio suggests that you were once called "the reason any corporate policy evolves." What's the story there?**
|
||||
|
||||
JAMES LAVERACK: [CHUCKLES] I think I just couldn't leave things well enough alone. I was talking to our operations director inside of Jetstack, and he once said to me that whenever he's thinking of a new corporate policy, he asks will it pass the James Laverack test. That is, will I look at it and find some horrendous loophole?
|
||||
|
||||
For example when I first joined, I took a look at our acceptable use policy for company equipment. And it stated that you're not allowed to have copyrighted material on your laptop. And of course, this makes sense, as you know, you don't want people doing software piracy or anything. But as written, that would imply you're not allowed to have anything that is copyrighted by anyone on your machine.
|
||||
|
||||
**CRAIG BOX: Such as perhaps the operating system that comes installed on it?**
|
||||
|
||||
JAMES LAVERACK: Such as perhaps the operating system, or anything. And you know, this clearly didn't make any sense. So he adjusted that, and I've kind of been fiddling with that sort of policy ever since.
|
||||
|
||||
**CRAIG BOX: The release team is often seen as an administrative role versus a pure coding role. Does that speak to the kind of change you've had in career in previously being a software developer and now being more of a consultant, or was there something else that attracted you to get involved in that particular part of the community?**
|
||||
|
||||
JAMES LAVERACK: I wouldn't really consider it less technical. I mean, yes, you do much less coding. This is something that constantly surprises my friends and some of my colleagues, when I tell them more detail about my role. There's not really any coding involved.
|
||||
|
||||
I don't think my role has really changed to have less coding. In fact, one of my more recent projects at Jetstack, a client project, involved a lot of coding. But I think that what attracted me to this role within Kubernetes is really the community. I found it really rewarding to engage with SIG Release and to engage with the release team. So I've always just enjoyed doing it, even though there is, as you say, not all that much coding involved.
|
||||
|
||||
**CRAIG BOX: Indeed; your wife said to you, ["I don't think your job is to code anymore. You just talk to people all day."](https://twitter.com/JamesLaverack/status/1483201645286678529) How did that make you feel?**
|
||||
|
||||
JAMES LAVERACK: Ahh, annoyed, because she was right. This was kind of a couple of months ago when I was in the middle of it with all of the Kubernetes meetings. Also, my client project at the time involved a lot of technical discussion. I was in three or four hours of calls every day. And I don't mind that. But I would come out, in part because of course you're working from home, so she sees me all the time. So I'd come out, I'd grab a coffee and be like, "oh, I've got a meeting, I've got to go." And she'd be like, "do you ever code anymore?"
|
||||
I think it was in fact just after Christmas when she asked me, "when was the last time you programmed anything?" And I had to think about it. Then I realized that perhaps there was a problem there. Well, not a problem, but I realized that perhaps I don't code as much as I used to.
|
||||
|
||||
**CRAIG BOX: Are you the kind of person who will pick up a hobby project to try and fix that?**
|
||||
|
||||
JAMES LAVERACK: Absolutely. I've recently started writing [a Kubernetes operator for my Minecraft server](https://github.com/JamesLaverack/kubernetes-minecraft-operator). That probably tells you about the state I'm in.
|
||||
|
||||
**CRAIG BOX: If it's got Kubernetes in it, it doesn't sound that much of a hobby.**
|
||||
|
||||
JAMES LAVERACK: [LAUGHING] Do you not consider Kubernetes to be a hobby?
|
||||
|
||||
**CRAIG BOX: It depends.**
|
||||
|
||||
JAMES LAVERACK: I think I do.
|
||||
|
||||
**CRAIG BOX: I think by now.**
|
||||
|
||||
JAMES LAVERACK: In some extents.
|
||||
|
||||
**CRAIG BOX: You mentioned observing the release team in process before you decided to get involved. Was that as part of working with customers and looking to see whether a particular feature would make it into a release, or was there some other reason that that was how you saw the Kubernetes community?**
|
||||
|
||||
JAMES LAVERACK: Just after I joined Jetstack, I got the opportunity to go to KubeCon San Diego. I think we actually met there.
|
||||
|
||||
**CRAIG BOX: We did.**
|
||||
|
||||
JAMES LAVERACK: We had dinner, didn't we? So when I went, I'd only been at Jetstack for a few months. I really wasn't involved in the community in any serious way at all. As a result, I just ended up following around my then colleague, James Munnelly. James is lovely. And, you know, I just kind of went around with him, because he knew everyone.
|
||||
|
||||
I ended up in this hotel bar with a bunch of Kubernetes people, including Stephen Augustus, the co-chair of SIG Release and holder of a bunch of other roles within the community. I happened to ask him, I want to get involved. What is a good way to get involved with the Kubernetes community, if I've never been involved before? And he said, oh, you should join the release team.
|
||||
|
||||
**CRAIG BOX: So it's all down to where you end up in the bar with someone.**
|
||||
|
||||
JAMES LAVERACK: Yeah, pretty much.
|
||||
|
||||
**CRAIG BOX: If I'd got to you sooner, you could have been working on Istio.**
|
||||
|
||||
JAMES LAVERACK: Yeah, I could've been working on Istio, I could have ended up in some other SIG doing something. I just happened to be talking to Stephen. And Stephen suggested it, and I gave it a go. And here I am three years later.
|
||||
|
||||
**CRAIG BOX: I think I remember at the time you were working on an etcd operator?**
|
||||
|
||||
JAMES LAVERACK: Yeah, that's correct. That was part of a client project, which they, thankfully [let us open source](https://github.com/improbable-eng/etcd-cluster-operator). This was an operator for etcd, where they had a requirement to run it in Kubernetes, which of course is the opposite way around to how you'd normally want to run it.
|
||||
|
||||
**CRAIG BOX: And I remember having you up at the time, like I'm pretty sure those things exist already, and asking what the need was for there to be something different.**
|
||||
|
||||
JAMES LAVERACK: It was that they needed something very specific. The ones that existed already were all designed to run clusters that couldn't be shut down. As long as one replica stayed up, you could keep running etcd. But they needed to be able to suspend and restart the entire cluster, which means it needs disk-persistence support, which it turns out is quite complicated.
|
||||
|
||||
**CRAIG BOX: It's easier if you just throw all the data away.**
|
||||
|
||||
JAMES LAVERACK: It's much easier to throw all the data away. We needed to be a little bit careful about how we managed it. We thought about forking and changing an existing one. But we realized it would probably just be as easy to start from scratch, so we did that.
|
||||
|
||||
**CRAIG BOX: You've been a member of every release team since that point, since Kubernetes 1.18 in 2020, in a wide range of roles. Which set of roles have you been through?**
|
||||
|
||||
JAMES LAVERACK: I started out as a release notes shadow, and did that for a couple of releases, in 1.18 and 1.19. In 1.20, I was the release notes lead. And then in 1.21, I moved into being a shadow again as an enhancement shadow, before in 1.22 becoming an enhancements lead, then in 1.23 a release lead shadow, and finally in 1.24, release lead for the release as a whole.
|
||||
|
||||
**CRAIG BOX: That's quite a long time to be with the release team. You're obviously going to move into an emeritus role after this release. Do you see yourself still remaining involved? Is it something that you're clearly very passionate about?**
|
||||
|
||||
JAMES LAVERACK: I think I'm going to be around in SIG Release for as long as people want me there. I find it a really interesting part of the community. And I find the people super-interesting and super-inviting.
|
||||
|
||||
**CRAIG BOX: Let's talk then about [Kubernetes 1.24](/blog/2022/05/03/kubernetes-1-24-release-announcement/). First, as always, congratulations on the release.**
|
||||
|
||||
JAMES LAVERACK: Thank you.
|
||||
|
||||
**CRAIG BOX: This release consists of 46 enhancements. 14 have graduated to stable, 15 have moved to beta, and 13 are in alpha. 2 are deprecated and 2 have been removed. How is that versus other releases recently? Is that an average number? That seems like a lot of stable enhancements, especially.**
|
||||
|
||||
JAMES LAVERACK: I think it's pretty similar. Most of the recent releases have been quite similar in the number of enhancements they have and in what categories. For example, in 1.23, the previous release, there were 47. I think 1.22, before that, had 53, so slightly more. But it's around about that number.
|
||||
|
||||
**CRAIG BOX: You didn't want to sneak in two extra so you could say you were one more than the last one?**
|
||||
|
||||
JAMES LAVERACK: No, I don't think so. I think we had enough going on.
|
||||
|
||||
**CRAIG BOX: The release team is obviously beholden to what features the SIGs are developing and what their plans are. Is there ever any coordination between the release process and the SIGs in terms of things like saying, this release is going to be a catch-up release, like the old Snow Leopard releases for macOS, for example, where we say we don't want as many new features, but we really want more stabilization, and could you please work on those kind of things?**
|
||||
|
||||
JAMES LAVERACK: Not really. The cornerstone of a Kubernetes organization is the SIGs themselves, so the special interest groups that make up the organization. It's really up to them what they want to do. We don't do any particular coordination on the style of thing that should be implemented. A lot of SIGs have roadmaps that are looking over multiple releases to try to get features that they think are important in.
|
||||
|
||||
**CRAIG BOX: Let's talk about some of the new features in 1.24. We have been hearing for many releases now about the impending doom which is the removal of Dockershim. [It is gone in 1.24](https://github.com/kubernetes/enhancements/issues/2221). Do we worry?**
|
||||
|
||||
JAMES LAVERACK: I don't think we worry. This is something that the community has been preparing for for a long time. [We've](https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/) [published](https://kubernetes.io/blog/2022/02/17/dockershim-faq/) a [lot](https://kubernetes.io/blog/2021/11/12/are-you-ready-for-dockershim-removal/) of [documentation](https://kubernetes.io/blog/2022/03/31/ready-for-dockershim-removal/) [about](https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) [how](https://kubernetes.io/blog/2022/05/03/dockershim-historical-context/) you need to approach this. The honest truth is that most users, most application developers in Kubernetes, will simply not notice a difference or have to worry about it.
|
||||
|
||||
It's only really platform teams that administer Kubernetes clusters and people in very specific circumstances that are using Docker directly, not through the Kubernetes API, that are going to experience any issue at all.
|
||||
|
||||
**CRAIG BOX: And I see that Mirantis and Docker have developed a CRI plugin for Docker anyway, so you can just switch over to that and everything continues.**
|
||||
|
||||
JAMES LAVERACK: Yeah, absolutely, or you can use one of the many other CRI implementations. There are two in the CNCF, [containerd](https://containerd.io/), and [CRI-O](https://cri-o.io/).
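
For anyone unsure which runtime their nodes are already on ahead of the upgrade, the API server can tell you; a minimal sketch assuming `kubectl` access:

```shell
# The CONTAINER-RUNTIME column reports values such as containerd://1.6.4 or cri-o://1.24.0
kubectl get nodes -o wide
```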
|
||||
|
||||
**CRAIG BOX: Having gone through the process of communicating this change over several releases, what has the team learnt in terms of how we will communicate a message like this in future?**
|
||||
|
||||
JAMES LAVERACK: I think that this has been really interesting from the perspective that this is the biggest removal that the Kubernetes project has had to date. We've removed features before. In fact, we're removing another one in this release as well. But this is one of the most user-visible changes we've made.
|
||||
|
||||
I think there are very good reasons for doing it. But I think we've learned a lot about how and when to communicate, and the importance of having migration guides, the importance of having official documentation that really clarifies the thing. I think that's really an area in which the Kubernetes project has matured a lot since I've been on the team.
|
||||
|
||||
**CRAIG BOX: What is the other feature that's being removed?**
|
||||
|
||||
JAMES LAVERACK: The other feature that we're removing is dynamic Kubelet configuration. This is a feature that was in beta for a while. But I believe we decided that it just wasn't being used enough to justify keeping it. So we're removing it. We deprecated it back in 1.22 and we're removing it this release.
|
||||
|
||||
**CRAIG BOX: There was a change in policy a few releases ago that talked about features not being allowed to stay in beta forever. Have there been any features that were at risk of being removed due to lack of maintenance, or are all the SIGs pretty good now at keeping their features on track?**
|
||||
|
||||
JAMES LAVERACK: I think the SIGs are getting pretty good at it. We had a long spell when a lot of features were kind of perpetually in beta. As you remember, Ingress was in beta for a long, long time.
|
||||
|
||||
**CRAIG BOX: I choose to believe it still is.**
|
||||
|
||||
JAMES LAVERACK: [LAUGHTER] I think it's really good that we're moving towards that stability approach with things like Kubernetes. I think it's a very positive change.
|
||||
|
||||
**CRAIG BOX: The fact that Ingress was in beta for so long, along with things like the main workload controllers, for example, did lead people to believing that beta APIs were stable and production ready, and could and should be used. Something that's changing in this release is that [beta APIs are going to be off by default](https://github.com/kubernetes/enhancements/issues/3136). Why that change?**
|
||||
|
||||
JAMES LAVERACK: This is really about encouraging the use of stable APIs. There was a perception, like you say, that beta APIs were actually stable. Because they can be removed very quickly, we often ended up in the state where we wanted to follow the policy and remove a beta API, but were unable to, because it was de facto stable, according to the community. This meant that cluster operators and users had a lot of breaking changes when doing upgrades that could have been avoided. This is really just to help stability as we go through more upgrades in the future.
|
||||
|
||||
**CRAIG BOX: I understand that only applies now to new APIs. Things that are in beta at the moment will continue to be available. So there'll be no breaking changes again?**
|
||||
|
||||
JAMES LAVERACK: That's correct. There's no breaking changes in beta APIs other than the ones we've documented this release. It's only new things.
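
For cluster administrators who do want a newly introduced beta API group switched on under this policy, the mechanism is the kube-apiserver `--runtime-config` flag; the group/version below is only a placeholder for whatever new beta API you need:

```shell
# Hypothetical example: explicitly enable one new beta API group/version on the API server
kube-apiserver --runtime-config=example.k8s.io/v1beta1=true <other flags...>
```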
|
||||
|
||||
**CRAIG BOX: Now in this release, [the artifacts are signed](https://github.com/kubernetes/enhancements/issues/3031) using Cosign signatures, and there is [experimental support for verification of those signatures](https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-images/). What needed to happen to make that process possible?**
|
||||
|
||||
JAMES LAVERACK: This was a huge process from the other half of SIG Release. SIG Release has the release team, but it also has the release engineering team that handles the mechanics of actually pushing releases out. They have spent, and one of my friends over there, Adolfo, has spent a lot of time trying to bring us in line with [SLSA](https://slsa.dev/) compliance. I believe we're [looking now at Level 3 compliance](https://github.com/kubernetes/enhancements/issues/3027).
|
||||
|
||||
SLSA is a framework that describes software supply chain security. That is, of course, a really big issue in our industry at the moment. And it's really good to see the project adopting the best practices for this.
|
||||
|
||||
**CRAIG BOX: I was looking back at [the conversation I had with Rey Lejano about the 1.23 release](https://kubernetespodcast.com/episode/167-kubernetes-1.23/), and we were basically approaching Level 2. We're now obviously stepping up to Level 3. I think what I asked Rey at the time was, is it fair to say that SLSA is inspired by large projects like Kubernetes, and in theory, it should be really easy for these projects to tick the boxes to get to that level, because the SLSA framework is written with a project like Kubernetes in mind?**
|
||||
|
||||
JAMES LAVERACK: I think so. I think it's been somewhat difficult, just because it's one thing to do it, but it's another thing to prove that you're doing it, which is the whole point around these frameworks — the attestation, that proof.
|
||||
|
||||
**CRAIG BOX: As an end user of Kubernetes, whether I install it myself or I take it from a service like GKE, what will this provenance then let me prove? If we think back to [the orange juice example we talked to Santiago about recently](https://kubernetespodcast.com/episode/174-in-toto/), how do I tell that my software is safe to run?**
|
||||
|
||||
JAMES LAVERACK: If you're downloading and running Kubernetes yourself, you can use the verifying image signatures feature to verify the thing you've downloaded, and the thing you are running, is actually the thing that the Kubernetes project has released, and that it has been built from the actual source code in the Kubernetes GitHub repository. This can give you a lot of confidence in what you're running, especially if you're running in a highly secure or regulated environment of some kind.
|
||||
|
||||
As an end user, this isn't something that will necessarily directly impact you. But it means that service providers that provide managed Kubernetes options, such as Google and GKE, can provide even greater levels of security and safety themselves about the services that they run.
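
For those who do run their own clusters, here is a sketch of what the experimental verification looks like with cosign (this assumes cosign 1.x, where keyless verification sits behind the `COSIGN_EXPERIMENTAL` flag; the image and tag are just illustrative):

```shell
# Verify the keyless signature on a Kubernetes v1.24 release image
COSIGN_EXPERIMENTAL=1 cosign verify k8s.gcr.io/kube-apiserver:v1.24.0
```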
|
||||
|
||||
**CRAIG BOX: A lot of people get access to their Kubernetes server just by being granted an API endpoint, and they start running kubectl against it. They're not actually installing their own Kubernetes. They have a provider or a platform team do it for them. Do you think it's feasible to get to a world where there's something that you can run when you're deploying your workloads which queries the API server, for example, and gets access to that same provenance data?**
|
||||
|
||||
JAMES LAVERACK: I think it's going to be very difficult to do it that way, simply because this provenance and attestation data implies that you actually have access to the underlying executables, which typically, when you're running in a managed platform, you don't. If you're having Kubernetes provided to you, I think you're still going to have to trust the platform team or the organization that's providing it to you.
|
||||
|
||||
**CRAIG BOX: Just like when you go to the hotel breakfast bar, you have to trust that they've been good with their orange juice.**
|
||||
|
||||
JAMES LAVERACK: Yeah, I think the orange juice example is great. If you're making it yourself, then you can use attestation. If you're not, if you've just been given a glass, then you're going to have to trust who's pouring it.
|
||||
|
||||
**CRAIG BOX: Continuing with our exploration of new stable features, [storage capacity tracking](https://github.com/kubernetes/enhancements/issues/1472) and [volume expansion](https://github.com/kubernetes/enhancements/issues/284) are generally available. What do those features enable me to do?**
|
||||
|
||||
JAMES LAVERACK: This is a really great set of stable features coming out of SIG Storage. Storage capacity tracking allows applications on Kubernetes to use the Kubernetes API to understand how much storage is available, which can drive application decisions. With volume expansion, that again allows an application to use the Kubernetes API to request additional storage, which can enable applications to make all kinds of operational decisions.
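
A rough command-line illustration of both features (the PVC name is a placeholder, and expanding a volume assumes its StorageClass sets `allowVolumeExpansion: true`):

```shell
# CSIStorageCapacity objects the scheduler consults are now a stable storage.k8s.io API
kubectl get csistoragecapacities --all-namespaces

# Ask for more space on an existing PersistentVolumeClaim
kubectl patch pvc data-volume -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```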
|
||||
|
||||
**CRAIG BOX: SIG Storage are also working through [a project to migrate all of their in-tree storage plugins out to CSI plugins](https://github.com/kubernetes/enhancements/issues/625). How are they going with that process?**
|
||||
|
||||
JAMES LAVERACK: In 1.24 we have a couple of them that have been migrated out. The [Azure Disk](https://github.com/kubernetes/enhancements/issues/1490) and [OpenStack Cinder](https://github.com/kubernetes/enhancements/issues/1489) plugins have both been migrated. They're maintaining the original API, but the actual implementation now happens in those CSI plugins.
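
One way to see which CSI drivers are actually registered in a cluster once migration is in play is to list the CSIDriver objects; a minimal sketch assuming `kubectl` access:

```shell
# CSIDriver objects registered with the cluster (e.g. disk.csi.azure.com, cinder.csi.openstack.org)
kubectl get csidrivers
```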
|
||||
|
||||
**CRAIG BOX: Do they have a long way to go, or are they just cutting off a couple every release?**
|
||||
|
||||
JAMES LAVERACK: They're just doing a couple every release from what I see. There are a couple of others to go. This is really part of a larger theme within Kubernetes, which is pushing application-specific things out behind interfaces, such as the container storage interface and the container runtime interface.
|
||||
|
||||
**CRAIG BOX: That obviously sets up a situation where you have a stable interface and you can have beta implementations of that that are outside of Kubernetes and get around the problem we talked about before with not being able to run beta things.**
|
||||
|
||||
JAMES LAVERACK: Yeah, exactly. It also makes it easy to expand Kubernetes. You don't have to try to get code in-tree in order to implement a new storage engine, for example.
|
||||
|
||||
**CRAIG BOX: [gRPC probes have graduated to beta in 1.24](https://github.com/kubernetes/enhancements/issues/2727). What does that functionality provide?**
|
||||
|
||||
JAMES LAVERACK: This is one of the changes that's going to be most visible to application developers in Kubernetes, I think. Until now, Kubernetes has had the ability to do readiness and liveness checks on containers and be able to make intelligent routing and pod restart decisions based on those. But those checks had to be HTTP REST endpoints.
|
||||
|
||||
With Kubernetes 1.24, we're enabling a beta feature that allows them to use gRPC. This means that if you're building an application that is primarily gRPC-based, as many microservices applications are, you can now use that same technology in order to implement your probes without having to bundle an HTTP server as well.
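
As an illustration, a pod can now point its liveness check straight at a gRPC health endpoint. This is a sketch only: the image is a placeholder, and it assumes the container serves the standard gRPC health-checking protocol on the named port:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: grpc-probe-demo
spec:
  containers:
  - name: server
    image: example.com/my-grpc-server:latest  # placeholder image
    ports:
    - containerPort: 9090
    livenessProbe:
      grpc:
        port: 9090
      initialDelaySeconds: 5
EOF
```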
|
||||
|
||||
**CRAIG BOX: Are there any other enhancements that are particularly notable or relevant perhaps to the work you've been doing?**
|
||||
|
||||
JAMES LAVERACK: There's a really interesting one from SIG Network which is about [avoiding collisions in IP allocations to services](https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#avoiding-collisions-in-ip-allocation-to-services). In existing versions of Kubernetes, you can allocate a service to have a particular internal cluster IP, or you can leave it blank and it will generate its own IP.
|
||||
|
||||
In Kubernetes 1.24, there's an opt-in feature which allows you to specify a pool for dynamic IPs to be generated from. This means that you can statically allocate an IP to a service and know that IP cannot be accidentally dynamically allocated. This is a problem I've actually had in my local Kubernetes cluster, where I use static IP addresses for a bunch of port forwarding rules. I've always worried that during server start-up, they're going to get dynamically allocated to one of the other services. Now, with 1.24 and this feature, I won't have to worry about it anymore.
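
As a sketch of the static side of that (the address must sit inside your cluster's service CIDR, and all names here are placeholders):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: port-forward-target
spec:
  selector:
    app: my-app
  clusterIP: 10.96.0.50  # explicitly chosen address inside the service CIDR
  ports:
  - port: 80
    targetPort: 8080
EOF
```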
|
||||
|
||||
**CRAIG BOX: This is like the analog of allocating an IP in your DHCP server rather than just claiming it statically on your local machine?**
|
||||
|
||||
JAMES LAVERACK: Pretty much. It means that you can't accidentally double allocate something.
|
||||
|
||||
**CRAIG BOX: Why don't we all just use IPv6?**
|
||||
|
||||
JAMES LAVERACK: That is a very deep question I don't think we have time for.
|
||||
|
||||
**CRAIG BOX: The margins of this podcast would be unable to contain it even if we did.**
|
||||
|
||||
JAMES LAVERACK: [LAUGHING]
|
||||
|
||||
**CRAIG BOX: [The theme for Kubernetes 1.24 is Stargazer](https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#release-theme-and-logo). How did you pick that as the theme?**
|
||||
|
||||
JAMES LAVERACK: Every release lead gets to pick their theme, pretty much by themselves. When I started, I asked Rey, the previous release lead, how he picked his theme, because he picked the Next Frontier for Kubernetes 1.23. And he told me that he'd actually picked it before the release even started, which meant for the first couple of weeks and months of the release, I was really worried about it, because I hadn't picked one yet, and I wasn't sure what to pick.
|
||||
|
||||
Then again, I was speaking to another former release lead, and they told me that they picked theirs like two weeks out. It seems to really vary. About halfway through the release, I had some ideas down. I thought maybe we could talk about — I live in a city called Bristol in the UK, which has a very famous bridge — and I thought, oh, we could talk about bridges and architecture, as a metaphor for the community bridging gaps and things like this. I kind of liked the idea, but it didn't really grab me.
|
||||
|
||||
One thing about me is that I am a serious night owl. I cannot work effectively in the mornings. I've always enjoyed the night. And that got me thinking about astronomy and the stars. I think one night I was trying to get to sleep, because I couldn't sleep, and I was watching [PBS Space Time](https://www.youtube.com/channel/UC7_gcs09iThXybpVgjHZ_7g), which is this fantastic YouTube channel talking about physics. And I'm not a physicist. I don't understand any of the maths. But I find it really interesting as a topic.
|
||||
|
||||
I just thought, well, why don't I make a theme about stars. Kubernetes has often had a space theme in many releases. As I'm sure you're aware, its original name was based off of Star Trek. The previous release had a Star Trek-based theme. I thought, well, let's do that. So I came up with the idea of Stargazer.
|
||||
|
||||
**CRAIG BOX: Once you have a theme, you then need a release logo. I understand you have a household artist?**
|
||||
|
||||
JAMES LAVERACK: [LAUGHS] I don't think she'd appreciate being called that, but, yes. My wife is an artist, and in particular, a digital artist. I had a bit of a conversation with the SIG Release folks to see if they'd be comfortable with my wife doing it, and they said they'd be completely fine with that.
|
||||
|
||||
I asked if she would be willing to spend some time creating a logo for us. And thankfully for me, she was. She has produced this — well, I'm somewhat obliged to say — she produced us a beautiful logo, which you can see in our release blog and probably around social media. It is a telescope set over starry skies, and I absolutely love it.
|
||||
|
||||
**CRAIG BOX: It is objectively very nice. It obviously has the seven stars or the Seven Sisters of the Pleiades. Do the colors have any particular meaning?**
|
||||
|
||||
JAMES LAVERACK: The colors are based on the Kubernetes blue. If you look in the background, that haze is actually in the shape of a Kubernetes wheel from the original Kubernetes logo.
|
||||
|
||||
**CRAIG BOX: You must have to squint at it the right way. Very abstract. As is the wont of art.**
|
||||
|
||||
JAMES LAVERACK: As is the wont.
|
||||
|
||||
**CRAIG BOX: You mentioned before Rey Lejano, the 1.23 release lead. We ask every interview what the person learned from the last lead and what they're going to put in the proverbial envelope for the next. At the time, Rey said that he would encourage you to use teachable moments in the release team meetings. Was that something you were able to do?**
|
||||
|
||||
JAMES LAVERACK: Not as much as I would have liked. I think the thing that I really took from Rey was communicate more. I've made a big effort this time to put as much communication in the open as possible. I was actually worried that I was going to be spamming the SIG Release Slack channel too much. I asked our SIG Release chairs Stephen and Sasha about it. And they said, just don't worry about it. Just spam as much as you want.
|
||||
|
||||
And so I think the majority of the conversation in SIG Release Slack over the past few months has just been me. [LAUGHING] That seemed to work out pretty well.
|
||||
|
||||
**CRAIG BOX: That's what it's for.**
|
||||
|
||||
JAMES LAVERACK: It is what it's for. But SIG Release does more than just the individual release process, of course. It's release engineering, too.
|
||||
|
||||
**CRAIG BOX: I'm sure they'd be interested in what's going on anyway?**
|
||||
|
||||
JAMES LAVERACK: It's true. It's true. It's been really nice to be able to talk to everyone that way, I think.
|
||||
|
||||
**CRAIG BOX: We talked before about your introduction to Kubernetes being at a KubeCon, and meeting people in person. How has it been running the release almost entirely virtually?**
|
||||
|
||||
JAMES LAVERACK: It's not been so bad. The release team has always been geographically distributed, somewhat by design. It's always been a very virtual engagement, so I don't think it's been impacted too, too much by the pandemic and travel restrictions. Of course, I'm looking forward to KubeCon Valencia and being able to see everyone again. But I think the release team has handled the current situation excellently.
|
||||
|
||||
**CRAIG BOX: What is the advice that you will pass on to [the next release lead](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.25/release-team.md), which has been announced to be Cici Huang from Google?**
|
||||
|
||||
JAMES LAVERACK: I would say to Cici that open communication is really important. I made a habit of posting every single week in SIG Release a summary of what's happened. I'm super-glad that I did that, and I'm going to encourage her to do the same if she wants to.
|
||||
|
||||
**CRAIG BOX: This release was originally due out two weeks earlier, but [it was delayed](https://groups.google.com/a/kubernetes.io/g/dev/c/9IZaUGVMnmo). What happened?**
|
||||
|
||||
JAMES LAVERACK: That delay was the result of a release-blocking bug — an absolute showstopper. This was in the underlying Go implementation of TLS certificate verification. It meant that a lot of clients simply would not be able to connect to clusters or anything else. So we took the decision that we can't release with a bug this big. Thus the term release-blocking.
|
||||
|
||||
The fix had to be merged upstream in Go 1.18.1, and then we had to, of course, rebuild and release release candidates. Given the time we like to have things to sit and stabilize after we make a lot of changes like that, we felt it was more prudent to push out the release by a couple of weeks than risk shipping a broken point-zero.
|
||||
|
||||
**CRAIG BOX: Go 1.18 is itself quite new. How does the project decide how quickly to upgrade its underlying programming language?**
|
||||
|
||||
JAMES LAVERACK: A lot of it is driven by support requirements. We support each release for three releases. So Kubernetes 1.24 will be most likely in support until this time next year, in 2023, as we do three releases per year. That means that right up until May, 2023, we're probably going to be shipping updates for Kubernetes 1.24, which means that the version of Go we're using, and other dependencies, have to be supported as well. My understanding is that the older version of Go, Go 1.17, just wouldn't be supported long enough.
|
||||
|
||||
Any underlying critical bug fixes that were coming in wouldn't have been backported to Go 1.17, and therefore we might not be able to adequately support Kubernetes 1.24.
|
||||
|
||||
**CRAIG BOX: A side effect of the unfortunate delay was an unfortunate holiday situation, where you were booked to take the week after the release off and instead you ended up taking the week before the release off. Were you able to actually have any holiday and relax in that situation?**
|
||||
|
||||
JAMES LAVERACK: Well, I didn't go anywhere, if that's what you're asking.
|
||||
|
||||
**CRAIG BOX: No one ever does. This is what the pandemic's been, staycations.**
|
||||
|
||||
JAMES LAVERACK: Yeah, staycations. It's been interesting. On the one hand, I've done a lot of Kubernetes work in that time. So you could argue it's not really been a holiday. On the other hand, my highly annoying friends have gotten me into playing an MMO, so I've been spending a lot of time playing that.
|
||||
|
||||
**CRAIG BOX: I hear also you have a new vacuum cleaner?**
|
||||
|
||||
JAMES LAVERACK: [LAUGHS] You've been following my Twitter. Yes, I couldn't find the charging cord for my old vacuum cleaner. And so I decided just to buy a new one. I decided, at long last, just to buy one of the nice brand-name ones. And it is just better.
|
||||
|
||||
**CRAIG BOX: This isn't the BBC. You're allowed to name it if you want.**
|
||||
|
||||
JAMES LAVERACK: Yes, we went and bought one of these nice Dyson vacuum cleaners, and the first time I've gotten one so expensive. On the one hand, I feel a little bit bad spending a lot of money on a vacuum cleaner. On the other hand, it's so much easier.
|
||||
|
||||
**CRAIG BOX: Is it one of those handheld ones, like a giant Dust-Buster with a long leg?**
|
||||
|
||||
JAMES LAVERACK: No, I got one of the corded floor ones, because the problem was, of course, I lost the charger for the last one, so I didn't want that to happen again. So I got a wall plug-in one.
|
||||
|
||||
**CRAIG BOX: I must say, going from a standard [Henry Hoover](https://www.myhenry.com/) to — the place we're staying at the moment has what I'll call a knock-off Dyson portable vacuum cleaner — having something that you can just pick up and carry around with you, and not have to worry about the cord, actually does encourage me to keep the place tidier.**
|
||||
|
||||
JAMES LAVERACK: Really? I think our last one was corded, but it didn't encourage us to use it anymore, just because it was so useless.
|
||||
|
||||
---
|
||||
|
||||
_[James Laverack](https://twitter.com/jameslaverack) is a Staff Solutions Engineer at Jetstack, and was the release team lead for Kubernetes 1.24._
|
||||
|
||||
_You can find the [Kubernetes Podcast from Google](http://www.kubernetespodcast.com/) at [@KubernetesPod](https://twitter.com/KubernetesPod) on Twitter, and you can [subscribe](https://kubernetespodcast.com/subscribe/) so you never miss an episode._
|
|
@ -12,7 +12,7 @@ weight: 60
|
|||
Kubernetes {{< glossary_tooltip text="RBAC" term_id="rbac" >}} is a key security control
|
||||
to ensure that cluster users and workloads have only the access to resources required to
|
||||
execute their roles. It is important to ensure that, when designing permissions for cluster
|
||||
users, the cluster administrator understands the areas where privilege escalation could occur,
|
||||
to reduce the risk of excessive access leading to security incidents.
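
As a minimal illustration of scoping access narrowly, the sketch below creates a namespaced Role that only allows reading Pods; the name and namespace are placeholders, and real policies should be tailored to the workloads and users involved:

```shell
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader   # placeholder name
  namespace: dev     # placeholder namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF
```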
|
||||
|
||||
The good practices laid out here should be read in conjunction with the general
|
||||
|
|
|
@ -53,7 +53,7 @@ GitHub teams and OWNERS files.
|
|||
There are two categories of SIG Docs [teams](https://github.com/orgs/kubernetes/teams?query=sig-docs) on GitHub:
|
||||
|
||||
- `@sig-docs-{language}-owners` are approvers and leads
|
||||
- `@sig-docs-{language}-reviews` are reviewers
|
||||
|
||||
Each can be referenced with their `@name` in GitHub comments to communicate with
|
||||
everyone in that group.
|
||||
|
|
|
@ -326,7 +326,7 @@ The `service.spec.healthCheckNodePort` field points to a port on every node
|
|||
serving the health check at `/healthz`. You can test this:
|
||||
|
||||
```shell
|
||||
kubectl get pod -o wide -l app=source-ip-app
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
|
|
|
@ -1,5 +1,7 @@
|
|||
---
|
||||
# title: Production environment
|
||||
# description: Create a production-quality Kubernetes cluster
|
||||
title: Прод оточення
|
||||
weight: 30
|
||||
no_list: true
|
||||
---
|
||||
|
|
|
@ -1,5 +0,0 @@
|
|||
---
|
||||
# title: On-Premises VMs
|
||||
title: Менеджери віртуалізації
|
||||
weight: 40
|
||||
---
|
|
@ -1,5 +0,0 @@
|
|||
---
|
||||
# title: Turnkey Cloud Solutions
|
||||
title: Хмарні рішення під ключ
|
||||
weight: 30
|
||||
---
|
|
@ -1,5 +0,0 @@
|
|||
---
|
||||
# title: "Windows in Kubernetes"
|
||||
title: "Windows в Kubernetes"
|
||||
weight: 50
|
||||
---
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
title: Дізнатися про основи Kubernetes
|
||||
linkTitle: Основи Kubernetes
|
||||
no_list: true
|
||||
weight: 10
|
||||
card:
|
||||
name: навчальні матеріали
|
||||
|
@ -10,7 +11,7 @@ card:
|
|||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
<html lang="uk">
|
||||
|
||||
<body>
|
||||
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.24 的删除和弃用"
|
||||
title: "Kubernetes 1.24 中的移除和弃用"
|
||||
date: 2022-04-07
|
||||
slug: upcoming-changes-in-kubernetes-1-24
|
||||
---
|
||||
|
@ -19,9 +19,9 @@ As Kubernetes evolves, features and APIs are regularly revisited and removed. Ne
|
|||
an alternative or improved approach to solving existing problems, motivating the team to remove the
|
||||
old approach.
|
||||
-->
|
||||
**作者**:Mickey Boxell (Oracle)
|
||||
|
||||
随着 Kubernetes 的发展,一些特性和 API 会被定期重检和移除。
|
||||
新特性可能会提供替代或改进的方法,来解决现有的问题,从而激励团队移除旧的方法。
|
||||
|
||||
<!--
|
||||
|
@ -33,9 +33,9 @@ This is discussed below and will be explored in more depth at release time. For
|
|||
changes coming in Kubernetes 1.24, take a look at the in-progress
|
||||
[CHANGELOG](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md).
|
||||
-->
|
||||
我们希望确保你了解 Kubernetes 1.24 版本的变化。该版本将 **弃用** 一些(测试版/beta)API,
|
||||
转而支持相同 API 的稳定版本。Kubernetes 1.24
|
||||
版本的主要变化是[移除 Dockershim](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim)。
|
||||
这将在下面讨论,并将在发布时更深入地探讨。
|
||||
要提前了解 Kubernetes 1.24 中的更改,请查看正在更新中的
|
||||
[CHANGELOG](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md)。
|
||||
|
@ -52,13 +52,13 @@ finally be removed.
|
|||
-->
|
||||
## 关于 Dockershim {#a-note-about-dockershim}
|
||||
|
||||
可以肯定地说,随着 Kubernetes 1.24 的发布,最受关注的是移除 Dockershim。
|
||||
Dockershim 在 1.20 版本中已被弃用。如
|
||||
[Kubernetes 1.20 变更日志](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation)中所述:
|
||||
"Docker support in the kubelet is now deprecated and will be removed in a future release. The kubelet
|
||||
uses a module called "dockershim" which implements CRI support for Docker and it has seen maintenance
|
||||
issues in the Kubernetes community."
|
||||
随着即将发布的 Kubernetes 1.24,Dockershim 将最终被移除。
|
||||
|
||||
<!--
|
||||
In the article [Don't Panic: Kubernetes and Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/),
|
||||
|
@ -91,10 +91,10 @@ and the [updated dockershim removal FAQ](/blog/2022/02/17/dockershim-faq/).
|
|||
Take a look at the [Is Your Cluster Ready for v1.24?](/blog/2022/03/31/ready-for-dockershim-removal/) post to learn about how to ensure your cluster continues to work after upgrading from v1.23 to v1.24.
|
||||
-->
|
||||
有关 Kubernetes 为何不再使用 dockershim 的更多信息,
|
||||
请参见:[Kubernetes 即将移除 Dockershim](/zh-cn/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/)
|
||||
和[最新的弃用 Dockershim 的常见问题](/zh-cn/blog/2022/02/17/dockershim-faq/)。
|
||||
|
||||
查看[你的集群准备好使用 v1.24 版本了吗?](/zh-cn/blog/2022/03/31/ready-for-dockershim-removal/) 一文,
|
||||
了解如何确保你的集群在从 1.23 版本升级到 1.24 版本后继续工作。
|
||||
|
||||
<!--
|
||||
|
@ -110,85 +110,78 @@ same API is available and that APIs have a minimum lifetime as indicated by the
|
|||
* Beta or pre-release API versions must be supported for 3 releases after deprecation.
|
||||
* Alpha or experimental API versions may be removed in any release without prior deprecation notice.
|
||||
-->
|
||||
## Kubernetes API 移除和弃用流程 {#the-Kubernetes-api-removal-and-deprecation-process}
|
||||
|
||||
Kubernetes 包含大量随时间演变的组件。在某些情况下,这种演变会导致 API、标志或整个特性被移除。
|
||||
为了防止用户面对重大变化,Kubernetes 贡献者采用了一项特性[弃用策略](/zh-cn/docs/reference/using-api/deprecation-policy/)。
|
||||
此策略确保仅当同一 API 的较新稳定版本可用并且
|
||||
API 具有以下稳定性级别所指示的最短生命周期时,才可能弃用稳定版本 API:
|
||||
|
||||
* 正式发布 (GA) 或稳定的 API 版本可能被标记为已弃用,但不得在 Kubernetes 的主版本中移除。
|
||||
* 测试版(beta)或预发布 API 版本在弃用后必须支持 3 个版本。
|
||||
* Alpha 或实验性 API 版本可能会在任何版本中被移除,恕不另行通知。
|
||||
|
||||
<!--
|
||||
Removals follow the same deprecation policy regardless of whether an API is removed due to a beta feature
|
||||
graduating to stable or because that API was not proven to be successful. Kubernetes will continue to make
|
||||
sure migration options are documented whenever APIs are removed.
|
||||
-->
|
||||
移除遵循相同的弃用政策,无论 API 是由于 测试版(beta)功能逐渐稳定还是因为该
|
||||
API 未被证明是成功的而被移除。
|
||||
Kubernetes 将继续确保在移除 API 时提供用来迁移的文档。
|
||||
|
||||
<!--
|
||||
**Deprecated** APIs are those that have been marked for removal in a future Kubernetes release. **Removed**
|
||||
APIs are those that are no longer available for use in current, supported Kubernetes versions after having
|
||||
been deprecated. These removals have been superseded by newer, stable/generally available (GA) APIs.
|
||||
-->
|
||||
**弃用的** API 是指那些已标记为在未来 Kubernetes 版本中移除的 API。
|
||||
**移除的** API 是指那些在被弃用后不再可用于当前受支持的 Kubernetes 版本的 API。
|
||||
这些移除的 API 已被更新的、稳定的/普遍可用的 (GA) API 所取代。
|
||||
|
||||
<!--
|
||||
## API removals, deprecations, and other changes for Kubernetes 1.24
|
||||
|
||||
* [Dynamic kubelet configuration](https://github.com/kubernetes/enhancements/issues/281): `DynamicKubeletConfig` is used to enable the dynamic configuration of the kubelet. The `DynamicKubeletConfig` flag was deprecated in Kubernetes 1.22. In v1.24, this feature gate will be removed from the kubelet. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/). Refer to the ["Dynamic kubelet config is removed" KEP](https://github.com/kubernetes/enhancements/issues/281) for more information.
|
||||
-->
|
||||
## Kubernetes 1.24 的 API 删除、弃用和其他更改 {#api-removals-deprecations-and-other-changes-for-kubernetes-1.24}
|
||||
## Kubernetes 1.24 的 API 移除、弃用和其他更改 {#api-removals-deprecations-and-other-changes-for-kubernetes-1.24}
|
||||
|
||||
* [动态 kubelet 配置](https://github.com/kubernetes/enhancements/issues/281): `DynamicKubeletConfig`
|
||||
用于启用 kubelet 的动态配置。Kubernetes 1.22 中弃用 `DynamicKubeletConfig` 标志。
|
||||
在 1.24 版本中,此特性门控将从 kubelet 中移除。请参阅[重新配置 kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/)。
|
||||
更多详细信息,请参阅[“删除动态 kubelet 配置” 的 KEP](https://github.com/kubernetes/enhancements/issues/281)。
|
||||
在 1.24 版本中,此特性门控将从 kubelet 中移除。请参阅[重新配置 kubelet](/zh-cn/docs/tasks/administer-cluster/reconfigure-kubelet/)。
|
||||
更多详细信息,请参阅[“移除动态 kubelet 配置” 的 KEP](https://github.com/kubernetes/enhancements/issues/281)。
|
||||
<!--
|
||||
* [Dynamic log sanitization](https://github.com/kubernetes/kubernetes/pull/107207): The experimental dynamic log sanitization feature is deprecated and will be removed in v1.24. This feature introduced a logging filter that could be applied to all Kubernetes system components logs to prevent various types of sensitive information from leaking via logs. Refer to [KEP-1753: Kubernetes system components logs sanitization](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#deprecation) for more information and an [alternative approach](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#alternatives=).
|
||||
-->
|
||||
* [动态日志清洗](https://github.com/kubernetes/kubernetes/pull/107207):实验性的动态日志清洗功能已被弃用,
|
||||
将在 1.24 版本中被删除。该功能引入了一个日志过滤器,可以应用于所有 Kubernetes 系统组件的日志,
|
||||
将在 1.24 版本中被移除。该功能引入了一个日志过滤器,可以应用于所有 Kubernetes 系统组件的日志,
|
||||
以防止各种类型的敏感信息通过日志泄漏。有关更多信息和替代方法,请参阅
|
||||
[KEP-1753: Kubernetes 系统组件日志清洗](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#deprecation)。
|
||||
<!--
|
||||
* In-tree provisioner to CSI driver migration: This applies to a number of in-tree plugins, including [Portworx](https://github.com/kubernetes/enhancements/issues/2589). Refer to the [In-tree Storage Plugin to CSI Migration Design Doc](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/csi-migration.md#background-and-motivations) for more information.
|
||||
-->
|
||||
* 树内制备器(In-tree provisioner)向 CSI 驱动迁移:这适用于许多树内插件,
包括 [Portworx](https://github.com/kubernetes/enhancements/issues/2589)。
参见[树内存储插件向 CSI 迁移的设计文档](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/csi-migration.md#background-and-motivations)
了解更多信息。
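  如果你想初步评估自己是否会受到这类迁移的影响,可以先查看集群中 StorageClass 的制备器以及现有 PV
  (以下命令仅作示意):树内制备器通常显示为 `kubernetes.io/...` 形式,而 CSI 驱动则显示为各自驱动的名称。

  ```shell
  # 仅作示意:查看 StorageClass 的制备器(PROVISIONER 列)以及现有 PV,
  # 帮助判断哪些卷仍由树内插件制备,从而评估向 CSI 迁移的影响范围。
  kubectl get storageclass
  kubectl get pv
  ```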
|
||||
<!--
|
||||
* [Removing Dockershim from kubelet](https://github.com/kubernetes/enhancements/issues/2221): the Container Runtime Interface (CRI) for Docker (i.e. Dockershim) is currently a built-in container runtime in the kubelet code base. It was deprecated in v1.20. As of v1.24, the kubelet will no longer have dockershim. Check out this blog on [what you need to do be ready for v1.24](/blog/2022/03/31/ready-for-dockershim-removal/).
|
||||
-->
|
||||
* [从 kubelet 中移除 Dockershim](https://github.com/kubernetes/enhancements/issues/2221):Docker
|
||||
的容器运行时接口(CRI)(即 Dockershim)目前是 kubelet 代码中内置的容器运行时。 它在 1.20 版本中已被弃用。
|
||||
从 1.24 版本开始,kubelet 已经移除 dockershim。 查看这篇博客,
|
||||
的容器运行时接口(CRI)(即 Dockershim)目前是 kubelet 代码中内置的容器运行时。它在 1.20 版本中已被弃用。
|
||||
从 1.24 版本开始,kubelet 已经移除 dockershim。查看这篇博客,
|
||||
[了解你需要为 1.24 版本做些什么](/blog/2022/03/31/ready-for-dockershim-removal/)。
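  一个快速的自查方法(仅作示意)是查看各节点上报的容器运行时;如果某个节点的 CONTAINER-RUNTIME 列仍显示为
  `docker://...`,说明它还在通过 dockershim 使用 Docker Engine,在升级到 1.24 之前需要先完成迁移。

  ```shell
  # 仅作示意:查看各节点使用的容器运行时及其版本。
  kubectl get nodes -o wide
  ```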
|
||||
<!--
|
||||
* [Storage capacity tracking for pod scheduling](https://github.com/kubernetes/enhancements/issues/1472): The CSIStorageCapacity API supports exposing currently available storage capacity via CSIStorageCapacity objects and enhances scheduling of pods that use CSI volumes with late binding. In v1.24, the CSIStorageCapacity API will be stable. The API graduating to stable initates the deprecation of the v1beta1 CSIStorageCapacity API. Refer to the [Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking) for more information.
|
||||
-->
|
||||
* [pod 调度的存储容量追踪](https://github.com/kubernetes/enhancements/issues/1472):CSIStorageCapacity API
|
||||
支持通过 CSIStorageCapacity 对象暴露当前可用的存储容量,并增强了使用带有延迟绑定的 CSI 卷的 Pod 的调度。
|
||||
CSIStorageCapacity API 自 1.24 版本起提供稳定版本。升级到稳定版的 API 将弃用 v1beta1 CSIStorageCapacity API。
|
||||
* [Pod 调度的存储容量追踪](https://github.com/kubernetes/enhancements/issues/1472):CSIStorageCapacity API
|
||||
支持通过 CSIStorageCapacity 对象暴露当前可用的存储容量,并增强了使用带有延迟绑定的 CSI 卷的 Pod 的调度。
|
||||
CSIStorageCapacity API 在 1.24 版本中进入稳定阶段。该 API 升级到稳定版后,即开始弃用 v1beta1 版本的 CSIStorageCapacity API。
|
||||
更多信息请参见 [Pod 调度存储容量约束 KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking)。
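  一个简单的核查方式(仅作示意)是确认集群已经以稳定版本提供该 API,并列出已发布的容量对象:

  ```shell
  # 仅作示意:确认 CSIStorageCapacity 资源对应的 API 版本,
  # 并列出所有名字空间中的容量对象。
  kubectl api-resources | grep csistoragecapacities
  kubectl get csistoragecapacities --all-namespaces
  ```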
|
||||
<!--
|
||||
* [The `master` label is no longer present on kubeadm control plane nodes](https://github.com/kubernetes/kubernetes/pull/107533). For new clusters, the label 'node-role.kubernetes.io/master' will no longer be added to control plane nodes, only the label 'node-role.kubernetes.io/control-plane' will be added. For more information, refer to [KEP-2067: Rename the kubeadm "master" label and taint](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint).
|
||||
-->
|
||||
* [kubeadm 控制面节点上不再存在 `master` 标签](https://github.com/kubernetes/kubernetes/pull/107533)。
|
||||
* [kubeadm 控制面节点上不再存在 `master` 标签](https://github.com/kubernetes/kubernetes/pull/107533)。
|
||||
对于新集群,控制平面节点将不再添加 'node-role.kubernetes.io/master' 标签,
|
||||
只会添加 'node-role.kubernetes.io/control-plane' 标签。更多信息请参考
|
||||
[KEP-2067:重命名 kubeadm “master” 标签和污点](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint)。
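  如果你想确认自己集群中控制面节点当前携带的角色标签,可以使用类似下面的命令(仅作示意):

  ```shell
  # 仅作示意:按新的控制面标签筛选节点,并查看节点上的角色相关标签。
  kubectl get nodes -l node-role.kubernetes.io/control-plane
  kubectl get nodes --show-labels | grep node-role.kubernetes.io
  ```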
|
||||
<!--
|
||||
* [VolumeSnapshot v1beta1 CRD will be removed](https://github.com/kubernetes/enhancements/issues/177). Volume snapshot and restore functionality for Kubernetes and the [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md) (CSI), which provides standardized APIs design (CRDs) and adds PV snapshot/restore support for CSI volume drivers, entered beta in v1.20. VolumeSnapshot v1beta1 was deprecated in v1.21 and is now unsupported. Refer to [KEP-177: CSI Snapshot](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot) and [kubernetes-csi/external-snapshotter](https://github.com/kubernetes-csi/external-snapshotter/releases/tag/v4.1.0) for more information.
|
||||
-->
|
||||
* [VolumeSnapshot v1beta1 CRD 在 1.24 版本中将被移除](https://github.com/kubernetes/enhancements/issues/177)。
|
||||
* [VolumeSnapshot v1beta1 CRD 在 1.24 版本中将被移除](https://github.com/kubernetes/enhancements/issues/177)。
|
||||
Kubernetes 和 [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md)(CSI)
的卷快照和恢复功能在 1.20 版本中进入 Beta 阶段。该功能提供了标准化的 API 设计(CRD),并为 CSI 卷驱动程序添加了 PV 快照/恢复支持。
VolumeSnapshot v1beta1 在 1.21 版本中已被弃用,现在不再受支持。更多信息请参考
|
||||
|
@ -211,11 +204,12 @@ Docker Engine dependencies. Before upgrading to v1.24, you decide to either rema
|
|||
-->
|
||||
## 需要做什么 {#what-to-do}
|
||||
|
||||
### 删除 Dockershim {#dockershim-removal}
|
||||
### 移除 Dockershim {#dockershim-removal}
|
||||
|
||||
如前所述,有一些关于从 [dockershim 迁移](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/)的指南。
|
||||
你可以[从查明节点上所使用的容器运行时](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/)开始。
|
||||
如果你的节点使用 dockershim,则还有其他可能的 Docker Engine 依赖项,
|
||||
例如 Pod 或执行 Docker 命令的第三方工具或 Docker 配置文件中的私有注册表。
|
||||
例如 Pod 或执行 Docker 命令的第三方工具或 Docker 配置文件中的私有镜像库。
|
||||
你可以按照[检查移除 Dockershim 是否对你有影响](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/)
|
||||
的指南来查看可能的 Docker 引擎依赖项。在升级到 1.24 版本之前,你决定要么继续使用 Docker Engine 并
|
||||
[将 Docker Engine 节点从 dockershim 迁移到 cri-dockerd](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/),
|
||||
|
@ -235,9 +229,9 @@ documentation to download and install the `kubectl-convert` binary.
|
|||
|
||||
kubectl 的 [`kubectl convert`](/zh-cn/docs/tasks/tools/included/kubectl-convert-overview/)
|
||||
插件有助于解决弃用 API 的迁移问题。该插件方便了不同 API 版本之间清单的转换,
|
||||
例如,从弃用的 API 版本到非弃用的 API 版本。关于 API 迁移过程的更多信息可以在
|
||||
[已弃用 API 的迁移指南](/docs/reference/using-api/deprecation-guide/)中找到。按照
|
||||
[安装 `kubectl convert` 插件](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-convert-plugin)
|
||||
例如,从弃用的 API 版本到非弃用的 API 版本。
|
||||
关于 API 迁移过程的更多信息可以在[已弃用 API 的迁移指南](/zh-cn/docs/reference/using-api/deprecation-guide/)中找到。
|
||||
按照[安装 `kubectl convert` 插件](/zh-cn/docs/tasks/tools/install-kubectl-linux/#install-kubectl-convert-plugin)
|
||||
文档下载并安装 `kubectl-convert` 二进制文件。
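下面是一个假设性的用法示例(输入文件名 `cronjob-v1beta1.yaml` 为虚构,仅作演示),
展示如何把使用已弃用 API 版本编写的清单转换为较新的版本:

```shell
# 仅作示意:把使用已弃用 API 版本的清单转换为 batch/v1,并将结果保存到新文件。
kubectl convert -f ./cronjob-v1beta1.yaml --output-version batch/v1 > ./cronjob-v1.yaml
```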
|
||||
|
||||
<!--
|
||||
|
@ -251,7 +245,7 @@ Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-depre
|
|||
### 展望未来 {#looking-ahead}
|
||||
|
||||
计划在今年晚些时候发布的 Kubernetes 1.25 和 1.26 版本,将停止提供一些
|
||||
Kubernetes API 的 beta 版本,这些 API 当前为稳定版。1.25 版本还将删除 PodSecurityPolicy,
|
||||
Kubernetes API 的 Beta 版本,这些 API 当前为稳定版。1.25 版本还将移除 PodSecurityPolicy,
|
||||
它已在 Kubernetes 1.21 版本中被弃用,并且不会升级到稳定版。有关详细信息,请参阅
|
||||
[PodSecurityPolicy 弃用:过去、现在和未来](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/)。
|
||||
|
||||
|
@ -260,13 +254,13 @@ The official [list of API removals planned for Kubernetes 1.25](/docs/reference/
|
|||
-->
|
||||
[Kubernetes 1.25 计划移除的 API 的官方列表](/zh-cn/docs/reference/using-api/deprecation-guide/#v1-25)是:
|
||||
|
||||
* The beta CronJob API (batch/v1beta1)
|
||||
* The beta EndpointSlice API (discovery.k8s.io/v1beta1)
|
||||
* The beta Event API (events.k8s.io/v1beta1)
|
||||
* The beta HorizontalPodAutoscaler API (autoscaling/v2beta1)
|
||||
* The beta PodDisruptionBudget API (policy/v1beta1)
|
||||
* The beta PodSecurityPolicy API (policy/v1beta1)
|
||||
* The beta RuntimeClass API (node.k8s.io/v1beta1)
|
||||
* Beta CronJob API (batch/v1beta1)
|
||||
* Beta EndpointSlice API (discovery.k8s.io/v1beta1)
|
||||
* Beta Event API (events.k8s.io/v1beta1)
|
||||
* Beta HorizontalPodAutoscaler API (autoscaling/v2beta1)
|
||||
* Beta PodDisruptionBudget API (policy/v1beta1)
|
||||
* Beta PodSecurityPolicy API (policy/v1beta1)
|
||||
* Beta RuntimeClass API (node.k8s.io/v1beta1)
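
在升级集群之前,你可以先确认相关资源在较新的 API 版本下仍然可以正常读取,
并检查集群当前提供哪些 API 版本(以下命令仅作示意):

```shell
# 仅作示意:查看集群当前提供的 batch 与 policy API 版本,
# 并以 batch/v1 版本读取 CronJob,确认迁移后一切正常。
kubectl api-versions | grep -E 'batch|policy'
kubectl get cronjobs.v1.batch --all-namespaces
```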
|
||||
|
||||
<!--
|
||||
The official [list of API removals planned for Kubernetes 1.26](/docs/reference/using-api/deprecation-guide/#v1-26) is:
|
||||
|
@ -274,10 +268,10 @@ The official [list of API removals planned for Kubernetes 1.26](/docs/reference/
|
|||
* The beta FlowSchema and PriorityLevelConfiguration APIs (flowcontrol.apiserver.k8s.io/v1beta1)
|
||||
* The beta HorizontalPodAutoscaler API (autoscaling/v2beta2)
|
||||
-->
|
||||
[Kubernetes 1.25 计划移除的 API 的官方列表](/zh-cn/docs/reference/using-api/deprecation-guide/#v1-25)是:
|
||||
[Kubernetes 1.26 计划移除的 API 的官方列表](/zh-cn/docs/reference/using-api/deprecation-guide/#v1-26)是:
|
||||
|
||||
* The beta FlowSchema 和 PriorityLevelConfiguration API (flowcontrol.apiserver.k8s.io/v1beta1)
|
||||
* The beta HorizontalPodAutoscaler API (autoscaling/v2beta2)
|
||||
* Beta FlowSchema 和 PriorityLevelConfiguration API (flowcontrol.apiserver.k8s.io/v1beta1)
|
||||
* Beta HorizontalPodAutoscaler API (autoscaling/v2beta2)
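
同样,你可以提前确认集群已经提供这些 API 的较新版本,以便把清单从 v2beta2 / v1beta1 迁移过去(仅作示意):

```shell
# 仅作示意:确认集群提供 autoscaling/v2 以及较新的 flowcontrol API 版本。
kubectl api-versions | grep -E 'autoscaling|flowcontrol'
```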
|
||||
|
||||
<!--
|
||||
### Want to know more?
|
||||
|
@ -288,14 +282,16 @@ Deprecations are announced in the Kubernetes release notes. You can see the anno
|
|||
* We will formally announce the deprecations that come with [Kubernetes 1.24](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#deprecation) as part of the CHANGELOG for that release.
|
||||
|
||||
For information on the process of deprecation and removal, check out the official Kubernetes [deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) document.
|
||||
-->
|
||||
### 了解更多 {#want-to-know-more}
|
||||
|
||||
Kubernetes 发行说明中宣告了弃用信息。你可以在以下版本的发行说明中看到待弃用的公告:
|
||||
|
||||
* [Kubernetes 1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#deprecation)
|
||||
* [Kubernetes 1.22](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#deprecation)
|
||||
* [Kubernetes 1.23](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#deprecation)
|
||||
* 我们将正式宣布 [Kubernetes 1.24](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#deprecation) 的弃用信息,
|
||||
作为该版本 CHANGELOG 的一部分。
|
||||
|
||||
有关弃用和删除过程的信息,请查看 Kubernetes 官方[弃用策略](/zh-cn/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) 文档。
|
||||
有关弃用和移除过程的信息,请查看 Kubernetes 官方[弃用策略](/zh-cn/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api)文档。
|
||||
|
||||
|
|
|
@ -0,0 +1,223 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "聚光灯下的 SIG Docs"
|
||||
date: 2022-08-02
|
||||
slug: sig-docs-spotlight-2022
|
||||
---
|
||||
<!--
|
||||
layout: blog
|
||||
title: "Spotlight on SIG Docs"
|
||||
date: 2022-08-02
|
||||
slug: sig-docs-spotlight-2022
|
||||
canonicalUrl: https://kubernetes.dev/blog/2022/08/02/sig-docs-spotlight-2022/
|
||||
-->
|
||||
|
||||
<!--
|
||||
**Author:** Purneswar Prasad
|
||||
-->
|
||||
**作者:** Purneswar Prasad
|
||||
|
||||
<!--
|
||||
## Introduction
|
||||
|
||||
The official documentation is the go-to source for any open source project. For Kubernetes,
|
||||
it's an ever-evolving Special Interest Group (SIG) with people constantly putting in their efforts
|
||||
to make details about the project easier to consume for new contributors and users. SIG Docs publishes
|
||||
the official documentation on [kubernetes.io](https://kubernetes.io) which includes,
|
||||
but is not limited to, documentation of the core APIs, core architectural details, and CLI tools
|
||||
shipped with the Kubernetes release.
|
||||
|
||||
To learn more about the work of SIG Docs and its future ahead in shaping the community, I have summarised
|
||||
my conversation with the co-chairs, [Divya Mohan](https://twitter.com/Divya_Mohan02) (DM),
|
||||
[Rey Lejano](https://twitter.com/reylejano) (RL) and Natali Vlatko (NV), who ran through the
|
||||
SIG's goals and how fellow contributors can help.
|
||||
-->
|
||||
## 简介
|
||||
|
||||
官方文档是所有开源项目的首选资料源。对于 Kubernetes,它是一个持续演进的特别兴趣小组 (SIG),
|
||||
人们持续不断努力制作详实的项目资料,让新贡献者和用户更容易取用这些文档。
|
||||
SIG Docs 在 [kubernetes.io](https://kubernetes.io) 上发布官方文档,
|
||||
包括但不限于 Kubernetes 版本发布时附带的核心 API 文档、核心架构细节和 CLI 工具文档。
|
||||
|
||||
为了了解 SIG Docs 的工作及其在塑造社区未来方面的更多信息,我总结了自己与联合主席
|
||||
[Divya Mohan](https://twitter.com/Divya_Mohan02)(下称 DM)、
|
||||
[Rey Lejano](https://twitter.com/reylejano)(下称 RL)和 Natali Vlatko(下称 NV)的谈话,
|
||||
他们讲述了 SIG 的目标以及其他贡献者们如何从旁协助。
|
||||
|
||||
<!--
|
||||
## A summary of the conversation
|
||||
|
||||
### Could you tell us a little bit about what SIG Docs does?
|
||||
|
||||
SIG Docs is the special interest group for documentation for the Kubernetes project on kubernetes.io,
|
||||
generating reference guides for the Kubernetes API, kubeadm and kubectl as well as maintaining the official
|
||||
website’s infrastructure and analytics. The remit of their work also extends to docs releases, translation of docs,
|
||||
improvement and adding new features to existing documentation, pushing and reviewing content for the official
|
||||
Kubernetes blog and engaging with the Release Team for each cycle to get docs and blogs reviewed.
|
||||
-->
|
||||
## 谈话汇总
|
||||
|
||||
### 你能告诉我们 SIG Docs 具体做什么吗?
|
||||
|
||||
SIG Docs 是 kubernetes.io 上针对 Kubernetes 项目文档的特别兴趣小组,
|
||||
为 Kubernetes API、kubeadm 和 kubectl 制作参考指南,并维护官方网站的基础设施和数据分析。
|
||||
他们的工作范围还包括文档发布、文档翻译、改进并向现有文档添加新功能特性、推送和审查官方 Kubernetes 博客的内容,
|
||||
并在每个发布周期与发布团队合作以审查文档和博客。
|
||||
|
||||
<!--
|
||||
### There are 2 subprojects under Docs: blogs and localization. How has the community benefited from it and are there some interesting contributions by those teams you want to highlight?
|
||||
|
||||
**Blogs**: This subproject highlights new or graduated Kubernetes enhancements, community reports, SIG updates
|
||||
or any relevant news to the Kubernetes community such as thought leadership, tutorials and project updates,
|
||||
such as the Dockershim removal and removal of PodSecurityPolicy, which is upcoming in the 1.25 release.
|
||||
Tim Bannister, one of the SIG Docs tech leads, does awesome work and is a major force when pushing contributions
|
||||
through to the docs and blogs.
|
||||
-->
|
||||
### Docs 下有 2 个子项目:博客和本地化。社区如何从中受益?这些团队有哪些你想重点介绍的有趣贡献?
|
||||
|
||||
**博客**:这个子项目重点介绍新的或已毕业的 Kubernetes 增强特性、社区报告、SIG 更新,
以及其他与 Kubernetes 社区相关的新闻,例如思潮引领、教程和项目更新,
比如 Dockershim 的移除,以及即将在 1.25 版本中移除的 PodSecurityPolicy。
Tim Bannister 是 SIG Docs 技术负责人之一,他所做的工作非常出色,是推动文档和博客贡献的主力人物。
|
||||
|
||||
<!--
|
||||
**Localization**: With this subproject, the Kubernetes community has been able to achieve greater inclusivity
|
||||
and diversity among both users and contributors. This has also helped the project gain more contributors,
|
||||
especially students, since a couple of years ago.
|
||||
One of the major highlights and up-and-coming localizations are Hindi and Bengali. The efforts for Hindi
|
||||
localization are currently being spearheaded by students in India.
|
||||
|
||||
In addition to that, there are two other subprojects: [reference-docs](https://github.com/kubernetes-sigs/reference-docs) and the [website](https://github.com/kubernetes/website), which is built with Hugo and is an important ownership area.
|
||||
-->
|
||||
**本地化**:通过这个子项目,Kubernetes 社区能够在用户和贡献者之间实现更大的包容性和多样性。
|
||||
自几年前以来,这也帮助该项目获得了更多的贡献者,尤其是学生们。
|
||||
主要亮点之一是即将到来的本地化版本:印地语和孟加拉语。印地语的本地化工作目前由印度的学生们牵头。
|
||||
|
||||
除此之外,还有另外两个子项目:[reference-docs](https://github.com/kubernetes-sigs/reference-docs) 和
|
||||
[website](https://github.com/kubernetes/website),后者采用 Hugo 构建,是 Kubernetes 拥有的一个重要阵地。
|
||||
|
||||
<!--
|
||||
### Recently there has been a lot of buzz around the Kubernetes ecosystem as well as the industry regarding the removal of dockershim in the latest 1.24 release. How has SIG Docs helped the project to ensure a smooth change among the end-users? {#dockershim-removal}
|
||||
-->
|
||||
### 最近有很多关于 Kubernetes 生态系统以及业界对最新 1.24 版本中移除 Dockershim 的讨论。SIG Docs 如何帮助该项目确保最终用户们平滑变更? {#dockershim-removal}
|
||||
|
||||
<!--
|
||||
Documenting the removal of Dockershim was a mammoth task, requiring the revamping of existing documentation
|
||||
and communicating to the various stakeholders regarding the deprecation efforts. It needed a community effort,
|
||||
so ahead of the 1.24 release, SIG Docs partnered with Docs and Comms verticals, the Release Lead from the
|
||||
Release Team, and also the CNCF to help put the word out. Weekly meetings and a GitHub project board were
|
||||
set up to track progress, review issues and approve PRs and keep the Kubernetes website updated. This has
|
||||
also helped new contributors know about the deprecation, so that if any good-first-issue pops up, they could chip in.
|
||||
A dedicated Slack channel was used to communicate meeting updates, invite feedback or to solicit help on
|
||||
outstanding issues and PRs. The weekly meeting also continued for a month after the 1.24 release to review related issues and fix them.
|
||||
A huge shoutout to [Celeste Horgan](https://twitter.com/celeste_horgan), who kept the ball rolling on this
|
||||
conversation throughout the deprecation process.
|
||||
-->
|
||||
与 Dockershim 移除有关的文档工作是一项艰巨的任务,需要修改现有文档,并就弃用工作与各利益相关方进行沟通。
这需要社区的共同努力,因此在 1.24 版本发布之前,SIG Docs 与 Docs 和 Comms 两个职能小组、发布团队的发布负责人以及
CNCF 开展合作,帮助对外宣传这一变化。我们设立了每周例会和一个 GitHub 项目看板,用来跟踪进度、审查 Issue、批准 PR,
并保持 Kubernetes 网站内容的更新。这也有助于新的贡献者了解这次弃用,如果出现任何 good-first-issue,
他们也可以参与进来。我们还开通了专用的 Slack 频道,用于同步会议进展、征集反馈,或就悬而未决的 Issue 和 PR 寻求帮助。
每周例会在 1.24 发布后又持续了一个月,以便审查并修复相关问题。
非常感谢 [Celeste Horgan](https://twitter.com/celeste_horgan),她在整个弃用过程中始终推动着这项沟通工作向前进展。
|
||||
|
||||
<!--
|
||||
### Why should new and existing contributors consider joining this SIG?
|
||||
|
||||
Kubernetes is a vast project and can be intimidating at first for a lot of folks to find a place to start.
|
||||
Any open source project is defined by its quality of documentation and SIG Docs aims to be a welcoming,
|
||||
helpful place for new contributors to get onboard. One gets the perks of working with the project docs
|
||||
as well as learning by reading it. They can also bring their own, new perspective to create and improve
|
||||
the documentation. In the long run if they stick to SIG Docs, they can rise up the ladder to be maintainers.
|
||||
This will help make a big project like Kubernetes easier to parse and navigate.
|
||||
-->
|
||||
### 为什么新老贡献者都应该考虑加入这个 SIG?
|
||||
|
||||
Kubernetes 是一个庞大的项目,起初可能会让很多人难以找到切入点。
|
||||
任何开源项目的优劣总能从文档质量略窥一二,SIG Docs 的目标是建设一个欢迎新贡献者加入并对其有帮助的地方。
|
||||
希望所有人可以轻松参与该项目的文档,并能从阅读中受益。他们还可以带来自己的新视角,以制作和改进文档。
|
||||
从长远来看,如果他们坚持参与 SIG Docs,就可以拾阶而上晋升成为维护者。
|
||||
这将有助于使 Kubernetes 这样的大型项目更易于解析和导航。
|
||||
|
||||
<!--
|
||||
### How do you help new contributors get started? Are there any prerequisites to join?
|
||||
|
||||
There are no such prerequisites to get started with contributing to Docs. But there is certainly a fantastic
|
||||
Contribution to Docs guide which is always kept as updated and relevant as possible and new contributors
|
||||
are urged to read it and keep it handy. Also, there are a lot of useful pins and bookmarks in the
|
||||
community Slack channel [#sig-docs](https://kubernetes.slack.com/archives/C1J0BPD2M). GitHub issues with
|
||||
the good-first-issue labels in the kubernetes/website repo is a great place to create your first PR.
|
||||
Now, SIG Docs has a monthly New Contributor Meet and Greet on the first Tuesday of the month with the
|
||||
first occupant of the New Contributor Ambassador role, [Arsh Sharma](https://twitter.com/RinkiyaKeDad).
|
||||
This has helped in making a more accessible point of contact within the SIG for new contributors.
|
||||
-->
|
||||
### 你如何帮助新的贡献者入门?加入有什么前提条件吗?
|
||||
|
||||
开始为 Docs 做贡献没有这样的前提条件。但肯定有一个很棒的对文档做贡献的指南,这个指南始终尽可能保持更新和贴合实际,
|
||||
希望新手们多多阅读并将其放在趁手的地方。此外,社区 Slack 频道
|
||||
[#sig-docs](https://kubernetes.slack.com/archives/C1J0BPD2M) 中有很多有用的置顶消息(Pin)和书签。
|
||||
kubernetes/website 仓库中带有 good-first-issue 标签的那些 GitHub 问题是创建你的第一个 PR 的好地方。
|
||||
现在,SIG Docs 在每月的第一个星期二配合第一任 New Contributor Ambassador(新贡献者大使)角色
|
||||
[Arsh Sharma](https://twitter.com/RinkiyaKeDad) 召开月度 New Contributor Meet and Greet(新贡献者见面会)。
|
||||
这有助于在 SIG 内为新的贡献者建立一个更容易参与的联络形式。
|
||||
|
||||
<!--
|
||||
### Any SIG related accomplishment that you’re really proud of?
|
||||
|
||||
**DM & RL** : The formalization of the localization subproject in the last few months has been a big win
|
||||
for SIG Docs, given all the great work put in by contributors from different countries. Earlier the
|
||||
localization efforts didn’t have any streamlined process and focus was given to provide a structure by
|
||||
drafting a KEP over the past couple of months for localization to be formalized as a subproject, which
|
||||
is planned to be pushed through by the end of third quarter.
|
||||
-->
|
||||
### 你是否有任何真正自豪的 SIG 相关成绩?
|
||||
|
||||
**DM & RL** :鉴于来自不同国家的贡献者们做出的所有出色工作,
|
||||
过去几个月本地化子项目的正式推行对 SIG Docs 来说是一个巨大的胜利。
|
||||
早些时候,本地化工作还没有一个成体系的流程。过去几个月的重点是起草一份 KEP,为本地化正式成为一个子项目提供框架,
|
||||
这项工作计划在第三个季度结束时完成。
|
||||
|
||||
<!--
|
||||
**DM** : Another area where there has been a lot of success is the New Contributor Ambassador role,
|
||||
which has helped in making a more accessible point of contact for the onboarding of new contributors into the project.
|
||||
|
||||
**NV** : For each release cycle, SIG Docs have to review release docs and feature blogs highlighting
|
||||
release updates within a short window. This is always a big effort for the docs and blogs reviewers.
|
||||
-->
|
||||
**DM**:另一个取得很大成功的领域是 New Contributor Ambassador(新贡献者大使)角色,
|
||||
这个角色有助于为新贡献者参与项目提供更便捷的联系形式。
|
||||
|
||||
**NV**:对于每个发布周期,SIG Docs 都必须在短时间内评审突出介绍发布更新的发布文档和功能特性博客。
|
||||
这对于文档和博客审阅者来说,始终需要付出巨大的努力。
|
||||
|
||||
<!--
|
||||
### Is there something exciting coming up for the future of SIG Docs that you want the community to know?
|
||||
|
||||
SIG Docs is now looking forward to establishing a roadmap, having a steady pipeline of folks being able
|
||||
to push improvements to the documentation and streamlining community involvement in triaging issues and
|
||||
reviewing PRs being filed. To build one such contributor and reviewership base, a mentorship program is
|
||||
being set up to help current contributors become reviewers. This definitely is a space to watch out for more!
|
||||
-->
|
||||
### 你是否有一些关于 SIG Docs 未来令人兴奋的举措想让社区知道?
|
||||
|
||||
SIG Docs 目前期望制定一个路线图,建立一支稳定的贡献者队伍来持续推动文档改进,
并让社区更顺畅地参与 Issue 分类(triage)和已提交 PR 的评审工作。
|
||||
为了建立一个这样由贡献者和 Reviewer 组成的群体,我们正在设立一项辅导计划帮助当前的贡献者们成为 Reviewer。
|
||||
这绝对是一项值得关注的举措!
|
||||
|
||||
<!--
|
||||
## Wrap Up
|
||||
|
||||
SIG Docs hosted a [deep dive talk](https://www.youtube.com/watch?v=GDfcBF5et3Q)
|
||||
during on KubeCon + CloudNativeCon North America 2021, covering their awesome SIG.
|
||||
They are very welcoming and have been the starting ground into Kubernetes
|
||||
for a lot of new folks who want to contribute to the project.
|
||||
Join the [SIG's meetings](https://github.com/kubernetes/community/blob/master/sig-docs/README.md) to find out
|
||||
about the most recent research results, their plans for the forthcoming year, and how to get involved in the upstream Docs team as a contributor!
|
||||
-->
|
||||
## 结束语
|
||||
|
||||
SIG Docs 在 KubeCon + CloudNativeCon North America 2021
|
||||
期间举办了一场[深度探讨(deep dive)演讲](https://www.youtube.com/watch?v=GDfcBF5et3Q),介绍了他们这个很棒的 SIG。
|
||||
他们非常欢迎想要为 Kubernetes 项目做贡献的新人,对这些新人而言 SIG Docs 已成为加入 Kubernetes 的起跳板。
|
||||
欢迎加入 [SIG 会议](https://github.com/kubernetes/community/blob/master/sig-docs/README.md),
|
||||
了解最新的研究成果、来年的计划以及如何作为贡献者参与上游 Docs 团队!
|
|
@ -6,7 +6,6 @@ description: >
|
|||
关于创建和管理 Kubernetes 集群的底层细节。
|
||||
no_list: true
|
||||
---
|
||||
|
||||
<!--
|
||||
title: Cluster Administration
|
||||
reviewers:
|
||||
|
@ -32,32 +31,40 @@ It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/).
|
|||
<!--
|
||||
## Planning a cluster
|
||||
|
||||
See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure Kubernetes clusters. The solutions listed in this article are called *distros*.
|
||||
|
||||
Not all distros are actively maintained. Choose distros which have been tested with a recent version of Kubernetes.
|
||||
|
||||
Before choosing a guide, here are some considerations:
|
||||
See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure
|
||||
Kubernetes clusters. The solutions listed in this article are called *distros*.
|
||||
-->
|
||||
## 规划集群 {#planning-a-cluster}
|
||||
|
||||
查阅[安装](/zh-cn/docs/setup/)中的指导,获取如何规划、建立以及配置 Kubernetes
|
||||
集群的示例。本文所列的文章称为*发行版* 。
|
||||
集群的示例。本文所列的解决方案称为**发行版**。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Not all distros are actively maintained. Choose distros which have been tested with a recent
|
||||
version of Kubernetes.
|
||||
-->
|
||||
并非所有发行版都是被积极维护的。
|
||||
请选择使用最近 Kubernetes 版本测试过的发行版。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
Before choosing a guide, here are some considerations:
|
||||
-->
|
||||
在选择一个指南前,有一些因素需要考虑:
|
||||
|
||||
<!--
|
||||
- Do you want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
|
||||
- Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
|
||||
- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters.
|
||||
- **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best.
|
||||
- Do you want to try out Kubernetes on your computer, or do you want to build a high-availability,
|
||||
multi-node cluster? Choose distros best suited for your needs.
|
||||
- Will you be using **a hosted Kubernetes cluster**, such as
|
||||
[Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
|
||||
- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly
|
||||
support hybrid clusters. Instead, you can set up multiple clusters.
|
||||
- **If you are configuring Kubernetes on-premises**, consider which
|
||||
[networking model](/docs/concepts/cluster-administration/networking/) fits best.
|
||||
- Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**?
|
||||
- Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
|
||||
latter, choose an actively-developed distro. Some distros only use binary releases, but
|
||||
- Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**?
|
||||
If the latter, choose an actively-developed distro. Some distros only use binary releases, but
|
||||
offer a greater variety of choices.
|
||||
- Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster.
|
||||
-->
|
||||
|
@ -67,9 +74,9 @@ Before choosing a guide, here are some considerations:
|
|||
这样的**被托管的 Kubernetes 集群**, 还是**管理你自己的集群**?
|
||||
- 你的集群是在**本地**还是**云(IaaS)** 上?Kubernetes 不能直接支持混合集群。
|
||||
作为代替,你可以建立多个集群。
|
||||
- **如果你在本地配置 Kubernetes**,需要考虑哪种
|
||||
[网络模型](/zh-cn/docs/concepts/cluster-administration/networking/)最适合。
|
||||
- 你的 Kubernetes 在**裸金属硬件**上还是**虚拟机(VMs)** 上运行?
|
||||
- **如果你在本地配置 Kubernetes**,
|
||||
需要考虑哪种[网络模型](/zh-cn/docs/concepts/cluster-administration/networking/)最适合。
|
||||
- 你的 Kubernetes 在**裸金属硬件**上还是**虚拟机(VM)**上运行?
|
||||
- 你是想**运行一个集群**,还是打算**参与开发 Kubernetes 项目代码**?
|
||||
如果是后者,请选择一个处于开发状态的发行版。
|
||||
某些发行版只提供二进制发布版,但提供更多的选择。
|
||||
|
@ -78,7 +85,7 @@ Before choosing a guide, here are some considerations:
|
|||
<!--
|
||||
## Managing a cluster
|
||||
|
||||
* Learn how to [manage nodes](/docs/concepts/nodes/node/).
|
||||
* Learn how to [manage nodes](/docs/concepts/architecture/nodes/).
|
||||
|
||||
* Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.
|
||||
-->
|
||||
|
@ -90,58 +97,65 @@ Before choosing a guide, here are some considerations:
|
|||
|
||||
<!--
|
||||
## Securing a cluster
|
||||
|
||||
* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to generate certificates using different tool chains.
|
||||
* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes the environment for Kubelet managed containers on a Kubernetes node.
|
||||
* [Controlling Access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/) describes how to set up permissions for users and service accounts.
|
||||
* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in Kubernetes, including the various authentication options.
|
||||
* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from authentication, and controls how HTTP calls are handled.
|
||||
* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) explains plug-ins which intercepts requests to the Kubernetes API server after authentication and authorization.
|
||||
* [Using Sysctls in a Kubernetes Cluster](/docs/concepts/cluster-administration/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters .
|
||||
* [Auditing](/docs/tasks/debug/debug-cluster/audit/) describes how to interact with Kubernetes' audit logs.
|
||||
* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to
|
||||
generate certificates using different tool chains.
|
||||
* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes
|
||||
the environment for Kubelet managed containers on a Kubernetes node.
|
||||
* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access) describes
|
||||
how Kubernetes implements access control for its own API.
|
||||
* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in
|
||||
Kubernetes, including the various authentication options.
|
||||
* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from
|
||||
authentication, and controls how HTTP calls are handled.
|
||||
* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
|
||||
explains plug-ins which intercepts requests to the Kubernetes API server after authentication
|
||||
and authorization.
|
||||
* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/)
|
||||
describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters.
|
||||
* [Auditing](/docs/tasks/debug/debug-cluster/audit/) describes how to interact with Kubernetes'
|
||||
audit logs.
|
||||
-->
|
||||
## 保护集群 {#securing-a-cluster}
|
||||
|
||||
* [生成证书](/zh-cn/docs/tasks/administer-cluster/certificates/)
|
||||
节描述了使用不同的工具链生成证书的步骤。
|
||||
* [Kubernetes 容器环境](/zh-cn/docs/concepts/containers/container-environment/)
|
||||
描述了 Kubernetes 节点上由 Kubelet 管理的容器的环境。
|
||||
* [控制到 Kubernetes API 的访问](/zh-cn/docs/concepts/security/controlling-access/)
|
||||
描述了如何为用户和 service accounts 建立权限许可。
|
||||
* [身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/)
|
||||
节阐述了 Kubernetes 中的身份认证功能,包括许多认证选项。
|
||||
* [鉴权](/zh-cn/docs/reference/access-authn-authz/authorization/)
|
||||
与身份认证不同,用于控制如何处理 HTTP 请求。
|
||||
* [使用准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers)
|
||||
阐述了在认证和授权之后拦截到 Kubernetes API 服务的请求的插件。
|
||||
* [在 Kubernetes 集群中使用 Sysctls](/zh-cn/docs/tasks/administer-cluster/sysctl-cluster/)
|
||||
* [生成证书](/zh-cn/docs/tasks/administer-cluster/certificates/)描述了使用不同的工具链生成证书的步骤。
|
||||
* [Kubernetes 容器环境](/zh-cn/docs/concepts/containers/container-environment/)描述了
|
||||
Kubernetes 节点上由 Kubelet 管理的容器的环境。
|
||||
* [控制对 Kubernetes API 的访问](/zh-cn/docs/concepts/security/controlling-access/)描述了 Kubernetes
|
||||
如何为自己的 API 实现访问控制。
|
||||
* [身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/)阐述了 Kubernetes
|
||||
中的身份认证功能,包括许多认证选项。
|
||||
* [鉴权](/zh-cn/docs/reference/access-authn-authz/authorization/)与身份认证不同,用于控制如何处理 HTTP 请求。
|
||||
* [使用准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers)阐述了在认证和授权之后拦截到
|
||||
Kubernetes API 服务器的请求的插件。
|
||||
* [在 Kubernetes 集群中使用 sysctl](/zh-cn/docs/tasks/administer-cluster/sysctl-cluster/)
|
||||
描述了管理员如何使用 `sysctl` 命令行工具来设置内核参数。
|
||||
* [审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/)
|
||||
描述了如何与 Kubernetes 的审计日志交互。
|
||||
* [审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/)描述了如何与 Kubernetes 的审计日志交互。
|
||||
|
||||
<!--
|
||||
### Securing the kubelet
|
||||
|
||||
* [Master-Node communication](/docs/concepts/architecture/master-node-communication/)
|
||||
* [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/)
|
||||
* [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
|
||||
* [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)
|
||||
* [Kubelet authentication/authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/)
|
||||
-->
|
||||
### 保护 kubelet {#securing-the-kubelet}
|
||||
|
||||
* [主控节点通信](/zh-cn/docs/concepts/architecture/control-plane-node-communication/)
|
||||
* [TLS 引导](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
|
||||
* [Kubelet 认证/授权](/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz/)
|
||||
* [节点与控制面之间的通信](/zh-cn/docs/concepts/architecture/control-plane-node-communication/)
|
||||
* [TLS 启动引导](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
|
||||
* [Kubelet 认证/鉴权](/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz/)
|
||||
|
||||
<!--
|
||||
## Optional Cluster Services
|
||||
|
||||
* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve a DNS name directly to a Kubernetes service.
|
||||
* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) explains how logging in Kubernetes works and how to implement it.
|
||||
* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve
|
||||
a DNS name directly to a Kubernetes service.
|
||||
* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/)
|
||||
explains how logging in Kubernetes works and how to implement it.
|
||||
-->
|
||||
## 可选集群服务 {#optional-cluster-services}
|
||||
|
||||
* [DNS 集成](/zh-cn/docs/concepts/services-networking/dns-pod-service/)
|
||||
描述了如何将一个 DNS 名解析到一个 Kubernetes service。
|
||||
* [记录和监控集群活动](/zh-cn/docs/concepts/cluster-administration/logging/)
|
||||
阐述了 Kubernetes 的日志如何工作以及怎样实现。
|
||||
* [DNS 集成](/zh-cn/docs/concepts/services-networking/dns-pod-service/)描述了如何将一个 DNS
|
||||
名解析到一个 Kubernetes service。
|
||||
* [记录和监控集群活动](/zh-cn/docs/concepts/cluster-administration/logging/)阐述了 Kubernetes
|
||||
的日志如何工作以及怎样实现。
|
||||
|
||||
|
|
|
@ -10,11 +10,11 @@ content_type: concept
|
|||
<!--
|
||||
Add-ons extend the functionality of Kubernetes.
|
||||
|
||||
This page lists some of the available add-ons and links to their respective installation instructions.
|
||||
This page lists some of the available add-ons and links to their respective installation instructions. The list does not try to be exhaustive.
|
||||
-->
|
||||
Add-ons 扩展了 Kubernetes 的功能。
|
||||
|
||||
本文列举了一些可用的 add-ons 以及到它们各自安装说明的链接。
|
||||
本文列举了一些可用的 add-ons 以及到它们各自安装说明的链接。该列表并不试图详尽无遗。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -66,16 +66,14 @@ Add-ons 扩展了 Kubernetes 的功能。
|
|||
而且包含了在 Kubernetes 中基于 SRIOV、DPDK、OVS-DPDK 和 VPP 的工作负载。
|
||||
<!--
|
||||
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) is a networking provider for Kubernetes based on [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), a virtual networking implementation that came out of the Open vSwitch (OVS) project. OVN-Kubernetes provides an overlay based networking implementation for Kubernetes, including an OVS based implementation of load balancing and network policy.
|
||||
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking.
|
||||
* [Nodus](https://github.com/akraino-edge-stack/icn-nodus) is an OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC).
|
||||
-->
|
||||
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) 是一个 Kubernetes 网络驱动,
|
||||
基于 [OVN(Open Virtual Network)](https://github.com/ovn-org/ovn/)实现,是从 Open vSwitch (OVS)
|
||||
项目衍生出来的虚拟网络实现。OVN-Kubernetes 为 Kubernetes 提供基于覆盖网络的网络实现,
|
||||
包括一个基于 OVS 实现的负载均衡器和网络策略。
|
||||
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) 是一个基于 OVN 的 CNI 控制器插件,
|
||||
提供基于云原生的服务功能链条(Service Function Chaining,SFC)、多种 OVN 覆盖网络、动态子网创建、
|
||||
动态虚拟网络创建、VLAN 驱动网络、直接驱动网络,并且可以驳接其他的多网络插件,
|
||||
适用于基于边缘的、多集群联网的云原生工作负载。
|
||||
* [Nodus](https://github.com/akraino-edge-stack/icn-nodus) 是一个基于 OVN 的 CNI 控制器插件,
|
||||
提供基于云原生的服务功能链 (SFC)。
|
||||
<!--
|
||||
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
|
||||
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
|
||||
|
|
|
@ -52,13 +52,17 @@ back-off, and other clients that also work this way.
|
|||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
Requests classified as "long-running" — primarily watches — are not
|
||||
subject to the API Priority and Fairness filter. This is also true for
|
||||
the `--max-requests-inflight` flag without the API Priority and
|
||||
Fairness feature enabled.
|
||||
Some requests classified as "long-running"—such as remote
|
||||
command execution or log tailing—are not subject to the API
|
||||
Priority and Fairness filter. This is also true for the
|
||||
`--max-requests-inflight` flag without the API Priority and Fairness
|
||||
feature enabled. API Priority and Fairness _does_ apply to **watch**
|
||||
requests. When API Priority and Fairness is disabled, **watch** requests
|
||||
are not subject to the `--max-requests-inflight` limit.
|
||||
-->
|
||||
属于“长时间运行”类型的请求(主要是 `watch`)不受 API 优先级和公平性过滤器的约束。
|
||||
属于“长时间运行”类型的某些请求(例如远程命令执行或日志拖尾)不受 API 优先级和公平性过滤器的约束。
|
||||
如果未启用 APF 特性,即便设置 `--max-requests-inflight` 标志,该类请求也不受约束。
|
||||
APF **确实**适用于 **watch** 请求。当 APF 被禁用时,**watch** 请求不受 `--max-requests-inflight` 限制。
|
||||
{{< /caution >}}
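
作为参考,下面给出一个与上述说明相关的 kube-apiserver 启动标志示意
(取值仅为示例,并非推荐配置,实际部署还需要其他标志):

```shell
# 仅作示意:与上文说明相关的几个 kube-apiserver 标志(取值仅为示例)。
# --enable-priority-and-fairness 控制是否启用 APF(默认为启用);
# 只有在 APF 被禁用时,两个 inflight 上限才会按上文所述的方式直接生效。
kube-apiserver \
  --enable-priority-and-fairness=false \
  --max-requests-inflight=400 \
  --max-mutating-requests-inflight=200
```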
|
||||
|
||||
<!-- body -->
|
||||
|
@ -158,6 +162,68 @@ from succeeding.
|
|||
例如,默认配置包括针对领导者选举请求、内置控制器请求和 Pod 请求都单独设置优先级。
|
||||
这表示即使异常的 Pod 向 API 服务器发送大量请求,也无法阻止领导者选举或内置控制器的操作执行成功。
|
||||
|
||||
<!--
|
||||
### Seats Occupied by a Request
|
||||
|
||||
The above description of concurrency management is the baseline story.
|
||||
In it, requests have different durations but are counted equally at
|
||||
any given moment when comparing against a priority level's concurrency
|
||||
limit. In the baseline story, each request occupies one unit of
|
||||
concurrency. The word "seat" is used to mean one unit of concurrency,
|
||||
inspired by the way each passenger on a train or aircraft takes up one
|
||||
of the fixed supply of seats.
|
||||
|
||||
But some requests take up more than one seat. Some of these are **list**
|
||||
requests that the server estimates will return a large number of
|
||||
objects. These have been found to put an exceptionally heavy burden
|
||||
on the server, among requests that take a similar amount of time to
|
||||
run. For this reason, the server estimates the number of objects that
|
||||
will be returned and considers the request to take a number of seats
|
||||
that is proportional to that estimated number.
|
||||
-->
|
||||
### 请求占用的席位 {#seats-occupied-by-a-request}
|
||||
|
||||
上述并发管理的描述是基线情况。其中,各个请求具有不同的持续时间,
|
||||
但在与一个优先级的并发限制进行比较时,这些请求在任何给定时刻都以同等方式进行计数。
|
||||
在这个基线场景中,每个请求占用一个并发单位。
|
||||
我们用 “席位(Seat)” 一词来表示一个并发单位,其灵感来自火车或飞机上每位乘客占用一个固定座位的供应方式。
|
||||
|
||||
但有些请求所占用的席位不止一个。有些请求是服务器预估将返回大量对象的 **list** 请求。
|
||||
和所需运行时间相近的其他请求相比,我们发现这类请求会给服务器带来异常沉重的负担。
|
||||
出于这个原因,服务器估算将返回的对象数量,并认为请求所占用的席位数与估算得到的数量成正比。
|
||||
|
||||
<!--
|
||||
### Execution time tweaks for watch requests
|
||||
|
||||
API Priority and Fairness manages **watch** requests, but this involves a
|
||||
couple more excursions from the baseline behavior. The first concerns
|
||||
how long a **watch** request is considered to occupy its seat. Depending
|
||||
on request parameters, the response to a **watch** request may or may not
|
||||
begin with **create** notifications for all the relevant pre-existing
|
||||
objects. API Priority and Fairness considers a **watch** request to be
|
||||
done with its seat once that initial burst of notifications, if any,
|
||||
is over.
|
||||
|
||||
The normal notifications are sent in a concurrent burst to all
|
||||
relevant **watch** response streams whenever the server is notified of an
|
||||
object create/update/delete. To account for this work, API Priority
|
||||
and Fairness considers every write request to spend some additional
|
||||
time occupying seats after the actual writing is done. The server
|
||||
estimates the number of notifications to be sent and adjusts the write
|
||||
request's number of seats and seat occupancy time to include this
|
||||
extra work.
|
||||
-->
|
||||
### watch 请求的执行时间调整 {#execution-time-tweak-for-watch-requests}
|
||||
|
||||
APF 管理 **watch** 请求,但这需要考量基线行为之外的一些情况。
|
||||
第一个关注点是如何判定 **watch** 请求的席位占用时长。
|
||||
取决于请求参数的不同,对 **watch** 请求的响应可能以针对所有相关的、预先存在的对象的 **create** 通知开头,也可能不这样。
|
||||
一旦最初的突发通知(如果有)结束,APF 将认为 **watch** 请求已经用完其席位。
|
||||
|
||||
每当向服务器通知创建/更新/删除一个对象时,正常通知都会以并发突发的方式发送到所有相关的 **watch** 响应流。
|
||||
为此,APF 认为每个写入请求都会在实际写入完成后花费一些额外的时间来占用席位。
|
||||
服务器会估算要发送的通知数量,并相应调整写入请求的席位数和席位占用时间,以便把这些额外工作计算在内。
|
||||
|
||||
<!--
|
||||
### Queuing
|
||||
|
||||
|
@ -386,7 +452,7 @@ FlowSchema in turn, starting with those with numerically lowest ---
|
|||
which we take to be the logically highest --- `matchingPrecedence` and
|
||||
working onward. The first match wins.
|
||||
-->
|
||||
### FlowSchema
|
||||
### FlowSchema {#flowschema}
|
||||
|
||||
FlowSchema 匹配一些入站请求,并将它们分配给优先级。
|
||||
每个入站请求都会对所有 FlowSchema 测试是否匹配,
|
||||
|
@ -918,7 +984,7 @@ poorly-behaved workloads that may be harming system health.
|
|||
* `apiserver_flowcontrol_priority_level_request_count_watermarks` 是一个直方图向量,
|
||||
记录请求数的高/低水位线,由标签 `phase`(取值为 `waiting` 和 `executing`)和
|
||||
`priority_level` 拆分;
|
||||
标签 `mark` 取值为 `high` 和 `low` 。
|
||||
标签 `mark` 取值为 `high` 和 `low`。
|
||||
`apiserver_flowcontrol_priority_level_request_count_samples` 向量观察到有值新增,
|
||||
则该向量累积。这些水位线显示了样本值的范围。
|
||||
|
||||
|
@ -971,7 +1037,7 @@ poorly-behaved workloads that may be harming system health.
|
|||
-->
|
||||
* `apiserver_flowcontrol_request_wait_duration_seconds` 是一个直方图向量,
|
||||
记录请求排队的时间,
|
||||
由标签 `flow_schema`(表示与请求匹配的 FlowSchema ),
|
||||
由标签 `flow_schema`(表示与请求匹配的 FlowSchema),
|
||||
`priority_level`(表示分配该请求的优先级)
|
||||
和 `execute`(表示请求是否开始执行)进一步区分。
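
  如果想实际查看这类指标,可以直接从 API 服务器抓取指标端点并按名称过滤
  (仅作示意,需要具备访问 `/metrics` 的权限):

  ```shell
  # 仅作示意:抓取 API 服务器的指标,并筛选按 flow_schema 和 priority_level
  # 细分的请求排队时长直方图。
  kubectl get --raw /metrics | grep apiserver_flowcontrol_request_wait_duration_seconds
  ```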
|
||||
|
||||
|
@ -995,7 +1061,7 @@ poorly-behaved workloads that may be harming system health.
|
|||
-->
|
||||
* `apiserver_flowcontrol_request_execution_seconds` 是一个直方图向量,
|
||||
记录请求实际执行需要花费的时间,
|
||||
由标签 `flow_schema`(表示与请求匹配的 FlowSchema )和
|
||||
由标签 `flow_schema`(表示与请求匹配的 FlowSchema)和
|
||||
`priority_level`(表示分配给该请求的优先级)进一步区分。
|
||||
|
||||
<!--
|
||||
|
@ -1120,6 +1186,5 @@ or the feature's [slack channel](https://kubernetes.slack.com/messages/api-prior
|
|||
有关 API 优先级和公平性的设计细节的背景信息,
|
||||
请参阅[增强提案](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1040-priority-and-fairness)。
|
||||
你可以通过 [SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery/)
|
||||
或特性的 [Slack 频道](https://kubernetes.slack.com/messages/api-priority-and-fairness/)
|
||||
提出建议和特性请求。
|
||||
或特性的 [Slack 频道](https://kubernetes.slack.com/messages/api-priority-and-fairness/)提出建议和特性请求。
|
||||
|
||||
|
|
|
@ -58,416 +58,21 @@ Kubernetes 的宗旨就是在应用之间共享机器。
|
|||
|
||||
要了解 Kubernetes 网络模型,请参阅[此处](/zh-cn/docs/concepts/services-networking/)。
|
||||
<!--
|
||||
## How to implement the Kubernetes networking model
|
||||
## How to implement the Kubernetes network model
|
||||
|
||||
There are a number of ways that this network model can be implemented. This
|
||||
document is not an exhaustive study of the various methods, but hopefully serves
|
||||
as an introduction to various technologies and serves as a jumping-off point.
|
||||
The network model is implemented by the container runtime on each node. The most common container runtimes use [Container Network Interface](https://github.com/containernetworking/cni) (CNI) plugins to manage their network and security capabilities. Many different CNI plugins exist from many different vendors. Some of these provide only basic features of adding and removing network interfaces, while others provide more sophisticated solutions, such as integration with other container orchestration systems, running multiple CNI plugins, advanced IPAM features etc.
|
||||
|
||||
The following networking options are sorted alphabetically - the order does not
|
||||
imply any preferential status.
|
||||
See [this page](/docs/concepts/cluster-administration/addons/#networking-and-network-policy) for a non-exhaustive list of networking addons supported by Kubernetes.
|
||||
-->
|
||||
## 如何实现 Kubernetes 的网络模型 {#how-to-implement-the-kubernetes-networking-model}
|
||||
## 如何实现 Kubernetes 的网络模型 {#how-to-implement-the-kubernetes-network-model}
|
||||
|
||||
有很多种方式可以实现这种网络模型,本文档并不是对各种实现技术的详细研究,
|
||||
但是希望可以作为对各种技术的详细介绍,并且成为你研究的起点。
|
||||
网络模型由每个节点上的容器运行时实现。最常见的容器运行时使用
|
||||
[Container Network Interface](https://github.com/containernetworking/cni) (CNI) 插件来管理其网络和安全功能。
|
||||
许多不同的 CNI 插件来自于许多不同的供应商。其中一些仅提供添加和删除网络接口的基本功能,
|
||||
而另一些则提供更复杂的解决方案,例如与其他容器编排系统集成、运行多个 CNI 插件、高级 IPAM 功能等。
|
||||
|
||||
接下来的网络技术是按照首字母排序,顺序本身并无其他意义。
|
||||
|
||||
{{% thirdparty-content %}}
|
||||
|
||||
<!--
|
||||
### ACI
|
||||
|
||||
[Cisco Application Centric Infrastructure](https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html) offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers. [ACI](https://www.github.com/noironetworks/aci-containers) provides container networking integration for ACI. An overview of the integration is provided [here](https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf).
|
||||
-->
|
||||
### ACI
|
||||
[Cisco Application Centric Infrastructure](https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html)
|
||||
提供了一个集成的 Overlay 和 Underlay SDN 解决方案,支持容器、虚拟机和裸金属服务器。
|
||||
[ACI](https://www.github.com/noironetworks/aci-containers) 为 ACI 提供了容器网络集成。
|
||||
点击[这里](https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf)查看概述。
|
||||
|
||||
<!--
|
||||
### Antrea
|
||||
|
||||
Project [Antrea](https://github.com/vmware-tanzu/antrea) is an opensource Kubernetes networking solution intended to be Kubernetes native. It leverages Open vSwitch as the networking data plane. Open vSwitch is a high-performance programmable virtual switch that supports both Linux and Windows. Open vSwitch enables Antrea to implement Kubernetes Network Policies in a high-performance and efficient manner.
|
||||
Thanks to the "programmable" characteristic of Open vSwitch, Antrea is able to implement an extensive set of networking and security features and services on top of Open vSwitch.
|
||||
-->
|
||||
### Antrea
|
||||
|
||||
[Antrea](https://github.com/vmware-tanzu/antrea) 项目是一个开源的联网解决方案,旨在成为
|
||||
Kubernetes 原生的网络解决方案。它利用 Open vSwitch 作为网络数据平面。
|
||||
Open vSwitch 是一个高性能可编程的虚拟交换机,支持 Linux 和 Windows 平台。
|
||||
Open vSwitch 使 Antrea 能够以高性能和高效的方式实现 Kubernetes 的网络策略。
|
||||
借助 Open vSwitch 可编程的特性,Antrea 能够在 Open vSwitch 之上实现广泛的联网、安全功能和服务。
|
||||
|
||||
<!--
|
||||
### AWS VPC CNI for Kubernetes
|
||||
|
||||
The [AWS VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s) offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters. This CNI plugin offers high throughput and availability, low latency, and minimal network jitter. Additionally, users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters. This includes the ability to use VPC flow logs, VPC routing policies, and security groups for network traffic isolation.
|
||||
|
||||
Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network. The CNI allocates AWS Elastic Networking Interfaces (ENIs) to each Kubernetes node and using the secondary IP range from each ENI for pods on the node. The CNI includes controls for pre-allocation of ENIs and IP addresses for fast pod startup times and enables large clusters of up to 2,000 nodes.
|
||||
|
||||
Additionally, the CNI can be run alongside [Calico for network policy enforcement](https://docs.aws.amazon.com/eks/latest/userguide/calico.html). The AWS VPC CNI project is open source with [documentation on GitHub](https://github.com/aws/amazon-vpc-cni-k8s).
|
||||
-->
|
||||
### Kubernetes 的 AWS VPC CNI {#aws-vpc-cni-for-kubernetes}
|
||||
|
||||
[AWS VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s) 为 Kubernetes 集群提供了集成的
|
||||
AWS 虚拟私有云(VPC)网络。该 CNI 插件提供了高吞吐量和可用性,低延迟以及最小的网络抖动。
|
||||
此外,用户可以使用现有的 AWS VPC 网络和安全最佳实践来构建 Kubernetes 集群。
|
||||
这包括使用 VPC 流日志、VPC 路由策略和安全组进行网络流量隔离的功能。
|
||||
|
||||
使用该 CNI 插件,可使 Kubernetes Pod 拥有与在 VPC 网络上相同的 IP 地址。
|
||||
CNI 将 AWS 弹性网络接口(ENI)分配给每个 Kubernetes 节点,并将每个 ENI 的辅助 IP 范围用于该节点上的 Pod。
|
||||
CNI 包含用于 ENI 和 IP 地址的预分配的控件,以便加快 Pod 的启动时间,并且能够支持多达 2000 个节点的大型集群。
|
||||
|
||||
此外,CNI 可以与
|
||||
[用于执行网络策略的 Calico](https://docs.aws.amazon.com/eks/latest/userguide/calico.html) 一起运行。
|
||||
AWS VPC CNI 项目是开源的,请查看 [GitHub 上的文档](https://github.com/aws/amazon-vpc-cni-k8s)。
|
||||
|
||||
<!--
|
||||
### Azure CNI for Kubernetes
|
||||
[Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview) is an [open source](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) plugin that integrates Kubernetes Pods with an Azure Virtual Network (also known as VNet) providing network performance at par with VMs. Pods can connect to peered VNet and to on-premises over Express Route or site-to-site VPN and are also directly reachable from these networks. Pods can access Azure services, such as storage and SQL, that are protected by Service Endpoints or Private Link. You can use VNet security policies and routing to filter Pod traffic. The plugin assigns VNet IPs to Pods by utilizing a pool of secondary IPs pre-configured on the Network Interface of a Kubernetes node.
|
||||
|
||||
Azure CNI is available natively in the [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni).
|
||||
-->
|
||||
### Kubernetes 的 Azure CNI {#azure-cni-for-kubernetes}
|
||||
|
||||
[Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview)
|
||||
是一个[开源插件](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md),
|
||||
将 Kubernetes Pods 和 Azure 虚拟网络(也称为 VNet)集成在一起,可提供与 VM 相当的网络性能。
|
||||
Pod 可以通过 Express Route 或站点到站点的 VPN 连接到对等的 VNet,也可以从这些网络直接访问 Pod。
Pod 可以访问受服务端点(Service Endpoint)或私有链接(Private Link)保护的 Azure 服务,比如存储和 SQL。
你可以使用 VNet 安全策略和路由来过滤 Pod 流量。
该插件利用在 Kubernetes 节点的网络接口上预先配置的辅助 IP 地址池,将 VNet IP 分配给 Pod。
|
||||
|
||||
Azure CNI 可以在
|
||||
[Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni) 中获得。
|
||||
|
||||
<!--
|
||||
### Calico
|
||||
|
||||
[Calico](https://projectcalico.docs.tigera.io/about/about-calico/) is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports multiple data planes including: a pure Linux eBPF dataplane, a standard Linux networking dataplane, and a Windows HNS dataplane. Calico provides a full networking stack but can also be used in conjunction with [cloud provider CNIs](https://projectcalico.docs.tigera.io/networking/determine-best-networking#calico-compatible-cni-plugins-and-cloud-provider-integrations) to provide network policy enforcement.
|
||||
-->
|
||||
### Calico
|
||||
|
||||
[Calico](https://projectcalico.docs.tigera.io/about/about-calico/) 是一个开源的联网及网络安全方案,
|
||||
用于基于容器、虚拟机和本地主机的工作负载。
|
||||
Calico 支持多个数据面,包括:纯 Linux eBPF 的数据面、标准的 Linux 联网数据面
|
||||
以及 Windows HNS 数据面。Calico 在提供完整的联网堆栈的同时,还可与
|
||||
[云驱动 CNIs](https://projectcalico.docs.tigera.io/networking/determine-best-networking#calico-compatible-cni-plugins-and-cloud-provider-integrations)
|
||||
联合使用,以保证网络策略实施。
|
||||
|
||||
<!--
|
||||
### Cilium
|
||||
|
||||
[Cilium](https://github.com/cilium/cilium) is open source software for
|
||||
providing and transparently securing network connectivity between application
|
||||
containers. Cilium is L7/HTTP aware and can enforce network policies on L3-L7
|
||||
using an identity based security model that is decoupled from network
|
||||
addressing, and it can be used in combination with other CNI plugins.
|
||||
-->
|
||||
### Cilium
|
||||
|
||||
[Cilium](https://github.com/cilium/cilium) 是一个开源软件,用于提供并透明保护应用容器间的网络连接。
|
||||
Cilium 能够感知 L7/HTTP,可以基于与网络寻址解耦的身份安全模型在 L3 到 L7 层实施网络策略,
并且可以与其他 CNI 插件结合使用。
|
||||
|
||||
<!--
|
||||
### CNI-Genie from Huawei
|
||||
|
||||
[CNI-Genie](https://github.com/cni-genie/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/cni-genie/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/flannel-io/flannel#flannel), [Calico](https://projectcalico.docs.tigera.io/about/about-calico/), [Weave-net](https://www.weave.works/oss/net/).
|
||||
|
||||
CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/cni-genie/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin.
|
||||
-->
|
||||
### 华为的 CNI-Genie {#cni-genie-from-huawei}
|
||||
|
||||
[CNI-Genie](https://github.com/cni-genie/CNI-Genie) 是一个 CNI 插件,
|
||||
可以让 Kubernetes 在运行时[同时访问](https://github.com/cni-genie/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables)不同的[网络模型](#the-kubernetes-network-model)实现。
|
||||
这包括以
|
||||
[CNI 插件](https://github.com/containernetworking/cni#3rd-party-plugins)运行的任何实现,比如
|
||||
[Flannel](https://github.com/flannel-io/flannel#flannel)、
|
||||
[Calico](https://projectcalico.docs.tigera.io/about/about-calico/)、
|
||||
[Weave-net](https://www.weave.works/oss/net/)。
|
||||
|
||||
CNI-Genie 还支持[将多个 IP 地址分配给 Pod](https://github.com/cni-genie/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod),
|
||||
每个都来自不同的 CNI 插件。
|
||||
|
||||
<!--
|
||||
### cni-ipvlan-vpc-k8s
|
||||
[cni-ipvlan-vpc-k8s](https://github.com/lyft/cni-ipvlan-vpc-k8s) contains a set
|
||||
of CNI and IPAM plugins to provide a simple, host-local, low latency, high
|
||||
throughput, and compliant networking stack for Kubernetes within Amazon Virtual
|
||||
Private Cloud (VPC) environments by making use of Amazon Elastic Network
|
||||
Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel's
|
||||
IPvlan driver in L2 mode.
|
||||
|
||||
The plugins are designed to be straightforward to configure and deploy within a
|
||||
VPC. Kubelets boot and then self-configure and scale their IP usage as needed
|
||||
without requiring the often recommended complexities of administering overlay
|
||||
networks, BGP, disabling source/destination checks, or adjusting VPC route
|
||||
tables to provide per-instance subnets to each host (which is limited to 50-100
|
||||
entries per VPC). In short, cni-ipvlan-vpc-k8s significantly reduces the
|
||||
network complexity required to deploy Kubernetes at scale within AWS.
|
||||
-->
|
||||
### cni-ipvlan-vpc-k8s
|
||||
|
||||
[cni-ipvlan-vpc-k8s](https://github.com/lyft/cni-ipvlan-vpc-k8s)
|
||||
包含了一组 CNI 和 IPAM 插件,它们利用 Amazon 弹性网络接口(ENI),
并以 L2 模式通过 Linux 内核的 IPvlan 驱动程序将 AWS 管理的 IP 地址绑定到 Pod 中,
从而在 Amazon Virtual Private Cloud(VPC)环境中为 Kubernetes
提供简单、本地主机、低延迟、高吞吐量且合规的网络堆栈。

这些插件设计为可以在 VPC 中直接进行配置和部署。
kubelet 启动后会根据需要自行配置并扩展其 IP 用量,
而无需采用管理覆盖网络、BGP、禁用源/目标检查,或调整 VPC 路由表为每个主机提供实例级子网
(每个 VPC 限制为 50 到 100 个条目)这类经常被推荐的复杂做法。
简而言之,cni-ipvlan-vpc-k8s 大大降低了在 AWS 中大规模部署 Kubernetes 所需的网络复杂性。
|
||||
|
||||
<!--
|
||||
### Coil
|
||||
|
||||
[Coil](https://github.com/cybozu-go/coil) is a CNI plugin designed for ease of integration, providing flexible egress networking.
|
||||
Coil operates with a low overhead compared to bare metal, and allows you to define arbitrary egress NAT gateways for external networks.
|
||||
|
||||
-->
|
||||
### Coil
|
||||
|
||||
[Coil](https://github.com/cybozu-go/coil) 是一个为易于集成、提供灵活的出站流量网络而设计的 CNI 插件。
|
||||
与裸机相比,Coil 的额外操作开销低,并允许针对外部网络的出站流量任意定义 NAT 网关。
|
||||
|
||||
<!--
|
||||
### Contiv-VPP
|
||||
|
||||
[Contiv-VPP](https://contivpp.io/) is a user-space, performance-oriented network plugin for
|
||||
Kubernetes, using the [fd.io](https://fd.io/) data plane.
|
||||
-->
|
||||
### Contiv-VPP
|
||||
[Contiv-VPP](https://contivpp.io/) 是用于 Kubernetes 的用户空间、面向性能的网络插件,使用 [fd.io](https://fd.io/) 数据平面。
|
||||
|
||||
### Contrail / Tungsten Fabric
|
||||
|
||||
<!--
|
||||
[Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads.
|
||||
-->
|
||||
[Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/)
|
||||
是基于 [Tungsten Fabric](https://tungsten.io) 的,真正开放的多云网络虚拟化和策略管理平台。
|
||||
Contrail 和 Tungsten Fabric 与各种编排系统集成在一起,例如 Kubernetes、OpenShift、OpenStack 和 Mesos,
|
||||
并为虚拟机、容器或 Pods 以及裸机工作负载提供了不同的隔离模式。
|
||||
|
||||
<!--
|
||||
### DANM
|
||||
|
||||
[DANM](https://github.com/nokia/danm) is a networking solution for telco workloads running in a Kubernetes cluster. It's built up from the following components:
|
||||
|
||||
* A CNI plugin capable of provisioning IPVLAN interfaces with advanced features
|
||||
* An in-built IPAM module with the capability of managing multiple, cluster-wide, discontinuous L3 networks and provide a dynamic, static, or no IP allocation scheme on-demand
|
||||
* A CNI metaplugin capable of attaching multiple network interfaces to a container, either through its own CNI, or through delegating the job to any of the popular CNI solution like SRI-OV, or Flannel in parallel
|
||||
* A Kubernetes controller capable of centrally managing both VxLAN and VLAN interfaces of all Kubernetes hosts
|
||||
* Another Kubernetes controller extending Kubernetes' Service-based service discovery concept to work over all network interfaces of a Pod
|
||||
|
||||
With this toolset DANM is able to provide multiple separated network interfaces, the possibility to use different networking back ends and advanced IPAM features for the pods.
|
||||
-->
|
||||
### DANM
|
||||
|
||||
[DANM](https://github.com/nokia/danm) 是一个针对在 Kubernetes 集群中运行的电信工作负载的网络解决方案。
|
||||
它由以下几个组件构成:
|
||||
|
||||
* 能够配置具有高级功能的 IPVLAN 接口的 CNI 插件
|
||||
* 一个内置的 IPAM 模块,能够管理多个、集群内的、不连续的 L3 网络,并按请求提供动态、静态或无 IP 分配方案
|
||||
* 一个 CNI 元插件,能够通过其自身的 CNI,或者通过将任务委派给任何流行的 CNI 解决方案(例如 SRI-OV 或 Flannel),将多个网络接口挂接到容器
|
||||
* Kubernetes 控制器能够集中管理所有 Kubernetes 主机的 VxLAN 和 VLAN 接口
|
||||
* 另一个 Kubernetes 控制器扩展了 Kubernetes 的基于服务的服务发现概念,以在 Pod 的所有网络接口上工作
|
||||
|
||||
通过这个工具集,DANM 可以提供多个分离的网络接口,可以为 Pod 使用不同的网络后端和高级 IPAM 功能。
|
||||
|
||||
<!--
|
||||
### Flannel
|
||||
|
||||
[Flannel](https://github.com/flannel-io/flannel#flannel) is a very simple overlay
|
||||
network that satisfies the Kubernetes requirements. Many
|
||||
people have reported success with Flannel and Kubernetes.
|
||||
-->
|
||||
### Flannel
|
||||
|
||||
[Flannel](https://github.com/flannel-io/flannel#flannel) 是一个非常简单的能够满足
|
||||
Kubernetes 所需要的覆盖网络。已经有许多人报告了使用 Flannel 和 Kubernetes 的成功案例。
|
||||
|
||||
<!--
|
||||
### Hybridnet
|
||||
|
||||
[Hybridnet](https://github.com/alibaba/hybridnet) is an open source CNI plugin designed for hybrid clouds which provides both overlay and underlay networking for containers in one or more clusters. Overlay and underlay containers can run on the same node and have cluster-wide bidirectional network connectivity.
|
||||
-->
|
||||
### Hybridnet
|
||||
|
||||
[Hybridnet](https://github.com/alibaba/hybridnet) 是一个为混合云设计的开源 CNI 插件,
|
||||
它为一个或多个集群中的容器提供覆盖(overlay)和底层(underlay)网络。这两类容器可以在同一个节点上运行,
|
||||
并具有集群范围的双向网络连接。
|
||||
|
||||
<!--
|
||||
### Jaguar
|
||||
|
||||
[Jaguar](https://gitlab.com/sdnlab/jaguar) is an open source solution for Kubernetes's network based on OpenDaylight. Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.
|
||||
|
||||
### k-vswitch
|
||||
|
||||
[k-vswitch](https://github.com/k-vswitch/k-vswitch) is a simple Kubernetes networking plugin based on [Open vSwitch](https://www.openvswitch.org/). It leverages existing functionality in Open vSwitch to provide a robust networking plugin that is easy-to-operate, performant and secure.
|
||||
-->
|
||||
### Jaguar
|
||||
|
||||
[Jaguar](https://gitlab.com/sdnlab/jaguar) 是一个基于 OpenDaylight 的 Kubernetes 网络开源解决方案。
|
||||
Jaguar 使用 vxlan 提供覆盖网络,而 Jaguar CNIPlugin 为每个 Pod 提供一个 IP 地址。
|
||||
|
||||
### k-vswitch
|
||||
|
||||
[k-vswitch](https://github.com/k-vswitch/k-vswitch) 是一个基于
|
||||
[Open vSwitch](https://www.openvswitch.org/) 的简易 Kubernetes 网络插件。
|
||||
它利用 Open vSwitch 中现有的功能来提供强大的网络插件,该插件易于操作,高效且安全。
|
||||
|
||||
<!--
|
||||
### Knitter
|
||||
|
||||
[Knitter](https://github.com/ZTE/Knitter/) is a network solution which supports multiple networking in Kubernetes. It provides the ability of tenant management and network management. Knitter includes a set of end-to-end NFV container networking solutions besides multiple network planes, such as keeping IP address for applications, IP address migration, etc.
|
||||
|
||||
### Kube-OVN
|
||||
|
||||
[Kube-OVN](https://github.com/alauda/kube-ovn) is an OVN-based kubernetes network fabric for enterprises. With the help of OVN/OVS, it provides some advanced overlay network features like subnet, QoS, static IP allocation, traffic mirroring, gateway, openflow-based network policy and service proxy.
|
||||
-->
|
||||
### Knitter
|
||||
|
||||
[Knitter](https://github.com/ZTE/Knitter/) 是一个支持 Kubernetes 中实现多个网络系统的解决方案。
|
||||
它提供了租户管理和网络管理的功能。除了多个网络平面外,Knitter 还包括一组端到端的 NFV 容器网络解决方案,
|
||||
例如为应用程序保留 IP 地址、IP 地址迁移等。
|
||||
|
||||
### Kube-OVN
|
||||
|
||||
[Kube-OVN](https://github.com/alauda/kube-ovn) 是一个基于 OVN 的用于企业的 Kubernetes 网络架构。
|
||||
借助于 OVN/OVS,它提供了一些高级覆盖网络功能,例如子网、QoS、静态 IP 分配、流量镜像、网关、
|
||||
基于 openflow 的网络策略和服务代理。
|
||||
|
||||
<!--
|
||||
### Kube-router
|
||||
|
||||
[Kube-router](https://github.com/cloudnativelabs/kube-router) is a purpose-built networking solution for Kubernetes that aims to provide high performance and operational simplicity. Kube-router provides a Linux [LVS/IPVS](https://www.linuxvirtualserver.org/software/ipvs.html)-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
|
||||
-->
|
||||
### Kube-router
|
||||
|
||||
[Kube-router](https://github.com/cloudnativelabs/kube-router) 是 Kubernetes 的专用网络解决方案,
|
||||
旨在提供高性能和易操作性。
|
||||
Kube-router 提供了一个基于 Linux [LVS/IPVS](https://www.linuxvirtualserver.org/software/ipvs.html)
|
||||
的服务代理、一个基于 Linux 内核转发的无覆盖 Pod-to-Pod 网络解决方案和基于 iptables/ipset 的网络策略执行器。
|
||||
|
||||
<!--
|
||||
### L2 networks and linux bridging
|
||||
|
||||
If you have a "dumb" L2 network, such as a simple switch in a "bare-metal"
|
||||
environment, you should be able to do something similar to the above GCE setup.
|
||||
Note that these instructions have only been tried very casually - it seems to
|
||||
work, but has not been thoroughly tested. If you use this technique and
|
||||
perfect the process, please let us know.
|
||||
|
||||
Follow the "With Linux Bridge devices" section of
|
||||
[this very nice tutorial](https://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from
|
||||
Lars Kellogg-Stedman.
|
||||
-->
|
||||
### L2 网络和 Linux 网桥 {#l2-networks-and-linux-bridging}

如果你有一个“哑”的 L2 网络,例如“裸机”环境中的简单交换机,则应该能够执行与上述 GCE 设置类似的操作。
请注意,这些说明只经过非常粗略的尝试,看起来可行,但尚未经过全面测试。
如果你使用此技术并完善了整个过程,请告诉我们。

请按照 Lars Kellogg-Stedman 撰写的这篇非常不错的[教程](https://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/)中
“With Linux Bridge devices”一节来操作。
|
||||
|
||||
<!--
|
||||
### Multus (a Multi Network plugin)
|
||||
|
||||
Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
|
||||
|
||||
Multus supports all [reference plugins](https://github.com/containernetworking/plugins) (eg. [Flannel](https://github.com/containernetworking/cni.dev/blob/main/content/plugins/v0.9/meta/flannel.md), [DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp), [Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)) that implement the CNI specification and 3rd party plugins (eg. [Calico](https://github.com/projectcalico/cni-plugin), [Weave](https://github.com/weaveworks/weave), [Cilium](https://github.com/cilium/cilium), [Contiv](https://github.com/contiv/netplugin)). In addition to it, Multus supports [SRIOV](https://github.com/hustcat/sriov-cni), [DPDK](https://github.com/Intel-Corp/sriov-cni), [OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) workloads in Kubernetes with both cloud native and NFV based applications in Kubernetes.
|
||||
-->
|
||||
### Multus (a Multi Network plugin)
|
||||
|
||||
[Multus](https://github.com/Intel-Corp/multus-cni) 是一个多 CNI 插件,
|
||||
使用 Kubernetes 中基于 CRD 的网络对象来支持实现 Kubernetes 多网络系统。
|
||||
|
||||
Multus 支持所有实现了 CNI 规范的[参考插件](https://github.com/containernetworking/plugins)(比如
[Flannel](https://github.com/containernetworking/cni.dev/blob/main/content/plugins/v0.9/meta/flannel.md)、
[DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp)、
[Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)),
以及第三方插件(比如
[Calico](https://github.com/projectcalico/cni-plugin)、
[Weave](https://github.com/weaveworks/weave)、
[Cilium](https://github.com/cilium/cilium)、
[Contiv](https://github.com/contiv/netplugin))。
除此之外,Multus 还支持
[SRIOV](https://github.com/hustcat/sriov-cni)、
[DPDK](https://github.com/Intel-Corp/sriov-cni)、
[OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) 工作负载,
以及 Kubernetes 中的云原生应用和基于 NFV 的应用。
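To make the CRD-based mechanism more concrete, here is a rough, hypothetical sketch of a Multus `NetworkAttachmentDefinition` plus a Pod that requests the extra interface through an annotation; the attachment name, master interface, and subnet are illustrative only, so check the Multus documentation for the authoritative schema.

```yaml
# Illustrative only: a macvlan attachment managed by Multus and a Pod that uses it.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf                 # hypothetical attachment name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.1.0/24"
    }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multus-demo                  # hypothetical Pod name
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf   # ask Multus for an additional interface
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```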
|
||||
|
||||
<!--
|
||||
### OVN4NFV-K8s-Plugin (OVN based CNI controller & plugin)
|
||||
|
||||
[OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking
|
||||
-->
|
||||
### OVN4NFV-K8s-Plugin(基于 OVN 的 CNI 控制器和插件) {#ovn4nfv-k8s-plugin-ovn-based-cni-controller-plugin}
|
||||
|
||||
[OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) 是基于 OVN 的
|
||||
CNI 控制器插件,提供基于云原生的服务功能链 (SFC)、多个 OVN
|
||||
覆盖网络、动态子网创建、虚拟网络的动态创建、VLAN Provider 网络、Direct Provider
|
||||
网络且可与其他多网络插件组合,非常适合多集群网络中基于边缘的云原生工作负载。
|
||||
|
||||
<!--
|
||||
### NSX-T
|
||||
|
||||
[VMware NSX-T](https://docs.vmware.com/en/VMware-NSX-T/index.html) is a network virtualization and security platform. NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks. In addition to vSphere hypervisors, these environments include other hypervisors such as KVM, containers, and bare metal.
|
||||
|
||||
[NSX-T Container Plug-in (NCP)](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) provides integration between NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
|
||||
-->
|
||||
### NSX-T
|
||||
|
||||
[VMware NSX-T](https://docs.vmware.com/en/VMware-NSX-T/index.html) 是一个网络虚拟化和安全平台。
|
||||
NSX-T 可以为多云及多系统管理程序环境提供网络虚拟化,并专注于具有异构端点和技术堆栈的新兴应用程序框架和体系结构。
|
||||
除了 vSphere 管理程序之外,这些环境还包括其他虚拟机管理程序,例如 KVM、容器和裸机。
|
||||
|
||||
[NSX-T Container Plug-in (NCP)](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf)
|
||||
提供了 NSX-T 与容器编排器(例如 Kubernetes)之间的集成,
|
||||
以及 NSX-T 与基于容器的 CaaS/PaaS 平台(例如 Pivotal Container Service(PKS)和 OpenShift)之间的集成。
|
||||
|
||||
<!--
|
||||
### OVN (Open Virtual Networking)
|
||||
|
||||
OVN is an opensource network virtualization solution developed by the
|
||||
Open vSwitch community. It lets one create logical switches, logical routers,
|
||||
stateful ACLs, load-balancers etc to build different virtual networking
|
||||
topologies. The project has a specific Kubernetes plugin and documentation
|
||||
at [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes).
|
||||
-->
|
||||
### OVN(开放式虚拟网络) {#ovn-open-virtual-networking}
|
||||
|
||||
OVN 是一个由 Open vSwitch 社区开发的开源的网络虚拟化解决方案。
|
||||
它允许创建逻辑交换机、逻辑路由器、有状态的 ACL、负载均衡器等等,以构建不同的虚拟网络拓扑。
|
||||
该项目在 [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)
|
||||
提供特定的 Kubernetes 插件和文档。
|
||||
|
||||
<!--
|
||||
### Weave Net from Weaveworks
|
||||
|
||||
[Weave Net](https://www.weave.works/oss/net/) is a
|
||||
resilient and simple to use network for Kubernetes and its hosted applications.
|
||||
Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-plugin/)
|
||||
or stand-alone. In either version, it doesn't require any configuration or extra code
|
||||
to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
|
||||
-->
|
||||
### Weaveworks 的 Weave Net {#weave-net-from-weaveworks}
|
||||
|
||||
[Weave Net](https://www.weave.works/oss/net/) 为 Kubernetes
|
||||
及其托管应用提供弹性且易用的网络。
|
||||
Weave Net 可以作为 [CNI 插件](https://www.weave.works/docs/net/latest/cni-plugin/) 运行或者独立运行。
|
||||
在这两种运行方式里,都不需要任何配置或额外的代码即可运行,并且在两种情况下,
|
||||
网络都为每个 Pod 提供一个 IP 地址 -- 这是 Kubernetes 的标准配置。
|
||||
请参阅[此页面](/zh-cn/docs/concepts/cluster-administration/addons/#networking-and-network-policy)了解
|
||||
Kubernetes 支持的网络插件的非详尽列表。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
|
|
@ -14,7 +14,7 @@ title: Resource Management for Pods and Containers
|
|||
content_type: concept
|
||||
weight: 40
|
||||
feature:
|
||||
title: Automatic binpacking
|
||||
title: Automatic bin packing
|
||||
description: >
|
||||
Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability.
|
||||
Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
|
||||
|
@ -155,7 +155,7 @@ Kubernetes API 服务器读取和修改的对象。
|
|||
For each container, you can specify resource limits and requests,
|
||||
including the following:
|
||||
-->
|
||||
## Pod 和 容器的资源请求和约束
|
||||
## Pod 和 容器的资源请求和约束 {#resource-requests-and-limits-of-pod-and-container}
|
||||
|
||||
针对每个容器,你都可以指定其资源约束和请求,包括如下选项:
|
||||
|
||||
|
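As a rough, hedged sketch of how such per-container requests and limits are typically written (the Pod name, image, and amounts below are placeholders):

```yaml
# Illustrative only: one container with CPU and memory requests and limits.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo                     # hypothetical name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9      # placeholder image
    resources:
      requests:
        cpu: 250m                         # a quarter of a CPU
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
```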
@ -170,7 +170,6 @@ including the following:
|
|||
Although you can only specify requests and limits for individual containers,
|
||||
it is also useful to think about the overall resource requests and limits for
|
||||
a Pod.
|
||||
A
|
||||
For a particular resource, a *Pod resource request/limit* is the sum of the
|
||||
resource requests/limits of that type for each container in the Pod.
|
||||
-->
|
||||
|
@ -184,7 +183,7 @@ resource requests/limits of that type for each container in the Pod.
|
|||
|
||||
Limits and requests for CPU resources are measured in *cpu* units.
|
||||
In Kubernetes, 1 CPU unit is equivalent to **1 physical CPU core**,
|
||||
or **1 virtual core**, depending on whether the node is a physical host
|
||||
or a virtual machine running inside a physical machine.
|
||||
-->
|
||||
## Kubernetes 中的资源单位 {#resource-units-in-kubernetes}
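As a small, hedged illustration of the CPU units described above, fractional and millicpu spellings are interchangeable:

```yaml
# Illustrative fragment: this request asks for half a CPU core;
# cpu: "0.5" and cpu: 500m (500 millicpu) denote the same quantity.
resources:
  requests:
    cpu: 500m      # same as cpu: "0.5"
```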
|
||||
|
@ -316,7 +315,7 @@ a Pod on a node if the capacity check fails. This protects against a resource
|
|||
shortage on a node when resource usage later increases, for example, during a
|
||||
daily peak in request rate.
|
||||
-->
|
||||
## 带资源请求的 Pod 如何调度
|
||||
## 带资源请求的 Pod 如何调度 {#how-pods-with-resource-limits-are-run}
|
||||
|
||||
当你创建一个 Pod 时,Kubernetes 调度程序将为 Pod 选择一个节点。
|
||||
每个节点对每种资源类型都有一个容量上限:可为 Pod 提供的 CPU 和内存量。
|
||||
|
@ -328,7 +327,7 @@ daily peak in request rate.
|
|||
<!--
|
||||
## How Kubernetes applies resource requests and limits {#how-pods-with-resource-limits-are-run}
|
||||
|
||||
When the kubelet starts a container of a Pod, the kubelet passes that container's
|
||||
When the kubelet starts a container as part of a Pod, the kubelet passes that container's
|
||||
requests and limits for memory and CPU to the container runtime.
|
||||
|
||||
On Linux, the container runtime typically configures
|
||||
|
@ -337,7 +336,7 @@ limits you defined.
|
|||
-->
|
||||
## Kubernetes 应用资源请求与约束的方式 {#how-pods-with-resource-limits-are-run}
|
||||
|
||||
当 kubelet 启动 Pod 中的容器时,它会将容器的 CPU 和内存请求与约束信息传递给容器运行时。
|
||||
当 kubelet 将容器作为 Pod 的一部分启动时,它会将容器的 CPU 和内存请求与约束信息传递给容器运行时。
|
||||
|
||||
在 Linux 系统上,容器运行时通常会配置内核
|
||||
{{< glossary_tooltip text="CGroups" term_id="cgroup" >}},负责应用并实施所定义的请求。
|
||||
|
@ -414,7 +413,7 @@ are available in your cluster, then Pod resource usage can be retrieved either
|
|||
from the [Metrics API](/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-api)
|
||||
directly or from your monitoring tools.
|
||||
-->
|
||||
## 监控计算和内存资源用量
|
||||
## 监控计算和内存资源用量 {#monitoring-compute-memory-resource-usage}
|
||||
|
||||
kubelet 会将 Pod 的资源使用情况作为 Pod
|
||||
[`status`](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status)
|
||||
|
@ -433,7 +432,7 @@ locally-attached writeable devices or, sometimes, by RAM.
|
|||
|
||||
Pods use ephemeral local storage for scratch space, caching, and for logs.
|
||||
The kubelet can provide scratch space to Pods using local ephemeral storage to
|
||||
mount [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
|
||||
mount [`emptyDir`](/docs/concepts/storage/volumes/#emptydir)
|
||||
{{< glossary_tooltip term_id="volume" text="volumes" >}} into containers.
|
||||
-->
|
||||
## 本地临时存储 {#local-ephemeral-storage}
|
||||
|
@ -490,7 +489,7 @@ The kubelet also writes
|
|||
[node-level container logs](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
|
||||
and treats these similarly to ephemeral local storage.
|
||||
-->
|
||||
### 本地临时性存储的配置
|
||||
### 本地临时性存储的配置 {##configurations-for-local-ephemeral-storage}
|
||||
|
||||
Kubernetes 有两种方式支持节点上配置本地临时性存储:
|
||||
|
||||
|
@ -606,12 +605,12 @@ container of a Pod can specify either or both of the following:
|
|||
* `spec.containers[].resources.limits.ephemeral-storage`
|
||||
* `spec.containers[].resources.requests.ephemeral-storage`
|
||||
|
||||
Limits and requests for `ephemeral-storage` are measured in quantities.
|
||||
Limits and requests for `ephemeral-storage` are measured in byte quantities.
|
||||
You can express storage as a plain integer or as a fixed-point number using one of these suffixes:
|
||||
E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
|
||||
Mi, Ki. For example, the following represent roughly the same value:
|
||||
E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
|
||||
Mi, Ki. For example, the following quantities all represent roughly the same value:
|
||||
-->
|
||||
### 为本地临时性存储设置请求和约束值
|
||||
### 为本地临时性存储设置请求和约束值 {#setting-requests-and-limits-for-local-ephemeral-storage}
|
||||
|
||||
你可以使用 `ephemeral-storage` 来管理本地临时性存储。
|
||||
Pod 中的每个容器可以设置以下属性:
|
||||
|
@ -620,7 +619,7 @@ Pod 中的每个容器可以设置以下属性:
|
|||
* `spec.containers[].resources.requests.ephemeral-storage`
|
||||
|
||||
`ephemeral-storage` 的请求和约束值是按量纲计量的。你可以使用一般整数或者定点数字
|
||||
加上下面的后缀来表达存储量:E、P、T、G、M、K。
|
||||
加上下面的后缀来表达存储量:E、P、T、G、M、k。
|
||||
你也可以使用对应的 2 的幂级数来表达:Ei、Pi、Ti、Gi、Mi、Ki。
|
||||
例如,下面的表达式所表达的大致是同一个值:
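For instance (illustrative values, assuming the usual Kubernetes quantity semantics), `128974848`, `129e6`, `129M`, and `123Mi` all denote roughly 129 million bytes, since 123 × 1024 × 1024 = 128,974,848.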
|
||||
|
||||
|
@ -641,8 +640,8 @@ or 400 megabytes (`400M`).
|
|||
<!--
|
||||
In the following example, the Pod has two containers. Each container has a request of
|
||||
2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral
|
||||
storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a
|
||||
limit of 8GiB of local ephemeral storage.
|
||||
storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and
|
||||
a limit of 8GiB of local ephemeral storage.
|
||||
-->
|
||||
|
||||
在下面的例子中,Pod 包含两个容器。每个容器请求 2 GiB 大小的本地临时性存储。
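A hedged sketch of the kind of manifest that description corresponds to (the container names and images below are placeholders):

```yaml
# Illustrative only: each container requests 2Gi and limits 4Gi of local
# ephemeral storage, so the Pod as a whole requests 4Gi and limits 8Gi.
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-storage-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4              # placeholder image
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6   # placeholder image
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
```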
|
||||
|
@ -692,7 +691,7 @@ For more information, see
|
|||
The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node.
|
||||
-->
|
||||
|
||||
### 带临时性存储的 Pods 的调度行为
|
||||
### 带临时性存储的 Pods 的调度行为 {#how-pods-with-ephemeral-storage-requests-are-scheduled}
|
||||
|
||||
当你创建一个 Pod 时,Kubernetes 调度器会为 Pod 选择一个节点来运行之。
|
||||
每个节点都有一个本地临时性存储的上限,是其可提供给 Pods 使用的总量。
|
||||
|
@ -870,6 +869,7 @@ If you want to use project quotas, you should:
|
|||
has project quotas enabled. All XFS filesystems support project quotas.
|
||||
For ext4 filesystems, you need to enable the project quota tracking feature
|
||||
while the filesystem is not mounted.
|
||||
|
||||
```bash
|
||||
# For ext4, with /dev/block-device not mounted
|
||||
sudo tune2fs -O project -Q prjquota /dev/block-device
|
||||
|
@ -962,11 +962,11 @@ asynchronously by the kubelet.
|
|||
kubelet 会异步地对 `status.allocatable` 字段执行自动更新操作,使之包含新资源。
|
||||
|
||||
<!--
|
||||
Because the scheduler uses the node `status.allocatable` value when
|
||||
evaluating Pod fitness, the shceduler only takes account of the new value after
|
||||
the asynchronous update. There may be a short delay between patching the
|
||||
Because the scheduler uses the node's `status.allocatable` value when
|
||||
evaluating Pod fitness, the scheduler only takes account of the new value after
|
||||
that asynchronous update. There may be a short delay between patching the
|
||||
node capacity with a new resource and the time when the first Pod that requests
|
||||
the resource to be scheduled on that node.
|
||||
the resource can be scheduled on that node.
|
||||
-->
|
||||
由于调度器在评估 Pod 是否适合在某节点上执行时会使用节点的 `status.allocatable` 值,
|
||||
调度器只会考虑异步更新之后的新值。
|
||||
|
@ -997,6 +997,7 @@ http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
|
|||
In the preceding request, `~1` is the encoding for the character `/`
|
||||
in the patch path. The operation path value in JSON-Patch is interpreted as a
|
||||
JSON-Pointer. For more details, see
|
||||
[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
|
||||
{{< /note >}}
|
||||
-->
|
||||
{{< note >}}
|
||||
|
@ -1013,14 +1014,14 @@ Cluster-level extended resources are not tied to nodes. They are usually managed
|
|||
by scheduler extenders, which handle the resource consumption and resource quota.
|
||||
|
||||
You can specify the extended resources that are handled by scheduler extenders
|
||||
in [scheduler policy configuration](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
|
||||
in [scheduler configuration](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
|
||||
-->
|
||||
#### 集群层面的扩展资源 {#cluster-level-extended-resources}
|
||||
|
||||
集群层面的扩展资源并不绑定到具体节点。
|
||||
它们通常由调度器扩展程序(Scheduler Extenders)管理,这些程序处理资源消耗和资源配额。
|
||||
|
||||
你可以在[调度器策略配置](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)
|
||||
你可以在[调度器配置](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)
|
||||
中指定由调度器扩展程序处理的扩展资源。
|
||||
|
||||
<!--
|
||||
|
@ -1158,12 +1159,12 @@ to limit the number of PIDs that a given Pod can consume. See
|
|||
If the scheduler cannot find any node where a Pod can fit, the Pod remains
|
||||
unscheduled until a place can be found. An
|
||||
[Event](/docs/reference/kubernetes-api/cluster-resources/event-v1/) is produced
|
||||
each time the scheduler fails to find a place for the Pod, You can use `kubectl`
|
||||
each time the scheduler fails to find a place for the Pod. You can use `kubectl`
|
||||
to view the events for a Pod; for example:
|
||||
-->
|
||||
## 疑难解答
|
||||
## 疑难解答 {#troubleshooting}
|
||||
|
||||
### 我的 Pod 处于悬决状态且事件信息显示 `FailedScheduling`
|
||||
### 我的 Pod 处于悬决状态且事件信息显示 `FailedScheduling` {#my-pods-are-pending-with-event-message-failedscheduling}
|
||||
|
||||
如果调度器找不到该 Pod 可以匹配的任何节点,则该 Pod 将保持未被调度状态,
|
||||
直到找到一个可以被调度到的位置。每当调度器找不到 Pod 可以调度的地方时,
|
||||
|
@ -1240,22 +1241,22 @@ Allocated resources:
|
|||
(Total limits may be over 100 percent, i.e., overcommitted.)
|
||||
CPU Requests CPU Limits Memory Requests Memory Limits
|
||||
------------ ---------- --------------- -------------
|
||||
680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%)
|
||||
680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)
|
||||
```
|
||||
|
||||
<!--
|
||||
In the preceding output, you can see that if a Pod requests more than 1120m
|
||||
CPUs or 6.23Gi of memory, it will not fit on the node.
|
||||
In the preceding output, you can see that if a Pod requests more than 1.120 CPUs
|
||||
or more than 6.23Gi of memory, that Pod will not fit on the node.
|
||||
|
||||
By looking at the "Pods" section, you can see which Pods are taking up space on
|
||||
the node.
|
||||
-->
|
||||
在上面的输出中,你可以看到如果 Pod 请求超过 1120m CPU 或者 6.23Gi 内存,节点将无法满足。
|
||||
在上面的输出中,你可以看到如果 Pod 请求超过 1.120 CPU 或者 6.23Gi 内存,节点将无法满足。
|
||||
|
||||
通过查看 "Pods" 部分,你将看到哪些 Pod 占用了节点上的资源。
|
||||
|
||||
<!--
|
||||
The amount of resources available to Pods is less than the node capacity, because
|
||||
The amount of resources available to Pods is less than the node capacity because
|
||||
system daemons use a portion of the available resources. Within the Kubernetes API,
|
||||
each Node has a `.status.allocatable` field
|
||||
(see [NodeStatus](/docs/reference/kubernetes-api/cluster-resources/node-v1/#NodeStatus)
|
||||
|
@ -1286,7 +1287,7 @@ prevent one team from using so much of any resource that this over-use affects o
|
|||
|
||||
You should also consider what access you grant to that namespace:
|
||||
**full** write access to a namespace allows someone with that access to remove any
|
||||
resource, include a configured ResourceQuota.
|
||||
resource, including a configured ResourceQuota.
|
||||
-->
|
||||
你可以配置[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/)功能特性以限制每个名字空间可以使用的资源总量。
|
||||
当某名字空间中存在 ResourceQuota 时,Kubernetes 会对该名字空间中的对象强制实施配额。
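As a hedged illustration of what such a per-namespace quota can look like (the namespace and the amounts are placeholders):

```yaml
# Illustrative only: caps the total requests and limits of all Pods in one namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota                     # hypothetical name
  namespace: team-a                       # hypothetical namespace
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
```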
|
||||
|
@ -1305,7 +1306,7 @@ whether a Container is being killed because it is hitting a resource limit, call
|
|||
`kubectl describe pod` on the Pod of interest:
|
||||
-->
|
||||
|
||||
### 我的容器被终止了
|
||||
### 我的容器被终止了 {#my-container-is-terminated}
|
||||
|
||||
你的容器可能因为资源紧张而被终止。要查看容器是否因为遇到资源限制而被杀死,
|
||||
请针对相关的 Pod 执行 `kubectl describe pod`:
|
||||
|
@ -1331,18 +1332,19 @@ Message:
|
|||
IP: 10.244.2.75
|
||||
Containers:
|
||||
simmemleak:
|
||||
Image: saadali/simmemleak
|
||||
Image: saadali/simmemleak:latest
|
||||
Limits:
|
||||
cpu: 100m
|
||||
memory: 50Mi
|
||||
State: Running
|
||||
Started: Tue, 07 Jul 2015 12:54:41 -0700
|
||||
Last Termination State: Terminated
|
||||
Exit Code: 1
|
||||
Started: Fri, 07 Jul 2015 12:54:30 -0700
|
||||
Finished: Fri, 07 Jul 2015 12:54:33 -0700
|
||||
Ready: False
|
||||
Restart Count: 5
|
||||
cpu: 100m
|
||||
memory: 50Mi
|
||||
State: Running
|
||||
Started: Tue, 07 Jul 2019 12:54:41 -0700
|
||||
Last State: Terminated
|
||||
Reason: OOMKilled
|
||||
Exit Code: 137
|
||||
Started: Fri, 07 Jul 2019 12:54:30 -0700
|
||||
Finished: Fri, 07 Jul 2019 12:54:33 -0700
|
||||
Ready: False
|
||||
Restart Count: 5
|
||||
Conditions:
|
||||
Type Status
|
||||
Ready False
|
||||
|
@ -1381,13 +1383,13 @@ memory limit (and possibly request) for that container.
|
|||
* Get hands-on experience [assigning CPU resources to containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
|
||||
* Read how the API reference defines a [container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
|
||||
and its [resource requirements](/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
|
||||
* Read about [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
|
||||
* Read about [project quotas](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F) in XFS
|
||||
* Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
|
||||
-->
|
||||
* 获取[分配内存资源给容器和 Pod ](/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验
|
||||
* 获取[分配 CPU 资源给容器和 Pod ](/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验
|
||||
* 阅读 API 参考中 [Container](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
|
||||
和其[资源请求](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)定义。
|
||||
* 阅读 XFS 中[配额](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html)的文档
|
||||
* 阅读 XFS 中[配额](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F)的文档
|
||||
* 进一步阅读 [kube-scheduler 配置参考 (v1beta3)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)
|
||||
|
||||
|
|
|
@ -29,25 +29,25 @@ no_list: true
|
|||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
Kubernetes is highly configurable and extensible. As a result,
|
||||
there is rarely a need to fork or submit patches to the Kubernetes
|
||||
project code.
|
||||
Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or
|
||||
submit patches to the Kubernetes project code.
|
||||
|
||||
This guide describes the options for customizing a Kubernetes
|
||||
cluster. It is aimed at {{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}} who want to
|
||||
understand how to adapt their Kubernetes cluster to the needs of
|
||||
their work environment. Developers who are prospective {{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}} or Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}} will also find it
|
||||
useful as an introduction to what extension points and patterns
|
||||
exist, and their trade-offs and limitations.
|
||||
This guide describes the options for customizing a Kubernetes cluster. It is aimed at
|
||||
{{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}} who want to understand
|
||||
how to adapt their Kubernetes cluster to the needs of their work environment. Developers who are
|
||||
prospective {{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}} or
|
||||
Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}} will also
|
||||
find it useful as an introduction to what extension points and patterns exist, and their
|
||||
trade-offs and limitations.
|
||||
-->
|
||||
Kubernetes 是高度可配置且可扩展的。因此,大多数情况下,你不需要
|
||||
派生自己的 Kubernetes 副本或者向项目代码提交补丁。
|
||||
Kubernetes 是高度可配置且可扩展的。因此,大多数情况下,
|
||||
你不需要派生自己的 Kubernetes 副本或者向项目代码提交补丁。
|
||||
|
||||
本指南描述定制 Kubernetes 的可选方式。主要针对的读者是希望了解如何针对自身工作环境
|
||||
需要来调整 Kubernetes 的{{< glossary_tooltip text="集群管理者" term_id="cluster-operator" >}}。
|
||||
对于那些充当{{< glossary_tooltip text="平台开发人员" term_id="platform-developer" >}}
|
||||
的开发人员或 Kubernetes 项目的{{< glossary_tooltip text="贡献者" term_id="contributor" >}}
|
||||
而言,他们也会在本指南中找到有用的介绍信息,了解系统中存在哪些扩展点和扩展模式,
|
||||
本指南描述定制 Kubernetes 的可选方式。主要针对的读者是希望了解如何针对自身工作环境需要来调整
|
||||
Kubernetes 的{{< glossary_tooltip text="集群管理者" term_id="cluster-operator" >}}。
|
||||
对于那些充当{{< glossary_tooltip text="平台开发人员" term_id="platform-developer" >}}的开发人员或
|
||||
Kubernetes 项目的{{< glossary_tooltip text="贡献者" term_id="contributor" >}}而言,
|
||||
他们也会在本指南中找到有用的介绍信息,了解系统中存在哪些扩展点和扩展模式,
|
||||
以及它们所附带的各种权衡和约束等等。
|
||||
|
||||
<!-- body -->
|
||||
|
@ -55,11 +55,13 @@ Kubernetes 是高度可配置且可扩展的。因此,大多数情况下,你
|
|||
<!--
|
||||
## Overview
|
||||
|
||||
Customization approaches can be broadly divided into *configuration*, which only involves changing flags, local configuration files, or API resources; and *extensions*, which involve running additional programs or services. This document is primarily about extensions.
|
||||
Customization approaches can be broadly divided into *configuration*, which only involves changing
|
||||
flags, local configuration files, or API resources; and *extensions*, which involve running
|
||||
additional programs or services. This document is primarily about extensions.
|
||||
-->
|
||||
## 概述 {#overview}
|
||||
|
||||
定制化的方法主要可分为 *配置(Configuration)* 和 *扩展(Extensions)* 两种。
|
||||
定制化的方法主要可分为 **配置(Configuration)** 和 **扩展(Extensions)** 两种。
|
||||
前者主要涉及改变参数标志、本地配置文件或者 API 资源;
|
||||
后者则需要额外运行一些程序或服务。
|
||||
本文主要关注扩展。
|
||||
|
@ -67,7 +69,8 @@ Customization approaches can be broadly divided into *configuration*, which only
|
|||
<!--
|
||||
## Configuration
|
||||
|
||||
*Configuration files* and *flags* are documented in the Reference section of the online documentation, under each binary:
|
||||
*Configuration files* and *flags* are documented in the Reference section of the online
|
||||
documentation, under each binary:
|
||||
|
||||
* [kubelet](/docs/reference/command-line-tools-reference/kubelet/)
|
||||
* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/)
|
||||
|
@ -77,27 +80,39 @@ Customization approaches can be broadly divided into *configuration*, which only
|
|||
-->
|
||||
## 配置 {#configuration}
|
||||
|
||||
配置文件和参数标志的说明位于在线文档的参考章节,按可执行文件组织:
|
||||
**配置文件**和**参数标志**的说明位于在线文档的参考章节,按可执行文件组织:
|
||||
|
||||
* [kubelet](/zh-cn/docs/reference/command-line-tools-reference/kubelet/)
|
||||
* [kube-proxy](/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/)
|
||||
* [kube-apiserver](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)
|
||||
* [kube-controller-manager](/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/)
|
||||
* [kube-scheduler](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/).
|
||||
* [kube-scheduler](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/)
|
||||
|
||||
<!--
|
||||
Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options.
|
||||
Flags and configuration files may not always be changeable in a hosted Kubernetes service or a
|
||||
distribution with managed installation. When they are changeable, they are usually only changeable
|
||||
by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and
|
||||
setting them may require restarting processes. For those reasons, they should be used only when
|
||||
there are no other options.
|
||||
-->
|
||||
在托管的 Kubernetes 服务中或者受控安装的发行版本中,参数标志和配置文件不总是可以
|
||||
修改的。即使它们是可修改的,通常其修改权限也仅限于集群管理员。
|
||||
此外,这些内容在将来的 Kubernetes 版本中很可能发生变化,设置新参数或配置文件可能
|
||||
需要重启进程。
|
||||
有鉴于此,通常应该在没有其他替代方案时才应考虑更改参数标志和配置文件。
|
||||
在托管的 Kubernetes 服务中或者受控安装的发行版本中,参数标志和配置文件不总是可以修改的。
|
||||
即使它们是可修改的,通常其修改权限也仅限于集群管理员。
|
||||
此外,这些内容在将来的 Kubernetes 版本中很可能发生变化,设置新参数或配置文件可能需要重启进程。
|
||||
有鉴于此,应该在没有其他替代方案时才会使用参数标志和配置文件。
|
||||
|
||||
<!--
|
||||
*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/security/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
|
||||
*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/),
|
||||
[PodSecurityPolicies](/docs/concepts/security/pod-security-policy/),
|
||||
[NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control
|
||||
([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs.
|
||||
APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations.
|
||||
They are declarative and use the same conventions as other Kubernetes resources like pods,
|
||||
so new cluster configuration can be repeatable and be managed the same way as applications.
|
||||
And, where they are stable, they enjoy a
|
||||
[defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs.
|
||||
For these reasons, they are preferred over *configuration files* and *flags* where suitable.
|
||||
-->
|
||||
*内置的策略 API*,例如[ResourceQuota](/zh-cn/docs/concepts/policy/resource-quotas/)、
|
||||
**内置的策略 API**,例如[ResourceQuota](/zh-cn/docs/concepts/policy/resource-quotas/)、
|
||||
[PodSecurityPolicies](/zh-cn/docs/concepts/security/pod-security-policy/)、
|
||||
[NetworkPolicy](/zh-cn/docs/concepts/services-networking/network-policies/)
|
||||
和基于角色的访问控制([RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/))
|
||||
|
@ -142,7 +157,7 @@ clusters and managed installations.
|
|||
|
||||
Kubernetes 从设计上即支持通过编写客户端程序来将其操作自动化。
|
||||
任何能够对 Kubernetes API 发出读写指令的程序都可以提供有用的自动化能力。
|
||||
*自动化组件*可以运行在集群上,也可以运行在集群之外。
|
||||
**自动化组件**可以运行在集群上,也可以运行在集群之外。
|
||||
通过遵从本文中的指南,你可以编写高度可用的、运行稳定的自动化组件。
|
||||
自动化组件通常可以用于所有 Kubernetes 集群,包括托管的集群和受控的安装环境。
|
||||
|
||||
|
@ -151,17 +166,16 @@ There is a specific pattern for writing client programs that work well with
|
|||
Kubernetes called the *Controller* pattern. Controllers typically read an
|
||||
object's `.spec`, possibly do things, and then update the object's `.status`.
|
||||
|
||||
A controller is a client of Kubernetes. When Kubernetes is the client and
|
||||
calls out to a remote service, it is called a *Webhook*. The remote service
|
||||
is called a *Webhook Backend*. Like Controllers, Webhooks do add a point of
|
||||
failure.
|
||||
A controller is a client of Kubernetes. When Kubernetes is the client and calls out to a remote
|
||||
service, it is called a *Webhook*. The remote service is called a *Webhook Backend*. Like
|
||||
Controllers, Webhooks do add a point of failure.
|
||||
-->
|
||||
编写客户端程序有一种特殊的 Controller(控制器)模式,能够与 Kubernetes
|
||||
编写客户端程序有一种特殊的 **Controller(控制器)** 模式,能够与 Kubernetes
|
||||
很好地协同工作。控制器通常会读取某个对象的 `.spec`,或许还会执行一些操作,
|
||||
之后更新对象的 `.status`。
|
||||
|
||||
Controller 是 Kubernetes 的客户端。当 Kubernetes 充当客户端,
|
||||
调用某远程服务时,对应的远程组件称作 *Webhook*,远程服务称作 Webhook 后端。
|
||||
调用某远程服务时,对应的远程组件称作 **Webhook**,远程服务称作 Webhook 后端。
|
||||
与控制器模式相似,Webhook 也会在整个架构中引入新的失效点(Point of Failure)。
|
||||
|
||||
<!--
|
||||
|
@ -206,15 +220,19 @@ This diagram shows the extension points in a Kubernetes system.
|
|||

|
||||
|
||||
<!--
|
||||
1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
|
||||
2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](#api-access-extensions) section.
|
||||
3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](#user-defined-types) section. Custom Resources are often used with API Access Extensions.
|
||||
4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](#scheduler-extensions) section.
|
||||
5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
|
||||
6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](#network-plugins) allow for different implementations of pod networking.
|
||||
7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](#storage-plugins).
|
||||
1. Users often interact with the Kubernetes API using `kubectl`.
|
||||
[Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary.
|
||||
They only affect the individual user's local environment, and so cannot enforce site-wide policies.
|
||||
|
||||
If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions.
|
||||
1. The API server handles all requests. Several types of extension points in the API server allow
|
||||
authenticating requests, or blocking them based on their content, editing content, and handling
|
||||
deletion. These are described in the [API Access Extensions](#api-access-extensions) section.
|
||||
|
||||
1. The API server serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are
|
||||
defined by the Kubernetes project and can't be changed. You can also add resources that you
|
||||
define, or that other projects have defined, called *Custom Resources*, as explained in the
|
||||
[Custom Resources](#user-defined-types) section. Custom Resources are often used with API access
|
||||
extensions.
|
||||
-->
|
||||
1. 用户通常使用 `kubectl` 与 Kubernetes API 交互。
|
||||
[kubectl 插件](/zh-cn/docs/tasks/extend-kubectl/kubectl-plugins/)能够扩展 kubectl 程序的行为。
|
||||
|
@ -224,22 +242,37 @@ If you are unsure where to start, this flowchart can help. Note that some soluti
|
|||
基于其内容阻止请求、编辑请求内容、处理删除操作等等。
|
||||
这些扩展点在 [API 访问扩展](#api-access-extensions)节详述。
|
||||
|
||||
3. API 服务器向外提供不同类型的资源(resources)。
|
||||
内置的资源类型,如 `pods`,是由 Kubernetes 项目所定义的,无法改变。
|
||||
3. API 服务器向外提供不同类型的**资源(resources)**。
|
||||
诸如 `pods` 的**内置资源类型**是由 Kubernetes 项目所定义的,无法改变。
|
||||
你也可以添加自己定义的或者其他项目所定义的称作自定义资源(Custom Resources)
|
||||
的资源,正如[自定义资源](#user-defined-types)节所描述的那样。
|
||||
自定义资源通常与 API 访问扩展点结合使用。
|
||||
|
||||
<!--
|
||||
1. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend
|
||||
scheduling. These are described in the [Scheduler Extensions](#scheduler-extensions) section.
|
||||
|
||||
1. Much of the behavior of Kubernetes is implemented by programs called Controllers which are
|
||||
clients of the API server. Controllers are often used in conjunction with Custom Resources.
|
||||
|
||||
1. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on
|
||||
the cluster network. [Network Plugins](#network-plugins) allow for different implementations of
|
||||
pod networking.
|
||||
|
||||
1. The kubelet also mounts and unmounts volumes for containers. New types of storage can be
|
||||
supported via [Storage Plugins](#storage-plugins).
|
||||
|
||||
If you are unsure where to start, this flowchart can help. Note that some solutions may involve
|
||||
several types of extensions.
|
||||
-->
|
||||
4. Kubernetes 调度器负责决定 Pod 要放置到哪些节点上执行。
|
||||
有几种方式来扩展调度行为。这些方法将在
|
||||
[调度器扩展](#scheduler-extensions)节中展开。
|
||||
有几种方式来扩展调度行为。这些方法将在[调度器扩展](#scheduler-extensions)节中展开。
|
||||
|
||||
5. Kubernetes 中的很多行为都是通过称为控制器(Controllers)的程序来实现的,
|
||||
这些程序也都是 API 服务器的客户端。控制器常常与自定义资源结合使用。
|
||||
|
||||
6. 组件 kubelet 运行在各个节点上,帮助 Pod 展现为虚拟的服务器并在集群网络中拥有自己的 IP。
|
||||
[网络插件](#network-plugins)使得 Kubernetes 能够采用不同实现技术来连接
|
||||
Pod 网络。
|
||||
[网络插件](#network-plugins)使得 Kubernetes 能够采用不同实现技术来连接 Pod 网络。
|
||||
|
||||
7. 组件 kubelet 也会为容器增加或解除存储卷的挂载。
|
||||
通过[存储插件](#storage-plugins),可以支持新的存储类型。
|
||||
|
@ -258,11 +291,14 @@ If you are unsure where to start, this flowchart can help. Note that some soluti
|
|||
|
||||
### User-Defined Types
|
||||
|
||||
Consider adding a Custom Resource to Kubernetes if you want to define new controllers, application configuration objects or other declarative APIs, and to manage them using Kubernetes tools, such as `kubectl`.
|
||||
Consider adding a Custom Resource to Kubernetes if you want to define new controllers, application
|
||||
configuration objects or other declarative APIs, and to manage them using Kubernetes tools, such
|
||||
as `kubectl`.
|
||||
|
||||
Do not use a Custom Resource as data storage for application, user, or monitoring data.
|
||||
|
||||
For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
|
||||
For more about Custom Resources, see the
|
||||
[Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
|
||||
-->
|
||||
## API 扩展 {#api-extensions}
|
||||
|
||||
|
@ -278,7 +314,10 @@ For more about Custom Resources, see the [Custom Resources concept guide](/docs/
|
|||
<!--
|
||||
### Combining New APIs with Automation
|
||||
|
||||
The combination of a custom resource API and a control loop is called the [Operator pattern](/docs/concepts/extend-kubernetes/operator/). The Operator pattern is used to manage specific, usually stateful, applications. These custom APIs and control loops can also be used to control other resources, such as storage or policies.
|
||||
The combination of a custom resource API and a control loop is called the
|
||||
[Operator pattern](/docs/concepts/extend-kubernetes/operator/). The Operator pattern is used to manage
|
||||
specific, usually stateful, applications. These custom APIs and control loops can also be used to
|
||||
control other resources, such as storage or policies.
|
||||
-->
|
||||
### 结合使用新 API 与自动化组件 {#combinding-new-apis-with-automation}
|
||||
|
||||
|
@ -290,88 +329,104 @@ Operator 模式用来管理特定的、通常是有状态的应用。
|
|||
<!--
|
||||
### Changing Built-in Resources
|
||||
|
||||
When you extend the Kubernetes API by adding custom resources, the added resources always fall into a new API Groups. You cannot replace or change existing API groups.
|
||||
Adding an API does not directly let you affect the behavior of existing APIs (e.g. Pods), but API Access Extensions do.
|
||||
When you extend the Kubernetes API by adding custom resources, the added resources always fall
|
||||
into new API Groups. You cannot replace or change existing API groups.
|
||||
Adding an API does not directly let you affect the behavior of existing APIs (e.g. Pods), but API
|
||||
Access Extensions do.
|
||||
-->
|
||||
### 更改内置资源 {#changing-built-in-resources}
|
||||
|
||||
当你通过添加自定义资源来扩展 Kubernetes 时,所添加的资源通常会被放在一个新的
|
||||
API 组中。你不可以替换或更改现有的 API 组。
|
||||
添加新的 API 不会直接让你影响现有 API (如 Pods)的行为,不过 API
|
||||
添加新的 API 不会直接让你影响现有 API(如 Pod)的行为,不过 API
|
||||
访问扩展能够实现这点。
|
||||
|
||||
<!--
|
||||
### API Access Extensions
|
||||
|
||||
When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, then subject to various types of Admission Control. See [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/) for more on this flow.
|
||||
When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, then
|
||||
subject to various types of Admission Control. See
|
||||
[Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/)
|
||||
for more on this flow.
|
||||
|
||||
Each of these steps offers extension points.
|
||||
|
||||
Kubernetes has several built-in authentication methods that it supports. It can also sit behind an authenticating proxy, and it can send a token from an Authorization header to a remote service for verification (a webhook). All of these methods are covered in the [Authentication documentation](/docs/reference/access-authn-authz/authentication/).
|
||||
Kubernetes has several built-in authentication methods that it supports. It can also sit behind an
|
||||
authenticating proxy, and it can send a token from an Authorization header to a remote service for
|
||||
verification (a webhook). All of these methods are covered in the
|
||||
[Authentication documentation](/docs/reference/access-authn-authz/authentication/).
|
||||
-->
|
||||
### API 访问扩展 {#api-access-extensions}
|
||||
|
||||
当请求到达 Kubernetes API 服务器时,首先要经过身份认证,之后是鉴权操作,
|
||||
再之后要经过若干类型的准入控制器的检查。
|
||||
参见[控制 Kubernetes API 访问](/zh-cn/docs/concepts/security/controlling-access/)
|
||||
以了解此流程的细节。
|
||||
参见[控制 Kubernetes API 访问](/zh-cn/docs/concepts/security/controlling-access/)以了解此流程的细节。
|
||||
|
||||
这些步骤中都存在扩展点。
|
||||
|
||||
Kubernetes 提供若干内置的身份认证方法。它也可以运行在某种身份认证代理的后面,
|
||||
并且可以将来自鉴权头部的令牌发送到某个远程服务(Webhook)来执行验证操作。
|
||||
所有这些方法都在[身份认证文档](/zh-cn/docs/reference/access-authn-authz/authentication/)
|
||||
中有详细论述。
|
||||
所有这些方法都在[身份认证文档](/zh-cn/docs/reference/access-authn-authz/authentication/)中有详细论述。
|
||||
|
||||
<!--
|
||||
### Authentication
|
||||
|
||||
[Authentication](/docs/reference/access-authn-authz/authentication/) maps headers or certificates in all requests to a username for the client making the request.
|
||||
[Authentication](/docs/reference/access-authn-authz/authentication/) maps headers or certificates
|
||||
in all requests to a username for the client making the request.
|
||||
|
||||
Kubernetes provides several built-in authentication methods, and an [Authentication webhook](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) method if those don't meet your needs.
|
||||
Kubernetes provides several built-in authentication methods, and an
|
||||
[Authentication webhook](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
|
||||
method if those don't meet your needs.
|
||||
-->
|
||||
### 身份认证 {#authentication}
|
||||
|
||||
[身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/)负责将所有请求中
|
||||
的头部或证书映射到发出该请求的客户端的用户名。
|
||||
[身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/)负责将所有请求中的头部或证书映射到发出该请求的客户端的用户名。
|
||||
|
||||
Kubernetes 提供若干种内置的认证方法,以及
|
||||
[认证 Webhook](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
|
||||
Kubernetes 提供若干种内置的认证方法,
|
||||
以及[认证 Webhook](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
|
||||
方法以备内置方法无法满足你的要求。
|
||||
|
||||
<!--
|
||||
### Authorization
|
||||
|
||||
[Authorization](/docs/reference/access-authn-authz/authorization/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources - it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
|
||||
[Authorization](/docs/reference/access-authn-authz/authorization/) determines whether specific
|
||||
users can read, write, and do other operations on API resources. It works at the level of whole
|
||||
resources -- it doesn't discriminate based on arbitrary object fields. If the built-in
|
||||
authorization options don't meet your needs, [Authorization webhook](/docs/reference/access-authn-authz/webhook/)
|
||||
allows calling out to user-provided code to make an authorization decision.
|
||||
-->
|
||||
### 鉴权 {#authorization}
|
||||
|
||||
[鉴权](/zh-cn/docs/reference/access-authn-authz/authorization/)
|
||||
操作负责确定特定的用户是否可以读、写 API 资源或对其执行其他操作。
|
||||
此操作仅在整个资源集合的层面进行。
|
||||
[鉴权](/zh-cn/docs/reference/access-authn-authz/authorization/)操作负责确定特定的用户是否可以读、写 API
|
||||
资源或对其执行其他操作。此操作仅在整个资源集合的层面进行。
|
||||
换言之,它不会基于对象的特定字段作出不同的判决。
|
||||
如果内置的鉴权选项无法满足你的需要,你可以使用
|
||||
[鉴权 Webhook](/zh-cn/docs/reference/access-authn-authz/webhook/)来调用用户提供
|
||||
的代码,执行定制的鉴权操作。
|
||||
如果内置的鉴权选项无法满足你的需要,
|
||||
你可以使用[鉴权 Webhook](/zh-cn/docs/reference/access-authn-authz/webhook/) 来调用用户提供的代码,
|
||||
执行定制的鉴权操作。
|
||||
|
||||
<!--
|
||||
### Dynamic Admission Control
|
||||
|
||||
After a request is authorized, if it is a write operation, it also goes through [Admission Control](/docs/reference/access-authn-authz/admission-controllers/) steps. In addition to the built-in steps, there are several extensions:
|
||||
After a request is authorized, if it is a write operation, it also goes through
|
||||
[Admission Control](/docs/reference/access-authn-authz/admission-controllers/) steps.
|
||||
In addition to the built-in steps, there are several extensions:
|
||||
|
||||
* The [Image Policy webhook](/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook) restricts what images can be run in containers.
|
||||
* To make arbitrary admission control decisions, a general [Admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) can be used. Admission Webhooks can reject creations or updates.
|
||||
* The [Image Policy webhook](/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook)
|
||||
restricts what images can be run in containers.
|
||||
* To make arbitrary admission control decisions, a general
|
||||
[Admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
|
||||
can be used. Admission Webhooks can reject creations or updates.
|
||||
-->
|
||||
### 动态准入控制 {#dynamic-admission-control}
|
||||
|
||||
请求的鉴权操作结束之后,如果请求的是写操作,还会经过
|
||||
[准入控制](/zh-cn/docs/reference/access-authn-authz/admission-controllers/)处理步骤。
|
||||
请求的鉴权操作结束之后,如果请求的是写操作,
|
||||
还会经过[准入控制](/zh-cn/docs/reference/access-authn-authz/admission-controllers/)处理步骤。
|
||||
除了内置的处理步骤,还存在一些扩展点:
|
||||
|
||||
* [镜像策略 Webhook](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook)
|
||||
能够限制容器中可以运行哪些镜像。
|
||||
* 为了执行任意的准入控制,可以使用一种通用的
|
||||
[准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
|
||||
* 为了执行任意的准入控制,
|
||||
可以使用一种通用的[准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
|
||||
机制。这类 Webhook 可以拒绝对象创建或更新请求。
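下面是一个简单的示意,展示如何查看集群中已注册的动态准入 Webhook 配置(假设你拥有读取这些集群级资源的权限):

```shell
# 列出已注册的验证类与变更类准入 Webhook 配置
kubectl get validatingwebhookconfigurations
kubectl get mutatingwebhookconfigurations

# 查看某个配置的匹配规则、failurePolicy 等细节(<配置名称> 为占位符)
kubectl describe validatingwebhookconfiguration <配置名称>
```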
|
||||
|
||||
<!--
|
||||
|
@ -379,21 +434,23 @@ After a request is authorized, if it is a write operation, it also goes through
|
|||
|
||||
### Storage Plugins
|
||||
|
||||
[Flex Volumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md
|
||||
) allow users to mount volume types without built-in support by having the
|
||||
Kubelet call a Binary Plugin to mount the volume.
|
||||
[Flex Volumes](https://git.k8s.io/design-proposals-archive/storage/flexvolume-deployment.md)
|
||||
allow users to mount volume types without built-in support by having the kubelet call a binary
|
||||
plugin to mount the volume.
|
||||
-->
|
||||
## 基础设施扩展 {#infrastructure-extensions}
|
||||
|
||||
### 存储插件 {#storage-plugins}
|
||||
|
||||
[FlexVolumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md
|
||||
)
|
||||
[Flex Volumes](https://git.k8s.io/design-proposals-archive/storage/flexvolume-deployment.md)
|
||||
卷可以让用户挂载无需内建支持的卷类型,
|
||||
kubelet 会调用可执行文件插件来挂载对应的存储卷。
|
||||
|
||||
<!--
|
||||
FlexVolume is deprecated since Kubernetes v1.23. The Out-of-tree CSI driver is the recommended way to write volume drivers in Kubernetes. See [Kubernetes Volume Plugin FAQ for Storage Vendors](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md#kubernetes-volume-plugin-faq-for-storage-vendors) for more information.
|
||||
FlexVolume is deprecated since Kubernetes v1.23. The out-of-tree CSI driver is the recommended way
|
||||
to write volume drivers in Kubernetes. See
|
||||
[Kubernetes Volume Plugin FAQ for Storage Vendors](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md#kubernetes-volume-plugin-faq-for-storage-vendors)
|
||||
for more information.
|
||||
-->
|
||||
从 Kubernetes v1.23 开始,FlexVolume 被弃用。
|
||||
在 Kubernetes 中编写卷驱动的推荐方式是使用树外(Out-of-tree)CSI 驱动。
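作为示意(假设 CSI 驱动已通过其部署清单安装到集群中),可以这样查看集群中注册的 CSI 驱动:

```shell
# 列出已在集群中注册的 CSI 驱动,以及各节点上报告的 CSI 信息
kubectl get csidrivers
kubectl get csinodes
```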
|
||||
|
@ -403,12 +460,13 @@ FlexVolume is deprecated since Kubernetes v1.23. The Out-of-tree CSI driver is t
|
|||
### Device Plugins
|
||||
|
||||
Device plugins allow a node to discover new Node resources (in addition to the
|
||||
builtin ones like cpu and memory) via a [Device
|
||||
Plugin](/docs/concepts/cluster-administration/device-plugins/).
|
||||
builtin ones like cpu and memory) via a
|
||||
[Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).
|
||||
|
||||
### Network Plugins
|
||||
|
||||
Different networking fabrics can be supported via node-level [Network Plugins](/docs/admin/network-plugins/).
|
||||
Different networking fabrics can be supported via node-level
|
||||
[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).
|
||||
-->
|
||||
### 设备插件 {#device-plugins}
|
||||
|
||||
|
@ -433,7 +491,7 @@ This is a significant undertaking, and almost all Kubernetes users find they
|
|||
do not need to modify the scheduler.
|
||||
|
||||
The scheduler also supports a
|
||||
[webhook](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md)
|
||||
[webhook](https://git.k8s.io/design-proposals-archive/scheduling/scheduler_extender.md)
|
||||
that permits a webhook backend (scheduler extension) to filter and prioritize
|
||||
the nodes chosen for a pod.
|
||||
-->
|
||||
|
@ -441,14 +499,13 @@ the nodes chosen for a pod.
|
|||
|
||||
调度器是一种特殊的控制器,负责监视 Pod 变化并将 Pod 分派给节点。
|
||||
默认的调度器可以被整体替换掉,同时继续使用其他 Kubernetes 组件。
|
||||
或者也可以在同一时刻使用
|
||||
[多个调度器](/zh-cn/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)。
|
||||
或者也可以在同一时刻使用[多个调度器](/zh-cn/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)。
|
||||
|
||||
这是一项非同小可的任务,几乎绝大多数 Kubernetes
|
||||
用户都会发现其实他们不需要修改调度器。
|
||||
|
||||
调度器也支持一种
|
||||
[Webhook](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md),
|
||||
[Webhook](https://git.k8s.io/design-proposals-archive/scheduling/scheduler_extender.md),
|
||||
允许使用某种 Webhook 后端(调度器扩展)来为 Pod 可选的节点执行过滤和优先排序操作。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
@ -457,8 +514,8 @@ the nodes chosen for a pod.
|
|||
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
* Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
|
||||
* Learn more about Infrastructure extensions
|
||||
* [Network Plugins](/docs/concepts/cluster-administration/network-plugins/)
|
||||
* [Device Plugins](/docs/concepts/cluster-administration/device-plugins/)
|
||||
* [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
|
||||
* [Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
|
||||
* Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/)
|
||||
* Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/)
|
||||
-->
|
||||
|
|
|
@ -18,30 +18,28 @@ Operators are software extensions to Kubernetes that make use of
|
|||
to manage applications and their components. Operators follow
|
||||
Kubernetes principles, notably the [control loop](/docs/concepts/architecture/controller).
|
||||
-->
|
||||
Operator 是 Kubernetes 的扩展软件,它利用
|
||||
[定制资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
管理应用及其组件。
|
||||
Operator 遵循 Kubernetes 的理念,特别是在[控制器](/zh-cn/docs/concepts/architecture/controller)
|
||||
方面。
|
||||
Operator 是 Kubernetes 的扩展软件,
|
||||
它利用[定制资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/)管理应用及其组件。
|
||||
Operator 遵循 Kubernetes 的理念,特别是在[控制器](/zh-cn/docs/concepts/architecture/controller)方面。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
<!--
|
||||
## Motivation
|
||||
|
||||
The Operator pattern aims to capture the key aim of a human operator who
|
||||
The _operator pattern_ aims to capture the key aim of a human operator who
|
||||
is managing a service or set of services. Human operators who look after
|
||||
specific applications and services have deep knowledge of how the system
|
||||
ought to behave, how to deploy it, and how to react if there are problems.
|
||||
|
||||
People who run workloads on Kubernetes often like to use automation to take
|
||||
care of repeatable tasks. The Operator pattern captures how you can write
|
||||
care of repeatable tasks. The operator pattern captures how you can write
|
||||
code to automate a task beyond what Kubernetes itself provides.
|
||||
-->
|
||||
## 初衷
|
||||
## 初衷 {#motivation}
|
||||
|
||||
Operator 模式旨在捕获(正在管理一个或一组服务的)运维人员的关键目标。
|
||||
负责特定应用和 service 的运维人员,在系统应该如何运行、如何部署以及出现问题时如何处理等方面有深入的了解。
|
||||
**Operator 模式** 旨在记述(正在管理一个或一组服务的)运维人员的关键目标。
|
||||
这些运维人员负责一些特定的应用和 Service,他们需要清楚地知道系统应该如何运行、如何部署以及出现问题时如何处理。
|
||||
|
||||
在 Kubernetes 上运行工作负载的人们都喜欢通过自动化来处理重复的任务。
|
||||
Operator 模式会封装你编写的(Kubernetes 本身提供功能以外的)任务自动化代码。
|
||||
|
@ -58,20 +56,19 @@ Kubernetes' {{< glossary_tooltip text="operator pattern" term_id="operator-patte
|
|||
Operators are clients of the Kubernetes API that act as controllers for
|
||||
a [Custom Resource](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
|
||||
-->
|
||||
## Kubernetes 上的 Operator
|
||||
## Kubernetes 上的 Operator {#operators-in-kubernetes}
|
||||
|
||||
Kubernetes 为自动化而生。无需任何修改,你即可以从 Kubernetes 核心中获得许多内置的自动化功能。
|
||||
你可以使用 Kubernetes 自动化部署和运行工作负载, *甚至* 可以自动化 Kubernetes 自身。
|
||||
你可以使用 Kubernetes 自动化部署和运行工作负载, **甚至** 可以自动化 Kubernetes 自身。
|
||||
|
||||
Kubernetes 的 {{< glossary_tooltip text="Operator 模式" term_id="operator-pattern" >}}概念允许你在不修改
|
||||
Kubernetes 自身代码的情况下,通过为一个或多个自定义资源关联{{< glossary_tooltip text="控制器" term_id="controller" >}}
|
||||
来扩展集群的能力。
|
||||
Operator 是 Kubernetes API 的客户端,充当
|
||||
[自定义资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
的控制器。
|
||||
Kubernetes 自身代码的情况下,
|
||||
通过为一个或多个自定义资源关联{{< glossary_tooltip text="控制器" term_id="controller" >}}来扩展集群的能力。
|
||||
Operator 是 Kubernetes API 的客户端,
|
||||
充当[自定义资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/)的控制器。
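作为一个简单的示意(假设某 Operator 已随其部署清单安装了相应的 CRD),可以这样查看它所管理的定制资源:

```shell
# 列出集群中已注册的定制资源定义(CRD)
kubectl get customresourcedefinitions

# 列出某种定制资源的实例(此处的资源名称 sampledbs 仅为假设的示例)
kubectl get sampledbs --all-namespaces
```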
|
||||
|
||||
<!--
|
||||
## An example Operator {#example}
|
||||
## An example operator {#example}
|
||||
|
||||
Some of the things that you can use an operator to automate include:
|
||||
|
||||
|
@ -97,7 +94,7 @@ Some of the things that you can use an operator to automate include:
|
|||
* 在没有内部成员选举程序的情况下,为分布式应用选择首领角色
|
||||
|
||||
<!--
|
||||
What might an Operator look like in more detail? Here's an example:
|
||||
What might an operator look like in more detail? Here's an example:
|
||||
|
||||
1. A custom resource named SampleDB, that you can configure into the cluster.
|
||||
2. A Deployment that makes sure a Pod is running that contains the
|
||||
|
@ -105,18 +102,18 @@ What might an Operator look like in more detail? Here's an example:
|
|||
3. A container image of the operator code.
|
||||
4. Controller code that queries the control plane to find out what SampleDB
|
||||
resources are configured.
|
||||
5. The core of the Operator is code to tell the API server how to make
|
||||
5. The core of the operator is code to tell the API server how to make
|
||||
reality match the configured resources.
|
||||
* If you add a new SampleDB, the operator sets up PersistentVolumeClaims
|
||||
to provide durable database storage, a StatefulSet to run SampleDB and
|
||||
a Job to handle initial configuration.
|
||||
* If you delete it, the Operator takes a snapshot, then makes sure that
|
||||
* If you delete it, the operator takes a snapshot, then makes sure that
|
||||
the StatefulSet and Volumes are also removed.
|
||||
6. The operator also manages regular database backups. For each SampleDB
|
||||
resource, the operator determines when to create a Pod that can connect
|
||||
to the database and take backups. These Pods would rely on a ConfigMap
|
||||
and / or a Secret that has database connection details and credentials.
|
||||
7. Because the Operator aims to provide robust automation for the resource
|
||||
7. Because the operator aims to provide robust automation for the resource
|
||||
it manages, there would be additional supporting code. For this example,
|
||||
code checks to see if the database is running an old version and, if so,
|
||||
creates Job objects that upgrade it for you.
|
||||
|
@ -129,40 +126,38 @@ What might an Operator look like in more detail? Here's an example:
|
|||
3. Operator 代码的容器镜像。
|
||||
4. 控制器代码,负责查询控制平面以找出已配置的 SampleDB 资源。
|
||||
5. Operator 的核心是告诉 API 服务器,如何使现实与代码里配置的资源匹配。
|
||||
* 如果添加新的 SampleDB,Operator 将设置 PersistentVolumeClaims 以提供
|
||||
持久化的数据库存储,设置 StatefulSet 以运行 SampleDB,并设置 Job
|
||||
来处理初始配置。
|
||||
* 如果添加新的 SampleDB,Operator 将设置 PersistentVolumeClaims 以提供持久化的数据库存储,
|
||||
设置 StatefulSet 以运行 SampleDB,并设置 Job 来处理初始配置。
|
||||
* 如果你删除它,Operator 将建立快照,然后确保 StatefulSet 和 Volume 已被删除。
|
||||
6. Operator 也可以管理常规数据库的备份。对于每个 SampleDB 资源,Operator
|
||||
会确定何时创建(可以连接到数据库并进行备份的)Pod。这些 Pod 将依赖于
|
||||
ConfigMap 和/或具有数据库连接详细信息和凭据的 Secret。
|
||||
7. 由于 Operator 旨在为其管理的资源提供强大的自动化功能,因此它还需要一些
|
||||
额外的支持性代码。在这个示例中,代码将检查数据库是否正运行在旧版本上,
|
||||
7. 由于 Operator 旨在为其管理的资源提供强大的自动化功能,因此它还需要一些额外的支持性代码。
|
||||
在这个示例中,代码将检查数据库是否正运行在旧版本上,
|
||||
如果是,则创建 Job 对象为你升级数据库。
|
||||
|
||||
<!--
|
||||
## Deploying Operators
|
||||
## Deploying operators
|
||||
|
||||
The most common way to deploy an Operator is to add the
|
||||
The most common way to deploy an operator is to add the
|
||||
Custom Resource Definition and its associated Controller to your cluster.
|
||||
The Controller will normally run outside of the
|
||||
{{< glossary_tooltip text="control plane" term_id="control-plane" >}},
|
||||
much as you would run any containerized application.
|
||||
For example, you can run the controller in your cluster as a Deployment.
|
||||
-->
|
||||
## 部署 Operator
|
||||
## 部署 Operator {#deploying-operators}
|
||||
|
||||
部署 Operator 最常见的方法是将自定义资源及其关联的控制器添加到你的集群中。
|
||||
跟运行容器化应用一样,控制器通常会运行在
|
||||
{{< glossary_tooltip text="控制平面" term_id="control-plane" >}} 之外。
|
||||
跟运行容器化应用一样,控制器通常会运行在{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}之外。
|
||||
例如,你可以在集群中将控制器作为 Deployment 运行。
|
||||
|
||||
<!--
|
||||
## Using an Operator {#using-operators}
|
||||
## Using an operator {#using-operators}
|
||||
|
||||
Once you have an Operator deployed, you'd use it by adding, modifying or
|
||||
deleting the kind of resource that the Operator uses. Following the above
|
||||
example, you would set up a Deployment for the Operator itself, and then:
|
||||
Once you have an operator deployed, you'd use it by adding, modifying or
|
||||
deleting the kind of resource that the operator uses. Following the above
|
||||
example, you would set up a Deployment for the operator itself, and then:
|
||||
|
||||
```shell
|
||||
kubectl get SampleDB # find configured databases
|
||||
|
@ -182,35 +177,39 @@ kubectl edit SampleDB/example-database # 手动修改某些配置
|
|||
```
|
||||
|
||||
<!--
|
||||
…and that's it! The Operator will take care of applying the changes as well as keeping the existing service in good shape.
|
||||
…and that's it! The operator will take care of applying the changes
|
||||
as well as keeping the existing service in good shape.
|
||||
-->
|
||||
可以了!Operator 会负责应用所作的更改并保持现有服务处于良好的状态。
|
||||
|
||||
<!--
|
||||
## Writing your own Operator {#writing-operator}
|
||||
## Writing your own operator {#writing-operator}
|
||||
-->
|
||||
|
||||
## 编写你自己的 Operator {#writing-operator}
|
||||
|
||||
<!--
|
||||
If there isn't an Operator in the ecosystem that implements the behavior you
|
||||
want, you can code your own.
|
||||
If there isn't an operator in the ecosystem that implements the behavior you
|
||||
want, you can code your own.
|
||||
|
||||
You also implement an Operator (that is, a Controller) using any language / runtime
|
||||
You also implement an operator (that is, a Controller) using any language / runtime
|
||||
that can act as a [client for the Kubernetes API](/docs/reference/using-api/client-libraries/).
|
||||
-->
|
||||
|
||||
如果生态系统中没有可以实现你目标的 Operator,你可以自己编写代码。
|
||||
|
||||
你还可以使用任何支持 [Kubernetes API 客户端](/zh-cn/docs/reference/using-api/client-libraries/)
|
||||
的语言或运行时来实现 Operator(即控制器)。
|
||||
你还可以使用任何支持
|
||||
[Kubernetes API 客户端](/zh-cn/docs/reference/using-api/client-libraries/)的语言或运行时来实现
|
||||
Operator(即控制器)。
|
||||
|
||||
<!--
|
||||
Following are a few libraries and tools you can use to write your own cloud native
|
||||
Operator.
|
||||
operator.
|
||||
-->
|
||||
以下是一些库和工具,你可用于编写自己的云原生 Operator。
|
||||
|
||||
{{% thirdparty-content %}}
|
||||
|
||||
<!--
|
||||
* [Charmed Operator Framework](https://juju.is/)
|
||||
* [Java Operator SDK](https://github.com/java-operator-sdk/java-operator-sdk)
|
||||
* [Kopf](https://github.com/nolar/kopf) (Kubernetes Operator Pythonic Framework)
|
||||
|
@ -223,9 +222,6 @@ you implement yourself
|
|||
* [Operator Framework](https://operatorframework.io)
|
||||
* [shell-operator](https://github.com/flant/shell-operator)
|
||||
-->
|
||||
以下是一些库和工具,你可用于编写自己的云原生 Operator。
|
||||
|
||||
{{% thirdparty-content %}}
|
||||
|
||||
* [Charmed Operator Framework](https://juju.is/)
|
||||
* [Java Operator SDK](https://github.com/java-operator-sdk/java-operator-sdk)
|
||||
|
@ -233,7 +229,7 @@ you implement yourself
|
|||
* [kube-rs](https://kube.rs/) (Rust)
|
||||
* [kubebuilder](https://book.kubebuilder.io/)
|
||||
* [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (.NET operator SDK)
|
||||
* [KUDO](https://kudo.dev/) (Kubernetes 通用声明式 Operator)
|
||||
* [KUDO](https://kudo.dev/)(Kubernetes 通用声明式 Operator)
|
||||
* [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html),可与 Webhooks 结合使用,以实现自己的功能。
|
||||
* [Operator Framework](https://operatorframework.io)
|
||||
* [shell-operator](https://github.com/flant/shell-operator)
|
||||
|
@ -245,14 +241,14 @@ you implement yourself
|
|||
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
* Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case
|
||||
* [Publish](https://operatorhub.io/) your operator for other people to use
|
||||
* Read [CoreOS' original article](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern (this is an archived version of the original article).
|
||||
* Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators
|
||||
* Read [CoreOS' original article](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html) that introduced the operator pattern (this is an archived version of the original article).
|
||||
* Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building operators
|
||||
-->
|
||||
|
||||
* 阅读 {{< glossary_tooltip text="CNCF" term_id="cncf" >}} [Operator 白皮书](https://github.com/cncf/tag-app-delivery/blob/eece8f7307f2970f46f100f51932db106db46968/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md)。
|
||||
* 详细了解 [定制资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
* 详细了解[定制资源](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
* 在 [OperatorHub.io](https://operatorhub.io/) 上找到现成的、适合你的 Operator
|
||||
* [发布](https://operatorhub.io/)你的 Operator,让别人也可以使用
|
||||
* 阅读 [CoreOS 原始文章](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html),它介绍了 Operator 模式(这是一个存档版本的原始文章)。
|
||||
* 阅读这篇来自谷歌云的关于构建 Operator 最佳实践的
|
||||
[文章](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps)
|
||||
* 阅读这篇来自谷歌云的关于构建 Operator
|
||||
最佳实践的[文章](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps)
|
||||
|
|
|
@ -203,7 +203,7 @@ webhooks:
|
|||
You must replace the `<CA_BUNDLE>` in the above example by a valid CA bundle
|
||||
which is a PEM-encoded CA bundle for validating the webhook's server certificate.
|
||||
-->
|
||||
你必须在以上示例中将 `<CA_BUNDLE>` 替换为一个有效的 VA 证书包,
|
||||
你必须在以上示例中将 `<CA_BUNDLE>` 替换为一个有效的 CA 证书包,
|
||||
这是一个用 PEM 编码的 CA 证书包,用于校验 Webhook 的服务器证书。
|
||||
{{< /note >}}
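例如,可以按下面的方式生成并填充 `caBundle` 的值。这只是一个最小示意,其中 `ca.pem` 与 `webhook-config.yaml` 均为假设的文件名;在清单中,`caBundle` 字段的取值是 PEM 内容经 base64 编码后的字符串:

```shell
# 将 PEM 格式的 CA 证书包进行 base64 编码(Linux 上的 base64 支持 -w0)
CA_BUNDLE="$(base64 -w0 ca.pem)"

# 用编码结果替换清单中的 <CA_BUNDLE> 占位符
sed -i "s|<CA_BUNDLE>|${CA_BUNDLE}|g" webhook-config.yaml
```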
|
||||
|
||||
|
|
|
@ -36,4 +36,4 @@ tags:
|
|||
-->
|
||||
[HostAliases](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#hostalias-v1-core)
|
||||
是一个包含主机名和 IP 地址的可选列表,配置后将被注入到 Pod 内的 hosts 文件中。
|
||||
该选项仅适用于没有配置 hostNetwork 的 Pod.
|
||||
该选项仅适用于没有配置 hostNetwork 的 Pod。
|
||||
|
|
|
@ -44,4 +44,4 @@ You can use Minikube to
|
|||
|
||||
Minikube 在用户计算机上的一个虚拟机内运行单节点 Kubernetes 集群。
|
||||
你可以使用 Minikube
|
||||
[在学习环境中尝试 Kubernetes](/zh-cn/docs/setup/learning-environment/).
|
||||
[在学习环境中尝试 Kubernetes](/zh-cn/docs/setup/learning-environment/)。
|
||||
|
|
|
@ -28,7 +28,7 @@ The [operator pattern](/docs/concepts/extend-kubernetes/operator/) is a system
|
|||
design that links a {{< glossary_tooltip term_id="controller" >}} to one or more custom
|
||||
resources.
|
||||
-->
|
||||
[operator 模式](/zh-cn/docs/concepts/extend-kubernetes/operator/) 是一种系统设计,
|
||||
[operator 模式](/zh-cn/docs/concepts/extend-kubernetes/operator/) 是一种系统设计,
|
||||
将 {{< glossary_tooltip term_id="controller" >}} 关联到一个或多个自定义资源。
|
||||
<!--more-->
|
||||
|
||||
|
|
|
@ -37,6 +37,6 @@ A high-level summary of what phase the Pod is in within its lifecyle.
|
|||
The [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/) is defined by the states or phases of a Pod. There are five possible Pod phases: Pending, Running, Succeeded, Failed, and Unknown. A high-level description of the Pod state is summarized in the [PodStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core) `phase` field.
|
||||
-->
|
||||
[Pod 生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/) 是关于 Pod
|
||||
处于哪个阶段的概述。包含了下面 5 种可能的阶段: Running、Pending、Succeeded、
|
||||
处于哪个阶段的概述。包含了下面 5 种可能的阶段:Running、Pending、Succeeded、
|
||||
Failed、Unknown。关于 Pod 的阶段的更高级描述请查阅
|
||||
[PodStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core) `phase` 字段。
|
||||
|
|
|
@ -83,7 +83,7 @@ fieldManager 是与进行这些更改的参与者或实体相关联的名称。
|
|||
A selector to restrict the list of returned objects by their fields. Defaults to everything.
|
||||
<hr>
|
||||
-->
|
||||
根据返回对象的字段限制返回对象列表的选择器。默认为返回所有字段。
|
||||
基于对象的字段来限制所返回对象列表的选择器。默认为返回所有对象。
|
||||
<hr>
|
||||
|
||||
## fieldValidation {#fieldValidation}
|
||||
|
|
|
@ -55,7 +55,7 @@ part of Kubernetes (this removal was
|
|||
[announced](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation)
|
||||
as part of the v1.20 release).
|
||||
-->
|
||||
v1.24 之前的 Kubernetes 版本包括与 Docker Engine 的直接集成,使用名为 **dockershim** 的组件。
|
||||
v1.24 之前的 Kubernetes 版本直接集成了 Docker Engine 的一个组件,名为 **dockershim**。
|
||||
这种特殊的直接整合不再是 Kubernetes 的一部分
|
||||
(这次删除被作为 v1.20 发行版本的一部分[宣布](/zh-cn/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation))。
|
||||
|
||||
|
@ -66,8 +66,7 @@ to understand how this removal might
|
|||
affect you. To learn about migrating from using dockershim, see
|
||||
[Migrating from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/).
|
||||
-->
|
||||
你可以阅读[检查 Dockershim 弃用是否会影响你](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/)
|
||||
以了解此删除可能会如何影响你。
|
||||
你可以阅读[检查 Dockershim 移除是否会影响你](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/)以了解此删除可能会如何影响你。
|
||||
要了解如何使用 dockershim 进行迁移,
|
||||
请参阅[从 dockershim 迁移](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/)。
|
||||
|
||||
|
@ -75,7 +74,7 @@ affect you. To learn about migrating from using dockershim, see
|
|||
If you are running a version of Kubernetes other than v{{< skew currentVersion >}},
|
||||
check the documentation for that version.
|
||||
-->
|
||||
如果你正在运行 v{{< skew currentVersion >}} 以外的 Kubernetes 版本,检查该版本的文档。
|
||||
如果你正在运行 v{{< skew currentVersion >}} 以外的 Kubernetes 版本,查看对应版本的文档。
|
||||
{{< /note >}}
|
||||
|
||||
<!-- body -->
|
||||
|
@ -95,8 +94,7 @@ For more information, see [Network Plugin Requirements](/docs/concepts/extend-ku
|
|||
|
||||
如果你确定不需要某个特定设置,则可以跳过它。
|
||||
|
||||
有关更多信息,请参阅[网络插件要求](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements)
|
||||
或特定容器运行时的文档。
|
||||
有关更多信息,请参阅[网络插件要求](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements)或特定容器运行时的文档。
|
||||
|
||||
<!--
|
||||
### Forwarding IPv4 and letting iptables see bridged traffic
|
||||
|
@ -169,12 +167,11 @@ In the field, people have reported cases where nodes that are configured to use
|
|||
for the kubelet and Docker, but `systemd` for the rest of the processes, become unstable under
|
||||
resource pressure.
|
||||
-->
|
||||
单个 cgroup 管理器将简化分配资源的视图,并且默认情况下将对可用资源和使用
|
||||
中的资源具有更一致的视图。
|
||||
单个 cgroup 管理器将简化分配资源的视图,并且默认情况下将对可用资源和使用中的资源具有更一致的视图。
|
||||
当有两个管理器共存于一个系统中时,最终将对这些资源产生两种视图。
|
||||
在此领域人们已经报告过一些案例,某些节点配置让 kubelet 和 docker 使用
|
||||
`cgroupfs`,而节点上运行的其余进程则使用 systemd; 这类节点在资源压力下
|
||||
会变得不稳定。
|
||||
`cgroupfs`,而节点上运行的其余进程则使用 systemd;
|
||||
这类节点在资源压力下会变得不稳定。
|
||||
|
||||
<!--
|
||||
Changing the settings such that your container runtime and kubelet use `systemd` as the cgroup driver
|
||||
|
@ -194,8 +191,8 @@ If you have automation that makes it feasible, replace the node with another usi
|
|||
configuration, or reinstall it using automation.
|
||||
-->
|
||||
注意:更改已加入集群的节点的 cgroup 驱动是一项敏感的操作。
|
||||
如果 kubelet 已经使用某 cgroup 驱动的语义创建了 pod,更改运行时以使用
|
||||
别的 cgroup 驱动,当为现有 Pods 重新创建 PodSandbox 时会产生错误。
|
||||
如果 kubelet 已经使用某 cgroup 驱动的语义创建了 Pod,更改运行时以使用别的
|
||||
cgroup 驱动,当为现有 Pod 重新创建 PodSandbox 时会产生错误。
|
||||
重启 kubelet 也可能无法解决此类问题。
|
||||
|
||||
如果你有切实可行的自动化方案,使用其他已更新配置的节点来替换该节点,
|
||||
|
@ -282,7 +279,7 @@ In order to use it, cgroup v2 must be supported by the CRI runtime as well.
|
|||
If you wish to migrate to the `systemd` cgroup driver in existing kubeadm managed clusters,
|
||||
follow [configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
|
||||
-->
|
||||
### 将 kubeadm 托管的集群迁移到 `systemd` 驱动
|
||||
### 将 kubeadm 管理的集群迁移到 `systemd` 驱动
|
||||
|
||||
如果你希望将现有的由 kubeadm 管理的集群迁移到 `systemd` cgroup 驱动程序,
|
||||
请按照[配置 cgroup 驱动程序](/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/)操作。
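迁移之前,可以先确认节点上 kubelet 当前使用的 cgroup 驱动。下面是一个示意,假设节点是按 kubeadm 的默认路径配置的:

```shell
# kubeadm 默认将 kubelet 配置写入该文件
grep cgroupDriver /var/lib/kubelet/config.yaml
```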
|
||||
|
@ -298,10 +295,10 @@ using the (deprecated) v1alpha2 API instead.
|
|||
-->
|
||||
## CRI 版本支持 {#cri-versions}
|
||||
|
||||
你的容器运行时必须至少支持容器运行时接口的 v1alpha2。
|
||||
你的容器运行时必须至少支持 v1alpha2 版本的容器运行时接口。
|
||||
|
||||
Kubernetes {{< skew currentVersion >}} 默认使用 v1 的 CRI API。如果容器运行时不支持 v1 API,
|
||||
则 kubelet 会回退到使用(已弃用的)v1alpha2 API。
|
||||
Kubernetes {{< skew currentVersion >}} 默认使用 v1 版本的 CRI API。如果容器运行时不支持 v1 版本的 API,
|
||||
则 kubelet 会回退到使用(已弃用的)v1alpha2 版本的 API。
|
||||
|
||||
<!--
|
||||
## Container runtimes
|
||||
|
@ -453,7 +450,7 @@ You should also note the changed `conmon_cgroup`, which has to be set to the val
|
|||
cgroup driver configuration of the kubelet (usually done via kubeadm) and CRI-O
|
||||
in sync.
|
||||
-->
|
||||
你还应该注意到 `conmon_cgroup` 被更改,当使用 CRI-O 和 `cgroupfs` 时,必须将其设置为值 `pod`。
|
||||
你还应该注意当使用 CRI-O 时,并且 CRI-O 的 cgroup 设置为 `cgroupfs` 时,必须将 `conmon_cgroup` 设置为值 `pod`。
|
||||
通常需要保持 kubelet 的 cgroup 驱动配置(通常通过 kubeadm 完成)和 CRI-O 同步。
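下面是一个最小示意(假设通过 drop-in 文件配置 CRI-O,路径与取值需按你的环境调整),展示在使用 `cgroupfs` 驱动时同时设置 `conmon_cgroup`:

```shell
# 通过 drop-in 文件设置 CRI-O 的 cgroup 管理器与 conmon_cgroup
cat <<EOF | sudo tee /etc/crio/crio.conf.d/20-cgroup-manager.conf
[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
EOF
sudo systemctl restart crio
```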
|
||||
|
||||
<!--
|
||||
|
@ -551,8 +548,7 @@ The command line argument to use is `--pod-infra-container-image`.
|
|||
-->
|
||||
#### 重载沙箱(pause)镜像 {#override-pause-image-cri-dockerd-mcr}
|
||||
|
||||
`cri-dockerd` 适配器能够接受一个命令行参数,用于设置使用哪个容器镜像作为 Pod
|
||||
的基础设施容器(“pause 镜像”)。
|
||||
`cri-dockerd` 适配器能够接受指定用作 Pod 的基础容器的容器镜像(“pause 镜像”)作为命令行参数。
|
||||
要使用的命令行参数是 `--pod-infra-container-image`。
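例如(镜像地址与标签仅为示例,请以你所用的 Kubernetes 版本为准):

```shell
cri-dockerd --pod-infra-container-image=registry.k8s.io/pause:3.9
```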
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
|
|
@ -34,9 +34,10 @@ kops 是一个自动化的制备系统:
|
|||
* 全自动安装流程
|
||||
* 使用 DNS 识别集群
|
||||
* 自我修复:一切都在自动扩缩组中运行
|
||||
* 支持多种操作系统(如 Debian、Ubuntu 16.04、CentOS、RHEL、Amazon Linux 和 CoreOS) - 参考 [images.md](https://github.com/kubernetes/kops/blob/master/docs/operations/images.md)
|
||||
* 支持高可用 - 参考 [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/operations/high_availability.md)
|
||||
* 可以直接提供或者生成 terraform 清单 - 参考 [terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md)
|
||||
* 支持多种操作系统(如 Debian、Ubuntu 16.04、CentOS、RHEL、Amazon Linux 和 CoreOS),
|
||||
参考 [images.md](https://github.com/kubernetes/kops/blob/master/docs/operations/images.md)。
|
||||
* 支持高可用,参考 [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/operations/high_availability.md)。
|
||||
* 可以直接提供或者生成 terraform 清单,参考 [terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md)。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
@ -322,8 +323,8 @@ kops will create the configuration for your cluster. Note that it _only_ create
|
|||
not actually create the cloud resources - you'll do that in the next step with a `kops update cluster`. This
|
||||
give you an opportunity to review the configuration or change it.
|
||||
-->
|
||||
kops 将为你的集群创建配置。请注意,它_仅_创建配置,实际上并没有创建云资源 -
|
||||
你将在下一步中使用 `kops update cluster` 进行配置。
|
||||
kops 将为你的集群创建配置。请注意,它**仅**创建配置,实际上并没有创建云资源。
|
||||
你将在下一步中使用 `kops update cluster` 进行创建。
|
||||
这使你有机会查看配置或进行更改。
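例如,可以先不加 `--yes` 预览将要执行的变更,确认无误后再真正创建云资源(`${NAME}` 为你在前面步骤中设置的集群名称,仅作示意):

```shell
# 预览变更(不会真正创建云资源)
kops update cluster --name ${NAME}

# 确认后应用变更,真正创建云资源
kops update cluster --name ${NAME} --yes
```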
|
||||
|
||||
<!--
|
||||
|
@ -348,9 +349,9 @@ set of instances, which will be registered as kubernetes nodes. On AWS this is
|
|||
You can have several instance groups, for example if you wanted nodes that are a mix of spot and on-demand instances, or
|
||||
GPU and non-GPU instances.
|
||||
-->
|
||||
如果这是你第一次使用 kops,请花几分钟尝试一下!实例组是一组实例,将被注册为 kubernetes 节点。
|
||||
如果这是你第一次使用 kops,请花几分钟尝试一下!实例组是一组实例,将被注册为 Kubernetes 节点。
|
||||
在 AWS 上,这是通过 auto-scaling-groups 实现的。你可以有多个实例组。
|
||||
例如,如果你想要的是混合实例和按需实例的节点,或者 GPU 和非 GPU 实例。
|
||||
例如,你可能想要混合了 Spot 实例和按需实例的节点,或者混合了 GPU 实例和非 GPU 实例的节点。
|
||||
|
||||
<!--
|
||||
### (5/5) Create the cluster in AWS
|
||||
|
@ -372,7 +373,7 @@ applies the changes you have made to the configuration to your cluster - reconfi
|
|||
-->
|
||||
这需要几秒钟的时间才能运行,但实际上集群可能需要几分钟才能准备就绪。
|
||||
每当更改集群配置时,都会使用 `kops update cluster` 工具。
|
||||
它将对配置进行的更改应用于你的集群 - 根据需要重新配置 AWS 或者 kubernetes。
|
||||
它将在集群中应用你对配置进行的更改,根据需要重新配置 AWS 或者 Kubernetes。
|
||||
|
||||
<!--
|
||||
For example, after you `kops edit ig nodes`, then `kops update cluster --yes` to apply your configuration, and
|
||||
|
|
|
@ -23,7 +23,7 @@ Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [i
|
|||
|
||||
Kubespray provides:
|
||||
-->
|
||||
Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、
|
||||
Kubespray 是由若干 [Ansible](https://docs.ansible.com/) Playbook、
|
||||
[清单(inventory)](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#inventory)、
|
||||
制备工具和通用 OS/Kubernetes 集群配置管理任务的领域知识组成的。
|
||||
|
||||
|
@ -207,7 +207,7 @@ Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [
|
|||
Kubespray 提供了一种使用
|
||||
[Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md)
|
||||
验证 Pod 间连接和 DNS 解析的方法。
|
||||
Netchecker 确保 netchecker-agents Pods 可以解析 DNS 请求,
|
||||
Netchecker 确保 netchecker-agents Pod 可以解析 DNS 请求,
|
||||
并在默认命名空间内对每个请求执行 ping 操作。
|
||||
这些 Pod 模仿其他工作负载类似的行为,并用作集群运行状况指示器。
|
||||
<!--
|
||||
|
@ -217,7 +217,7 @@ Kubespray provides additional playbooks to manage your cluster: _scale_ and _upg
|
|||
-->
|
||||
## 集群操作
|
||||
|
||||
Kubespray 提供了其他 Playbooks 来管理集群: **scale** 和 **upgrade**。
|
||||
Kubespray 提供了其他 Playbook 来管理集群:**scale** 和 **upgrade**。
|
||||
<!--
|
||||
### Scale your cluster
|
||||
|
||||
|
|
|
@ -3,3 +3,9 @@ title: "管理 Kubernetes 对象"
|
|||
weight: 25
|
||||
description: 用声明式和命令式范型与 Kubernetes API 交互。
|
||||
---
|
||||
|
||||
<!--
|
||||
title: "Manage Kubernetes Objects"
|
||||
description: Declarative and imperative paradigms for interacting with the Kubernetes API.
|
||||
weight: 25
|
||||
-->
|
|
@ -8,6 +8,7 @@ title: Managing Kubernetes Objects Using Imperative Commands
|
|||
content_type: task
|
||||
weight: 30
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
|
@ -16,7 +17,7 @@ imperative commands built into the `kubectl` command-line tool. This document
|
|||
explains how those commands are organized and how to use them to manage live objects.
|
||||
-->
|
||||
使用构建在 `kubectl` 命令行工具中的指令式命令可以直接快速创建、更新和删除
|
||||
Kubernetes 对象。本文档解释这些命令的组织方式以及如何使用它们来管理现时对象。
|
||||
Kubernetes 对象。本文档解释这些命令的组织方式以及如何使用它们来管理活跃对象。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
@ -70,7 +71,7 @@ the Kubernetes object types.
|
|||
|
||||
- `run`:创建一个新的 Pod 来运行一个容器。
|
||||
- `expose`:创建一个新的 Service 对象为若干 Pod 提供流量负载均衡。
|
||||
- `autoscale`:创建一个新的 Autoscaler 对象来自动对某控制器(如 Deployment)
|
||||
- `autoscale`:创建一个新的 Autoscaler 对象来自动对某控制器(例如:Deployment)
|
||||
执行水平扩缩。
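上面列出的几个动词驱动的命令用法大致如下(仅作示意,其中名为 nginx 的 Deployment 假定已存在):

```shell
# run:创建一个新的 Pod 来运行一个容器
kubectl run busybox --image=busybox --restart=Never -- sleep 3600

# expose:为 Deployment 背后的 Pod 创建负载均衡的 Service
kubectl expose deployment nginx --port=80

# autoscale:为某控制器创建水平自动扩缩
kubectl autoscale deployment nginx --min=2 --max=5 --cpu-percent=80
```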
|
||||
|
||||
<!--
|
||||
|
@ -82,8 +83,8 @@ to create.
|
|||
- `create <objecttype> [<subtype>] <instancename>`
|
||||
-->
|
||||
`kubectl` 命令也支持一些对象类型驱动的创建命令。
|
||||
这些命令可以支持更多的对象类别,并且在其动机上体现得更为明显,不过要求
|
||||
用户了解它们所要创建的对象的类别。
|
||||
这些命令可以支持更多的对象类别,并且在其动机上体现得更为明显,
|
||||
不过要求用户了解它们所要创建的对象的类别。
|
||||
|
||||
- `create <对象类别> [<子类别>] <实例名称>`
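例如(对象名称仅为示例):

```shell
# 对象类别驱动的创建命令
kubectl create deployment my-nginx --image=nginx
kubectl create service clusterip my-svc --tcp=80:8080
```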
|
||||
|
||||
|
@ -153,10 +154,10 @@ Setting this aspect may set different fields for different object types:
|
|||
|
||||
- `set <字段>`:设置对象的某一方面。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
In Kubernetes version 1.5, not every verb-driven command has an associated aspect-driven command.
|
||||
-->
|
||||
{{< note >}}
|
||||
在 Kubernetes 1.5 版本中,并非所有动词驱动的命令都有对应的方面驱动的命令。
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -169,11 +170,11 @@ however they require a better understanding of the Kubernetes object schema.
|
|||
For more details on patch strings, see the patch section in
|
||||
[API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#patch-operations).
|
||||
-->
|
||||
`kubectl` 工具支持以下额外的方式用来直接更新现时对象,不过这些操作要求
|
||||
`kubectl` 工具支持以下额外的方式用来直接更新活跃对象,不过这些操作要求
|
||||
用户对 Kubernetes 对象的模式定义有很好的了解:
|
||||
|
||||
- `edit`:通过在编辑器中打开现时对象的配置,直接编辑其原始配置。
|
||||
- `patch`:通过使用补丁字符串(Patch String)直接更改某现时对象的的特定字段。
|
||||
- `edit`:通过在编辑器中打开活跃对象的配置,直接编辑其原始配置。
|
||||
- `patch`:通过使用补丁字符串(Patch String)直接更改某活跃对象的特定字段。
|
||||
关于补丁字符串的更详细信息,参见
|
||||
[API 约定](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#patch-operations)
|
||||
的 patch 节。
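例如,下面分别给出 `edit` 和 `patch` 的一个简单示意(假设集群中已存在名为 nginx 的 Deployment):

```shell
# 在编辑器中直接编辑活跃对象的配置
kubectl edit deployment/nginx

# 使用合并补丁字符串直接更改特定字段
kubectl patch deployment nginx --type=merge -p '{"spec":{"replicas":3}}'
```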
|
||||
|
@ -191,15 +192,17 @@ You can use the `delete` command to delete an object from a cluster:
|
|||
|
||||
- `delete <类别>/<名称>`
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
You can use `kubectl delete` for both imperative commands and imperative object
|
||||
configuration. The difference is in the arguments passed to the command. To use
|
||||
`kubectl delete` as an imperative command, pass the object to be deleted as
|
||||
an argument. Here's an example that passes a Deployment object named nginx:
|
||||
-->
|
||||
你可以使用 `kubectl delete` 来执行指令式命令或者指令式对象配置。不同之处在于
|
||||
传递给命令的参数。要将 `kubectl delete` 作为指令式命令使用,将要删除的对象作为
|
||||
参数传递给它。下面是一个删除名为 `nginx` 的 Deployment 对象的命令:
|
||||
你可以使用 `kubectl delete` 来执行指令式命令或者指令式对象配置。不同之处在于传递给命令的参数。
|
||||
要将 `kubectl delete` 作为指令式命令使用,将要删除的对象作为参数传递给它。
|
||||
下面是一个删除名为 `nginx` 的 Deployment 对象的命令:
|
||||
{{< /note >}}
|
||||
|
||||
```shell
|
||||
kubectl delete deployment/nginx
|
||||
|
@ -207,18 +210,30 @@ kubectl delete deployment/nginx
|
|||
|
||||
<!--
|
||||
## How to view an object
|
||||
-->
|
||||
## 如何查看对象 {#how-to-view-an-object}
|
||||
|
||||
{{< comment >}}
|
||||
<!---
|
||||
TODO(pwittrock): Uncomment this when implemented.
|
||||
|
||||
You can use `kubectl view` to print specific fields of an object.
|
||||
|
||||
- `view`: Prints the value of a specific field of an object.
|
||||
-->
|
||||
你可以使用 `kubectl view` 打印对象的特定字段。
|
||||
|
||||
- `view`:打印对象的特定字段的值。
|
||||
|
||||
{{< /comment >}}
|
||||
-->
|
||||
## 如何查看对象 {#how-to-view-an-object}
|
||||
|
||||
<!--
|
||||
There are several commands for printing information about an object:
|
||||
|
||||
- `get`: Prints basic information about matching objects. Use `get -h` to see a list of options.
|
||||
- `describe`: Prints aggregated detailed information about matching objects.
|
||||
- `logs`: Prints the stdout and stderr for a container running in a Pod.
|
||||
-->
|
||||
用来打印对象信息的命令有好几个:
|
||||
|
||||
- `get`:打印匹配到的对象的基本信息。使用 `get -h` 可以查看选项列表。
|
||||
|
@ -234,12 +249,12 @@ in a `create` command. In some of those cases, you can use a combination of
|
|||
creation. This is done by piping the output of the `create` command to the
|
||||
`set` command, and then back to the `create` command. Here's an example:
|
||||
-->
|
||||
## 使用 `set` 命令在创建对象之前修改对象
|
||||
## 使用 `set` 命令在创建对象之前修改对象 {#using-set-commands-to-modify-objects-before-creation}
|
||||
|
||||
有些对象字段在 `create` 命令中没有对应的标志。在这些场景中,
|
||||
你可以使用 `set` 和 `create` 命令的组合来在对象创建之前设置字段值。
|
||||
这是通过将 `create` 命令的输出用管道方式传递给 `set` 命令来实现的,
|
||||
最后执行 `create` 命令来创建对象。下面是一个例子:
|
||||
有些对象字段在 `create` 命令中没有对应的标志。
|
||||
在这些场景中,你可以使用 `set` 和 `create` 命令的组合来在对象创建之前设置字段值。
|
||||
这是通过将 `create` 命令的输出用管道方式传递给 `set` 命令来实现的,最后执行 `create` 命令来创建对象。
|
||||
下面是一个例子:
|
||||
|
||||
```sh
|
||||
kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -
|
||||
|
@ -250,10 +265,10 @@ kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=cli
|
|||
1. The `kubectl set selector --local -f - -o yaml` command reads the configuration from stdin, and writes the updated configuration to stdout as YAML.
|
||||
1. The `kubectl create -f -` command creates the object using the configuration provided via stdin.
|
||||
-->
|
||||
1. 命令 `kubectl create service -o yaml --dry-run=client` 创建 Service 的配置,但
|
||||
将其以 YAML 格式在标准输出上打印而不是发送给 API 服务器。
|
||||
1. 命令 `kubectl set selector --local -f - -o yaml` 从标准输入读入配置,并将更新后的
|
||||
配置以 YAML 格式输出到标准输出。
|
||||
1. 命令 `kubectl create service -o yaml --dry-run=client` 创建 Service 的配置,
|
||||
但将其以 YAML 格式在标准输出上打印而不是发送给 API 服务器。
|
||||
1. 命令 `kubectl set selector --local -f - -o yaml` 从标准输入读入配置,
|
||||
并将更新后的配置以 YAML 格式输出到标准输出。
|
||||
1. 命令 `kubectl create -f -` 使用标准输入上获得的配置创建对象。
|
||||
|
||||
<!--
|
||||
|
@ -262,7 +277,7 @@ kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=cli
|
|||
You can use `kubectl create --edit` to make arbitrary changes to an object
|
||||
before it is created. Here's an example:
|
||||
-->
|
||||
## 在创建之前使用 `--edit` 更改对象
|
||||
## 在创建之前使用 `--edit` 更改对象 {#using-edit-to-modify-objects-before-creation}
|
||||
|
||||
你可以用 `kubectl create --edit` 来在对象被创建之前执行任意的变更。
|
||||
下面是一个例子:
|
||||
|
@ -276,8 +291,7 @@ kubectl create --edit -f /tmp/srv.yaml
|
|||
1. The `kubectl create service` command creates the configuration for the Service and saves it to `/tmp/srv.yaml`.
|
||||
1. The `kubectl create --edit` command opens the configuration file for editing before it creates the object.
|
||||
-->
|
||||
1. 命令 `kubectl create service` 创建 Service 的配置并将其保存到
|
||||
`/tmp/srv.yaml` 文件。
|
||||
1. 命令 `kubectl create service` 创建 Service 的配置并将其保存到 `/tmp/srv.yaml` 文件。
|
||||
1. 命令 `kubectl create --edit` 在创建 Service 对象之前打开其配置文件进行编辑。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
@ -292,4 +306,3 @@ kubectl create --edit -f /tmp/srv.yaml
|
|||
* [使用配置文件对 Kubernetes 对象进行声明式管理](/zh-cn/docs/tasks/manage-kubernetes-objects/declarative-config/)
|
||||
* [Kubectl 命令参考](/docs/reference/generated/kubectl/kubectl-commands/)
|
||||
* [Kubernetes API 参考](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
|
||||
|
||||
|
|
|
@ -99,41 +99,15 @@ Configurations with a single API server will experience unavailability while the
|
|||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
1. Update all Secrets that hold service account tokens to include both old and new CA certificates.
|
||||
1. Wait for the controller manager to update ca.crt in the service account Secrets to include both old and new CA certificates.
|
||||
|
||||
If any pods are started before new CA is used by API servers, the new Pods get this update and will trust both
|
||||
old and new CAs.
|
||||
If any Pods are started before new CA is used by API servers, the new Pods get this update and will trust both old and new CAs.
|
||||
-->
|
||||
3. 更新所有的保存服务账号令牌的 Secret,使之同时包含老的和新的 CA 证书。
|
||||
3. 等待该控制器管理器更新服务账号 Secret 中的 `ca.crt`,使之同时包含老的和新的 CA 证书。
|
||||
|
||||
如果在 API 服务器使用新的 CA 之前启动了新的 Pod,这些新的 Pod
|
||||
也会获得此更新并且同时信任老的和新的 CA 证书。
|
||||
|
||||
<!--
|
||||
```shell
|
||||
base64_encoded_ca="$(base64 -w0 <path to file containing both old and new CAs>)"
|
||||
|
||||
for namespace in $(kubectl get ns --no-headers | awk '{print $1}'); do
|
||||
for token in $(kubectl get secrets --namespace "$namespace" --field-selector type=kubernetes.io/service-account-token -o name); do
|
||||
kubectl get $token --namespace "$namespace" -o yaml | \
|
||||
/bin/sed "s/\(ca.crt:\).*/\1 ${base64_encoded_ca}/" | \
|
||||
kubectl apply -f -
|
||||
done
|
||||
done
|
||||
```
|
||||
-->
|
||||
```shell
|
||||
base64_encoded_ca="$(base64 -w0 <同时包含老的和新的 CA 的文件路径>)"
|
||||
|
||||
for namespace in $(kubectl get ns --no-headers | awk '{print $1}'); do
|
||||
for token in $(kubectl get secrets --namespace "$namespace" --field-selector type=kubernetes.io/service-account-token -o name); do
|
||||
kubectl get $token --namespace "$namespace" -o yaml | \
|
||||
/bin/sed "s/\(ca.crt:\).*/\1 ${base64_encoded_ca}/" | \
|
||||
kubectl apply -f -
|
||||
done
|
||||
done
|
||||
```
|
||||
|
||||
<!--
|
||||
1. Restart all pods using in-cluster configurations (for example: kube-proxy, CoreDNS, etc) so they can use the
|
||||
updated certificate authority data from Secrets that link to ServiceAccounts.
|
||||
|
|
|
@ -390,7 +390,6 @@ Visually:
|
|||
用图表示:
|
||||
{{< figure src="/zh-cn/docs/images/tutor-service-nodePort-fig02.svg" alt="图 2:源 IP NodePort" class="diagram-large" caption="如图。源 IP(Type=NodePort)保存客户端源 IP 地址" link="" >}}
|
||||
|
||||
|
||||
<!--
|
||||
## Source IP for Services with `Type=LoadBalancer`
|
||||
|
||||
|
@ -510,7 +509,7 @@ serving the health check at `/healthz`. You can test this:
|
|||
路径上提供健康检查的节点的端口。你可以这样测试:
|
||||
|
||||
```shell
|
||||
kubectl get pod -o wide -l run=source-ip-app
|
||||
kubectl get pod -o wide -l app=source-ip-app
|
||||
```
|
||||
<!--
|
||||
The output is similar to this:
|
||||
|
@ -611,8 +610,8 @@ the `service.spec.healthCheckNodePort` field on the Service.
|
|||
-->
|
||||
第一类负载均衡器必须使用负载均衡器和后端之间商定的协议来传达真实的客户端 IP,
|
||||
例如 HTTP [转发](https://tools.ietf.org/html/rfc7239#section-5.2)或
|
||||
[X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For)
|
||||
表头,或[代理协议](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)。
|
||||
[X-FORWARDED-FOR](https://zh.wikipedia.org/wiki/X-Forwarded-For)
|
||||
标头,或[代理协议](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)。
|
||||
第二类负载均衡器可以通过创建指向存储在 Service 上的 `service.spec.healthCheckNodePort`
|
||||
字段中的端口的 HTTP 健康检查来利用上述功能。
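作为示意(假设按本教程前面的步骤创建了名为 `loadbalancer` 的 Service),可以这样查看为其分配的健康检查节点端口:

```shell
kubectl get svc loadbalancer -o jsonpath='{.spec.healthCheckNodePort}'
```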
|
||||
|
||||
|
@ -623,8 +622,8 @@ Delete the Services:
|
|||
-->
|
||||
删除 Service:
|
||||
|
||||
```console
|
||||
$ kubectl delete svc -l app=source-ip-app
|
||||
```shell
|
||||
kubectl delete svc -l app=source-ip-app
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -632,8 +631,8 @@ Delete the Deployment, ReplicaSet and Pod:
|
|||
-->
|
||||
删除 Deployment、ReplicaSet 和 Pod:
|
||||
|
||||
```console
|
||||
$ kubectl delete deployment source-ip-app
|
||||
```shell
|
||||
kubectl delete deployment source-ip-app
|
||||
```
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
|
|
@ -38,10 +38,10 @@ PersistentVolumes 和 PersistentVolumeClaims 独立于 Pod 生命周期而存在
|
|||
|
||||
{{< warning >}}
|
||||
<!--
|
||||
This deployment is not suitable for production use cases, as it uses single instance WordPress and MySQL Pods. Consider using [WordPress Helm Chart](https://github.com/kubernetes/charts/tree/master/stable/wordpress) to deploy WordPress in production.
|
||||
This deployment is not suitable for production use cases, as it uses single instance WordPress and MySQL Pods. Consider using [WordPress Helm Chart](https://github.com/bitnami/charts/tree/master/bitnami/wordpress) to deploy WordPress in production.
|
||||
-->
|
||||
这种部署并不适合生产场景,因为它使用的是单实例 WordPress 和 MySQL Pod。
|
||||
在生产场景中,请考虑使用 [WordPress Helm Chart](https://github.com/kubernetes/charts/tree/master/stable/wordpress)
|
||||
在生产场景中,请考虑使用 [WordPress Helm Chart](https://github.com/bitnami/charts/tree/master/bitnami/wordpress)
|
||||
部署 WordPress。
|
||||
{{< /warning >}}
|
||||
|
||||
|
|