Merge pull request #42935 from windsonsea/0823api
Use relative paths for blogs released in 1.5 years
commit 8ae5aaeb40

@@ -42,7 +42,7 @@ controller container.
 While this is not strictly true, to understand what was done here, it's good to understand how
 Linux containers (and underlying mechanisms such as kernel namespaces) work.
-You can read about cgroups in the Kubernetes glossary: [`cgroup`](https://kubernetes.io/docs/reference/glossary/?fundamental=true#term-cgroup) and learn more about how cgroups interact with namespaces in the NGINX project article
+You can read about cgroups in the Kubernetes glossary: [`cgroup`](/docs/reference/glossary/?fundamental=true#term-cgroup) and learn more about how cgroups interact with namespaces in the NGINX project article
 [What Are Namespaces and cgroups, and How Do They Work?](https://www.nginx.com/blog/what-are-namespaces-cgroups-how-do-they-work/).
 (As you read that, bear in mind that Linux kernel namespaces are a different thing from
 [Kubernetes namespaces](/docs/concepts/overview/working-with-objects/namespaces/)).

@@ -41,7 +41,7 @@ gateways and service meshes and guides are available to start exploring quickly.
 ### Getting started
 
 Gateway API is an official Kubernetes API like
-[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/).
+[Ingress](/docs/concepts/services-networking/ingress/).
 Gateway API represents a superset of Ingress functionality, enabling more
 advanced concepts. Similar to Ingress, there is no default implementation of
 Gateway API built into Kubernetes. Instead, there are many different

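To make the hunk above concrete: a minimal Gateway object looks roughly like the sketch below. The class name is illustrative — since Kubernetes ships no default implementation, a controller providing that GatewayClass must be installed separately.

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  # Which implementation handles this Gateway; the name is illustrative
  # and must match a GatewayClass provided by your chosen controller.
  gatewayClassName: example-class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```
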
@@ -47,7 +47,7 @@ API.
 Kubernetes 1.0 was released on 10 July 2015 without any mechanism to restrict the
 security context and sensitive options of workloads, other than an alpha-quality
 SecurityContextDeny admission plugin (then known as `scdeny`).
-The [SecurityContextDeny plugin](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#securitycontextdeny)
+The [SecurityContextDeny plugin](/docs/reference/access-authn-authz/admission-controllers/#securitycontextdeny)
 is still in Kubernetes today (as an alpha feature) and creates an admission controller that
 prevents the usage of some fields in the security context.

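For readers who want to try the plugin described above: admission plugins are enabled on the API server. A minimal sketch using a kubeadm `ClusterConfiguration` (the `extraArgs` mechanism is the documented kubeadm v1beta3 way to pass API server flags; enabling this alpha plugin is purely illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # Comma-separated list of admission plugins to enable
    # in addition to the defaults.
    enable-admission-plugins: SecurityContextDeny
```
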
@@ -169,7 +169,7 @@ JAMES LAVERACK: Not really. The cornerstone of a Kubernetes organization is the
 
 **CRAIG BOX: Let's talk about some of the new features in 1.24. We have been hearing for many releases now about the impending doom which is the removal of Dockershim. [It is gone in 1.24](https://github.com/kubernetes/enhancements/issues/2221). Do we worry?**
 
-JAMES LAVERACK: I don't think we worry. This is something that the community has been preparing for for a long time. [We've](https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/) [published](https://kubernetes.io/blog/2022/02/17/dockershim-faq/) a [lot](https://kubernetes.io/blog/2021/11/12/are-you-ready-for-dockershim-removal/) of [documentation](https://kubernetes.io/blog/2022/03/31/ready-for-dockershim-removal/) [about](https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) [how](https://kubernetes.io/blog/2022/05/03/dockershim-historical-context/) you need to approach this. The honest truth is that most users, most application developers in Kubernetes, will simply not notice a difference or have to worry about it.
+JAMES LAVERACK: I don't think we worry. This is something that the community has been preparing for for a long time. [We've](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/) [published](/blog/2022/02/17/dockershim-faq/) a [lot](/blog/2021/11/12/are-you-ready-for-dockershim-removal/) of [documentation](/blog/2022/03/31/ready-for-dockershim-removal/) [about](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) [how](/blog/2022/05/03/dockershim-historical-context/) you need to approach this. The honest truth is that most users, most application developers in Kubernetes, will simply not notice a difference or have to worry about it.
 
 It's only really platform teams that administer Kubernetes clusters and people in very specific circumstances that are using Docker directly, not through the Kubernetes API, that are going to experience any issue at all.

@@ -203,7 +203,7 @@ JAMES LAVERACK: This is really about encouraging the use of stable APIs. There w
 
 JAMES LAVERACK: That's correct. There's no breaking changes in beta APIs other than the ones we've documented this release. It's only new things.
 
-**CRAIG BOX: Now in this release, [the artifacts are signed](https://github.com/kubernetes/enhancements/issues/3031) using Cosign signatures, and there is [experimental support for verification of those signatures](https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/). What needed to happen to make that process possible?**
+**CRAIG BOX: Now in this release, [the artifacts are signed](https://github.com/kubernetes/enhancements/issues/3031) using Cosign signatures, and there is [experimental support for verification of those signatures](/docs/tasks/administer-cluster/verify-signed-artifacts/). What needed to happen to make that process possible?**
 
 JAMES LAVERACK: This was a huge process from the other half of SIG Release. SIG Release has the release team, but it also has the release engineering team that handles the mechanics of actually pushing releases out. They have spent, and one of my friends over there, Adolfo, has spent a lot of time trying to bring us in line with [SLSA](https://slsa.dev/) compliance. I believe we're [looking now at Level 3 compliance](https://github.com/kubernetes/enhancements/issues/3027).

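For readers who want to try the verification mentioned above, a sketch of what it looks like with the cosign CLI. The image tag is illustrative, the exact flags depend on your cosign version, and the command needs network access to the registry — see the linked verify-signed-artifacts guide for the authoritative invocation.

```shell
# Keyless verification of a Kubernetes release image (sketch).
COSIGN_EXPERIMENTAL=1 cosign verify registry.k8s.io/kube-apiserver:v1.24.0
```
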
@@ -251,7 +251,7 @@ With Kubernetes 1.24, we're enabling a beta feature that allows them to use gRPC
 
 **CRAIG BOX: Are there any other enhancements that are particularly notable or relevant perhaps to the work you've been doing?**
 
-JAMES LAVERACK: There's a really interesting one from SIG Network which is about [avoiding collisions in IP allocations to services](https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#avoiding-collisions-in-ip-allocation-to-services). In existing versions of Kubernetes, you can allocate a service to have a particular internal cluster IP, or you can leave it blank and it will generate its own IP.
+JAMES LAVERACK: There's a really interesting one from SIG Network which is about [avoiding collisions in IP allocations to services](/blog/2022/05/03/kubernetes-1-24-release-announcement/#avoiding-collisions-in-ip-allocation-to-services). In existing versions of Kubernetes, you can allocate a service to have a particular internal cluster IP, or you can leave it blank and it will generate its own IP.
 
 In Kubernetes 1.24, there's an opt-in feature, which allows you to specify a pool for dynamic IPs to be generated from. This means that you can statically allocate an IP to a service and know that IP can not be accidentally dynamically allocated. This is a problem I've actually had in my local Kubernetes cluster, where I use static IP addresses for a bunch of port forwarding rules. I've always worried that during server start-up, they're going to get dynamically allocated to one of the other services. Now, with 1.24, and this feature, I won't have to worry about it more.

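The opt-in behaviour James describes is the `ServiceIPStaticSubrange` feature gate: with it enabled, dynamic allocation prefers the upper band of the service CIDR, so statically chosen low addresses are much less likely to collide. A sketch of a Service pinned to a static cluster IP (names and address are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-forwarded-service
spec:
  selector:
    app: my-app
  # Statically chosen address; must fall inside the cluster's
  # configured --service-cluster-ip-range.
  clusterIP: 10.96.0.21
  ports:
    - port: 80
      targetPort: 8080
```
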
@@ -267,7 +267,7 @@ JAMES LAVERACK: That is a very deep question I don't think we have time for.
 
 JAMES LAVERACK: [LAUGHING]
 
-**CRAIG BOX: [The theme for Kubernetes 1.24 is Stargazer](https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#release-theme-and-logo). How did you pick that as the theme?**
+**CRAIG BOX: [The theme for Kubernetes 1.24 is Stargazer](/blog/2022/05/03/kubernetes-1-24-release-announcement/#release-theme-and-logo). How did you pick that as the theme?**
 
 JAMES LAVERACK: Every release lead gets to pick their theme, pretty much by themselves. When I started, I asked Rey, the previous release lead, how he picked his theme, because he picked the Next Frontier for Kubernetes 1.23. And he told me that he'd actually picked it before the release even started, which meant for the first couple of weeks and months of the release, I was really worried about it, because I hadn't picked one yet, and I wasn't sure what to pick.

@@ -18,7 +18,7 @@ In this SIG Storage spotlight, [Frederico Muñoz](https://twitter.com/fredericom
 
 **Frederico (FSM)**: Hello, thank you for the opportunity of learning more about SIG Storage. Could you tell us a bit about yourself, your role, and how you got involved in SIG Storage.
 
-**Xing Yang (XY)**: I am a Tech Lead at VMware, working on Cloud Native Storage. I am also a Co-Chair of SIG Storage. I started to get involved in K8s SIG Storage at the end of 2017, starting with contributing to the [VolumeSnapshot](https://kubernetes.io/docs/concepts/storage/volume-snapshots/) project. At that time, the VolumeSnapshot project was still in an experimental, pre-alpha stage. It needed contributors. So I volunteered to help. Then I worked with other community members to bring VolumeSnapshot to Alpha in K8s 1.12 release in 2018, Beta in K8s 1.17 in 2019, and eventually GA in 1.20 in 2020.
+**Xing Yang (XY)**: I am a Tech Lead at VMware, working on Cloud Native Storage. I am also a Co-Chair of SIG Storage. I started to get involved in K8s SIG Storage at the end of 2017, starting with contributing to the [VolumeSnapshot](/docs/concepts/storage/volume-snapshots/) project. At that time, the VolumeSnapshot project was still in an experimental, pre-alpha stage. It needed contributors. So I volunteered to help. Then I worked with other community members to bring VolumeSnapshot to Alpha in K8s 1.12 release in 2018, Beta in K8s 1.17 in 2019, and eventually GA in 1.20 in 2020.
 
 **FSM**: Reading the [SIG Storage charter](https://github.com/kubernetes/community/blob/master/sig-storage/charter.md) alone it’s clear that SIG Storage covers a lot of ground, could you describe how the SIG is organised?

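As a concrete illustration of the VolumeSnapshot API that Xing mentions, a minimal snapshot of an existing PVC looks like this (names are illustrative; a CSI driver with snapshot support plus the snapshot CRDs and controller must be installed):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  # Illustrative class name; must match an installed VolumeSnapshotClass.
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    # Take a snapshot of this existing PersistentVolumeClaim.
    persistentVolumeClaimName: my-pvc
```
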
@@ -34,7 +34,7 @@ We also have other regular meetings, i.e., CSI Implementation meeting, Object Bu
 
 **XY**: In Kubernetes, there are multiple components involved for a volume operation. For example, creating a Pod to use a PVC has multiple components involved. There are the Attach Detach Controller and the external-attacher working on attaching the PVC to the pod. There’s the Kubelet that works on mounting the PVC to the pod. Of course the CSI driver is involved as well. There could be race conditions sometimes when coordinating between multiple components.
 
-Another challenge is regarding core vs [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CRD), not really storage specific. CRD is a great way to extend Kubernetes capabilities while not adding too much code to the Kubernetes core itself. However, this also means there are many external components that are needed when running a Kubernetes cluster.
+Another challenge is regarding core vs [Custom Resource Definitions](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CRD), not really storage specific. CRD is a great way to extend Kubernetes capabilities while not adding too much code to the Kubernetes core itself. However, this also means there are many external components that are needed when running a Kubernetes cluster.
 
 From the SIG Storage side, one most notable example is Volume Snapshot. Volume Snapshot APIs are defined as CRDs. API definitions and controllers are out-of-tree. There is a common snapshot controller and a snapshot validation webhook that should be deployed on the control plane, similar to how kube-controller-manager is deployed. Although Volume Snapshot is a CRD, it is a core feature of SIG Storage. It is recommended for the K8s cluster distros to deploy Volume Snapshot CRDs, the snapshot controller, and the snapshot validation webhook, however, most of the time we don’t see distros deploy them. So this becomes a problem for the storage vendors: now it becomes their responsibility to deploy these non-driver specific common components. This could cause conflicts if a customer wants to use more than one storage system and deploy more than one CSI driver.

@@ -37,7 +37,7 @@ PodSecurityPolicy was initially [deprecated in v1.21](/blog/2021/04/06/podsecuri
 
 ### Support for cgroups v2 Graduates to Stable
 
-It has been more than two years since the Linux kernel cgroups v2 API was declared stable. With some distributions now defaulting to this API, Kubernetes must support it to continue operating on those distributions. cgroups v2 offers several improvements over cgroups v1, for more information see the [cgroups v2](https://kubernetes.io/docs/concepts/architecture/cgroups/) documentation. While cgroups v1 will continue to be supported, this enhancement puts us in a position to be ready for its eventual deprecation and replacement.
+It has been more than two years since the Linux kernel cgroups v2 API was declared stable. With some distributions now defaulting to this API, Kubernetes must support it to continue operating on those distributions. cgroups v2 offers several improvements over cgroups v1, for more information see the [cgroups v2](/docs/concepts/architecture/cgroups/) documentation. While cgroups v1 will continue to be supported, this enhancement puts us in a position to be ready for its eventual deprecation and replacement.

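A quick way to check which cgroup version a Linux node is actually running, as suggested by the Kubernetes cgroups documentation:

```shell
# Print the filesystem type backing the cgroup mount:
# "cgroup2fs" means the node uses cgroups v2;
# "tmpfs" indicates a cgroups v1 layout.
stat -fc %T /sys/fs/cgroup/
```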
|
||||
|
||||
### Improved Windows support
|
||||
|
@@ -53,11 +53,11 @@ It has been more than two years since the Linux kernel cgroups v2 API was declar
 
 ### Promoted SeccompDefault to Beta
 
-SeccompDefault promoted to beta, see the tutorial [Restrict a Container's Syscalls with seccomp](https://kubernetes.io/docs/tutorials/security/seccomp/#enable-the-use-of-runtimedefault-as-the-default-seccomp-profile-for-all-workloads) for more details.
+SeccompDefault promoted to beta, see the tutorial [Restrict a Container's Syscalls with seccomp](/docs/tutorials/security/seccomp/#enable-the-use-of-runtimedefault-as-the-default-seccomp-profile-for-all-workloads) for more details.
 
 ### Promoted endPort in Network Policy to Stable
 
-Promoted `endPort` in [Network Policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/#targeting-a-range-of-ports) to GA. Network Policy providers that support `endPort` field now can use it to specify a range of ports to apply a Network Policy. Previously, each Network Policy could only target a single port.
+Promoted `endPort` in [Network Policy](/docs/concepts/services-networking/network-policies/#targeting-a-range-of-ports) to GA. Network Policy providers that support `endPort` field now can use it to specify a range of ports to apply a Network Policy. Previously, each Network Policy could only target a single port.
 
 Please be aware that `endPort` field **must be supported** by the Network Policy provider. If your provider does not support `endPort`, and this field is specified in a Network Policy, the Network Policy will be created covering only the port field (single port).

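A sketch of a NetworkPolicy using `endPort` to target a whole port range (labels and CIDR are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-port-egress
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          # With endPort set, the rule covers 32000-32768 inclusive.
          port: 32000
          endPort: 32768
```
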
@@ -75,7 +75,7 @@ The [CSI Ephemeral Volume](https://github.com/kubernetes/enhancements/tree/maste
 
 ### Promoted CRD Validation Expression Language to Beta
 
-[CRD Validation Expression Language](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/2876-crd-validation-expression-language/README.md) is promoted to beta, which makes it possible to declare how custom resources are validated using the [Common Expression Language (CEL)](https://github.com/google/cel-spec). Please see the [validation rules](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules) guide.
+[CRD Validation Expression Language](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/2876-crd-validation-expression-language/README.md) is promoted to beta, which makes it possible to declare how custom resources are validated using the [Common Expression Language (CEL)](https://github.com/google/cel-spec). Please see the [validation rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules) guide.

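A sketch of a CEL validation rule embedded in a CRD schema; the field names are illustrative, while `x-kubernetes-validations` is the documented extension point:

```yaml
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
        # Reject objects where replicas falls below minReplicas.
        - rule: "self.minReplicas <= self.replicas"
          message: "replicas must be greater than or equal to minReplicas"
      properties:
        minReplicas:
          type: integer
        replicas:
          type: integer
```
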
 
 ### Promoted Server Side Unknown Field Validation to Beta

@@ -83,7 +83,7 @@ Promoted the `ServerSideFieldValidation` feature gate to beta (on by default). T
 
 ### Introduced KMS v2 API
 
-Introduce KMS v2alpha1 API to add performance, rotation, and observability improvements. Encrypt data at rest (ie Kubernetes `Secrets`) with DEK using AES-GCM instead of AES-CBC for kms data encryption. No user action is required. Reads with AES-GCM and AES-CBC will continue to be allowed. See the guide [Using a KMS provider for data encryption](https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/) for more information.
+Introduce KMS v2alpha1 API to add performance, rotation, and observability improvements. Encrypt data at rest (ie Kubernetes `Secrets`) with DEK using AES-GCM instead of AES-CBC for kms data encryption. No user action is required. Reads with AES-GCM and AES-CBC will continue to be allowed. See the guide [Using a KMS provider for data encryption](/docs/tasks/administer-cluster/kms-provider/) for more information.

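A sketch of an API server `EncryptionConfiguration` selecting the v2 KMS provider (plugin name and socket path are illustrative; the plugin itself runs as a separate gRPC server on the control plane host):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: my-kms-plugin                 # illustrative plugin name
          endpoint: unix:///var/run/kms-plugin.sock
      # Fall back to plaintext reads for not-yet-migrated data.
      - identity: {}
```
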
 
 ### Kube-proxy images are now based on distroless images

@@ -10,11 +10,11 @@ slug: pod-security-admission-stable
 The release of Kubernetes v1.25 marks a major milestone for Kubernetes out-of-the-box pod security
 controls: Pod Security admission (PSA) graduated to stable, and Pod Security Policy (PSP) has been
 removed.
-[PSP was deprecated in Kubernetes v1.21](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/),
+[PSP was deprecated in Kubernetes v1.21](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/),
 and no longer functions in Kubernetes v1.25 and later.
 
 The Pod Security admission controller replaces PodSecurityPolicy, making it easier to enforce predefined
-[Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/) by
+[Pod Security Standards](/docs/concepts/security/pod-security-standards/) by
 simply adding a label to a namespace. The Pod Security Standards are maintained by the K8s
 community, which means you automatically get updated security policies whenever new
 security-impacting Kubernetes features are introduced.

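The "label on a namespace" mentioned above looks like this in practice (namespace name is illustrative; the `pod-security.kubernetes.io/*` labels are the documented mechanism):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    # Reject pods that violate the Restricted standard...
    pod-security.kubernetes.io/enforce: restricted
    # ...as that standard is defined for this Kubernetes version.
    pod-security.kubernetes.io/enforce-version: v1.25
    # Additionally surface warnings for violations of the same standard.
    pod-security.kubernetes.io/warn: restricted
```
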
@@ -56,7 +56,7 @@ Warning: myjob-g342hj (and 6 other pods): host namespaces, allowPrivilegeEscalat
 ```
 
 Additionally, when you apply a non-privileged label to a namespace that has been
-[configured to be exempt](https://kubernetes.io/docs/concepts/security/pod-security-admission/#exemptions),
+[configured to be exempt](/docs/concepts/security/pod-security-admission/#exemptions),
 you will now get a warning alerting you to this fact:
 
 ```

@@ -65,7 +65,7 @@ Warning: namespace 'kube-system' is exempt from Pod Security, and the policy (en
 
 ### Changes to the Pod Security Standards
 
-The [Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/),
+The [Pod Security Standards](/docs/concepts/security/pod-security-standards/),
 which Pod Security admission enforces, have been updated with support for the new Pod OS
 field. In v1.25 and later, if you use the Restricted policy, the following Linux-specific restrictions will no
 longer be required if you explicitly set the pod's `.spec.os.name` field to `windows`:

@@ -76,14 +76,14 @@ longer be required if you explicitly set the pod's `.spec.os.name` field to `win
 
 In Kubernetes v1.23 and earlier, the kubelet didn't enforce the Pod OS field.
 If your cluster includes nodes running a v1.23 or older kubelet, you should explicitly
-[pin Restricted policies](https://kubernetes.io/docs/concepts/security/pod-security-admission/#pod-security-admission-labels-for-namespaces)
+[pin Restricted policies](/docs/concepts/security/pod-security-admission/#pod-security-admission-labels-for-namespaces)
 to a version prior to v1.25.
 
 ## Migrating from PodSecurityPolicy to the Pod Security admission controller
 
 For instructions to migrate from PodSecurityPolicy to the Pod Security admission controller, and
 for help choosing a migration strategy, refer to the
-[migration guide](https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/).
+[migration guide](/docs/tasks/configure-pod-container/migrate-from-psp/).
 We're also developing a tool called
 [pspmigrator](https://github.com/kubernetes-sigs/pspmigrator) to automate parts
 of the migration process.

@@ -13,7 +13,7 @@ CSI Inline Volumes are similar to other ephemeral volume types, such as `configM
 
 ## What's new in 1.25?
 
-There are a couple of new bug fixes related to this feature in 1.25, and the [CSIInlineVolume feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) has been locked to `True` with the graduation to GA. There are no new API changes, so users of this feature during beta should not notice any significant changes aside from these bug fixes.
+There are a couple of new bug fixes related to this feature in 1.25, and the [CSIInlineVolume feature gate](/docs/reference/command-line-tools-reference/feature-gates/) has been locked to `True` with the graduation to GA. There are no new API changes, so users of this feature during beta should not notice any significant changes aside from these bug fixes.
 
 - [#89290 - CSI inline volumes should support fsGroup](https://github.com/kubernetes/kubernetes/issues/89290)
 - [#79980 - CSI volume reconstruction does not work for ephemeral volumes](https://github.com/kubernetes/kubernetes/issues/79980)

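A sketch of a pod using a CSI ephemeral (inline) volume; the driver name and volume attributes are illustrative and must match a CSI driver deployed in your cluster that supports the `Ephemeral` volume lifecycle mode:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      csi:
        # Illustrative driver name; the volume is created with the pod
        # and deleted when the pod terminates.
        driver: inline.storage.example.com
        volumeAttributes:
          size: 1Gi
```
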
@@ -95,8 +95,8 @@ Cluster administrators may choose to omit (or remove) `Ephemeral` from `volumeLi
 
 For more information on this feature, see:
 
-- [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes)
+- [Kubernetes documentation](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes)
 - [CSI documentation](https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html)
 - [KEP-596](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/596-csi-inline-volumes/README.md)
-- [Beta blog post for CSI Inline Volumes](https://kubernetes.io/blog/2020/01/21/csi-ephemeral-inline-volumes/)
+- [Beta blog post for CSI Inline Volumes](/blog/2020/01/21/csi-ephemeral-inline-volumes/)

@@ -75,12 +75,10 @@ the CSI provisioner receives the credentials from the Secret as part of the Node
 CSI volumes that require secrets for online expansion will have NodeExpandSecretRef
 field set. If not set, the NodeExpandVolume CSI RPC call will be made without a secret.
 
 ## Trying it out
 
 1. Enable the `CSINodeExpandSecret` feature gate (please refer to
-   [Feature Gates](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/)).
+   [Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/)).
 
 1. Create a Secret, and then a StorageClass that uses that Secret.

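The StorageClass from the steps above can reference the Secret through the CSI secret parameters; a sketch (provisioner and names are illustrative, the `csi.storage.k8s.io/node-expand-secret-*` parameter keys are the documented ones):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-expand-sc
provisioner: csi-driver.example.com        # illustrative CSI driver name
allowVolumeExpansion: true
parameters:
  # The Secret handed to the driver during the NodeExpandVolume call.
  csi.storage.k8s.io/node-expand-secret-name: expand-secret
  csi.storage.k8s.io/node-expand-secret-namespace: default
```
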
@@ -7,7 +7,7 @@ slug: crd-validation-rules-beta
 
 **Authors:** Joe Betz (Google), Cici Huang (Google), Kermit Alexander (Google)
 
-In Kubernetes 1.25, [Validation rules for CustomResourceDefinitions](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules) (CRDs) have graduated to Beta!
+In Kubernetes 1.25, [Validation rules for CustomResourceDefinitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules) (CRDs) have graduated to Beta!
 
 Validation rules make it possible to declare how custom resources are validated using the [Common Expression Language](https://github.com/google/cel-spec) (CEL). For example:

@@ -48,7 +48,7 @@ whenever this particular rule is not satisfied.
 
 For more details about the capabilities and limitations of Validation Rules using
 CEL, please refer to
-[validation rules](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules).
+[validation rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules).
 The [CEL specification](https://github.com/google/cel-spec) is also a good
 reference for information specifically related to the language.

@@ -651,5 +651,5 @@ For native types, the same behavior can be achieved using kube-openapi’s marke
 
 Usage of CEL within Kubernetes Validation Rules is so much more powerful than
 what has been shown in this article. For more information please check out
-[validation rules](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
+[validation rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
 in the Kubernetes documentation and [CRD Validation Rules Beta](https://kubernetes.io/blog/2022/09/23/crd-validation-rules-beta/) blog post.

@@ -77,7 +77,7 @@ section](#ci-cd-systems), though!)
 #### Controllers that use either a GET-modify-PUT sequence or a PATCH {#get-modify-put-patch-controllers}
 
 This kind of controller GETs an object (possibly from a
-[**watch**](https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes)),
+[**watch**](/docs/reference/using-api/api-concepts/#efficient-detection-of-changes)),
 modifies it, and then PUTs it back to write its changes. Sometimes it constructs
 a custom PATCH, but the semantics are the same. Most existing controllers
 (especially those in-tree) work like this.

@@ -189,7 +189,7 @@ set of annotations to control conflict resolution for CI/CD-related tooling.
 
 On the other side, non CI/CD-related controllers should ensure that they don't
 cause unnecessary conflicts when modifying objects. As of
-[the server-side apply documentation](https://kubernetes.io/docs/reference/using-api/server-side-apply/#using-server-side-apply-in-a-controller),
+[the server-side apply documentation](/docs/reference/using-api/server-side-apply/#using-server-side-apply-in-a-controller),
 it is strongly recommended for controllers to always perform force-applying. When
 following this recommendation, controllers should really make sure that only
 fields related to the controller are included in the applied object.

@@ -52,7 +52,7 @@ Setting the `--image-repository` flag.
 kubeadm init --image-repository=k8s.gcr.io
 ```
 
-Or in [kubeadm config](https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/) `ClusterConfiguration`:
+Or in [kubeadm config](/docs/reference/config-api/kubeadm-config.v1beta3/) `ClusterConfiguration`:
 
 ```yaml
 apiVersion: kubeadm.k8s.io/v1beta3

@@ -69,7 +69,7 @@ to be considered as an alpha level feature in CRI-O and Kubernetes and the
 security implications are still under consideration.
 
 Once containers and pods are running it is possible to create a checkpoint.
-[Checkpointing](https://kubernetes.io/docs/reference/node/kubelet-checkpoint-api/)
+[Checkpointing](/docs/reference/node/kubelet-checkpoint-api/)
 is currently only exposed on the **kubelet** level. To checkpoint a container,
 you can run `curl` on the node where that container is running, and trigger a
 checkpoint:

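A sketch of such a call against the kubelet API. The namespace, pod, and container names and certificate paths are illustrative; the endpoint shape follows the documented kubelet checkpoint API (`POST /checkpoint/{namespace}/{pod}/{container}`), and the request must be authorized, for example with kubelet client certificates:

```shell
# Trigger a checkpoint of container "counter" in pod "counters"
# in the "default" namespace; run this on the node itself.
curl -X POST "https://localhost:10250/checkpoint/default/counters/counter" \
  --insecure \
  --cert /path/to/client.crt \
  --key /path/to/client.key
```
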
@@ -124,7 +124,7 @@ Introduced in Kubernetes v1.24, [this
 feature](https://github.com/kubernetes/enhancements/issues/3031) constitutes a significant milestone
 in improving the security of the Kubernetes release process. All release artifacts are signed
 keyless using [cosign](https://github.com/sigstore/cosign/), and both binary artifacts and images
-[can be verified](https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/).
+[can be verified](/docs/tasks/administer-cluster/verify-signed-artifacts/).
 
 ### Support for Windows privileged containers graduates to stable

@@ -223,8 +223,8 @@ This release includes a total of eleven enhancements promoted to Stable:
 Kubernetes with this release.
 
 * [CRI `v1alpha2` API is removed](https://github.com/kubernetes/kubernetes/pull/110618)
-* [Removal of the `v1beta1` flow control API group](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#flowcontrol-resources-v126)
-* [Removal of the `v2beta2` HorizontalPodAutoscaler API](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#horizontalpodautoscaler-v126)
+* [Removal of the `v1beta1` flow control API group](/docs/reference/using-api/deprecation-guide/#flowcontrol-resources-v126)
+* [Removal of the `v2beta2` HorizontalPodAutoscaler API](/docs/reference/using-api/deprecation-guide/#horizontalpodautoscaler-v126)
 * [GlusterFS plugin removed from available in-tree drivers](https://github.com/kubernetes/enhancements/issues/3446)
 * [Removal of legacy command line arguments relating to logging](https://github.com/kubernetes/kubernetes/pull/112120)
 * [Removal of `kube-proxy` userspace modes](https://github.com/kubernetes/kubernetes/pull/112133)

@@ -118,7 +118,7 @@ Below is an overview of how the Kubernetes project is using kubelet credential p
 
 {{< figure src="kubelet-credential-providers-enabling.png" caption="Figure 4: Kubelet credential provider configuration used for Kubernetes e2e testing" >}}
 
-For more configuration details, see [Kubelet Credential Providers](https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/).
+For more configuration details, see [Kubelet Credential Providers](/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/).
 
 ## Getting Involved

@@ -123,6 +123,6 @@ and scheduler. You're more than welcome to test it out and tell us (SIG Scheduli

 ## Additional resources

-- [Pod Scheduling Readiness](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/)
+- [Pod Scheduling Readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/)
   in the Kubernetes documentation
 - [Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/3521-pod-scheduling-readiness/README.md)
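For readers of the Pod Scheduling Readiness excerpt above, a Pod with a scheduling gate might be declared as in this sketch; the gate name is a made-up placeholder, not one from the linked post.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
    - name: example.com/wait-for-quota   # scheduler skips this Pod until the gate is removed
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9
```

Removing the entry from `spec.schedulingGates` (for example, via a controller) is what marks the Pod as ready for scheduling.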
@@ -17,7 +17,7 @@ give application owners greater flexibility in managing disruptions.

 ## What problems does this solve?

-API-initiated eviction of pods respects PodDisruptionBudgets (PDBs). This means that a requested [voluntary disruption](https://kubernetes.io/docs/concepts/scheduling-eviction/#pod-disruption)
+API-initiated eviction of pods respects PodDisruptionBudgets (PDBs). This means that a requested [voluntary disruption](/docs/concepts/scheduling-eviction/#pod-disruption)
 via an eviction to a Pod, should not disrupt a guarded application and `.status.currentHealthy` of a PDB should not fall
 below `.status.desiredHealthy`. Running pods that are [Unhealthy](/docs/tasks/run-application/configure-pdb/#healthiness-of-a-pod)
 do not count towards the PDB status, but eviction of these is only possible in case the application
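The PodDisruptionBudget excerpt above can be made concrete with a sketch like the following; the names and counts are illustrative, and the `unhealthyPodEvictionPolicy` field is the kind of knob that post discusses for evicting already-unhealthy pods.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb                  # illustrative name
spec:
  minAvailable: 2                  # an eviction is denied if it would drop healthy pods below this
  selector:
    matchLabels:
      app: myapp
  unhealthyPodEvictionPolicy: AlwaysAllow   # permit evicting pods that are already unhealthy
```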
@@ -92,7 +92,7 @@ we can see that there are a relevant number of metrics, logs, and tracing
 [KEPs](https://www.k8s.dev/resources/keps/) in the pipeline. Would you like to
 point out important things for last release (maybe alpha & stable milestone candidates?)

-**Han (HK)**: We can now generate [documentation](https://kubernetes.io/docs/reference/instrumentation/metrics/)
+**Han (HK)**: We can now generate [documentation](/docs/reference/instrumentation/metrics/)
 for every single metric in the main Kubernetes code base! We have a pretty fancy
 static analysis pipeline that enables this functionality. We’ve also added feature
 metrics so that you can look at your metrics to determine which features are enabled
@@ -123,8 +123,8 @@ repository](https://github.com/aws/aws-eks-best-practices/tree/master/policies/k
 that will block them from being pulled. You can use these third-party policies with any Kubernetes
 cluster.

-**Option 5**: As a **LAST** possible option, you can use a [Mutating
-Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
+**Option 5**: As a **LAST** possible option, you can use a
+[Mutating Admission Webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
 to change the image address dynamically. This should only be
 considered a stopgap till your manifests have been updated. You can
 find a (third party) Mutating Webhook and Kyverno policy in
@@ -37,7 +37,7 @@ the information about this change and what to do if it impacts you.
 ## The Kubernetes API Removal and Deprecation process

 The Kubernetes project has a well-documented
-[deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/)
+[deprecation policy](/docs/reference/using-api/deprecation-policy/)
 for features. This policy states that stable APIs may only be deprecated when
 a newer, stable version of that same API is available and that APIs have a
 minimum lifetime for each stability level. A deprecated API has been marked
@@ -214,7 +214,7 @@ that argument, which has been deprecated since the v1.24 release.
 ## Looking ahead

 The official list of
-[API removals](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-29)
+[API removals](/docs/reference/using-api/deprecation-guide/#v1-29)
 planned for Kubernetes v1.29 includes:

 - The `flowcontrol.apiserver.k8s.io/v1beta2` API version of FlowSchema and
@@ -177,7 +177,7 @@ The complete details of the Kubernetes v1.27 release are available in our [relea

 ## Availability

-Kubernetes v1.27 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.27.0). To get started with Kubernetes, you can run local Kubernetes clusters using [minikube](https://minikube.sigs.k8s.io/docs/), [kind](https://kind.sigs.k8s.io/), etc. You can also easily install v1.27 using [kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
+Kubernetes v1.27 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.27.0). To get started with Kubernetes, you can run local Kubernetes clusters using [minikube](https://minikube.sigs.k8s.io/docs/), [kind](https://kind.sigs.k8s.io/), etc. You can also easily install v1.27 using [kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).

 ## Release team
@@ -55,7 +55,7 @@ the issue while being more transparent and less disruptive to end-users.

 ## What's next?

-In preparation to [graduate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-stages) the feed
+In preparation to [graduate](/docs/reference/command-line-tools-reference/feature-gates/#feature-stages) the feed
 to stable i.e. `General Availability` stage, SIG Security is still gathering feedback from end users who are using the updated beta feed.

 To help us continue to improve the feed in future Kubernetes Releases please share feedback by adding a comment to
@@ -184,7 +184,7 @@ server requests.

 We conducted a test that created 12k secrets and measured the time taken for the API server to
 encrypt the resources. The metric used was
-[`apiserver_storage_transformation_duration_seconds`](https://kubernetes.io/docs/reference/instrumentation/metrics/).
+[`apiserver_storage_transformation_duration_seconds`](/docs/reference/instrumentation/metrics/).
 For KMS v1, the test was run on a managed Kubernetes v1.25 cluster with 2 nodes. There was no
 additional load on the cluster during the test. For KMS v2, the test was run in the Kubernetes CI
 environment with the following [cluster
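As background for the KMS benchmark excerpt above, an `EncryptionConfiguration` selecting KMS v2 might look like the sketch below; the plugin name and socket path are assumptions for illustration, not details from the benchmark setup.

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2                        # selects the KMS v2 protocol
          name: my-kms-plugin                   # hypothetical plugin name
          endpoint: unix:///var/run/kms.sock    # hypothetical plugin socket
          timeout: 3s
      - identity: {}                            # fallback for reading data written unencrypted
```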
@@ -88,7 +88,7 @@ read [non-graceful node shutdown](/docs/concepts/architecture/nodes/#non-gracefu
 ## Improvements to CustomResourceDefinition validation rules

 The [Common Expression Language (CEL)](https://github.com/google/cel-go) can be used to validate
-[custom resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/). The primary goal is to allow the majority of the validation use cases that might once have needed you, as a CustomResourceDefinition (CRD) author, to design and implement a webhook. Instead, and as a beta feature, you can add _validation expressions_ directly into the schema of a CRD.
+[custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/). The primary goal is to allow the majority of the validation use cases that might once have needed you, as a CustomResourceDefinition (CRD) author, to design and implement a webhook. Instead, and as a beta feature, you can add _validation expressions_ directly into the schema of a CRD.

 CRDs need direct support for non-trivial validation. While admission webhooks do support CRDs validation, they significantly complicate the development and operability of CRDs.
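To illustrate the schema-embedded validation expressions mentioned in the CRD excerpt above, a rule might be attached like this; the field names are invented for the example.

```yaml
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
        - rule: "self.minReplicas <= self.maxReplicas"   # CEL, evaluated by the API server on writes
          message: "minReplicas must not exceed maxReplicas"
      properties:
        minReplicas:
          type: integer
        maxReplicas:
          type: integer
```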
@@ -263,7 +263,7 @@ The complete details of the Kubernetes v1.28 release are available in our [relea

 ## Availability

-Kubernetes v1.28 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.28.0). To get started with Kubernetes, you can run local Kubernetes clusters using [minikube](https://minikube.sigs.k8s.io/docs/), [kind](https://kind.sigs.k8s.io/), etc. You can also easily install v1.28 using [kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
+Kubernetes v1.28 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.28.0). To get started with Kubernetes, you can run local Kubernetes clusters using [minikube](https://minikube.sigs.k8s.io/docs/), [kind](https://kind.sigs.k8s.io/), etc. You can also easily install v1.28 using [kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).

 ## Release Team
@@ -79,7 +79,7 @@ that are shutdown/failed and automatically failover workloads to another node.
 ## How can I learn more?

 Check out additional documentation on this feature
-[here](https://kubernetes.io/docs/concepts/architecture/nodes/#non-graceful-node-shutdown).
+[here](/docs/concepts/architecture/nodes/#non-graceful-node-shutdown).

 ## How to get involved?
@@ -5,8 +5,7 @@ date: 2023-08-23
 slug: kubelet-podresources-api-GA
 ---

-**Author:**
-Francesco Romani (Red Hat)
+**Author:** Francesco Romani (Red Hat)

 The podresources API is an API served by the kubelet locally on the node, which exposes the compute resources exclusively
 allocated to containers. With the release of Kubernetes 1.28, that API is now Generally Available.
@@ -14,10 +13,10 @@ allocated to containers. With the release of Kubernetes 1.28, that API is now Ge
 ## What problem does it solve?

 The kubelet can allocate exclusive resources to containers, like
-[CPUs, granting exclusive access to full cores](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/)
-or [memory, either regions or hugepages](https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/).
+[CPUs, granting exclusive access to full cores](/docs/tasks/administer-cluster/cpu-management-policies/)
+or [memory, either regions or hugepages](/docs/tasks/administer-cluster/memory-manager/).
 Workloads which require high performance, or low latency (or both) leverage these features.
-The kubelet also can assign [devices to containers](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).
+The kubelet also can assign [devices to containers](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).
 Collectively, these features which enable exclusive assignments are known as "resource managers".

 Without an API like podresources, the only possible option to learn about resource assignment was to read the state files the
@@ -28,7 +27,7 @@ moving to podresources API or to other supported APIs.

 ## Overview of the API

-The podresources API was [initially proposed to enable device monitoring](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
+The podresources API was [initially proposed to enable device monitoring](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
 In order to enable monitoring agents, a key prerequisite is to enable introspection of device assignment, which is performed by the kubelet.
 Serving this purpose was the initial goal of the API. The first iteration of the API only had a single function implemented, `List`,
 to return information about the assignment of devices to containers.
@@ -153,7 +153,7 @@ Swap configuration on a node is exposed to a cluster admin via the
 As a cluster administrator, you can specify the node's behaviour in the
 presence of swap memory by setting `memorySwap.swapBehavior`.

-The kubelet [employs the CRI](https://kubernetes.io/docs/concepts/architecture/cri/)
+The kubelet [employs the CRI](/docs/concepts/architecture/cri/)
 (container runtime interface) API to direct the CRI to
 configure specific cgroup v2 parameters (such as `memory.swap.max`) in a manner that will
 enable the desired swap configuration for a container. The CRI is then responsible to
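For the swap excerpt above, the `memorySwap.swapBehavior` setting lives in the kubelet configuration; a minimal sketch, assuming the `NodeSwap` feature gate is enabled on the node, could be:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false             # allow the kubelet to start on a node with swap enabled
memorySwap:
  swapBehavior: LimitedSwap   # the kubelet translates this into cgroup v2 memory.swap.max via the CRI
```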