Merge pull request #45624 from chanieljdan/merged-main-dev-1.30

Merge main branch into dev-1.30
pull/45649/head
Kubernetes Prow Robot 2024-03-21 12:21:39 -07:00 committed by GitHub
commit 86bff2d05b
30 changed files with 4107 additions and 630 deletions

View File

@ -239,6 +239,31 @@ body.td-404 main .error-details {
}
}
.search-item.nav-item {
input, input::placeholder {
color: black;
}
}
.flip-nav .search-item {
.td-search-input, .search-bar {
background-color: $medium-grey;
}
input, input::placeholder, .search-icon {
color: white;
}
textarea:focus, input:focus {
color: white;
}
}
@media only screen and (max-width: 1500px) {
header nav .search-item {
display: none;
}
}
/* FOOTER */
footer {
background-color: #303030;

View File

@ -0,0 +1,210 @@
---
layout: blog
title: 'Using Go workspaces in Kubernetes'
date: 2024-03-19T08:30:00-08:00
slug: go-workspaces-in-kubernetes
canonicalUrl: https://www.kubernetes.dev/blog/2024/03/19/go-workspaces-in-kubernetes/
---
**Author:** Tim Hockin (Google)
The [Go programming language](https://go.dev/) has played a huge role in the
success of Kubernetes. As Kubernetes has grown, matured, and pushed the bounds
of what "regular" projects do, the Go project team has also grown and evolved
the language and tools. In recent releases, Go introduced a feature called
"workspaces" which was aimed at making projects like Kubernetes easier to
manage.
We've just completed a major effort to adopt workspaces in Kubernetes, and the
results are great. Our codebase is simpler and less error-prone, and we're no
longer off on our own technology island.
## GOPATH and Go modules
Kubernetes is one of the most visible open source projects written in Go. The
earliest versions of Kubernetes, dating back to 2014, were built with Go 1.3.
Today, 10 years later, Go is up to version 1.22 — and let's just say that a
_whole lot_ has changed.
In 2014, Go development was entirely based on
[`GOPATH`](https://go.dev/wiki/GOPATH). As a Go project, Kubernetes lived by the
rules of `GOPATH`. In the buildup to Kubernetes 1.4 (mid 2016), we introduced a
directory tree called `staging`. This allowed us to pretend to be multiple
projects, but still exist within one git repository (which had advantages for
development velocity). The magic of `GOPATH` allowed this to work.
Kubernetes depends on several code-generation tools which have to find, read,
and write Go code packages. Unsurprisingly, those tools grew to rely on
`GOPATH`. This all worked pretty well until Go introduced modules in Go 1.11
(mid 2018).
Modules were an answer to many issues around `GOPATH`. They gave more control to
projects on how to track and manage dependencies, and were overall a great step
forward. Kubernetes adopted them. However, modules had one major drawback —
most Go tools could not work on multiple modules at once. This was a problem
for our code-generation tools and scripts.
Thankfully, Go offered a way to temporarily disable modules (`GO111MODULE` to
the rescue). We could get the dependency tracking benefits of modules, but the
flexibility of `GOPATH` for our tools. We even wrote helper tools to create fake
`GOPATH` trees and played tricks with symlinks in our vendor directory (which
holds a snapshot of our external dependencies), and we made it all work.
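For readers who never saw that setup, here is a rough sketch of the kind of trick involved. The paths, symlink target, and tool name are illustrative, not an exact snapshot of the repository:

```shell
# Hypothetical illustration of the pre-workspaces arrangement.
# "Pseudo-repos" lived under staging/, and the vendor tree pointed back at them:
ls -l vendor/k8s.io/api
# vendor/k8s.io/api -> ../../staging/src/k8s.io/api   (a symlink, not a vendored copy)

# Tools that only understood GOPATH were run with modules switched off:
GO111MODULE=off some-code-generator ./...   # some-code-generator is a placeholder
```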
And for the last 5 years it _has_ worked pretty well. That is, it worked well
unless you looked too closely at what was happening. Woe be upon you if you
had the misfortune to work on one of the code-generation tools, or the build
system, or the ever-expanding suite of bespoke shell scripts we use to glue
everything together.
## The problems
Like any large software project, we Kubernetes developers have all learned to
deal with a certain amount of constant low-grade pain. Our custom `staging`
mechanism let us bend the rules of Go; it was a little clunky, but when it
worked (which was most of the time) it worked pretty well. When it failed, the
errors were inscrutable and un-Googleable — nobody else was doing the silly
things we were doing. Usually the fix was to re-run one or more of the `update-*`
shell scripts in our aptly named `hack` directory.
As time went on we drifted farther and farther from "regular" Go projects. At
the same time, Kubernetes got more and more popular. For many people,
Kubernetes was their first experience with Go, and it wasn't always a good
experience.
Our eccentricities also impacted people who consumed some of our code, such as
our client library and the code-generation tools (which turned out to be useful
in the growing ecosystem of custom resources). The tools only worked if you
stored your code in a particular `GOPATH`-compatible directory structure, even
though `GOPATH` had been replaced by modules more than four years prior.
This state persisted because of the confluence of three factors:
1. Most of the time it only hurt a little (punctuated with short moments of
more acute pain).
1. Kubernetes was still growing in popularity - we all had other, more urgent
things to work on.
1. The fix was not obvious, and whatever we came up with was going to be both
hard and tedious.
As a Kubernetes maintainer and long-timer, my fingerprints were all over the
build system, the code-generation tools, and the `hack` scripts. While the pain
of our mess may have been low _on average_, I was one of the people who felt it
regularly.
## Enter workspaces
Along the way, the Go language team saw what we (and others) were doing and
didn't love it. They designed a new way of stitching multiple modules together
into a new _workspace_ concept. Once enrolled in a workspace, Go tools had
enough information to work in any directory structure and across modules,
without `GOPATH` or symlinks or other dirty tricks.
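For anyone unfamiliar with the feature, a `go.work` file simply enumerates the modules that make up the workspace. A minimal sketch (the module paths are illustrative, not the exact Kubernetes layout) looks like this:

```
go 1.22

use (
    .
    ./staging/src/k8s.io/api
    ./staging/src/k8s.io/apimachinery
)
```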
When I first saw this proposal I knew that this was the way out. This was how
to break the logjam. If workspaces was the technical solution, then I would
put in the work to make it happen.
## The work
Adopting workspaces was deceptively easy. I very quickly had the codebase
compiling and running tests with workspaces enabled. I set out to purge the
repository of anything `GOPATH` related. That's when I hit the first real bump -
the code-generation tools.
We had about a dozen tools, totalling several thousand lines of code. All of
them were built using an internal framework called
[gengo](https://github.com/kubernetes/gengo), which was built on Go's own
parsing libraries. There were two main problems:
1. Those parsing libraries didn't understand modules or workspaces.
1. `GOPATH` allowed us to pretend that Go _package paths_ and directories on
disk were interchangeable in trivial ways. They are not.
Switching to a
[modules- and workspaces-aware parsing](https://pkg.go.dev/golang.org/x/tools/go/packages)
library was the first step. Then I had to make a long series of changes to
each of the code-generation tools. Critically, I had to find a way to do it
that was possible for some other person to review! I knew that I needed
reviewers who could cover the breadth of changes and reviewers who could go
into great depth on specific topics like gengo and Go's module semantics.
Looking at the history for the areas I was touching, I asked Joe Betz and Alex
Zielenski (SIG API Machinery) to go deep on gengo and code-generation, Jordan
Liggitt (SIG Architecture and all-around wizard) to cover Go modules and
vendoring and the `hack` scripts, and Antonio Ojea (wearing his SIG Testing
hat) to make sure the whole thing made sense. We agreed that a series of small
commits would be easiest to review, even if the codebase might not actually
work at each commit.
Sadly, these were not mechanical changes. I had to dig into each tool to
figure out where they were processing disk paths versus where they were
processing package names, and where those were being conflated. I made
extensive use of the [delve](https://github.com/go-delve/delve) debugger, which
I just can't say enough good things about.
One unfortunate result of this work was that I had to break compatibility. The
gengo library simply did not have enough information to process packages
outside of GOPATH. After discussion with gengo and Kubernetes maintainers, we
agreed to make [gengo/v2](https://github.com/kubernetes/gengo/tree/master/v2).
I also used this as an opportunity to clean up some of the gengo APIs and the
tools' CLIs to be more understandable and not conflate packages and
directories. For example, you can't just string-join directory names and
assume the result is a valid package name.
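As a rough sketch of what "modules- and workspaces-aware" means in practice, the `golang.org/x/tools/go/packages` loader resolves import paths and on-disk files for you, instead of letting a tool guess one from the other. This is an illustrative snippet, not code from gengo:

```go
package main

import (
	"fmt"

	"golang.org/x/tools/go/packages"
)

func main() {
	// Ask the loader for package names and their files; it understands
	// modules and workspaces, so no GOPATH layout is assumed.
	cfg := &packages.Config{Mode: packages.NeedName | packages.NeedFiles}
	pkgs, err := packages.Load(cfg, "./...")
	if err != nil {
		panic(err)
	}
	for _, p := range pkgs {
		// p.PkgPath is the import path; p.GoFiles are the files on disk.
		// The two are related but not mechanically derivable from each other.
		fmt.Println(p.PkgPath, p.GoFiles)
	}
}
```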
Once I had the code-generation tools converted, I shifted attention to the
dozens of scripts in the `hack` directory. One by one I had to run them, debug,
and fix failures. Some of them needed minor changes and some needed to be
rewritten.
Along the way we hit some cases that Go did not support, like workspace
vendoring. Kubernetes depends on vendoring to ensure that our dependencies are
always available, even if their source code is removed from the internet (it
has happened more than once!). After discussing with the Go team, and looking
at possible workarounds, they decided the right path was to
[implement workspace vendoring](https://github.com/golang/go/issues/60056).
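With that support in place (Go 1.22 added a `go work vendor` subcommand), vendoring a workspace becomes a single step; shown here only as a sketch of the workflow:

```shell
# Run from the directory containing go.work; writes a vendor/ tree for the whole workspace.
go work vendor
```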
The eventual Pull Request contained over 200 individual commits.
## Results
Now that this work has been merged, what does this mean for Kubernetes users?
Pretty much nothing. No features were added or changed. This work was not
about fixing bugs (and hopefully none were introduced).
This work was mainly for the benefit of the Kubernetes project, to help and
simplify the lives of the core maintainers. In fact, it would not be a lie to
say that it was rather self-serving - my own life is a little bit better now.
This effort, while unusually large, is just a tiny fraction of the overall
maintenance work that needs to be done. Like any large project, we have lots of
"technical debt" — tools that made point-in-time assumptions and need
revisiting, internal APIs whose organization doesn't make sense, code which
doesn't follow conventions which didn't exist at the time, and tests which
aren't as rigorous as they could be, just to throw out a few examples. This
work is often called "grungy" or "dirty", but in reality it's just an
indication that the project has grown and evolved. I love this stuff, but
there's far more than I can ever tackle on my own, which makes it an
interesting way for people to get involved. As our unofficial motto goes:
"chop wood and carry water".
Kubernetes used to be a case-study of how _not_ to do large-scale Go
development, but now our codebase is simpler (and in some cases faster!) and
more consistent. Things that previously seemed like they _should_ work, but
didn't, now behave as expected.
Our project is now a little more "regular". Not completely so, but we're
getting closer.
## Thanks
This effort would not have been possible without tons of support.
First, thanks to the Go team for hearing our pain, taking feedback, and solving
the problems for us.
Special mega-thanks goes to Michael Matloob, on the Go team at Google, who
designed and implemented workspaces. He guided me every step of the way, and
was very generous with his time, answering all my questions, no matter how
dumb.
Writing code is just half of the work, so another special thanks to my
reviewers: Jordan Liggitt, Joe Betz, Alexander Zielenski, and Antonio Ojea.
These folks brought a wealth of expertise and attention to detail, and made
this work smarter and safer.

View File

@ -189,6 +189,17 @@ To submit a blog post follow these directions:
- **Tutorials** that only apply to specific releases or versions and not all future versions
- References to pre-GA APIs or features
### Mirroring from the Kubernetes Contributor Blog
To mirror a blog post from the [Kubernetes contributor blog](https://www.kubernetes.dev/blog/), follow these guidelines:
- Keep the blog content the same. If there are changes, they should be made to the original article first, and then to the mirrored article.
- The mirrored blog should have a `canonicalUrl`, that is, essentially the URL of the original blog after it has been published (see the illustrative front matter below).
- [Kubernetes contributor blogs](https://kubernetes.dev/blog) have their authors mentioned in the YAML header, while the Kubernetes blog posts mention authors in the blog content itself. This should be changed when mirroring the content.
- Publication dates stay the same as the original blog.
All of the other guidelines and expectations detailed above apply as well.
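As a hypothetical illustration of those guidelines, the front matter of a mirrored post might look like the following; the title, date, slug, and URL are placeholders:

```yaml
---
layout: blog
title: "Example mirrored post"
date: 2024-03-19
slug: example-mirrored-post
canonicalUrl: https://www.kubernetes.dev/blog/2024/03/19/example-mirrored-post/
---
```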
## Submit a case study
Case studies highlight how organizations are using Kubernetes to solve real-world problems. The

View File

@ -19,9 +19,6 @@ see [Reviewing changes](/docs/contribute/review/).
Each day in a week-long shift as PR Wrangler:
- Triage and tag incoming issues daily. See
[Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues)
for guidelines on how SIG Docs uses metadata.
- Review [open pull requests](https://github.com/kubernetes/website/pulls) for quality
and adherence to the [Style](/docs/contribute/style/style-guide/) and
[Content](/docs/contribute/style/content-guide/) guides.
@ -44,6 +41,11 @@ Each day in a week-long shift as PR Wrangler:
issues as [good first issues](https://kubernetes.dev/docs/guide/help-wanted/#good-first-issue).
- Using style fixups as good first issues is a good way to ensure a supply of easier tasks
to help onboard new contributors.
- Also check for pull requests against the [reference docs generator](https://github.com/kubernetes-sigs/reference-docs) code, and review those (or bring in help).
- Support the [issue wrangler](/docs/contribute/participate/issue-wrangler/) to
triage and tag incoming issues daily.
See [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues)
for guidelines on how SIG Docs uses metadata.
{{< note >}}
PR wrangler duties do not apply to localization PRs (non-English PRs).

View File

@ -9,6 +9,10 @@ This section contains the following reference topics about nodes:
* the kubelet's [checkpoint API](/docs/reference/node/kubelet-checkpoint-api/)
* a list of [Articles on dockershim Removal and on Using CRI-compatible Runtimes](/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/)
* [Kubelet Device Manager API Versions](/docs/reference/node/device-plugin-api-versions)
* [Node Labels Populated By The Kubelet](/docs/reference/node/node-labels)
* [Node `.status` information](/docs/reference/node/node-status/)
You can also read node reference details from elsewhere in the

View File

@ -78,7 +78,8 @@ The **autoscaling/v2beta2** API version of HorizontalPodAutoscaler is no longer
* Migrate manifests and API clients to use the **autoscaling/v2** API version, available since v1.23.
* All existing persisted objects are accessible via the new API
* Notable changes:
* `targetAverageUtilization` is replaced with `target.averageUtilization` and `target.type: Utilization`. See [Autoscaling on multiple metrics and custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics).
### v1.25
The **v1.25** release stopped serving the following deprecated API versions:
@ -130,6 +131,8 @@ The **autoscaling/v2beta1** API version of HorizontalPodAutoscaler is no longer
* Migrate manifests and API clients to use the **autoscaling/v2** API version, available since v1.23.
* All existing persisted objects are accessible via the new API
* Notable changes:
* `targetAverageUtilization` is replaced with `target.averageUtilization` and `target.type: Utilization`. See [Autoscaling on multiple metrics and custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics).
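For readers updating manifests, here is a hedged sketch of what the replacement looks like in the **autoscaling/v2** API; the metric and value are illustrative:

```yaml
metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 60   # was targetAverageUtilization in v2beta1/v2beta2
```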
#### PodDisruptionBudget {#poddisruptionbudget-v125}

View File

@ -84,7 +84,7 @@ Next install the `amqp-tools` so you can work with message queues.
The next commands show what you need to run inside the interactive shell in that Pod:
```shell
apt-get update && apt-get install -y curl ca-certificates amqp-tools python dnsutils
apt-get update && apt-get install -y curl ca-certificates amqp-tools python3 dnsutils
```
Later, you will make a container image that includes these packages.

View File

@ -59,6 +59,12 @@ You could also download the following files directly:
- [`rediswq.py`](/examples/application/job/redis/rediswq.py)
- [`worker.py`](/examples/application/job/redis/worker.py)
To start a single instance of Redis, you need to create the redis pod and redis service:
```shell
kubectl apply -f https://k8s.io/examples/application/job/redis/redis-pod.yaml
kubectl apply -f https://k8s.io/examples/application/job/redis/redis-service.yaml
```
## Filling the queue with tasks
@ -171,7 +177,7 @@ Since the workers themselves detect when the workqueue is empty, and the Job con
know about the workqueue, it relies on the workers to signal when they are done working.
The workers signal that the queue is empty by exiting with success. So, as soon as **any** worker
exits with success, the controller knows the work is done, and that the Pods will exit soon.
So, you need to set the completion count of the Job to 1. The job controller will wait for
So, you need to leave the completion count of the Job unset. The job controller will wait for
the other pods to complete too.
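As a minimal sketch (the name and image are placeholders, not the manifest used in this task), a work-queue style Job sets only `parallelism` and leaves `completions` unset:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-example        # hypothetical name
spec:
  parallelism: 2
  # completions is deliberately left unset: the Job is complete once any
  # worker exits successfully and the remaining Pods terminate.
  template:
    spec:
      containers:
      - name: worker
        image: example.invalid/worker:latest   # placeholder image
      restartPolicy: OnFailure
```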
## Running the Job

View File

@ -227,7 +227,6 @@ dengan nama untuk melakukan pemeriksaan _liveness_ HTTP atau TCP:
ports:
- name: liveness-port
containerPort: 8080
hostPort: 8080
livenessProbe:
httpGet:
@ -251,7 +250,6 @@ Sehingga, contoh sebelumnya menjadi:
ports:
- name: liveness-port
containerPort: 8080
hostPort: 8080
livenessProbe:
httpGet:

View File

@ -0,0 +1,105 @@
---
title: Utilize o Cilium para NetworkPolicy
content_type: task
weight: 30
---
<!-- overview -->
Essa página mostra como utilizar o Cilium para NetworkPolicy.
Para saber mais sobre o Cilium, leia o artigo [Introdução ao Cilium (em inglês)](https://docs.cilium.io/en/stable/overview/intro).
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
<!-- steps -->
## Fazendo o Deploy do Cilium no Minikube para Testes Básicos
Para familiarizar-se com o Cilium você poderá seguir o guia [Guia de Primeiros Passos do Cilium no Kubernetes (em inglês)](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/) e realizar uma instalação básica do Cilium através de um DaemonSet no minikube.
Inicie o minikube, a versão mínima exigida é >= v1.5.2, com os seguintes argumentos:
```shell
minikube version
```
```
minikube version: v1.5.2
```
```shell
minikube start --network-plugin=cni
```
Para o minikube, você poderá instalar o Cilium utilizando a ferramenta de linha de comando (CLI). Para isso, primeiro faça o download da última versão do CLI com o seguinte comando:
```shell
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
```
Em seguida extraia o arquivo baixado para o diretório `/usr/local/bin` com os comandos:
```shell
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz
```
Após executar os passos acima, você poderá instalar o Cilium utilizando o comando abaixo:
```shell
cilium install
```
O Cilium irá detectar as configurações do cluster automaticamente, criará e instalará os componentes apropriados para que a instalação seja bem sucedida.
Os componentes são:
- Certificate Authority (CA) no Secret `cilium-ca` e os certificados para o Hubble (camada de observabilidade do Cilium).
- Service accounts.
- Cluster roles.
- ConfigMap.
- Um agente DaemonSet e um Operator Deployment.
Após a instalação, você poderá visualizar o status geral do Deployment do Cilium com o comando `cilium status`.
Confira a saída esperada da opção `status` [aqui](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#validate-the-installation).
O restante do guia de primeiros passos utiliza como base uma aplicação de exemplo para explicar como aplicar políticas de segurança tanto para L3/L4 (como endereço de IP + porta), quanto para L7 (como HTTP).
## Fazendo o deploy do Cilium para uso em produção
Para instruções detalhadas de como fazer o deploy do Cilium em produção, acesse: [Guia de Instalação do Cilium no Kubernetes (em inglês)](https://docs.cilium.io/en/stable/network/kubernetes/concepts/).
Essa documentação inclui detalhes sobre os requisitos, instruções e exemplos de DaemonSet para produção.
<!-- discussion -->
## Entendendo os componentes do Cilium
Ao realizar o deploy do Cilium no cluster, Pods são adicionados ao namespace `kube-system`. Para ver essa lista de Pods execute:
```shell
kubectl get pods --namespace=kube-system -l k8s-app=cilium
```
Você verá uma lista de Pods similar a essa:
```console
NAME READY STATUS RESTARTS AGE
cilium-kkdhz 1/1 Running 0 3m23s
...
```
Um Pod `cilium` roda em cada um dos nós do seu cluster e garante as políticas de rede no tráfego de/para Pods naquele nó usando o Linux BPF.
## {{% heading "whatsnext" %}}
Uma vez que seu cluster estiver rodando, você pode seguir o artigo [Declarar uma Network Policy (em inglês)](/docs/tasks/administer-cluster/declare-network-policy/) para testar as políticas de NetworkPolicy do Kubernetes com o Cilium.
Divirta-se! Se tiver dúvidas, nos contate usando o [Canal Slack do Cilium](https://cilium.herokuapp.com/).

View File

@ -131,7 +131,7 @@ description: |-
<p>Чтобы увидеть версию текущего образа в приложении, воспользуйтесь подкомандой <code>describe pods</code> и посмотрите на поле <code>Image</code>:</p>
<p><code><b>kubectl describe pods</b></code></p>
<p>Чтобы обновить версию образа приложения до v2, воспользуемся подкомандой <code>set image</code>, для которой укажем имя деплоймента и новую версию нужного образа:</p>
<p><code><b>kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2</b></code></p>
<p><code><b>kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=docker.io/jocatalin/kubernetes-bootcamp:v2</b></code></p>
<p>Эта команда перевела деплоймент на использование другого образа для приложения и инициировала плавающее обновление. Статус новых и старых подов (т. е. тех, которые будут остановлены) можно проверить с помощью подкоманды <code>get pods</code>:</p>
<p><code><b>kubectl get pods</b></code></p>
</div>

content/uk/blog/_index.md Normal file
View File

@ -0,0 +1,15 @@
---
title: Блог Kubernetes
linkTitle: Блог
menu:
main:
title: "Блог"
weight: 40
post: >
<p>Читайте останні новини щодо Kubernetes та контейнеризації загалом, а також отримуйте технічні інструкції, які щойно виходять з друку.</p>
---
{{< comment >}}
Щодо інформації про участь у створенні матеріалів для блогу, див. [https://kubernetes.io/docs/contribute/new-content/blogs-case-studies/#write-a-blog-post](https://kubernetes.io/docs/contribute/new-content/blogs-case-studies/#write-a-blog-post)
{{< /comment >}}

View File

@ -0,0 +1,54 @@
---
layout: blog
title: 'Огляд Kubernetes v1.30'
date: 2024-03-12
slug: kubernetes-1-30-upcoming-changes
---
**Автори:** Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko
## Швидкий огляд: зміни у Kubernetes v1.30 {#a-quick-look-exciting-changes-in-kubernetes-v1.30}
Новий рік, новий реліз Kubernetes. Ми на половині релізного циклу і маємо чимало цікавих та чудових поліпшень у версії v1.30. Від абсолютно нових можливостей у режимі альфа до вже сталих функцій, які переходять у стабільний режим, а також довгоочікуваних поліпшень — цей випуск має щось для усіх, на що варто звернути увагу!
Щоб підготувати вас до офіційного випуску, ось короткий огляд удосконалень, про які ми найбільше хочемо розповісти!
## Основні зміни для Kubernetes v1.30 {#major-changes-for-kubernetes-v1.30}
### Структуровані параметри для динамічного розподілу ресурсів ([KEP-4381](https://kep.k8s.io/4381)) {#structured-parameters-for-dynamic-resource-allocation-kep-4381-https-kep-k8s-io-4381}
[Динамічний розподіл ресурсів](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/) було додано до Kubernetes у версії v1.26 у режимі альфа. Він визначає альтернативу традиційному API пристроїв для запиту доступу до ресурсів сторонніх постачальників. За концепцією, динамічний розподіл ресурсів використовує параметри для ресурсів, що є абсолютно непрозорими для ядра Kubernetes. Цей підхід створює проблему для Cluster Autoscaler (CA) чи будь-якого контролера вищого рівня, який повинен приймати рішення для групи Podʼів (наприклад, планувальник завдань). Він не може симулювати ефект виділення чи звільнення заявок з плином часу. Інформацію для цього можуть надавати лише драйвери DRA сторонніх постачальників.
Структуровані параметри для динамічного розподілу ресурсів — це розширення оригінальної реалізації, яке розвʼязує цю проблему, створюючи фреймворк для підтримки параметрів заявок, що є більш прозорими. Замість обробки семантики всіх параметрів заявок самостійно, драйвери можуть керувати ресурсами та описувати їх, використовуючи конкретну "структуровану модель", заздалегідь визначену Kubernetes. Це дозволить компонентам, які обізнані з цією "структурованою моделлю", приймати рішення щодо цих ресурсів без залучення зовнішнього контролера. Наприклад, планувальник може швидко обробляти заявки без зайвої комунікації з драйверами динамічного розподілу ресурсів. Робота, виконана для цього релізу, зосереджена на визначенні необхідного фреймворку для активації різних "структурованих моделей" та реалізації моделі "пойменованих ресурсів". Ця модель дозволяє перераховувати окремі екземпляри ресурсів та, на відміну від традиційного API пристроїв, додає можливість вибору цих екземплярів індивідуально за атрибутами.
### Підтримка своп-памʼяті на вузлах ([KEP-2400](https://kep.k8s.io/2400)) {#node-memory-swap-support-kep-2400-https-kep-k8s-io-2400}
У Kubernetes v1.30 підтримка своп-памʼяті на вузлах Linux отримує значущі зміни в способі її функціонування, з основним акцентом на покращенні стабільності системи. В попередніх версіях Kubernetes функція `NodeSwap` була типово вимкненою, а при увімкненні використовувала поведінку `UnlimitedSwap`. З метою досягнення кращої стабільності, поведінка `UnlimitedSwap` (яка може компрометувати стабільність вузла) буде видалена у версії v1.30.
Оновлена, все ще бета-версія підтримки своп на вузлах Linux буде стандартно доступною. Однак типовою поведінкою буде запуск вузла в режимі `NoSwap` (а не `UnlimitedSwap`). У режимі `NoSwap` kubelet підтримує роботу на вузлі, де активний простір своп, але Podʼи не використовують жодного page-файлу. Для того, щоб kubelet працював на цьому вузлі, вам все ще потрібно встановити `--fail-swap-on=false`. Однак велика зміна стосується іншого режиму: `LimitedSwap`. У цьому режимі kubelet фактично використовує page-файл на вузлі та дозволяє Podʼам виділяти деяку частину їхньої віртуальної памʼяті. Контейнери (і їхні батьківські Podʼи) не мають доступу до своп поза їхнім обмеженням памʼяті, але система все ще може використовувати простір своп, якщо він доступний.
Група Kubernetes Node (SIG Node) також оновить документацію, щоб допомогти вам зрозуміти, як використовувати оновлену реалізацію, на основі відгуків від кінцевих користувачів, учасників та широкої спільноти Kubernetes.
Для отримання додаткових відомостей про підтримку своп на вузлах Linux в Kubernetes, прочитайте попередній [пост блогу](/blog/2023/08/24/swap-linux-beta/) чи [документацію про своп на вузлах](/docs/concepts/architecture/nodes/#swap-memory).
### Підтримка просторів імен користувачів в Pod ([KEP-127](https://kep.k8s.io/127)) {#support-user-namespaces-in-pods-kep-127-https-kep-k8s-io-127}
[Простори імен користувачів](/docs/concepts/workloads/pods/user-namespaces) — це функція лише для Linux, яка краще ізолює Podʼи для запобігання або помʼякшення кількох важливих CVEs із високим/критичним рейтингом, включаючи [CVE-2024-21626](https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv), опубліковану у січні 2024 року. У Kubernetes 1.30 підтримка просторів імен користувачів переходить у бета-версію і тепер підтримує Podʼи з томами та без них, власні діапазони UID/GID та багато іншого!
### Конфігурація структурованої авторизації ([KEP-3221](https://kep.k8s.io/3221)) {#structured-authorization-configuration-kep-3221-https-kep-k8s-io-3221}
Підтримка [конфігурації структурованої авторизації](/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file) переходить у бета-версію та буде типово увімкненою. Ця функція дозволяє створювати ланцюги авторизації з кількома вебхуками із чітко визначеними параметрами, які перевіряють запити в певному порядку та надають деталізований контроль — такий, як явна відмова у випадку невдач. Використання конфігураційного файлу навіть дозволяє вказати правила [CEL](/docs/reference/using-api/cel/) для попередньої фільтрації запитів, перш ніж вони будуть відправлені до вебхуків, допомагаючи вам запобігти непотрібним викликам. Сервер API також автоматично перезавантажує ланцюг авторизатора при зміні конфігураційного файлу.
Вам необхідно вказати шлях до конфігурації авторизації, використовуючи аргумент командного рядка `--authorization-config`. Якщо ви хочете продовжувати використовувати аргументи командного рядка замість конфігураційного файлу, вони продовжать працювати як є. Щоб отримати доступ до нових можливостей вебхуків авторизації, таких як кілька вебхуків, політика невдачі та правила попередньої фільтрації, перейдіть до використання параметрів у файлі `--authorization-config`. З версії Kubernetes 1.30 формат конфігураційного файлу є бета-рівнем, і потрібно вказувати лише `--authorization-config`, оскільки feature gate вже увімкнено. Приклад конфігурації із всіма можливими значеннями наведено в [документації з авторизації](/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file). Докладніше читайте в [документації з авторизації](/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file).
### Автомасштабування Podʼів на основі ресурсів контейнера ([KEP-1610](https://kep.k8s.io/1610)) {#container-resource-based-pod-autoscaling-kep-1610-https-kep-k8s-io-1610}
Горизонтальне автомасштабування Podʼів на основі метрик `ContainerResource` перейде у стабільний стан у версії v1.30. Ця нова функціональність HorizontalPodAutoscaler дозволяє налаштовувати автоматичне масштабування на основі використання ресурсів для окремих контейнерів, а не загального використання ресурсів для всіх контейнерів у Podʼі. Докладні відомості можна знайти у нашій [попередній статті](2023/05/02/hpa-container-resource-metric/) або в [метриках ресурсів контейнера](/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics).
### CEL для керування допуском ([KEP-3488](https://kep.k8s.io/3488)) {#cel-for-admission-control-kep-3488-https-kep-k8s-io-3488}
Інтеграція Common Expression Language (CEL) для керування допуском у Kubernetes вводить більш динамічний та виразний спосіб оцінки запитів на допуск. Ця функція дозволяє визначати та застосовувати складні, деталізовані політики безпосередньо через API Kubernetes, підвищуючи безпеку та здатність до управління без втрати продуктивності чи гнучкості.
Додавання CEL до керування допуском у Kubernetes дає адміністраторам кластерів можливість створювати складні правила, які можуть оцінювати вміст API-запитів на основі бажаного стану та політик кластера, не вдаючись до вебхуків, які використовують контролери доступу. Цей рівень контролю є важливим для забезпечення цілісності, безпеки та ефективності операцій у кластері, роблячи середовища Kubernetes більш надійними та адаптованими до різних сценаріїв використання та вимог. Для отримання докладної інформації щодо використання CEL для керування допуском дивіться [документацію API](/docs/reference/access-authn-authz/validating-admission-policy/) для ValidatingAdmissionPolicy.
Ми сподіваємося, що ви так само нетерпляче чекаєте на цей випуск, як і ми. Стежте за блогом, щоб дізнатись про офіційний випуск через кілька тижнів, де буде представлено ще більше відомостей!

View File

@ -0,0 +1,246 @@
---
layout: blog
title: 'Kubernetes v1.30 初探'
date: 2024-03-12
slug: kubernetes-1-30-upcoming-changes
---
<!--
layout: blog
title: 'A Peek at Kubernetes v1.30'
date: 2024-03-12
slug: kubernetes-1-30-upcoming-changes
-->
<!--
**Authors:** Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko
-->
**作者:** Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko
**译者:** Paco Xu (DaoCloud)
<!--
## A quick look: exciting changes in Kubernetes v1.30
It's a new year and a new Kubernetes release. We're halfway through the release cycle and
have quite a few interesting and exciting enhancements coming in v1.30. From brand new features
in alpha, to established features graduating to stable, to long-awaited improvements, this release
has something for everyone to pay attention to!
To tide you over until the official release, here's a sneak peek of the enhancements we're most
excited about in this cycle!
-->
## 快速预览:Kubernetes v1.30 中令人兴奋的变化
新年新版本!v1.30 发布周期已过半,我们将迎来一系列有趣且令人兴奋的增强功能。
从全新的 alpha 特性,到已有的特性升级为稳定版,再到期待已久的改进,这个版本对每个人都有值得关注的内容!
为了让你在正式发布之前对其有所了解,下面给出我们在这个周期中最为期待的增强功能的预览!
<!--
## Major changes for Kubernetes v1.30
-->
## Kubernetes v1.30 的主要变化
<!--
### Structured parameters for dynamic resource allocation ([KEP-4381](https://kep.k8s.io/4381))
-->
### 动态资源分配(DRA)的结构化参数([KEP-4381](https://kep.k8s.io/4381))
<!--
[Dynamic resource allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/) was
added to Kubernetes as an alpha feature in v1.26. It defines an alternative to the traditional
device-plugin API for requesting access to third-party resources. By design, dynamic resource
allocation uses parameters for resources that are completely opaque to core Kubernetes. This
approach poses a problem for the Cluster Autoscaler (CA) or any higher-level controller that
needs to make decisions for a group of pods (e.g. a job scheduler). It cannot simulate the effect of
allocating or deallocating claims over time. Only the third-party DRA drivers have the information
available to do this.
-->
[动态资源分配(DRA)](/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/) 在 Kubernetes v1.26 中作为 alpha 特性添加。
它定义了一种替代传统设备插件(device plugin)API 的方式,用于请求访问第三方资源。
在设计上,动态资源分配(DRA)使用的资源参数对于核心 Kubernetes 完全不透明。
这种方法对于集群自动缩放器(CA)或任何需要为一组 Pod 做决策的高级控制器(例如作业调度器)都会带来问题。
这一设计无法模拟在不同时间分配或释放请求的效果。
只有第三方 DRA 驱动程序才拥有信息来做到这一点。
<!--
Structured Parameters for dynamic resource allocation is an extension to the original
implementation that addresses this problem by building a framework to support making these claim
parameters less opaque. Instead of handling the semantics of all claim parameters themselves,
drivers could manage resources and describe them using a specific "structured model" pre-defined by
Kubernetes. This would allow components aware of this "structured model" to make decisions about
these resources without outsourcing them to some third-party controller. For example, the scheduler
could allocate claims rapidly without back-and-forth communication with dynamic resource
allocation drivers. Work done for this release centers on defining the framework necessary to enable
different "structured models" and to implement the "named resources" model. This model allows
listing individual resource instances and, compared to the traditional device plugin API, adds the
ability to select those instances individually via attributes.
-->
动态资源分配(DRA)的结构化参数是对原始实现的扩展,它通过构建一个框架来支持增加请求参数的透明度,从而解决这个问题。
驱动程序不再需要自己处理所有请求参数的语义,而是可以使用 Kubernetes 预定义的特定“结构化模型”来管理和描述资源。
这一设计允许了解这个“结构化规范”的组件做出关于这些资源的决策,而不再将它们外包给某些第三方控制器。
例如,调度器可以在不与动态资源分配(DRA)驱动程序反复通信的前提下快速完成分配请求。
这个版本的工作重点是定义一个框架来支持不同的“结构化模型”,并实现“命名资源”模型。
此模型允许列出各个资源实例,同时,与传统的设备插件 API 相比,模型增加了通过属性逐一选择实例的能力。
<!--
### Node memory swap support ([KEP-2400](https://kep.k8s.io/2400))
-->
### 节点交换内存 SWAP 支持 ([KEP-2400](https://kep.k8s.io/2400))
<!--
In Kubernetes v1.30, memory swap support on Linux nodes gets a big change to how it works - with a
strong emphasis on improving system stability. In previous Kubernetes versions, the `NodeSwap`
feature gate was disabled by default, and when enabled, it used `UnlimitedSwap` behavior as the
default behavior. To achieve better stability, `UnlimitedSwap` behavior (which might compromise node
stability) will be removed in v1.30.
-->
在 Kubernetes v1.30 中,Linux 节点上的交换内存支持机制有了重大改进,其重点是提高系统的稳定性。
以前的 Kubernetes 版本默认情况下禁用了 `NodeSwap` 特性门控。当门控被启用时,`UnlimitedSwap` 行为被作为默认行为。
为了提高稳定性,`UnlimitedSwap` 行为(可能会影响节点的稳定性)将在 v1.30 中被移除。
<!--
The updated, still-beta support for swap on Linux nodes will be available by default. However, the
default behavior will be to run the node set to `NoSwap` (not `UnlimitedSwap`) mode. In `NoSwap`
mode, the kubelet supports running on a node where swap space is active, but Pods don't use any of
the page file. You'll still need to set `--fail-swap-on=false` for the kubelet to run on that node.
However, the big change is the other mode: `LimitedSwap`. In this mode, the kubelet actually uses
the page file on that node and allows Pods to have some of their virtual memory paged out.
Containers (and their parent pods) do not have access to swap beyond their memory limit, but the
system can still use the swap space if available.
-->
更新后的 Linux 节点上的交换内存支持仍然是 beta 级别,并且默认情况下开启。
然而,节点默认行为是使用 `NoSwap`(而不是 `UnlimitedSwap`)模式。
`NoSwap` 模式下kubelet 支持在启用了磁盘交换空间的节点上运行,但 Pod 不会使用页面文件pagefile
你仍然需要为 kubelet 设置 `--fail-swap-on=false` 才能让 kubelet 在该节点上运行。
特性的另一个重大变化是针对另一种模式:`LimitedSwap`。
`LimitedSwap` 模式下kubelet 会实际使用节点上的页面文件,并允许 Pod 的一些虚拟内存被换页出去。
容器(及其父 Pod访问交换内存空间不可超出其内存限制但系统的确可以使用可用的交换空间。
<!--
Kubernetes' Node special interest group (SIG Node) will also update the documentation to help you
understand how to use the revised implementation, based on feedback from end users, contributors,
and the wider Kubernetes community.
-->
Kubernetes 的 SIG Node 小组还将根据最终用户、贡献者和更广泛的 Kubernetes 社区的反馈更新文档,
以帮助你了解如何使用经过修订的实现。
<!--
Read the previous [blog post](/blog/2023/08/24/swap-linux-beta/) or the [node swap
documentation](/docs/concepts/architecture/nodes/#swap-memory) for more details on
Linux node swap support in Kubernetes.
-->
阅读之前的[博客文章](/zh-cn/blog/2023/08/24/swap-linux-beta/)或[交换内存管理文档](/zh-cn/docs/concepts/architecture/nodes/#swap-memory)以获取有关
Kubernetes 中 Linux 节点交换支持的更多详细信息。
<!--
### Support user namespaces in pods ([KEP-127](https://kep.k8s.io/127))
-->
### 支持 Pod 运行在用户命名空间 ([KEP-127](https://kep.k8s.io/127))
<!--
[User namespaces](/docs/concepts/workloads/pods/user-namespaces) is a Linux-only feature that better
isolates pods to prevent or mitigate several CVEs rated high/critical, including
[CVE-2024-21626](https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv),
published in January 2024. In Kubernetes 1.30, support for user namespaces is migrating to beta and
now supports pods with and without volumes, custom UID/GID ranges, and more!
-->
[用户命名空间](/zh-cn/docs/concepts/workloads/pods/user-namespaces) 是一个仅在 Linux 上可用的特性,它更好地隔离 Pod
以防止或减轻几个高/严重级别的 CVE,包括 2024 年 1 月发布的 [CVE-2024-21626](https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv)。
在 Kubernetes 1.30 中,对用户命名空间的支持正在迁移到 beta,并且现在支持带有和不带有卷的 Pod、自定义 UID/GID 范围等等!
<!--
### Structured authorization configuration ([KEP-3221](https://kep.k8s.io/3221))
-->
### 结构化鉴权配置([KEP-3221](https://kep.k8s.io/3221))
<!--
Support for [structured authorization
configuration](/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file)
is moving to beta and will be enabled by default. This feature enables the creation of
authorization chains with multiple webhooks with well-defined parameters that validate requests in a
particular order and allows fine-grained control such as explicit Deny on failures. The
configuration file approach even allows you to specify [CEL](/docs/reference/using-api/cel/) rules
to pre-filter requests before they are dispatched to webhooks, helping you to prevent unnecessary
invocations. The API server also automatically reloads the authorizer chain when the configuration
file is modified.
-->
对[结构化鉴权配置](/zh-cn/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file)的支持正在晋级到 Beta 版本,并将默认启用。
这个特性支持创建具有明确参数定义的多个 Webhook 所构成的鉴权链;这些 Webhook 按特定顺序验证请求,
并允许进行细粒度的控制,例如在失败时明确拒绝。
配置文件方法甚至允许你指定 [CEL](/zh-cn/docs/reference/using-api/cel/) 规则,以在将请求分派到 Webhook 之前对其进行预过滤,帮助你防止不必要的调用。
当配置文件被修改时,API 服务器还会自动重新加载鉴权链。
<!--
You must specify the path to that authorization configuration using the `--authorization-config`
command line argument. If you want to keep using command line flags instead of a
configuration file, those will continue to work as-is. To gain access to new authorization webhook
capabilities like multiple webhooks, failure policy, and pre-filter rules, switch to putting options
in an `--authorization-config` file. From Kubernetes 1.30, the configuration file format is
beta-level, and only requires specifying `--authorization-config` since the feature gate is enabled by
default. An example configuration with all possible values is provided in the [Authorization
docs](/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file).
For more details, read the [Authorization
docs](/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file).
-->
你必须使用 `--authorization-config` 命令行参数指定鉴权配置的路径。
如果你想继续使用命令行标志而不是配置文件,命令行方式没有变化。
要访问新的 Webhook 功能,例如多 Webhook 支持、失败策略和预过滤规则,需要切换到将选项放在 `--authorization-config` 文件中。
从 Kubernetes 1.30 开始,配置文件格式约定是 beta 级别的,只需要指定 `--authorization-config`,因为特性门控默认启用。
[鉴权文档](/zh-cn/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file)
中提供了一个包含所有可能值的示例配置。
有关更多详细信息,请阅读[鉴权文档](/zh-cn/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file)。
<!--
### Container resource based pod autoscaling ([KEP-1610](https://kep.k8s.io/1610))
-->
### 基于容器资源指标的 Pod 自动扩缩容 ([KEP-1610](https://kep.k8s.io/1610))
<!--
Horizontal pod autoscaling based on `ContainerResource` metrics will graduate to stable in v1.30.
This new behavior for HorizontalPodAutoscaler allows you to configure automatic scaling based on the
resource usage for individual containers, rather than the aggregate resource use over a Pod. See our
[previous article](/blog/2023/05/02/hpa-container-resource-metric/) for further details, or read
[container resource metrics](/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics).
-->
基于 `ContainerResource` 指标的 Pod 水平自动扩缩容将在 v1.30 中升级为稳定版。
HorizontalPodAutoscaler 的这一新行为允许你根据各个容器的资源使用情况而不是 Pod 的聚合资源使用情况来配置自动伸缩。
有关更多详细信息,请参阅我们的[先前文章](/zh-cn/blog/2023/05/02/hpa-container-resource-metric/)
或阅读[容器资源指标](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics)。
<!--
### CEL for admission control ([KEP-3488](https://kep.k8s.io/3488))
-->
### 在准入控制中使用 CEL ([KEP-3488](https://kep.k8s.io/3488))
<!--
Integrating Common Expression Language (CEL) for admission control in Kubernetes introduces a more
dynamic and expressive way of evaluating admission requests. This feature allows complex,
fine-grained policies to be defined and enforced directly through the Kubernetes API, enhancing
security and governance capabilities without compromising performance or flexibility.
-->
Kubernetes 为准入控制集成了 Common Expression Language (CEL) 。
这一集成引入了一种更动态、表达能力更强的方式来判定准入请求。
这个特性允许通过 Kubernetes API 直接定义和执行复杂的、细粒度的策略,同时增强了安全性和治理能力,而不会影响性能或灵活性。
<!--
CEL's addition to Kubernetes admission control empowers cluster administrators to craft intricate
rules that can evaluate the content of API requests against the desired state and policies of the
cluster without resorting to Webhook-based access controllers. This level of control is crucial for
maintaining the integrity, security, and efficiency of cluster operations, making Kubernetes
environments more robust and adaptable to various use cases and requirements. For more information
on using CEL for admission control, see the [API
documentation](/docs/reference/access-authn-authz/validating-admission-policy/) for
ValidatingAdmissionPolicy.
-->
将 CEL 引入到 Kubernetes 的准入控制后,集群管理员就具有了制定复杂规则的能力,
这些规则可以根据集群的期望状态和策略来评估 API 请求的内容,而无需使用基于 Webhook 的访问控制器。
这种控制水平对于维护集群操作的完整性、安全性和效率至关重要,使 Kubernetes 环境更加健壮,更适应各种用例和需求。
有关使用 CEL 进行准入控制的更多信息,请参阅 [API 文档](/zh-cn/docs/reference/access-authn-authz/validating-admission-policy/)中的 ValidatingAdmissionPolicy。
<!--
We hope you're as excited for this release as we are. Keep an eye out for the official release
blog in a few weeks for more highlights!
-->
我们希望你和我们一样对这个版本的发布感到兴奋。请在未来几周内密切关注官方发布博客,以了解其他亮点!

View File

@ -166,21 +166,17 @@ kubelet 通过 Kubernetes API 的特殊功能将日志提供给客户端访问
{{< feature-state for_k8s_version="v1.21" state="stable" >}}
<!--
You can configure the kubelet to rotate logs automatically.
If you configure rotation, the kubelet is responsible for rotating container logs and managing the
The kubelet is responsible for rotating container logs and managing the
logging directory structure.
The kubelet sends this information to the container runtime (using CRI),
and the runtime writes the container logs to the given location.
-->
你可以配置 kubelet 令其自动轮转日志。
如果配置轮转,kubelet 负责轮转容器日志并管理日志目录结构。
kubelet 负责轮换容器日志并管理日志目录结构。
kubelet(使用 CRI)将此信息发送到容器运行时,而运行时则将容器日志写到给定位置。
<!--
You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/),
`containerLogMaxSize` and `containerLogMaxFiles`,
`containerLogMaxSize` (default 10Mi) and `containerLogMaxFiles` (default 5),
using the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
These settings let you configure the maximum size for each log file and the maximum number of
files allowed for each container respectively.
@ -191,7 +187,7 @@ reads directly from the log file. The kubelet returns the content of the log fil
-->
你可以使用 [kubelet 配置文件](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/)配置两个
kubelet [配置选项](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)、
`containerLogMaxSize` 和 `containerLogMaxFiles`。
`containerLogMaxSize`(默认 10Mi)和 `containerLogMaxFiles`(默认 5)。
这些设置分别允许你配置每个日志文件大小的最大值和每个容器允许的最大文件数。
当类似于基本日志示例一样运行 [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) 时,
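As an illustrative kubelet configuration fragment using the default values mentioned above (a sketch, not part of this page's manifests):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi   # maximum size of each container log file before rotation
containerLogMaxFiles: 5     # maximum number of rotated files kept per container
```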

View File

@ -369,7 +369,7 @@ allow-list:
<!--
Additionally, the `cardinality_enforcement_unexpected_categorizations_total` meta-metric records the
count of unexpected categorizations during cardinality enforcement, that is, whenever a label value
is encountered that is not allowed with respect to the allow-list contraints.
is encountered that is not allowed with respect to the allow-list constraints.
-->
此外,`cardinality_enforcement_unexpected_categorizations_total`
元指标记录基数执行期间意外分类的计数,即每当遇到允许列表约束不允许的标签值时。

View File

@ -156,7 +156,7 @@ For more information about the `TracingConfiguration` struct, see
-->
### kubelet 追踪 {#kubelet-traces}
{{< feature-state for_k8s_version="v1.27" state="beta" >}}
{{< feature-state feature_gate_name="KubeletTracing" >}}
<!--
The kubelet CRI interface and authenticated http servers are instrumented to generate

View File

@ -53,7 +53,7 @@ Kubernetes project in 2014. Kubernetes combines
[over 15 years of Google's experience](/blog/2015/04/borg-predecessor-to-kubernetes/) running
production workloads at scale with best-of-breed ideas and practices from the community.
-->
**Kubernetes** 这个名字源于希腊语,意为“舵手”或“飞行员”。k8s 这个缩写是因为 k 和 s 之间有八个字符的关系。
**Kubernetes** 这个名字源于希腊语,意为“舵手”或“飞行员”。K8s 这个缩写是因为 K 和 s 之间有 8 个字符的关系。
Google 在 2014 年开源了 Kubernetes 项目。
Kubernetes 建立在 [Google 大规模运行生产工作负载十几年经验](https://research.google/pubs/pub43438)的基础上,
结合了社区中最优秀的想法和实践。

View File

@ -63,14 +63,14 @@ ResourceClass
driver.
ResourceClaim
: Defines a particular resource instances that is required by a
: Defines a particular resource instance that is required by a
workload. Created by a user (lifecycle managed manually, can be shared
between different Pods) or for individual Pods by the control plane based on
a ResourceClaimTemplate (automatic lifecycle, typically used by just one
Pod).
ResourceClaimTemplate
: Defines the spec and some meta data for creating
: Defines the spec and some metadata for creating
ResourceClaims. Created by a user when deploying a workload.
PodSchedulingContext

View File

@ -84,8 +84,8 @@ Within the `scoringStrategy` field, you can configure two parameters: `requested
`resources`. The `shape` in the `requestedToCapacityRatio`
parameter allows the user to tune the function as least requested or most
requested based on `utilization` and `score` values. The `resources` parameter
consists of `name` of the resource to be considered during scoring and `weight`
specify the weight of each resource.
comprises both the `name` of the resource to be considered during scoring and
its corresponding `weight`, which specifies the weight of each resource.
-->
## 使用 RequestedToCapacityRatio 策略来启用资源装箱 {#enabling-bin-packing-using-requestedtocapacityratio}
@ -97,7 +97,8 @@ specify the weight of each resource.
字段来控制。在 `scoringStrategy` 字段中,你可以配置两个参数:
`requestedToCapacityRatio``resources`。`requestedToCapacityRatio` 参数中的 `shape`
设置使得用户能够调整函数的算法,基于 `utilization``score` 值计算最少请求或最多请求。
`resources` 参数中包含计分过程中需要考虑的资源的 `name`,以及用来设置每种资源权重的 `weight`
`resources` 参数中包含计分过程中需要考虑的资源的 `name`,以及对应的 `weight`
后者指定了每个资源的权重。
<!--
Below is an example configuration that sets

View File

@ -150,7 +150,7 @@ for a 5000-node cluster. The lower bound for the automatic value is 5%.
下取 50%,在 5000-节点的集群下取 10%。这个自动设置的参数的最低值是 5%。
<!--
This means that, the kube-scheduler always scores at least 5% of your cluster no
This means that the kube-scheduler always scores at least 5% of your cluster no
matter how large the cluster is, unless you have explicitly set
`percentageOfNodesToScore` to be smaller than 5.
-->

View File

@ -189,6 +189,7 @@ To submit a blog post, follow these steps:
有些时候,某个 CNCF 项目的主要功能特性或者里程碑的变化可能是用户有兴趣在
Kubernetes 博客上阅读的内容。
- 关于为 Kubernetes 项目做贡献的博客内容应该放在 [Kubernetes 贡献者站点](https://kubernetes.dev)上。
<!--
- Blog posts should be original content
- The official blog is not for repurposing existing content from a third party as new content.
@ -202,7 +203,7 @@ To submit a blog post, follow these steps:
- Consider concentrating the long technical content as a call to action of the blog post, and
focus on the problem space or why readers should care.
-->
- 博客文章应该是原创内容。
- 博客文章是原创内容。
- 官方博客的目的不是将某第三方已发表的内容重新作为新内容发表。
- 博客的[授权协议](https://github.com/kubernetes/website/blob/main/LICENSE)
的确允许出于商业目的来使用博客内容;但并不是所有可以商用的内容都适合在这里发表。
@ -321,9 +322,6 @@ SIG Docs
之后,机器人会将你的博文合并并发表。
<!--
- The blog team will then review your PR and give you comments on things you might need to fix.
After that the bot will merge your PR and your blog post will be published.
- If the content of the blog post contains only content that is not expected to require updates
to stay accurate for the reader, it can be marked as evergreen and exempted from the automatic
warning about outdated content added to blog posts older than one year.
@ -351,6 +349,31 @@ SIG Docs
- 仅适用于特定发行版或版本而不是所有未来版本的**教程**
- 对非正式发行Pre-GAAPI 或功能特性的引用
<!--
### Mirroring from the Kubernetes Contributor Blog
To mirror a blog post from the [Kubernetes contributor blog](https://www.kubernetes.dev/blog/), follow these guidelines:
-->
### 制作 Kubernetes 贡献者博客的镜像 {#mirroring-from-the-kubernetes-contributor-blog}
要从 [Kubernetes 贡献者博客](https://www.kubernetes.dev/blog/)制作某篇博文的镜像,遵循以下指导原则:
<!--
- Keep the blog content the same. If there are changes, they should be made to the original article first, and then to the mirrored article.
- The mirrored blog should have a `canonicalUrl`, that is, essentially the url of the original blog after it has been published.
- [Kubernetes contributor blogs](https://kubernetes.dev/blog) have their authors mentioned in the YAML header, while the Kubernetes blog posts mention authors in the blog content itself. This should be changed when mirroring the content.
- Publication dates stay the same as the original blog.
All of the other guidelines and expectations detailed above apply as well.
-->
- 保持博客内容不变。如有变更,应该先在原稿上进行更改,然后再更改到镜像的文章上。
- 镜像博客应该有一个 `canonicalUrl`,即基本上是原始博客发布后的网址。
- [Kubernetes 贡献者博客](https://kubernetes.dev/blog)在 YAML 头部中提及作者,
而 Kubernetes 博文在博客内容中提及作者。你在镜像内容时应修改这一点。
- 发布日期与原博客保持一致。
在制作镜像博客时,你也需遵守本文所述的所有其他指导原则和期望。
<!--
## Submit a case study
@ -374,4 +397,3 @@ Kubernetes 市场化团队和 {{< glossary_tooltip text="CNCF" term_id="cncf" >}
参考[案例分析指南](https://github.com/cncf/foundation/blob/master/case-study-guidelines.md)
根据指南中的注意事项提交你的 PR 请求。

File diff suppressed because it is too large

View File

@ -515,9 +515,9 @@ sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${
sudo chmod +x {kubeadm,kubelet}
RELEASE_VERSION="v0.16.2"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubelet/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubelet/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /usr/lib/systemd/system/kubelet.service
sudo mkdir -p /usr/lib/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
```
{{< note >}}

View File

@ -297,12 +297,28 @@ This configuration file installed by the `kubeadm`
[package](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf) is written to
`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` and is used by systemd.
It augments the basic
[`kubelet.service`](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubelet/kubelet.service):
[`kubelet.service`](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubelet/kubelet.service).
-->
通过 `kubeadm` [](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf)
安装的配置文件被写入 `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`
安装的配置文件被写入 `/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf`
并由 systemd 使用。它对原来的
[`kubelet.service`](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubelet/kubelet.service) 作了增强:
[`kubelet.service`](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubelet/kubelet.service) 作了增强。
<!--
If you want to override that further, you can make a directory `/etc/systemd/system/kubelet.service.d/`
(not `/usr/lib/systemd/system/kubelet.service.d/`) and put your own customizations into a file there.
For example, you might add a new local file `/etc/systemd/system/kubelet.service.d/local-overrides.conf`
to override the unit settings configured by `kubeadm`.
Here is what you are likely to find in `/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf`:
-->
如果你想进一步覆盖它,可以创建一个目录 `/etc/systemd/system/kubelet.service.d/`
(而不是 `/usr/lib/systemd/system/kubelet.service.d/`),并将你自己的自定义配置写入到该目录下的一个文件中。
例如,你可以添加一个新的本地文件 `/etc/systemd/system/kubelet.service.d/local-overrides.conf`
以覆盖 `kubeadm` 配置的单元设置。
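As a hedged example of such a local override (the drop-in path is the one described above; the flag value is purely illustrative, and this assumes the standard kubeadm drop-in passes `$KUBELET_EXTRA_ARGS` to the kubelet):

```
# /etc/systemd/system/kubelet.service.d/local-overrides.conf  (hypothetical override)
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.4"
```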
以下是你可能在 `/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf` 中找到的内容:
{{< note >}}
<!--

View File

@ -52,7 +52,7 @@ with your new arguments.
<!--
The `command` field corresponds to `entrypoint` in some container runtimes.
-->
在有些容器运行时中,`command` 字段对应 `entrypoint`,请参阅下面的[说明事项](#notes)
在有些容器运行时中,`command` 字段对应 `entrypoint`
{{< /note >}}
<!--

View File

@ -16,7 +16,7 @@ weight: 30
{{< feature-state for_k8s_version="v1.4" state="beta" >}}
<!--
AppArmor is a Linux kernel security module that supplements the standard Linux user and group based
[AppArmor](https://apparmor.net/) is a Linux kernel security module that supplements the standard Linux user and group based
permissions to confine programs to a limited set of resources. AppArmor can be configured for any
application to reduce its potential attack surface and provide greater in-depth defense. It is
configured through profiles tuned to allow the access needed by a specific program or container,
@ -24,7 +24,7 @@ such as Linux capabilities, network access, file permissions, etc. Each profile
*enforcing* mode, which blocks access to disallowed resources, or *complain* mode, which only reports
violations.
-->
AppArmor is a Linux kernel security module
[AppArmor](https://apparmor.net/) is a Linux kernel security module
that supplements the standard Linux user and group based permissions to confine programs to a limited set of resources.
AppArmor can be configured for any application to reduce its potential attack surface and provide greater defense in depth.
It is configured through profiles tuned to allow the access needed by a specific program or container,
@ -33,13 +33,13 @@ AppArmor 可以配置为任何应用程序减少潜在的攻击面,并且提
mode (which blocks access to disallowed resources) or in **complain** mode (which only reports violations).
<!--
AppArmor can help you to run a more secure deployment by restricting what containers are allowed to
On Kubernetes, AppArmor can help you to run a more secure deployment by restricting what containers are allowed to
do, and/or provide better auditing through system logs. However, it is important to keep in mind
that AppArmor is not a silver bullet and can only do so much to protect against exploits in your
application code. It is important to provide good, restrictive profiles, and harden your
applications and cluster from other angles as well.
-->
AppArmor can help you to run a more secure deployment by restricting what containers are allowed to do,
On Kubernetes, AppArmor can help you to run a more secure deployment by restricting what containers are allowed to do,
and/or provide better auditing through system logs.
However, it is important to keep in mind that AppArmor is not a silver bullet
and can only do so much to protect against exploits in your application code.
@ -48,7 +48,7 @@ AppArmor 可以通过限制允许容器执行的操作,
## {{% heading "objectives" %}}
<!--
* See an example of how to load a profile on a node
* See an example of how to load a profile on a Node
* Learn how to enforce the profile on a Pod
* Learn how to check that the profile is loaded
* See what happens when a profile is violated
@ -63,38 +63,18 @@ AppArmor 可以通过限制允许容器执行的操作,
## {{% heading "prerequisites" %}}
<!--
Make sure:
AppArmor is an optional kernel module and Kubernetes feature, so verify it is supported on your
Nodes before proceeding:
-->
Make sure:
AppArmor is an optional kernel module and Kubernetes feature, so verify it is supported on your Nodes before proceeding:
<!--
1. Kubernetes version is at least v1.4 -- Kubernetes support for AppArmor was added in
v1.4. Kubernetes components older than v1.4 are not aware of the new AppArmor annotations, and
will **silently ignore** any AppArmor settings that are provided. To ensure that your Pods are
receiving the expected protections, it is important to verify the Kubelet version of your nodes:
-->
1. Kubernetes version is at least v1.4 -- Kubernetes support for AppArmor was added in v1.4.
Kubernetes components older than v1.4 are not aware of the new AppArmor annotations
and will **silently ignore** any AppArmor settings that are provided.
To ensure that your Pods receive the expected protections, it is important to verify the kubelet version of your nodes:
```shell
kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.kubeletVersion}\n{end}'
```
```
gke-test-default-pool-239f5d02-gyn2: v1.4.0
gke-test-default-pool-239f5d02-x1kf: v1.4.0
gke-test-default-pool-239f5d02-xwux: v1.4.0
```
<!--
2. AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor profile, the
1. AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor profile, the
AppArmor kernel module must be installed and enabled. Several distributions enable the module by
default, such as Ubuntu and SUSE, and many others provide optional support. To check whether the
module is enabled, check the `/sys/module/apparmor/parameters/enabled` file:
-->
2. AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor profile,
1. AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor profile,
the AppArmor kernel module must be installed and enabled. Several distributions enable the module
by default, such as Ubuntu and SUSE, and many others provide optional support. To check whether the
module is enabled, check the `/sys/module/apparmor/parameters/enabled` file:
@ -105,43 +85,30 @@ Make sure:
```
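For reference, a sketch of that check on a node (the expected output is `Y` when the module is enabled):

```shell
# On the node: the file reports "Y" when the AppArmor module is enabled.
cat /sys/module/apparmor/parameters/enabled
```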
<!--
If the Kubelet contains AppArmor support (>= v1.4), it will refuse to run a Pod with AppArmor
options if the kernel module is not enabled.
The Kubelet verifies that AppArmor is enabled on the host before admitting a pod with AppArmor
explicitly configured.
-->
If the kubelet includes AppArmor support (>= v1.4) but the kernel module is not enabled,
it will refuse to run a Pod with AppArmor options.
{{< note >}}
<!--
Ubuntu carries many AppArmor patches that have not been merged into the upstream Linux
kernel, including patches that add additional hooks and features. Kubernetes has only been
tested with the upstream version, and does not promise support for other features.
-->
Ubuntu carries many AppArmor patches that have not been merged into the upstream Linux kernel,
including patches that add additional hooks and features. Kubernetes has only been tested with the upstream version, and does not promise support for other features.
{{< /note >}}
The kubelet verifies that AppArmor is enabled on the host before admitting a Pod that has AppArmor explicitly configured.
<!--
3. Container runtime supports AppArmor -- Currently all common Kubernetes-supported container
runtimes should support AppArmor, like {{< glossary_tooltip term_id="docker">}},
{{< glossary_tooltip term_id="cri-o" >}} or {{< glossary_tooltip term_id="containerd" >}}.
Please refer to the corresponding runtime documentation and verify that the cluster fulfills
the requirements to use AppArmor.
3. Container runtime supports AppArmor -- All common Kubernetes-supported container
runtimes should support AppArmor, including {{< glossary_tooltip term_id="cri-o" >}} and
{{< glossary_tooltip term_id="containerd" >}}. Please refer to the corresponding runtime
documentation and verify that the cluster fulfills the requirements to use AppArmor.
-->
3. Container runtime supports AppArmor -- Currently all common Kubernetes-supported container runtimes should support AppArmor,
such as {{< glossary_tooltip term_id="docker">}}, {{< glossary_tooltip term_id="cri-o" >}}
or {{< glossary_tooltip term_id="containerd" >}}.
2. Container runtime supports AppArmor -- All common Kubernetes-supported container runtimes should support AppArmor,
including {{< glossary_tooltip term_id="cri-o" >}} and {{< glossary_tooltip term_id="containerd" >}}.
Please refer to the corresponding runtime documentation and verify that the cluster fulfills the requirements to use AppArmor.
<!--
4. Profile is loaded -- AppArmor is applied to a Pod by specifying an AppArmor profile that each
container should be run with. If any of the specified profiles is not already loaded in the
kernel, the Kubelet (>= v1.4) will reject the Pod. You can view which profiles are loaded on a
3. Profile is loaded -- AppArmor is applied to a Pod by specifying an AppArmor profile that each
container should be run with. If any of the specified profiles is not loaded in the
kernel, the Kubelet will reject the Pod. You can view which profiles are loaded on a
node by checking the `/sys/kernel/security/apparmor/profiles` file. For example:
-->
4. Profile is loaded -- AppArmor is applied to a Pod by specifying an AppArmor profile that
each container should be run with. If any of the specified profiles is not already loaded
in the kernel, the kubelet (>= v1.4) will reject the Pod.
3. Profile is loaded -- AppArmor is applied to a Pod by specifying an AppArmor profile that
each container should be run with. If a specified profile is not loaded in the kernel,
the kubelet will reject the Pod.
You can view which profiles are loaded on a node by checking the
`/sys/kernel/security/apparmor/profiles` file. For example:
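A sketch of such a check (the node name placeholder and SSH access are illustrative assumptions):

```shell
# List the AppArmor profiles currently loaded on a node.
ssh <node-name> "sudo cat /sys/kernel/security/apparmor/profiles | sort"
```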
@ -163,26 +130,6 @@ Make sure:
For details about loading profiles onto nodes, see [Setting up Nodes with profiles](#setting-up-nodes-with-profiles).
<!--
As long as the Kubelet version includes AppArmor support (>= v1.4), the Kubelet will reject a Pod
with AppArmor options if any of the prerequisites are not met. You can also verify AppArmor support
on nodes by checking the node ready condition message (though this is likely to be removed in a
later release):
-->
As long as the kubelet version includes AppArmor support (>= v1.4), the kubelet will reject a Pod
with AppArmor options if any of the prerequisites are not met.
You can also verify AppArmor support on nodes by checking the node ready condition message (though this is likely to be removed in a later release):
```shell
kubectl get nodes -o=jsonpath='{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}{"\n"}{end}'
```
```
gke-test-default-pool-239f5d02-gyn2: kubelet is posting ready status. AppArmor enabled
gke-test-default-pool-239f5d02-x1kf: kubelet is posting ready status. AppArmor enabled
gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor enabled
```
<!-- lessoncontent -->
<!--
@ -212,7 +159,7 @@ container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>
<!--
Where `<container_name>` is the name of the container to apply the profile to, and `<profile_ref>`
specifies the profile to apply. The `profile_ref` can be one of:
specifies the profile to apply. The `<profile_ref>` can be one of:
-->
Where `<container_name>` is the name of the container to apply the profile to, and `<profile_ref>` specifies the profile to apply.
The `<profile_ref>` can be one of:
@ -232,26 +179,24 @@ See the [API Reference](#api-reference) for the full details on the annotation a
See the [API Reference](#api-reference) for the full details on the annotation and profile name formats.
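As an illustration only (the pod name is hypothetical, the profile name is borrowed from this page's example, and the profile must already be loaded on the node), a Pod using the annotation might be created like this:

```shell
# Hypothetical example: run a container named "hello" under a locally loaded
# profile called "k8s-apparmor-example-deny-write".
kubectl create -f /dev/stdin <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: annotated-apparmor-example
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox:1.28
    command: ["sh", "-c", "echo 'Hello AppArmor!' && sleep 1h"]
EOF
```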
<!--
Kubernetes AppArmor enforcement works by first checking that all the prerequisites have been
met, and then forwarding the profile selection to the container runtime for enforcement. If the
prerequisites have not been met, the Pod will be rejected, and will not run.
To verify that the profile was applied, you can check that the container's root process is
running with the correct profile by examining its proc attr:
-->
Kubernetes AppArmor enforcement works by first checking that all the prerequisites have been met,
and then forwarding the profile selection to the container runtime for enforcement.
If the prerequisites have not been met, the Pod will be rejected and will not run.
<!--
To verify that the profile was applied, you can look for the AppArmor security option listed in the container created event:
-->
To verify that the profile was applied, you can look for the AppArmor security option listed in the container created event:
To verify that the profile was applied, you can check that the container's root process is
running with the correct profile by examining its `/proc/1/attr/current`:
```shell
kubectl get events | grep Created
kubectl exec <pod_name> -- cat /proc/1/attr/current
```
```
22s 22s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet e2e-test-stclair-node-pool-31nt} Created container with docker id 269a53b202d3; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
```
<!--
The output should look something like this:
-->
The output should look something like this:
```
k8s-apparmor-example-deny-write (enforce)
```
<!--
You can also verify directly that the container's root process is running with the correct profile by checking its proc attr:
@ -277,11 +222,11 @@ k8s-apparmor-example-deny-write (enforce)
**This example assumes that you have already set up a cluster with AppArmor support.**
<!--
First, we need to load the profile we want to use onto our nodes. This profile denies all file writes:
First, load the profile you want to use onto your Nodes. This profile denies all file writes:
-->
First, we need to load the profile we want to use onto our nodes. This profile denies all file writes:
First, load the profile you want to use onto your Nodes. This profile denies all file writes:
```shell
```
#include <tunables/global>
profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
@ -295,20 +240,20 @@ profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
```
<!--
Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our
nodes. For this example we'll use SSH to install the profiles, but other approaches are
The profile needs to be loaded onto all nodes, since you don't know where the pod will be scheduled.
For this example we'll use SSH to install the profiles, but other approaches are
discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
-->
Since we don't know where the Pod will be scheduled, we need to load the profile on all our nodes.
Since you don't know where the Pod will be scheduled, the profile needs to be loaded onto all nodes.
For this example we'll use SSH to install the profiles,
but other approaches are discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
<!--
# This example assumes that node names match host names, and are reachable via SSH.
-->
```shell
NODES=(
# The SSH-accessible domain names of your nodes
gke-test-default-pool-239f5d02-gyn2.us-central1-a.my-k8s
gke-test-default-pool-239f5d02-x1kf.us-central1-a.my-k8s
gke-test-default-pool-239f5d02-xwux.us-central1-a.my-k8s)
# This example assumes that node names match host names, and are reachable via SSH.
NODES=($(kubectl get nodes -o name))
for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF
#include <tunables/global>
@ -325,52 +270,38 @@ done
```
<!--
Next, we'll run a simple "Hello AppArmor" pod with the deny-write profile:
Next, run a simple "Hello AppArmor" pod with the deny-write profile:
-->
Next, we'll run a simple "Hello AppArmor" Pod with the deny-write profile:
Next, run a simple "Hello AppArmor" Pod with the deny-write profile:
{{% code_sample file="pods/security/hello-apparmor.yaml" %}}
```shell
kubectl create -f ./hello-apparmor.yaml
kubectl create -f hello-apparmor.yaml
```
<!--
If we look at the pod events, we can see that the Pod container was created with the AppArmor
profile "k8s-apparmor-example-deny-write":
You can verify that the container is actually running with that profile by checking its `/proc/1/attr/current`:
-->
If we look at the Pod events, we can see that the Pod container was created with the
AppArmor profile "k8s-apparmor-example-deny-write":
```shell
kubectl get events | grep hello-apparmor
```
```
14s 14s 1 hello-apparmor Pod Normal Scheduled {default-scheduler } Successfully assigned hello-apparmor to gke-test-default-pool-239f5d02-gyn2
14s 14s 1 hello-apparmor Pod spec.containers{hello} Normal Pulling {kubelet gke-test-default-pool-239f5d02-gyn2} pulling image "busybox"
13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Pulled {kubelet gke-test-default-pool-239f5d02-gyn2} Successfully pulled image "busybox"
13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet gke-test-default-pool-239f5d02-gyn2} Created container with docker id 06b6cd1c0989; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Started {kubelet gke-test-default-pool-239f5d02-gyn2} Started container with docker id 06b6cd1c0989
```
<!--
We can verify that the container is actually running with that profile by checking its proc attr:
-->
We can verify that the container is actually running with that profile by checking its proc attr:
You can verify that the container is actually running with that profile by checking its `/proc/1/attr/current`:
```shell
kubectl exec hello-apparmor -- cat /proc/1/attr/current
```
<!--
The output should be:
-->
The output should be:
```
k8s-apparmor-example-deny-write (enforce)
```
<!--
Finally, we can see what happens if we try to violate the profile by writing to a file:
Finally, you can see what happens if you violate the profile by writing to a file:
-->
Finally, we can see what happens if we try to violate the profile by writing to a file:
Finally, you can see what happens if you violate the profile by writing to a file:
```shell
kubectl exec hello-apparmor -- touch /tmp/test
@ -382,15 +313,12 @@ error: error executing remote command: command terminated with non-zero exit cod
```
<!--
To wrap up, let's look at what happens if we try to specify a profile that hasn't been loaded:
To wrap up, see what happens if you try to specify a profile that hasn't been loaded:
-->
To wrap up, let's look at what happens if we try to specify a profile that hasn't been loaded:
To wrap up, see what happens if you try to specify a profile that hasn't been loaded:
```shell
kubectl create -f /dev/stdin <<EOF
```
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -403,9 +331,16 @@ spec:
image: busybox:1.28
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
```
```
pod/hello-apparmor-2 created
```
<!--
Although the Pod was created successfully, further examination will show that it is stuck in pending:
-->
Although the Pod was created successfully, further examination will show that it is stuck in Pending:
```shell
kubectl describe pod hello-apparmor-2
```
@ -413,59 +348,30 @@ kubectl describe pod hello-apparmor-2
```
Name: hello-apparmor-2
Namespace: default
Node: gke-test-default-pool-239f5d02-x1kf/
Node: gke-test-default-pool-239f5d02-x1kf/10.128.0.27
Start Time: Tue, 30 Aug 2016 17:58:56 -0700
Labels: <none>
Annotations: container.apparmor.security.beta.kubernetes.io/hello=localhost/k8s-apparmor-example-allow-write
Status: Pending
Reason: AppArmor
Message: Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
IP:
Controllers: <none>
Containers:
hello:
Container ID:
Image: busybox
Image ID:
Port:
Command:
sh
-c
echo 'Hello AppArmor!' && sleep 1h
State: Waiting
Reason: Blocked
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dnz7v (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-dnz7v:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dnz7v
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
...
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
23s 23s 1 {default-scheduler } Normal Scheduled Successfully assigned hello-apparmor-2 to e2e-test-stclair-node-pool-t1f5
23s 23s 1 {kubelet e2e-test-stclair-node-pool-t1f5} Warning AppArmor Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10s default-scheduler Successfully assigned default/hello-apparmor to gke-test-default-pool-239f5d02-x1kf
Normal Pulled 8s kubelet Successfully pulled image "busybox:1.28" in 370.157088ms (370.172701ms including waiting)
Normal Pulling 7s (x2 over 9s) kubelet Pulling image "busybox:1.28"
Warning Failed 7s (x2 over 8s) kubelet Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found k8s-apparmor-example-allow-write
Normal Pulled 7s kubelet Successfully pulled image "busybox:1.28" in 90.980331ms (91.005869ms including waiting)
```
<!--
Note the pod status is Pending, with a helpful error message: `Pod Cannot enforce AppArmor: profile
"k8s-apparmor-example-allow-write" is not loaded`. An event was also recorded with the same message.
An Event provides the error message with the reason; the specific wording is runtime-dependent:
-->
Note the Pod status is Pending, with a helpful error message:
`Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded`.
An event was also recorded with the same message.
An Event provides the error message with the reason; the specific wording is runtime-dependent:
```
Warning Failed 7s (x2 over 8s) kubelet Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found
```
<!--
## Administration
@ -473,74 +379,31 @@ Note the pod status is Pending, with a helpful error message: `Pod Cannot enforc
## Administration {#administration}
<!--
### Setting up nodes with profiles
### Setting up Nodes with profiles
-->
### Setting up Nodes with profiles {#setting-up-nodes-with-profiles}
<!--
Kubernetes does not currently provide any native mechanisms for loading AppArmor profiles onto
nodes. There are lots of ways to set up the profiles though, such as:
Kubernetes does not currently provide any built-in mechanisms for loading AppArmor profiles onto
Nodes. Profiles can be loaded through custom infrastructure or tools like the
[Kubernetes Security Profiles Operator](https://github.com/kubernetes-sigs/security-profiles-operator).
-->
Kubernetes does not currently provide any built-in mechanisms for loading AppArmor profiles onto Nodes.
There are lots of ways to set up the profiles though, such as:
Profiles can be loaded through custom infrastructure or tools like the
[Kubernetes Security Profiles Operator](https://github.com/kubernetes-sigs/security-profiles-operator).
<!--
* Through a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) that runs a Pod on each node to
ensure the correct profiles are loaded. An example implementation can be found
[here](https://git.k8s.io/kubernetes/test/images/apparmor-loader).
* At node initialization time, using your node initialization scripts (e.g. Salt, Ansible, etc.) or
image.
* By copying the profiles to each node and loading them through SSH, as demonstrated in the
[Example](#example).
-->
* Through a [DaemonSet](/zh-cn/docs/concepts/workloads/controllers/daemonset/) that runs a Pod on each node
  to ensure the correct profiles are loaded. An example implementation can be found
  [here](https://git.k8s.io/kubernetes/test/images/apparmor-loader).
* At node initialization time, using your node initialization scripts (e.g. Salt, Ansible, etc.) or image.
* By copying the profiles to each node and loading them through SSH, as demonstrated in the [Example](#example).
<!--
The scheduler is not aware of which profiles are loaded onto which node, so the full set of profiles
must be loaded onto every node. An alternative approach is to add a node label for each profile (or
class of profiles) on the node, and use a
The scheduler is not aware of which profiles are loaded onto which Node, so the full set of profiles
must be loaded onto every Node. An alternative approach is to add a Node label for each profile (or
class of profiles) on the Node, and use a
[node selector](/docs/concepts/scheduling-eviction/assign-pod-node/) to ensure the Pod is run on a
node with the required profile.
Node with the required profile.
-->
The scheduler is not aware of which profiles are loaded onto which node, so the full set of profiles
must be loaded onto every node. An alternative approach is to add a node label for each profile
(or class of profiles) on the node, and use a
[node selector](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/) to ensure the Pod runs on a node with the required profile.
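A minimal sketch of that labelling approach (the label key and value are hypothetical; any consistent scheme works):

```shell
# Hypothetical label marking nodes where the deny-write profile is loaded.
kubectl label node <node-name> apparmor-profile-deny-write=loaded

# Pods that need the profile can then select those nodes, e.g. with:
#   spec:
#     nodeSelector:
#       apparmor-profile-deny-write: loaded
```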
<!--
### Disabling AppArmor
-->
### Disabling AppArmor {#disabling-apparmor}
<!--
If you do not want AppArmor to be available on your cluster, it can be disabled by a command-line flag:
-->
If you do not want AppArmor to be available on your cluster, it can be disabled by a command-line flag:
```
--feature-gates=AppArmor=false
```
<!--
When disabled, any Pod that includes an AppArmor profile will fail validation with a "Forbidden"
error.
-->
When disabled, any Pod that includes an AppArmor profile will fail validation with a "Forbidden" error.
{{<note>}}
<!--
Even if the Kubernetes feature is disabled, runtimes may still enforce the default profile. The
option to disable the AppArmor feature will be removed when AppArmor graduates to general
availability (GA).
-->
Even if the Kubernetes feature is disabled, runtimes may still enforce the default profile.
The option to disable the AppArmor feature will be removed when AppArmor graduates to general availability (GA).
{{</note>}}
<!--
## Authoring Profiles
-->
@ -609,8 +472,7 @@ Specifying the profile a container will run with:
<!--
- `runtime/default`: Refers to the default runtime profile.
- Equivalent to not specifying a profile, except it still
requires AppArmor to be enabled.
- Equivalent to not specifying a profile, except it still requires AppArmor to be enabled.
- In practice, many container runtimes use the same OCI default profile, defined here:
https://github.com/containers/common/blob/main/pkg/apparmor/apparmor_linux_template.go
- `localhost/<profile_name>`: Refers to a profile loaded on the node (localhost) by name.


@ -1,5 +1,8 @@
# i18n strings for the Ukrainian (main) site.
[blog_post_show_more]
other = "Показати більше…"
[caution]
# other = "Caution:"
other = "Увага:"


@ -24,6 +24,9 @@
{{ partial "navbar-lang-selector.html" . }}
</li>
{{ end }}
<li class="search-item nav-item mr-n4 mr-lg-0">
{{ partial "search-input.html" . }}
</li>
</ul>
</div>
<button id="hamburger" onclick="kub.toggleMenu()" data-auto-burger-exclude><div></div></button>