Merge pull request #26404 from reylejano/merged-master-dev-1.21

Merge master into dev 1.21 - 2/5/2021
pull/26261/head
Kubernetes Prow Robot 2021-02-05 19:09:11 -08:00 committed by GitHub
commit 7f0610d9c2
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
94 changed files with 1288 additions and 713 deletions

View File

@ -1,76 +1,137 @@
# The Kubernetes documentation
[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website)
[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
Welcome! This repository houses all of the assets required to build the [Kubernetes site and documentation](https://kubernetes.io/). We're glad that you want to contribute!
Welcome! This repository houses all of the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!
## Contributing to the docs
# Using this repository
You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. Using the container runtime is strongly recommended, since it gives deployment consistency with the live website.
Once your **pull request** is created, a Kubernetes reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided to you by the Kubernetes reviewer.** Also note that you may end up having more than one Kubernetes reviewer provide you feedback, or you may end up getting feedback from a Kubernetes reviewer that is different from the one originally assigned to provide you feedback. Furthermore, in some cases, one of your reviewers might ask for a technical review from a [Kubernetes tech reviewer](https://github.com/kubernetes/website/wiki/Tech-reviewers) when needed. Reviewers will do their best to provide feedback in a timely fashion, but response time can vary based on circumstances.
## Prerequisites
To use this repository, you need to install:
- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo (Extended version)](https://gohugo.io/)
- A container runtime, for example [Docker](https://www.docker.com/).
Before you start, install the dependencies. Clone the repository and navigate to the directory:
```
git clone https://github.com/kubernetes/website.git
cd website
```
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodules and other dependencies by running the following command:
```
# Pull in the Docsy submodule
git submodule update --init --recursive --depth 1
```
## Running the website using a container
To build the website in a container, run the commands below to build the container image and run it:
```
make container-image
make container-serve
```
Open your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Running the website locally using Hugo
See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions. Make sure to install the Hugo version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L9) file.
To build and test the website locally, run:
```bash
# install dependencies
npm ci
make serve
```
This will start the local Hugo server on port 1313. Open your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
For technical reasons, Hugo is shipped in two sets of binaries. The current website runs only on the **Hugo Extended** version. On the [releases page](https://github.com/gohugoio/hugo/releases), look for archives with `extended` in the name. To confirm, run `hugo version` and look for the word `extended`.
### Troubleshooting macOS for too many open files
If you run `make serve` on macOS and it returns the following error:
```
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```
Check the current limit for open files:
`launchctl limit maxfiles`
Then, run the following commands (adapted from https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c):
```shell
#!/bin/sh
# These are the original gist links, now pointing to my own gists.
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxfiles.plist
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxproc.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxfiles.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxproc.plist
sudo mv limit.maxfiles.plist /Library/LaunchDaemons
sudo mv limit.maxproc.plist /Library/LaunchDaemons
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```
This solution works for both macOS Catalina and macOS Mojave.
# Community, discussion, contribution, and support
Learn more about the Kubernetes SIG Docs community and its meetings on the [community page](http://kubernetes.io/community/).
You can also reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs) ([Get an invite to this Slack](https://slack.k8s.io/))
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
# Contributing to the docs
You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new **pull request** to let us know about it.
Once your **pull request** is created, a Kubernetes reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided to you by the Kubernetes reviewer.**
Also note that you may end up having more than one Kubernetes reviewer provide you feedback, or you may end up getting feedback from a Kubernetes reviewer that is different from the one originally assigned to provide you feedback.
Furthermore, in some cases, one of your reviewers might ask for a technical review from a [Kubernetes tech reviewer](https://github.com/kubernetes/website/wiki/Tech-reviewers) when needed. Reviewers will do their best to provide feedback in a timely fashion, but response time can vary based on circumstances.
For more information about contributing to the Kubernetes documentation, see:
* [Start contributing](https://kubernetes.io/docs/contribute/start/)
* [Staging your documentation changes](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Using page templates](http://kubernetes.io/docs/contribute/style/page-templates/)
* [Contribute to Kubernetes docs](https://kubernetes.io/docs/contribute/)
* [Page content types](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
You can reach the maintainers of the Portuguese localization at:
* Felipe ([GitHub - @femrtnz](https://github.com/femrtnz))
* [Slack channel](https://kubernetes.slack.com/messages/kubernetes-docs-pt)
## Running the site locally using Docker
The recommended way to run the Kubernetes site locally is to run a specialized [Docker](https://docker.com) image that includes the [Hugo](https://gohugo.io) static site generator.
> If you are running on Windows, you'll need a few more tools, which you can install with [Chocolatey](https://chocolatey.org). `choco install make`
> If you'd prefer to run the site locally without Docker, see [Running the site locally using Hugo](#executando-o-site-localmente-usando-o-hugo) below.
If you have Docker [up and running](https://www.docker.com/get-started), build the `kubernetes-hugo` Docker image locally:
```bash
make container-image
```
Once the image has been built, you can run the site locally:
```bash
make container-serve
```
Open your browser to http://localhost:1313 to view the site. As you make changes to the source files, Hugo updates the site and forces a browser refresh.
## Running the site locally using Hugo
See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions. Make sure to install the Hugo version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L9) file.
To run the site locally when you have Hugo installed:
```bash
make serve
```
This will start the local Hugo server on port 1313. Open your browser to http://localhost:1313 to view the site. As you make changes to the source files, Hugo updates the site and forces a browser refresh.
## Community, discussion, contribution, and support
Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).
You can reach out to the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs)
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
### Code of conduct
# Code of conduct
Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).
## Thank you!
# Thank you!
Kubernetes thrives on community participation, and we really appreciate your contributions to our site and our documentation!
Kubernetes thrives on community participation, and we really appreciate your contributions to our website and our documentation!

View File

@ -102,7 +102,7 @@ Other control loops can observe that reported data and take their own actions.
In the thermostat example, if the room is very cold then a different controller
might also turn on a frost protection heater. With Kubernetes clusters, the control
plane indirectly works with IP address management tools, storage services,
cloud provider APIS, and other services by
cloud provider APIs, and other services by
[extending Kubernetes](/docs/concepts/extend-kubernetes/) to implement that.
## Desired versus current state {#desired-vs-current}

View File

@ -11,9 +11,10 @@ weight: 10
Kubernetes runs your workload by placing containers into Pods to run on _Nodes_.
A node may be a virtual or physical machine, depending on the cluster. Each node
contains the services necessary to run
{{< glossary_tooltip text="Pods" term_id="pod" >}}, managed by the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}.
is managed by the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
and contains the services necessary to run
{{< glossary_tooltip text="Pods" term_id="pod" >}}
Typically you have several nodes in a cluster; in a learning or resource-limited
environment, you might have just one.

View File

@ -26,12 +26,12 @@ See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and
Before choosing a guide, here are some considerations:
- Do you just want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
- Do you want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
- Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters.
- **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best.
- Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**?
- Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
- Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
latter, choose an actively-developed distro. Some distros only use binary releases, but
offer a greater variety of choices.
- Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster.

View File

@ -9,23 +9,22 @@ weight: 60
<!-- overview -->
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism. Likewise, container engines are designed to support logging. The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution. For example, if a container crashes, a pod is evicted, or a node dies, you'll usually still want to access your application's logs. As such, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level-logging_. Cluster-level logging requires a separate backend to store, analyze, and query logs. Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster.
However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution.
For example, you may want to access your application's logs if a container crashes, a pod gets evicted, or a node dies.
In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level logging_.
<!-- body -->
Cluster-level logging architectures are described in assumption that
a logging backend is present inside or outside of your cluster. If you're
not interested in having cluster-level logging, you might still find
the description of how logs are stored and handled on the node to be useful.
Cluster-level logging architectures require a separate backend to store, analyze, and query logs. Kubernetes
does not provide a native storage solution for log data. Instead, there are many logging solutions that
integrate with Kubernetes. The following sections describe how to handle and store logs on nodes.
## Basic logging in Kubernetes
In this section, you can see an example of basic logging in Kubernetes that
outputs data to the standard output stream. This demonstration uses
a pod specification with a container that writes some text to standard output
once per second.
This example uses a `Pod` specification with a container
to write text to the standard output stream once per second.
{{< codenew file="debug/counter-pod.yaml" >}}
@ -34,8 +33,10 @@ To run this pod, use the following command:
```shell
kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml
```
The output is:
```
```console
pod/counter created
```
@ -44,73 +45,73 @@ To fetch the logs, use the `kubectl logs` command, as follows:
```shell
kubectl logs counter
```
The output is:
```
```console
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
...
```
You can use `kubectl logs` to retrieve logs from a previous instantiation of a container with `--previous` flag, in case the container has crashed. If your pod has multiple containers, you should specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
You can use `kubectl logs --previous` to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
## Logging at the node level
![Node level logging](/images/docs/user-guide/logging/logging-node-level.png)
Everything a containerized application writes to `stdout` and `stderr` is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to [a logging driver](https://docs.docker.com/engine/admin/logging/overview), which is configured in Kubernetes to write to a file in json format.
A container engine handles and redirects any output generated to a containerized application's `stdout` and `stderr` streams.
For example, the Docker container engine redirects those two streams to [a logging driver](https://docs.docker.com/engine/admin/logging/overview), which is configured in Kubernetes to write to a file in JSON format.
{{< note >}}
The Docker json logging driver treats each line as a separate message. When using the Docker logging driver, there is no direct support for multi-line messages. You need to handle multi-line messages at the logging agent level or higher.
The Docker JSON logging driver treats each line as a separate message. When using the Docker logging driver, there is no direct support for multi-line messages. You need to handle multi-line messages at the logging agent level or higher.
{{< /note >}}
By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
An important consideration in node-level logging is implementing log rotation,
so that logs don't consume all available storage on the node. Kubernetes
currently is not responsible for rotating logs, but rather a deployment tool
is not responsible for rotating logs, but rather a deployment tool
should set up a solution to address that.
For example, in Kubernetes clusters, deployed by the `kube-up.sh` script,
there is a [`logrotate`](https://linux.die.net/man/8/logrotate)
tool configured to run each hour. You can also set up a container runtime to
rotate application's logs automatically, for example by using Docker's `log-opt`.
In the `kube-up.sh` script, the latter approach is used for COS image on GCP,
and the former approach is used in any other environment. In both cases, by
default rotation is configured to take place when log file exceeds 10MB.
rotate an application's logs automatically.
As an example, you can find detailed information about how `kube-up.sh` sets
up logging for COS image on GCP in the corresponding
[script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).
When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
the basic logging example, the kubelet on the node handles the request and
reads directly from the log file, returning the contents in the response.
reads directly from the log file. The kubelet returns the content of the log file.
{{< note >}}
Currently, if some external system has performed the rotation,
If an external system has performed the rotation,
only the contents of the latest log file will be available through
`kubectl logs`. E.g. if there's a 10MB file, `logrotate` performs
the rotation and there are two files, one 10MB in size and one empty,
`kubectl logs` will return an empty response.
`kubectl logs`. For example, if there's a 10MB file, `logrotate` performs
the rotation and there are two files: one file that is 10MB in size and a second file that is empty.
`kubectl logs` returns the latest log file which in this example is an empty response.
{{< /note >}}
[cosConfigureHelper]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh
### System component logs
There are two types of system components: those that run in a container and those
that do not run in a container. For example:
* The Kubernetes scheduler and kube-proxy run in a container.
* The kubelet and container runtime, for example Docker, do not run in containers.
* The kubelet and container runtime do not run in containers.
On machines with systemd, the kubelet and container runtime write to journald. If
systemd is not present, they write to `.log` files in the `/var/log` directory.
System components inside containers always write to the `/var/log` directory,
bypassing the default logging mechanism. They use the [klog](https://github.com/kubernetes/klog)
systemd is not present, the kubelet and container runtime write to `.log` files
in the `/var/log` directory. System components inside containers always write
to the `/var/log` directory, bypassing the default logging mechanism.
They use the [`klog`](https://github.com/kubernetes/klog)
logging library. You can find the conventions for logging severity for those
components in the [development docs on logging](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md).
Similarly to the container logs, system component logs in the `/var/log`
Similar to the container logs, system component logs in the `/var/log`
directory should be rotated. In Kubernetes clusters brought up by
the `kube-up.sh` script, those logs are configured to be rotated by
the `logrotate` tool daily or once the size exceeds 100MB.
@ -129,13 +130,14 @@ While Kubernetes does not provide a native solution for cluster-level logging, t
You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
Because the logging agent must run on every node, it's common to implement it as either a DaemonSet replica, a manifest pod, or a dedicated native process on the node. However the latter two approaches are deprecated and highly discouraged.
Because the logging agent must run on every node, it is recommended to run the agent
as a `DaemonSet`.
Using a node-level logging agent is the most common and encouraged approach for a Kubernetes cluster, because it creates only one agent per node, and it doesn't require any changes to the applications running on the node. However, node-level logging _only works for applications' standard output and standard error_.
Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver/) for use with Google Cloud Platform, and [Elasticsearch](/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/). You can find more information and instructions in the dedicated documents. Both use [fluentd](https://www.fluentd.org/) with custom configuration as an agent on the node.
Containers write stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
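As an illustration, here is a minimal sketch of running such an agent as a DaemonSet; the agent image, names, and mount path are assumptions, and real agents ship their own, more complete manifests:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent                # illustrative name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: log-agent
        image: fluent/fluentd:edge   # placeholder; use the agent image of your choice
        volumeMounts:
        - name: varlog
          mountPath: /var/log        # read container and system logs from the node
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log             # node log directory the agent collects from
```
Because it is a DaemonSet, one copy of the agent is scheduled onto every node automatically.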
### Using a sidecar container with the logging agent
### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}
You can use a sidecar container in one of the following ways:
@ -146,28 +148,27 @@ You can use a sidecar container in one of the following ways:
![Sidecar container with a streaming container](/images/docs/user-guide/logging/logging-with-streaming-sidecar.png)
By having your sidecar containers stream to their own `stdout` and `stderr`
By having your sidecar containers write to their own `stdout` and `stderr`
streams, you can take advantage of the kubelet and the logging agent that
already run on each node. The sidecar containers read logs from a file, a socket,
or the journald. Each individual sidecar container prints log to its own `stdout`
or `stderr` stream.
or journald. Each sidecar container prints a log to its own `stdout` or `stderr` stream.
This approach allows you to separate several log streams from different
parts of your application, some of which can lack support
for writing to `stdout` or `stderr`. The logic behind redirecting logs
is minimal, so it's hardly a significant overhead. Additionally, because
is minimal, so it's not a significant overhead. Additionally, because
`stdout` and `stderr` are handled by the kubelet, you can use built-in tools
like `kubectl logs`.
Consider the following example. A pod runs a single container, and the container
writes to two different log files, using two different formats. Here's a
For example, a pod runs a single container, and the container
writes to two different log files using two different formats. Here's a
configuration file for the Pod:
{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
It would be a mess to have log entries of different formats in the same log
It is not recommended to write log entries with different formats to the same log
stream, even if you managed to redirect both components to the `stdout` stream of
the container. Instead, you could introduce two sidecar containers. Each sidecar
the container. Instead, you can create two sidecar containers. Each sidecar
container could tail a particular log file from a shared volume and then redirect
the logs to its own `stdout` stream.
@ -181,7 +182,10 @@ running the following commands:
```shell
kubectl logs counter count-log-1
```
```
The output is:
```console
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
@ -191,7 +195,10 @@ kubectl logs counter count-log-1
```shell
kubectl logs counter count-log-2
```
```
The output is:
```console
Mon Jan 1 00:00:00 UTC 2001 INFO 0
Mon Jan 1 00:00:01 UTC 2001 INFO 1
Mon Jan 1 00:00:02 UTC 2001 INFO 2
@ -202,16 +209,15 @@ The node-level agent installed in your cluster picks up those log streams
automatically without any further configuration. If you like, you can configure
the agent to parse log lines depending on the source container.
Note, that despite low CPU and memory usage (order of couple of millicores
Note that despite low CPU and memory usage (order of a couple of millicores
for cpu and order of several megabytes for memory), writing logs to a file and
then streaming them to `stdout` can double disk usage. If you have
an application that writes to a single file, it's generally better to set
`/dev/stdout` as destination rather than implementing the streaming sidecar
an application that writes to a single file, it's recommended to set
`/dev/stdout` as the destination rather than implement the streaming sidecar
container approach.
Sidecar containers can also be used to rotate log files that cannot be
rotated by the application itself. An example
of this approach is a small container running logrotate periodically.
rotated by the application itself. An example of this approach is a small container running `logrotate` periodically.
However, it's recommended to use `stdout` and `stderr` directly and leave rotation
and retention policies to the kubelet.
@ -226,21 +232,17 @@ configured specifically to run with your application.
{{< note >}}
Using a logging agent in a sidecar container can lead
to significant resource consumption. Moreover, you won't be able to access
those logs using `kubectl logs` command, because they are not controlled
those logs using `kubectl logs` because they are not controlled
by the kubelet.
{{< /note >}}
As an example, you could use [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/),
which uses fluentd as a logging agent. Here are two configuration files that
you can use to implement this approach. The first file contains
a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd.
Here are two configuration files that you can use to implement a sidecar container with a logging agent. The first file contains
a [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd.
{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
{{< note >}}
The configuration of fluentd is beyond the scope of this article. For
information about configuring fluentd, see the
[official fluentd documentation](https://docs.fluentd.org/).
For information about configuring fluentd, see the [fluentd documentation](https://docs.fluentd.org/).
{{< /note >}}
The second file describes a pod that has a sidecar container running fluentd.
@ -248,18 +250,10 @@ The pod mounts a volume where fluentd can pick up its configuration data.
{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
After some time you can find log messages in the Stackdriver interface.
Remember, that this is just an example and you can actually replace fluentd
with any logging agent, reading from any source inside an application
container.
In the sample configurations, you can replace fluentd with any logging agent, reading from any source inside an application container.
### Exposing logs directly from the application
![Exposing logs directly from the application](/images/docs/user-guide/logging/logging-from-application.png)
You can implement cluster-level logging by exposing or pushing logs directly from
every application; however, the implementation for such a logging mechanism
is outside the scope of Kubernetes.
Cluster-logging that exposes or pushes logs directly from every application is outside the scope of Kubernetes.

View File

@ -70,7 +70,7 @@ deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax:
In the case of two resources, you can specify both resources on the command line using the resource/name syntax:
```shell
kubectl delete deployments/my-nginx services/my-nginx-svc
@ -87,10 +87,11 @@ deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:
Because `kubectl` outputs resource names in the same syntax it accepts, you can chain operations using `$()` or `xargs`:
```shell
kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service)
kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service | xargs -i kubectl get {}
```
```shell
@ -301,6 +302,7 @@ Sometimes you would want to attach annotations to resources. Annotations are arb
kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
kubectl get pods my-nginx-v4-9gw19 -o yaml
```
```shell
apiVersion: v1
kind: Pod
@ -314,11 +316,12 @@ For more information, please see [annotations](/docs/concepts/overview/working-w
## Scaling your application
When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to decrease the number of nginx replicas from 3 to 1, do:
When load on your application grows or shrinks, use `kubectl` to scale your application. For instance, to decrease the number of nginx replicas from 3 to 1, do:
```shell
kubectl scale deployment/my-nginx --replicas=1
```
```shell
deployment.apps/my-nginx scaled
```
@ -328,6 +331,7 @@ Now you only have one pod managed by the deployment.
```shell
kubectl get pods -l app=nginx
```
```shell
NAME READY STATUS RESTARTS AGE
my-nginx-2035384211-j5fhi 1/1 Running 0 30m
@ -338,6 +342,7 @@ To have the system automatically choose the number of nginx replicas as needed,
```shell
kubectl autoscale deployment/my-nginx --min=1 --max=3
```
```shell
horizontalpodautoscaler.autoscaling/my-nginx autoscaled
```
@ -411,6 +416,7 @@ In some cases, you may need to update resource fields that cannot be updated onc
```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
```
```shell
deployment.apps/my-nginx deleted
deployment.apps/my-nginx replaced
@ -427,14 +433,17 @@ Let's say you were running version 1.14.2 of nginx:
```shell
kubectl create deployment my-nginx --image=nginx:1.14.2
```
```shell
deployment.apps/my-nginx created
```
with 3 replicas (so the old and new revisions can coexist):
```shell
kubectl scale deployment my-nginx --current-replicas=1 --replicas=3
```
```
deployment.apps/my-nginx scaled
```

View File

@ -31,22 +31,24 @@ I1025 00:15:15.525108 1 httplog.go:79] GET /api/v1/namespaces/kube-system/
{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
{{<warning>}}
{{< warning >}}
Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. When parsing log files, you must also handle unstructured log messages.
Log formatting and value serialization are subject to change.
{{< /warning>}}
Structured logging is a effort to introduce a uniform structure in log messages allowing for easy extraction of information, making logs easier and cheaper to store and process.
Structured logging introduces a uniform structure in log messages allowing for programmatic extraction of information. You can store and process structured logs with less effort and cost.
New message format is backward compatible and enabled by default.
Format of structured logs:
```
```ini
<klog header> "<message>" <key1>="<value1>" <key2>="<value2>" ...
```
Example:
```
```ini
I1025 00:15:15.525108 1 controller_utils.go:116] "Pod status updated" pod="kube-system/kubedns" status="ready"
```

View File

@ -59,13 +59,13 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN
- Avoid using `hostNetwork`, for the same reasons as `hostPort`.
- Use [headless Services](/docs/concepts/services-networking/service/#headless-services) (which have a `ClusterIP` of `None`) for easy service discovery when you don't need `kube-proxy` load balancing.
- Use [headless Services](/docs/concepts/services-networking/service/#headless-services) (which have a `ClusterIP` of `None`) for service discovery when you don't need `kube-proxy` load balancing.
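As a sketch of the headless Service mentioned above (the name, selector, and ports are illustrative):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # illustrative name
spec:
  clusterIP: None             # marks the Service as headless; no kube-proxy load balancing
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
```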
## Using Labels
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) app for examples of this approach.
A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. [Deployments](/docs/concepts/workloads/controllers/deployment/) make it easy to update a running service without downtime.
A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a [Deployment](/docs/concepts/workloads/controllers/deployment/).
A desired state of an object is described by a Deployment, and if changes to that spec are _applied_, the deployment controller changes the actual state to the desired state at a controlled rate.
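For example, a Service whose selector carries only the stable labels keeps matching Pods across Deployments for different releases (a sketch; names and labels are illustrative):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend               # illustrative name
spec:
  selector:
    app: myapp
    tier: frontend             # no release-specific label, so Pods from any release match
  ports:
  - port: 80
    targetPort: 8080
```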

View File

@ -116,7 +116,7 @@ In this case, `0` means we have just created an empty Secret.
A `kubernetes.io/service-account-token` type of Secret is used to store a
token that identifies a service account. When using this Secret type, you need
to ensure that the `kubernetes.io/service-account.name` annotation is set to an
existing service account name. An Kubernetes controller fills in some other
existing service account name. A Kubernetes controller fills in some other
fields such as the `kubernetes.io/service-account.uid` annotation and the
`token` key in the `data` field set to actual token content.
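A minimal sketch of such a Secret (the Secret name is illustrative, and the annotation must reference a service account that already exists):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sa-token-sample                            # illustrative name
  annotations:
    kubernetes.io/service-account.name: default    # must match an existing ServiceAccount
type: kubernetes.io/service-account-token
```
After creation, the controller populates the `kubernetes.io/service-account.uid` annotation and the `token` key for you.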
@ -801,11 +801,6 @@ field set to that of the service account.
See [Add ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)
for a detailed explanation of that process.
### Automatic mounting of manually created Secrets
Manually created secrets (for example, one containing a token for accessing a GitHub account)
can be automatically attached to pods based on their service account.
## Details
### Restrictions

View File

@ -43,7 +43,7 @@ Each VM is a full machine running all the components, including its own operatin
Containers have become popular because they provide extra benefits, such as:
* Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
* Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
* Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and efficient rollbacks (due to image immutability).
* Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
* Observability not only surfaces OS-level information and metrics, but also application health and other signals.
* Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.

View File

@ -42,7 +42,7 @@ Example labels:
* `"partition" : "customerA"`, `"partition" : "customerB"`
* `"track" : "daily"`, `"track" : "weekly"`
These are just examples of commonly used labels; you are free to develop your own conventions. Keep in mind that label Key must be unique for a given object.
These are examples of commonly used labels; you are free to develop your own conventions. Keep in mind that a label key must be unique for a given object.
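As a sketch of where such labels live, here is an illustrative Pod using two of the example labels above (the Pod name and image are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: label-demo             # illustrative name
  labels:
    partition: customerA       # example label from the list above
    track: daily               # example label from the list above
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
```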
## Syntax and character set

View File

@ -31,7 +31,7 @@ When using imperative commands, a user operates directly on live objects
in a cluster. The user provides operations to
the `kubectl` command as arguments or flags.
This is the simplest way to get started or to run a one-off task in
This is the recommended way to get started or to run a one-off task in
a cluster. Because this technique operates directly on live
objects, it provides no history of previous configurations.
@ -47,7 +47,7 @@ kubectl create deployment nginx --image nginx
Advantages compared to object configuration:
- Commands are simple, easy to learn and easy to remember.
- Commands are expressed as a single action word.
- Commands require only a single step to make changes to the cluster.
Disadvantages compared to object configuration:

View File

@ -10,11 +10,9 @@ weight: 70
{{< feature-state for_k8s_version="v1.15" state="alpha" >}}
The scheduling framework is a pluggable architecture for Kubernetes Scheduler
that makes scheduler customizations easy. It adds a new set of "plugin" APIs to
the existing scheduler. Plugins are compiled into the scheduler. The APIs
allow most scheduling features to be implemented as plugins, while keeping the
scheduling "core" simple and maintainable. Refer to the [design proposal of the
The scheduling framework is a pluggable architecture for the Kubernetes scheduler.
It adds a new set of "plugin" APIs to the existing scheduler. Plugins are compiled into the scheduler. The APIs allow most scheduling features to be implemented as plugins, while keeping the
scheduling "core" lightweight and maintainable. Refer to the [design proposal of the
scheduling framework][kep] for more technical information on the design of the
framework.

View File

@ -74,7 +74,7 @@ An example of an IPv6 CIDR: `fdXY:IJKL:MNOP:15::/64` (this shows the format but
If your cluster has dual-stack enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} which can use IPv4, IPv6, or both.
The address family of a Service defaults to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager).
The address family of a Service defaults to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver).
When you define a Service you can optionally configure it as dual stack. To specify the behavior you want, you
set the `.spec.ipFamilyPolicy` field to one of the following values:
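For instance, a sketch of a Service that sets this field to `PreferDualStack`, one of those values (the name and selector are illustrative):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dual-stack-service      # illustrative name
spec:
  ipFamilyPolicy: PreferDualStack  # use both IPv4 and IPv6 when the cluster allows it
  selector:
    app: myapp
  ports:
  - port: 80
```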

View File

@ -31,14 +31,15 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* The [Citrix ingress controller](https://github.com/citrix/citrix-k8s-ingress-controller#readme) works with
Citrix Application Delivery Controller.
* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller.
* [EnRoute](https://getenroute.io/) is an [Envoy](https://www.envoyproxy.io) based API gateway that can run as an ingress controller.
* F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
lets you use an Ingress to configure F5 BIG-IP virtual servers.
* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io),
which offers API gateway functionality.
* [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for
[HAProxy](http://www.haproxy.org/#desc).
[HAProxy](https://www.haproxy.org/#desc).
* The [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress#readme)
is also an ingress controller for [HAProxy](http://www.haproxy.org/#desc).
is also an ingress controller for [HAProxy](https://www.haproxy.org/#desc).
* [Istio Ingress](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/)
is an [Istio](https://istio.io/) based ingress controller.
* The [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller#readme)
@ -49,7 +50,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
* [Voyager](https://appscode.com/products/voyager) is an ingress controller for
[HAProxy](http://www.haproxy.org/#desc).
[HAProxy](https://www.haproxy.org/#desc).
## Using multiple Ingress controllers

View File

@ -151,9 +151,9 @@ spec:
targetPort: 9376
```
Because this Service has no selector, the corresponding Endpoint object is not
Because this Service has no selector, the corresponding Endpoints object is not
created automatically. You can manually map the Service to the network address and port
where it's running, by adding an Endpoint object manually:
where it's running, by adding an Endpoints object manually:
```yaml
apiVersion: v1

View File

@ -629,6 +629,11 @@ spec:
PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with "Many" modes (`ROX`, `RWX`) is only possible within one namespace.
### PersistentVolumes typed `hostPath`
A `hostPath` PersistentVolume uses a file or directory on the Node to emulate network-attached storage.
See [an example of `hostPath` typed volume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume).
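A minimal sketch of a `hostPath` PersistentVolume (the path, capacity, and name are illustrative):
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-hostpath-pv        # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data                # directory on the node backing this volume
```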
## Raw Block Volume Support
{{< feature-state for_k8s_version="v1.18" state="stable" >}}

View File

@ -210,8 +210,8 @@ spec:
The `CSIMigration` feature for Cinder, when enabled, redirects all plugin operations
from the existing in-tree plugin to the `cinder.csi.openstack.org` Container
Storage Interface (CSI) Driver. In order to use this feature, the [Openstack Cinder CSI
Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md)
Storage Interface (CSI) Driver. In order to use this feature, the [OpenStack Cinder CSI
Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md)
must be installed on the cluster and the `CSIMigration` and `CSIMigrationOpenStack`
beta features must be enabled.

View File

@ -147,8 +147,8 @@ the related features.
| ---------------------------------------- | ---------- | ------- | ----------- |
| `node.kubernetes.io/not-ready` | NoExecute | 1.13+ | DaemonSet pods will not be evicted when there are node problems such as a network partition. |
| `node.kubernetes.io/unreachable` | NoExecute | 1.13+ | DaemonSet pods will not be evicted when there are node problems such as a network partition. |
| `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | |
| `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | |
| `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | DaemonSet pods tolerate disk-pressure attributes by default scheduler. |
| `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | DaemonSet pods tolerate memory-pressure attributes by default scheduler. |
| `node.kubernetes.io/unschedulable` | NoSchedule | 1.12+ | DaemonSet pods tolerate unschedulable attributes by default scheduler. |
| `node.kubernetes.io/network-unavailable` | NoSchedule | 1.12+ | DaemonSet pods, which use host network, tolerate network-unavailable attributes by default scheduler. |
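As a sketch, one of the automatically added tolerations from the table above takes this form inside the DaemonSet Pod's spec:
```yaml
# fragment of a Pod spec; added automatically, shown here only for illustration
tolerations:
- key: node.kubernetes.io/unschedulable   # one of the keys listed in the table
  operator: Exists                        # tolerate the taint regardless of its value
  effect: NoSchedule
```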

View File

@ -208,7 +208,8 @@ As mentioned above, whether you have 1 pod you want to keep running, or 1000, a
### Scaling
The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.
The ReplicationController scales the number of replicas up or down by setting the `replicas` field.
You can configure the ReplicationController to manage the replicas manually or by an auto-scaling control agent.
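As a sketch, the `replicas` field sits in the ReplicationController manifest like this (the name and image are illustrative); changing its value and re-applying the manifest scales the set:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc                 # illustrative name
spec:
  replicas: 3                    # raise or lower this value to scale
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```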
### Rolling updates

View File

@ -78,7 +78,7 @@ sharing](/docs/tasks/configure-pod-container/share-process-namespace/) so
you can view processes in other containers.
See [Debugging with Ephemeral Debug Container](
/docs/tasks/debug-application-cluster/debug-running-pod/#debugging-with-ephemeral-debug-container)
/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container)
for examples of troubleshooting using ephemeral containers.
## Ephemeral containers API

View File

@ -462,8 +462,6 @@ and the [example of Limit Range](/docs/tasks/administer-cluster/manage-resources
### MutatingAdmissionWebhook {#mutatingadmissionwebhook}
{{< feature-state for_k8s_version="v1.13" state="beta" >}}
This admission controller calls any mutating webhooks which match the request. Matching
webhooks are called in serial; each one may modify the object if it desires.
@ -474,7 +472,7 @@ If a webhook called by this has side effects (for example, decrementing quota) i
webhooks or validating admission controllers will permit the request to finish.
If you disable the MutatingAdmissionWebhook, you must also disable the
`MutatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1beta1`
`MutatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1`
group/version via the `--runtime-config` flag (both are on by default in
versions >= 1.9).
@ -486,8 +484,6 @@ versions >= 1.9).
different when read back.
* Setting originally unset fields is less likely to cause problems than
overwriting fields set in the original request. Avoid doing the latter.
* This is a beta feature. Future versions of Kubernetes may restrict the types of
mutations these webhooks can make.
* Future changes to control loops for built-in resources or third-party resources
may break webhooks that work well today. Even when the webhook installation API
is finalized, not all possible webhook behaviors will be guaranteed to be supported
@ -766,8 +762,6 @@ This admission controller {{< glossary_tooltip text="taints" term_id="taint" >}}
### ValidatingAdmissionWebhook {#validatingadmissionwebhook}
{{< feature-state for_k8s_version="v1.13" state="beta" >}}
This admission controller calls any validating webhooks which match the request. Matching
webhooks are called in parallel; if any of them rejects the request, the request
fails. This admission controller only runs in the validation phase; the webhooks it calls may not
@ -778,7 +772,7 @@ If a webhook called by this has side effects (for example, decrementing quota) i
webhooks or other validating admission controllers will permit the request to finish.
If you disable the ValidatingAdmissionWebhook, you must also disable the
`ValidatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1beta1`
`ValidatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1`
group/version via the `--runtime-config` flag (both are on by default in
versions 1.9 and later).

View File

@ -68,8 +68,8 @@ when interpreted by an [authorizer](/docs/reference/access-authn-authz/authoriza
You can enable multiple authentication methods at once. You should usually use at least two methods:
- service account tokens for service accounts
- at least one other method for user authentication.
- service account tokens for service accounts
- at least one other method for user authentication.
When multiple authenticator modules are enabled, the first module
to successfully authenticate the request short-circuits evaluation.
@ -321,13 +321,11 @@ sequenceDiagram
9. `kubectl` provides feedback to the user
Since all of the data needed to validate who you are is in the `id_token`, Kubernetes doesn't need to
"phone home" to the identity provider. In a model where every request is stateless this provides a very scalable
solution for authentication. It does offer a few challenges:
1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
2. The `id_token` can't be revoked, it's like a certificate so it should be short-lived (only a few minutes) so it can be very annoying to have to get a new token every few minutes.
3. There's no easy way to authenticate to the Kubernetes dashboard without using the `kubectl proxy` command or a reverse proxy that injects the `id_token`.
"phone home" to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication. It does offer a few challenges:
1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
2. The `id_token` can't be revoked, it's like a certificate so it should be short-lived (only a few minutes) so it can be very annoying to have to get a new token every few minutes.
3. To authenticate to the Kubernetes dashboard, you must use the `kubectl proxy` command or a reverse proxy that injects the `id_token`.
#### Configuring the API Server
@ -1004,14 +1002,12 @@ RFC3339 timestamp. Presence or absence of an expiry has the following impact:
}
}
```
The plugin can optionally be called with an environment variable, `KUBERNETES_EXEC_INFO`,
that contains information about the cluster for which this plugin is obtaining
credentials. This information can be used to perform cluster-specific credential
acquisition logic. In order to enable this behavior, the `provideClusterInfo` field must
be set on the exec user field in the
[kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/). Here is an
example of the aforementioned `KUBERNETES_EXEC_INFO` environment variable.
To enable the exec plugin to obtain cluster-specific information, set `provideClusterInfo` on the `user.exec`
field in the [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
The plugin will then be supplied with an environment variable, `KUBERNETES_EXEC_INFO`.
Information from this environment variable can be used to perform cluster-specific
credential acquisition logic.
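A sketch of the relevant kubeconfig fragment (the user name and plugin command are placeholders):
```yaml
apiVersion: v1
kind: Config
users:
- name: my-user                                   # placeholder user entry
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: example-credential-plugin          # placeholder plugin binary
      provideClusterInfo: true                    # plugin receives KUBERNETES_EXEC_INFO
```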
The following `ExecCredential` manifest describes a cluster information sample.
```json
{

View File

@ -104,6 +104,9 @@ a given action, and works regardless of the authorization mode used.
```bash
kubectl auth can-i create deployments --namespace dev
```
The output is similar to this:
```
yes
```
@ -111,6 +114,9 @@ yes
```shell
kubectl auth can-i create deployments --namespace prod
```
The output is similar to this:
```
no
```
@ -121,6 +127,9 @@ to determine what action other users can perform.
```bash
kubectl auth can-i list secrets --namespace dev --as dave
```
The output is similar to this:
```
no
```
@ -150,7 +159,7 @@ EOF
```
The generated `SelfSubjectAccessReview` is:
```
```yaml
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
metadata:

View File

@ -1093,8 +1093,8 @@ be a layering violation). `host` may also be an IP address.
Please note that using `localhost` or `127.0.0.1` as a `host` is
risky unless you take great care to run this webhook on all hosts
which run an apiserver which might need to make calls to this
webhook. Such installs are likely to be non-portable, i.e., not easy
to turn up in a new cluster.
webhook. Such installations are likely to be non-portable or not readily
run in a new cluster.
The scheme must be "https"; the URL must begin with "https://".

View File

@ -2,7 +2,7 @@
title: API Group
id: api-group
date: 2019-09-02
full_link: /docs/concepts/overview/kubernetes-api/#api-groups
full_link: /docs/concepts/overview/kubernetes-api/#api-groups-and-versioning
short_description: >
A set of related paths in the Kubernetes API.
@ -12,9 +12,8 @@ tags:
---
Facilitates the discussion and/or implementation of a short-lived, narrow, or decoupled project for a committee, {{< glossary_tooltip text="SIG" term_id="sig" >}}, or cross-SIG effort.
<!--more-->
<!--more-->
Working groups are a way of organizing people to accomplish a discrete task, and are relatively easy to create and deprecate when inactive.
Working groups are a way of organizing people to accomplish a discrete task.
For more information, see the [kubernetes/community](https://github.com/kubernetes/community) repo and the current list of [SIGs and working groups](https://github.com/kubernetes/community/blob/master/sig-list.md).
@ -195,7 +195,7 @@ JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.ty
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
# Output decoded secrets without external tools
kubectl get secret ${secret_name} -o go-template='{{range $k,$v := .data}}{{$k}}={{$v|base64decode}}{{"\n"}}{{end}}'
kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'
# List all Secrets currently in use by a pod
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
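If `jq` or go-templates are not available, a single key can also be decoded with `jsonpath`; the secret and key names here are only examples:
```bash
# Decode one key of a Secret (example names)
kubectl get secret my-secret -o jsonpath='{.data.username}' | base64 --decode
```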
@ -337,7 +337,7 @@ kubectl taint nodes foo dedicated=special-user:NoSchedule
### Resource types
List all supported resource types along with their shortnames, [API group](/docs/concepts/overview/kubernetes-api/#api-groups), whether they are [namespaced](/docs/concepts/overview/working-with-objects/namespaces), and [Kind](/docs/concepts/overview/working-with-objects/kubernetes-objects):
List all supported resource types along with their shortnames, [API group](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning), whether they are [namespaced](/docs/concepts/overview/working-with-objects/namespaces), and [Kind](/docs/concepts/overview/working-with-objects/kubernetes-objects):
```bash
kubectl api-resources
@ -250,15 +250,15 @@ CustomResourceDefinitionSpec describes how a user wants their resource to appear
- **conversion.webhook.clientConfig.url** (string)
url gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.
The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.
Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster.
Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installations are likely to be non-portable or not readily run in a new cluster.
The scheme must be "https"; the URL must begin with "https://".
A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.
Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.
- **preserveUnknownFields** (boolean)
@ -82,15 +82,15 @@ MutatingWebhookConfiguration describes the configuration of and admission webhoo
- **webhooks.clientConfig.url** (string)
`url` gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.
The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.
Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster.
Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installations are likely to be non-portable or not readily run in a new cluster.
The scheme must be "https"; the URL must begin with "https://".
A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.
Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.
- **webhooks.name** (string), required
@ -82,15 +82,15 @@ ValidatingWebhookConfiguration describes the configuration of and admission webh
- **webhooks.clientConfig.url** (string)
`url` gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.
The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.
Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster.
Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installations are likely to be non-portable or not readily run in a new cluster.
The scheme must be "https"; the URL must begin with "https://".
A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.
Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.
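For orientation, a minimal sketch of a webhook configuration that sets `clientConfig.url` (the object name, webhook name, and URL are placeholders):
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validating-webhook          # placeholder
webhooks:
- name: validate.example.com                # placeholder
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    url: "https://webhook.example.com:8443/validate"   # must be an https URL
```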
- **webhooks.name** (string), required
@ -28,7 +28,7 @@ The cluster that `kubeadm init` and `kubeadm join` set up should be:
- lock-down the kubelet API
- locking down access to the API for system components like the kube-proxy and CoreDNS
- locking down what a Bootstrap Token can access
- **Easy to use**: The user should not have to run anything more than a couple of commands:
- **User-friendly**: The user should not have to run anything more than a couple of commands:
- `kubeadm init`
- `export KUBECONFIG=/etc/kubernetes/admin.conf`
- `kubectl apply -f <network-of-choice.yaml>`
@ -108,7 +108,7 @@ if the `kubeadm init` command was called with `--upload-certs`.
control-plane node even if other worker nodes or the network are compromised.
- Convenient to execute manually since all of the information required fits
into a single `kubeadm join` command that is easy to copy and paste.
into a single `kubeadm join` command.
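For reference, such a join command typically has the following shape; the address, token, and hash below are placeholders:
```shell
kubeadm join 192.0.2.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash-of-the-cluster-CA-public-key>
```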
**Disadvantages:**
@ -20,8 +20,8 @@ Kubernetes contains several built-in tools to help you work with the Kubernetes
## Minikube
[`minikube`](https://minikube.sigs.k8s.io/docs/) is a tool that makes it
easy to run a single-node Kubernetes cluster locally on your workstation for
[`minikube`](https://minikube.sigs.k8s.io/docs/) is a tool that
runs a single-node Kubernetes cluster locally on your workstation for
development and testing purposes.
## Dashboard
@ -51,4 +51,3 @@ Use Kompose to:
* Translate a Docker Compose file into Kubernetes objects
* Go from local Docker development to managing your application via Kubernetes
* Convert v1 or v2 Docker Compose `yaml` files or [Distributed Application Bundles](https://docs.docker.com/compose/bundles/)
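As an example of the typical workflow, a Compose file in the current directory can be converted with a single command; the file name is an assumption:
```shell
kompose convert -f docker-compose.yaml
```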
@ -297,7 +297,7 @@ is not what the user wants to happen, even temporarily.
There are two solutions:
- (easy) Leave `replicas` in the configuration; when HPA eventually writes to that
- (basic) Leave `replicas` in the configuration; when HPA eventually writes to that
field, the system gives the user a conflict over it. At that point, it is safe
to remove from the configuration.
@ -39,7 +39,7 @@ kops is an automated provisioning system:
#### Installation
Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also easy to build from source):
Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also convenient to build from source):
{{< tabs name="kops_installation" >}}
{{% tab name="macOS" %}}
@ -147,7 +147,7 @@ You must then set up your NS records in the parent domain, so that records in th
you would create NS records in `example.com` for `dev`. If it is a root domain name you would configure the NS
records at your domain registrar (e.g. `example.com` would need to be configured where you bought `example.com`).
This step is easy to mess up (it is the #1 cause of problems!) You can double-check that
Verify your route53 domain setup (it is the #1 cause of problems!). You can double-check that
your cluster is configured correctly if you have the dig tool by running:
`dig NS dev.example.com`
@ -8,7 +8,7 @@ weight: 30
<!-- overview -->
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">Creating a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">Using `kubeadm`, you can create a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
`kubeadm` also supports other cluster
lifecycle functions, such as [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades.
@ -236,8 +236,8 @@ curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_
Define the directory to download command files
{{< note >}}
The DOWNLOAD_DIR variable must be set to a writable directory.
If you are running Flatcar Container Linux, set DOWNLOAD_DIR=/opt/bin.
The `DOWNLOAD_DIR` variable must be set to a writable directory.
If you are running Flatcar Container Linux, set `DOWNLOAD_DIR=/opt/bin`.
{{< /note >}}
```bash
@ -363,7 +363,7 @@ kubectl taint nodes NODE_NAME node-role.kubernetes.io/master:NoSchedule-
## `/usr` is mounted read-only on nodes {#usr-mounted-read-only}
On Linux distributions such as Fedora CoreOS, the directory `/usr` is mounted as a read-only filesystem.
On Linux distributions such as Fedora CoreOS or Flatcar Container Linux, the directory `/usr` is mounted as a read-only filesystem.
For [flex-volume support](https://github.com/kubernetes/community/blob/ab55d85/contributors/devel/sig-storage/flexvolume.md),
Kubernetes components like the kubelet and kube-controller-manager use the default path of
`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, yet the flex-volume directory _must be writeable_
@ -15,7 +15,7 @@ Windows applications constitute a large portion of the services and applications
## Windows containers in Kubernetes
To enable the orchestration of Windows containers in Kubernetes, simply include Windows nodes in your existing Linux cluster. Scheduling Windows containers in {{< glossary_tooltip text="Pods" term_id="pod" >}} on Kubernetes is as simple and easy as scheduling Linux-based containers.
To enable the orchestration of Windows containers in Kubernetes, include Windows nodes in your existing Linux cluster. Scheduling Windows containers in {{< glossary_tooltip text="Pods" term_id="pod" >}} on Kubernetes is similar to scheduling Linux-based containers.
In order to run Windows containers, your Kubernetes cluster must include multiple operating systems, with control plane nodes running Linux and workers running either Windows or Linux depending on your workload needs. Windows Server 2019 is the only Windows operating system supported, enabling [Kubernetes Node](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) on Windows (including kubelet, [container runtime](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/containerd), and kube-proxy). For a detailed explanation of Windows distribution channels see the [Microsoft documentation](https://docs.microsoft.com/en-us/windows-server/get-started-19/servicing-channels-19).
@ -92,7 +92,7 @@ We expect this implementation to progress from alpha to beta and GA in coming re
### go1.15.5
go1.15.5 has been integrated to Kubernets project as of this release, [including other infrastructure related updates on this effort](https://github.com/kubernetes/kubernetes/pull/95776).
go1.15.5 has been integrated to Kubernetes project as of this release, [including other infrastructure related updates on this effort](https://github.com/kubernetes/kubernetes/pull/95776).
### CSI Volume Snapshot graduates to General Availability
@ -190,7 +190,7 @@ Currently, cadvisor_stats_provider provides AcceleratorStats but cri_stats_provi
PodSubnet validates against the corresponding cluster "--node-cidr-mask-size" of the kube-controller-manager; it fails if the values are not compatible.
kubeadm no longer sets the node-mask automatically on IPv6 deployments; you must check that your IPv6 service subnet mask is compatible with the default node mask /64 or set it accordingly.
Previously, for IPv6, if the podSubnet had a mask lower than /112, kubeadm calculated a node-mask to be multiple of eight and splitting the available bits to maximise the number used for nodes. ([#95723](https://github.com/kubernetes/kubernetes/pull/95723), [@aojea](https://github.com/aojea)) [SIG Cluster Lifecycle]
- The deprecated flag --experimental-kustomize is now removed from kubeadm commands. Use --experimental-patches instead, which was introduced in 1.19. Migration infromation available in --help description for --exprimental-patches. ([#94871](https://github.com/kubernetes/kubernetes/pull/94871), [@neolit123](https://github.com/neolit123))
- The deprecated flag --experimental-kustomize is now removed from kubeadm commands. Use --experimental-patches instead, which was introduced in 1.19. Migration information available in --help description for --experimental-patches. ([#94871](https://github.com/kubernetes/kubernetes/pull/94871), [@neolit123](https://github.com/neolit123))
- Windows hyper-v container featuregate is deprecated in 1.20 and will be removed in 1.21 ([#95505](https://github.com/kubernetes/kubernetes/pull/95505), [@wawa0210](https://github.com/wawa0210)) [SIG Node and Windows]
- The kube-apiserver ability to serve on an insecure port, deprecated since v1.10, has been removed. The insecure address flags `--address` and `--insecure-bind-address` have no effect in kube-apiserver and will be removed in v1.24. The insecure port flags `--port` and `--insecure-port` may only be set to 0 and will be removed in v1.24. ([#95856](https://github.com/kubernetes/kubernetes/pull/95856), [@knight42](https://github.com/knight42), [SIG API Machinery, Node, Testing])
- Add dual-stack Services (alpha). This is a BREAKING CHANGE to an alpha API.
@ -2138,4 +2138,4 @@ filename | sha512 hash
- github.com/godbus/dbus: [ade71ed](https://github.com/godbus/dbus/tree/ade71ed)
- github.com/xlab/handysort: [fb3537e](https://github.com/xlab/handysort/tree/fb3537e)
- sigs.k8s.io/structured-merge-diff/v3: v3.0.0
- vbom.ml/util: db5cfe1
- vbom.ml/util: db5cfe1
@ -163,7 +163,7 @@ Backing up an etcd cluster can be accomplished in two ways: etcd built-in snapsh
### Built-in snapshot
etcd supports built-in snapshot, so backing up an etcd cluster is easy. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir) that is not currently used by an etcd process. Taking the snapshot will normally not affect the performance of the member.
etcd supports built-in snapshot. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir) that is not currently used by an etcd process. Taking the snapshot will normally not affect the performance of the member.
Below is an example for taking a snapshot of the keyspace served by `$ENDPOINT` to the file `snapshotdb`:
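The snapshot example itself falls outside this excerpt; a sketch of what such a command looks like, assuming the v3 `etcdctl` API:
```shell
ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb
```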
@ -72,7 +72,7 @@ Once you have a Linux-based Kubernetes control-plane node you are ready to choos
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan",
"VNI" : 4096,
"VNI": 4096,
"Port": 4789
}
}
@ -5,7 +5,7 @@ content_type: task
<!-- overview -->
This example demonstrates an easy way to limit the amount of storage consumed in a namespace.
This example demonstrates how to limit the amount of storage consumed in a namespace.
The following resources are used in the demonstration: [ResourceQuota](/docs/concepts/policy/resource-quotas/),
[LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/),
@ -117,9 +117,10 @@ The `kubelet` has the following default hard eviction threshold:
* `memory.available<100Mi`
* `nodefs.available<10%`
* `nodefs.inodesFree<5%`
* `imagefs.available<15%`
On a Linux node, the default value also includes `nodefs.inodesFree<5%`.
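These thresholds can be overridden through the kubelet configuration file; a minimal sketch, assuming the `KubeletConfiguration` v1beta1 API and illustrative values:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
```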
### Eviction Monitoring Interval
The `kubelet` evaluates eviction thresholds per its configured housekeeping interval.
@ -140,6 +141,7 @@ The following node conditions are defined that correspond to the specified evict
|-------------------|---------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------|
| `MemoryPressure` | `memory.available` | Available memory on the node has satisfied an eviction threshold |
| `DiskPressure` | `nodefs.available`, `nodefs.inodesFree`, `imagefs.available`, or `imagefs.inodesFree` | Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold |
| `PIDPressure` | `pid.available` | Available process identifiers on the (Linux) node have fallen below an eviction threshold |
The `kubelet` continues to report node status updates at the frequency specified by
`--node-status-update-frequency` which defaults to `10s`.
@ -23,7 +23,7 @@ dynamically, you need a strong understanding of how that change will affect your
cluster's behavior. Always carefully test configuration changes on a small set
of nodes before rolling them out cluster-wide. Advice on configuring specific
fields is available in the inline `KubeletConfiguration`
[type documentation](https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go).
[type documentation (for v1.20)](https://github.com/kubernetes/kubernetes/blob/release-1.20/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
{{< /warning >}}
@ -187,7 +187,7 @@ Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`.
To delete the Secret you have just created:
```shell
kubectl delete secret db-user-pass
kubectl delete secret mysecret
```
## {{% heading "whatsnext" %}}
@ -22,6 +22,7 @@ The kubelet automatically tries to create a {{< glossary_tooltip text="mirror Po
on the Kubernetes API server for each static Pod.
This means that the Pods running on a node are visible on the API server,
but cannot be controlled from there.
The Pod names will be suffixed with the node hostname with a leading hyphen (for example, a static Pod named `static-web` on the node `mynode` appears as `static-web-mynode`).
{{< note >}}
If you are running clustered Kubernetes and are using static
@ -237,4 +238,3 @@ CONTAINER ID IMAGE COMMAND CREATED ...
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
```
@ -1,94 +0,0 @@
---
reviewers:
- piosz
- x13n
content_type: concept
title: Events in Stackdriver
---
<!-- overview -->
Kubernetes events are objects that provide insight into what is happening
inside a cluster, such as what decisions were made by scheduler or why some
pods were evicted from the node. You can read more about using events
for debugging your application in the [Application Introspection and Debugging
](/docs/tasks/debug-application-cluster/debug-application-introspection/)
section.
Since events are API objects, they are stored in the apiserver on master. To
avoid filling up master's disk, a retention policy is enforced: events are
removed one hour after the last occurrence. To provide longer history
and aggregation capabilities, a third party solution should be installed
to capture events.
This article describes a solution that exports Kubernetes events to
Stackdriver Logging, where they can be processed and analyzed.
{{< note >}}
It is not guaranteed that all events happening in a cluster will be
exported to Stackdriver. One possible scenario when events will not be
exported is when event exporter is not running (e.g. during restart or
upgrade). In most cases it's fine to use events for purposes like setting up
[metrics](https://cloud.google.com/logging/docs/logs-based-metrics/) and [alerts](https://cloud.google.com/logging/docs/logs-based-metrics/charts-and-alerts), but you should be aware
of the potential inaccuracy.
{{< /note >}}
<!-- body -->
## Deployment
### Google Kubernetes Engine
In Google Kubernetes Engine, if cloud logging is enabled, event exporter
is deployed by default to the clusters with master running version 1.7 and
higher. To prevent disturbing your workloads, event exporter does not have
resources set and is in the best effort QOS class, which means that it will
be the first to be killed in the case of resource starvation. If you want
your events to be exported, make sure you have enough resources to facilitate
the event exporter pod. This may vary depending on the workload, but on
average, approximately 100Mb RAM and 100m CPU is needed.
### Deploying to the Existing Cluster
Deploy event exporter to your cluster using the following command:
```shell
kubectl apply -f https://k8s.io/examples/debug/event-exporter.yaml
```
Since event exporter accesses the Kubernetes API, it requires permissions to
do so. The following deployment is configured to work with RBAC
authorization. It sets up a service account and a cluster role binding
to allow event exporter to read events. To make sure that event exporter
pod will not be evicted from the node, you can additionally set up resource
requests. As mentioned earlier, 100Mb RAM and 100m CPU should be enough.
{{< codenew file="debug/event-exporter.yaml" >}}
## User Guide
Events are exported to the `GKE Cluster` resource in Stackdriver Logging.
You can find them by selecting an appropriate option from a drop-down menu
of available resources:
<img src="/images/docs/stackdriver-event-exporter-resource.png" alt="Events location in the Stackdriver Logging interface" width="500">
You can filter based on the event object fields using Stackdriver Logging
[filtering mechanism](https://cloud.google.com/logging/docs/view/advanced_filters).
For example, the following query will show events from the scheduler
about pods from deployment `nginx-deployment`:
```
resource.type="gke_cluster"
jsonPayload.kind="Event"
jsonPayload.source.component="default-scheduler"
jsonPayload.involvedObject.name:"nginx-deployment"
```
{{< figure src="/images/docs/stackdriver-event-exporter-filter.png" alt="Filtered events in the Stackdriver Logging interface" width="500" >}}
@ -1,126 +0,0 @@
---
reviewers:
- piosz
- x13n
content_type: concept
title: Logging Using Elasticsearch and Kibana
---
<!-- overview -->
On the Google Compute Engine (GCE) platform, the default logging support targets
[Stackdriver Logging](https://cloud.google.com/logging/), which is described in detail
in the [Logging With Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver).
This article describes how to set up a cluster to ingest logs into
[Elasticsearch](https://www.elastic.co/products/elasticsearch) and view
them using [Kibana](https://www.elastic.co/products/kibana), as an alternative to
Stackdriver Logging when running on GCE.
{{< note >}}
You cannot automatically deploy Elasticsearch and Kibana in the Kubernetes cluster hosted on Google Kubernetes Engine. You have to deploy them manually.
{{< /note >}}
<!-- body -->
To use Elasticsearch and Kibana for cluster logging, you should set the
following environment variable as shown below when creating your cluster with
kube-up.sh:
```shell
KUBE_LOGGING_DESTINATION=elasticsearch
```
You should also ensure that `KUBE_ENABLE_NODE_LOGGING=true` (which is the default for the GCE platform).
Now, when you create a cluster, a message will indicate that the Fluentd log
collection daemons that run on each node will target Elasticsearch:
```shell
cluster/kube-up.sh
```
```
...
Project: kubernetes-satnam
Zone: us-central1-b
... calling kube-up
Project: kubernetes-satnam
Zone: us-central1-b
+++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d)
+++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0)
Looking for already existing resources
Starting master and configuring firewalls
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/zones/us-central1-b/disks/kubernetes-master-pd].
NAME ZONE SIZE_GB TYPE STATUS
kubernetes-master-pd us-central1-b 20 pd-ssd READY
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
+++ Logging using Fluentd to elasticsearch
```
The per-node Fluentd pods, the Elasticsearch pods, and the Kibana pods should
all be running in the kube-system namespace soon after the cluster comes to
life.
```shell
kubectl get pods --namespace=kube-system
```
```
NAME READY STATUS RESTARTS AGE
elasticsearch-logging-v1-78nog 1/1 Running 0 2h
elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-5oq0 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-6896 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-l1ds 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-lz9j 1/1 Running 0 2h
kibana-logging-v1-bhpo8 1/1 Running 0 2h
kube-dns-v3-7r1l9 3/3 Running 0 2h
monitoring-heapster-v4-yl332 1/1 Running 1 2h
monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h
```
The `fluentd-elasticsearch` pods gather logs from each node and send them to
the `elasticsearch-logging` pods, which are part of a
[service](/docs/concepts/services-networking/service/) named `elasticsearch-logging`. These
Elasticsearch pods store the logs and expose them via a REST API.
The `kibana-logging` pod provides a web UI for reading the logs stored in
Elasticsearch, and is part of a service named `kibana-logging`.
The Elasticsearch and Kibana services are both in the `kube-system` namespace
and are not directly exposed via a publicly reachable IP address. To reach them,
follow the instructions for
[Accessing services running in a cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster).
If you try accessing the `elasticsearch-logging` service in your browser, you'll
see a status page that looks something like this:
![Elasticsearch Status](/images/docs/es-browser.png)
You can now type Elasticsearch queries directly into the browser, if you'd
like. See [Elasticsearch's documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html)
for more details on how to do so.
Alternatively, you can view your cluster's logs using Kibana (again using the
[instructions for accessing a service running in the cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster)).
The first time you visit the Kibana URL you will be presented with a page that
asks you to configure your view of the ingested logs. Select the option for
timeseries values and select `@timestamp`. On the following page select the
`Discover` tab and then you should be able to see the ingested logs.
You can set the refresh interval to 5 seconds to have the logs
regularly refreshed.
Here is a typical view of ingested logs from the Kibana viewer:
![Kibana logs](/images/docs/kibana-logs.png)
## {{% heading "whatsnext" %}}
Kibana opens up all sorts of powerful options for exploring your logs! For some
ideas on how to dig into it, check out [Kibana's documentation](https://www.elastic.co/guide/en/kibana/current/discover.html).
@ -583,14 +583,13 @@ and can optionally include a custom CA bundle to use to verify the TLS connectio
The `host` should not refer to a service running in the cluster; use
a service reference by specifying the `service` field instead.
The host might be resolved via external DNS in some apiservers
(i.e., `kube-apiserver` cannot resolve in-cluster DNS as that would
(i.e., `kube-apiserver` cannot resolve in-cluster DNS as that would
be a layering violation). `host` may also be an IP address.
Please note that using `localhost` or `127.0.0.1` as a `host` is
risky unless you take great care to run this webhook on all hosts
which run an apiserver which might need to make calls to this
webhook. Such installs are likely to be non-portable, i.e., not easy
to turn up in a new cluster.
webhook. Such installations are likely to be non-portable or not readily run in a new cluster.
The scheme must be "https"; the URL must begin with "https://".
@ -35,7 +35,7 @@ non-parallel, use of [Job](/docs/concepts/workloads/controllers/job/).
## Starting a message queue service
This example uses RabbitMQ, but it should be easy to adapt to another AMQP-type message service.
This example uses RabbitMQ, however, you can adapt the example to use another AMQP-type message service.
In practice you could set up a message queue service once in a
cluster and reuse it for many jobs, as well as for long-running services.
@ -191,7 +191,7 @@ We can create a new autoscaler using `kubectl create` command.
We can list autoscalers by `kubectl get hpa` and get detailed description by `kubectl describe hpa`.
Finally, we can delete an autoscaler using `kubectl delete hpa`.
In addition, there is a special `kubectl autoscale` command for easy creation of a Horizontal Pod Autoscaler.
In addition, there is a special `kubectl autoscale` command for creating a HorizontalPodAutoscaler object.
For instance, executing `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for replication set *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
@ -221,9 +221,9 @@ the global HPA settings exposed as flags for the `kube-controller-manager` compo
Starting from v1.12, a new algorithmic update removes the need for the
upscale delay.
- `--horizontal-pod-autoscaler-downscale-stabilization`: The value for this option is a
duration that specifies how long the autoscaler has to wait before another
downscale operation can be performed after the current one has completed.
- `--horizontal-pod-autoscaler-downscale-stabilization`: Specifies the duration of the
downscale stabilization time window. Horizontal Pod Autoscaler remembers
the historical recommended sizes and only acts on the largest size within this time window.
The default value is 5 minutes (`5m0s`).
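For example, the stabilization window could be shortened on the controller manager; the value here is purely illustrative:
```shell
kube-controller-manager --horizontal-pod-autoscaler-downscale-stabilization=1m0s
```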
{{< note >}}
@ -41,7 +41,7 @@ card:
<div class="row">
<div class="col-md-9">
<h2>What can Kubernetes do for you?</h2>
<p>With modern web services, users expect applications to be available 24/7, and developers expect to deploy new versions of those applications several times a day. Containerization helps package software to serve these goals, enabling applications to be released and updated in an easy and fast way without downtime. Kubernetes helps you make sure those containerized applications run where and when you want, and helps them find the resources and tools they need to work. Kubernetes is a production-ready, open source platform designed with Google's accumulated experience in container orchestration, combined with best-of-breed ideas from the community.</p>
<p>With modern web services, users expect applications to be available 24/7, and developers expect to deploy new versions of those applications several times a day. Containerization helps package software to serve these goals, enabling applications to be released and updated without downtime. Kubernetes helps you make sure those containerized applications run where and when you want, and helps them find the resources and tools they need to work. Kubernetes is a production-ready, open source platform designed with Google's accumulated experience in container orchestration, combined with best-of-breed ideas from the community.</p>
</div>
</div>
@ -552,7 +552,7 @@ In another terminal, watch the Pods in the StatefulSet:
```shell
kubectl get pod -l app=nginx -w
```
The output is simular to:
The output is similar to:
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 7m
@ -21,39 +21,37 @@ and [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affini
## {{% heading "prerequisites" %}}
Before starting this tutorial, you should be familiar with the following
Kubernetes concepts.
Kubernetes concepts:
- [Pods](/docs/concepts/workloads/pods/)
- [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/)
- [Headless Services](/docs/concepts/services-networking/service/#headless-services)
- [PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
- [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/)
- [StatefulSets](/docs/concepts/workloads/controllers/statefulset/)
- [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget)
- [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
- [kubectl CLI](/docs/reference/kubectl/kubectl/)
- [Pods](/docs/concepts/workloads/pods/)
- [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/)
- [Headless Services](/docs/concepts/services-networking/service/#headless-services)
- [PersistentVolumes](/docs/concepts/storage/volumes/)
- [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/)
- [StatefulSets](/docs/concepts/workloads/controllers/statefulset/)
- [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget)
- [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
- [kubectl CLI](/docs/reference/kubectl/kubectl/)
You will require a cluster with at least four nodes, and each node requires at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and drain the cluster's nodes. **This means that the cluster will terminate and evict all Pods on its nodes, and the nodes will temporarily become unschedulable.** You should use a dedicated cluster for this tutorial, or you should ensure that the disruption you cause will not interfere with other tenants.
You must have a cluster with at least four nodes, and each node requires at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and drain the cluster's nodes. **This means that the cluster will terminate and evict all Pods on its nodes, and the nodes will temporarily become unschedulable.** You should use a dedicated cluster for this tutorial, or you should ensure that the disruption you cause will not interfere with other tenants.
This tutorial assumes that you have configured your cluster to dynamically provision
PersistentVolumes. If your cluster is not configured to do so, you
will have to manually provision three 20 GiB volumes before starting this
tutorial.
## {{% heading "objectives" %}}
After this tutorial, you will know the following.
- How to deploy a ZooKeeper ensemble using StatefulSet.
- How to consistently configure the ensemble.
- How to spread the deployment of ZooKeeper servers in the ensemble.
- How to use PodDisruptionBudgets to ensure service availability during planned maintenance.
- How to deploy a ZooKeeper ensemble using StatefulSet.
- How to consistently configure the ensemble.
- How to spread the deployment of ZooKeeper servers in the ensemble.
- How to use PodDisruptionBudgets to ensure service availability during planned maintenance.
<!-- lessoncontent -->
### ZooKeeper Basics
### ZooKeeper
[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) is a
distributed, open-source coordination service for distributed applications.
@ -68,7 +66,7 @@ The ensemble uses the Zab protocol to elect a leader, and the ensemble cannot wr
ZooKeeper servers keep their entire state machine in memory, and write every mutation to a durable WAL (Write Ahead Log) on storage media. When a server crashes, it can recover its previous state by replaying the WAL. To prevent the WAL from growing without bound, ZooKeeper servers periodically snapshot their in-memory state to storage media. These snapshots can be loaded directly into memory, and all WAL entries that preceded the snapshot may be discarded.
## Creating a ZooKeeper Ensemble
## Creating a ZooKeeper ensemble
The manifest below contains a
[Headless Service](/docs/concepts/services-networking/service/#headless-services),
@ -127,7 +125,7 @@ zk-2 1/1 Running 0 40s
The StatefulSet controller creates three Pods, and each Pod has a container with
a [ZooKeeper](https://www-us.apache.org/dist/zookeeper/stable/) server.
### Facilitating Leader Election
### Facilitating leader election
Because there is no terminating algorithm for electing a leader in an anonymous network, Zab requires explicit membership configuration to perform leader election. Each server in the ensemble needs to have a unique identifier, all servers need to know the global set of identifiers, and each identifier needs to be associated with a network address.
@ -211,7 +209,7 @@ server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
```
### Achieving Consensus
### Achieving consensus
Consensus protocols require that the identifiers of each participant be unique. No two participants in the Zab protocol should claim the same unique identifier. This is necessary to allow the processes in the system to agree on which processes have committed which data. If two Pods are launched with the same ordinal, two ZooKeeper servers would both identify themselves as the same server.
@ -260,7 +258,7 @@ server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
When the servers use the Zab protocol to attempt to commit a value, they will either achieve consensus and commit the value (if leader election has succeeded and at least two of the Pods are Running and Ready), or they will fail to do so (if either of the conditions are not met). No state will arise where one server acknowledges a write on behalf of another.
### Sanity Testing the Ensemble
### Sanity testing the ensemble
The most basic sanity test is to write data to one ZooKeeper server and
to read the data from another.
@ -270,6 +268,7 @@ The command below executes the `zkCli.sh` script to write `world` to the path `/
```shell
kubectl exec zk-0 zkCli.sh create /hello world
```
```
WATCHER::
@ -304,7 +303,7 @@ dataLength = 5
numChildren = 0
```
### Providing Durable Storage
### Providing durable storage
As mentioned in the [ZooKeeper Basics](#zookeeper-basics) section,
ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots
@ -445,8 +444,8 @@ The `volumeMounts` section of the `StatefulSet`'s container `template` mounts th
```shell
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper
- name: datadir
mountPath: /var/lib/zookeeper
```
When a Pod in the `zk` `StatefulSet` is (re)scheduled, it will always have the
@ -454,7 +453,7 @@ same `PersistentVolume` mounted to the ZooKeeper server's data directory.
Even when the Pods are rescheduled, all the writes made to the ZooKeeper
servers' WALs, and all their snapshots, remain durable.
## Ensuring Consistent Configuration
## Ensuring consistent configuration
As noted in the [Facilitating Leader Election](#facilitating-leader-election) and
[Achieving Consensus](#achieving-consensus) sections, the servers in a
@ -469,6 +468,7 @@ Get the `zk` StatefulSet.
```shell
kubectl get sts zk -o yaml
```
```
command:
@ -497,7 +497,7 @@ command:
The command used to start the ZooKeeper servers passed the configuration as a command line parameter. You can also use environment variables to pass configuration to the ensemble.
### Configuring Logging
### Configuring logging
One of the files generated by the `zkGenConfig.sh` script controls ZooKeeper's logging.
ZooKeeper uses [Log4j](https://logging.apache.org/log4j/2.x/), and, by default,
@ -558,13 +558,11 @@ You can view application logs written to standard out or standard error using `k
2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client)
```
Kubernetes supports more powerful, but more complex, logging integrations
with [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/)
and [Elasticsearch and Kibana](/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/).
For cluster level log shipping and aggregation, consider deploying a [sidecar](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns)
container to rotate and ship your logs.
Kubernetes integrates with many logging solutions. You can choose a logging solution
that best fits your cluster and applications. For cluster-level logging and aggregation,
consider deploying a [sidecar container](/docs/concepts/cluster-administration/logging#sidecar-container-with-logging-agent) to rotate and ship your logs.
### Configuring a Non-Privileged User
### Configuring a non-privileged user
The best practices to allow an application to run as a privileged
user inside of a container are a matter of debate. If your organization requires
@ -612,7 +610,7 @@ Because the `fsGroup` field of the `securityContext` object is set to 1000, the
drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data
```
## Managing the ZooKeeper Process
## Managing the ZooKeeper process
The [ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision)
mentions that "You will want to have a supervisory process that
@ -622,7 +620,7 @@ common pattern. When deploying an application in Kubernetes, rather than using
an external utility as a supervisory process, you should use Kubernetes as the
watchdog for your application.
### Updating the Ensemble
### Updating the ensemble
The `zk` `StatefulSet` is configured to use the `RollingUpdate` update strategy.
@ -631,6 +629,7 @@ You can use `kubectl patch` to update the number of `cpus` allocated to the serv
```shell
kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]'
```
```
statefulset.apps/zk patched
```
@ -640,6 +639,7 @@ Use `kubectl rollout status` to watch the status of the update.
```shell
kubectl rollout status sts/zk
```
```
waiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664...
Waiting for 1 pods to be ready...
@ -678,7 +678,7 @@ kubectl rollout undo sts/zk
statefulset.apps/zk rolled back
```
### Handling Process Failure
### Handling process failure
[Restart Policies](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) control how
Kubernetes handles process failures for the entry point of the container in a Pod.
@ -731,7 +731,7 @@ that implements the application's business logic, the script must terminate with
child process. This ensures that Kubernetes will restart the application's
container when the process implementing the application's business logic fails.
### Testing for Liveness
### Testing for liveness
Configuring your application to restart failed processes is not enough to
keep a distributed system healthy. There are scenarios where
@ -795,7 +795,7 @@ zk-0 0/1 Running 1 1h
zk-0 1/1 Running 1 1h
```
### Testing for Readiness
### Testing for readiness
Readiness is not the same as liveness. If a process is alive, it is scheduled
and healthy. If a process is ready, it is able to process input. Liveness is
@ -824,7 +824,7 @@ Even though the liveness and readiness probes are identical, it is important
to specify both. This ensures that only healthy servers in the ZooKeeper
ensemble receive network traffic.
## Tolerating Node Failure
## Tolerating Node failure
ZooKeeper needs a quorum of servers to successfully commit mutations
to data. For a three server ensemble, two servers must be healthy for
@ -879,10 +879,10 @@ as `zk` in the domain defined by the `topologyKey`. The `topologyKey`
different rules, labels, and selectors, you can extend this technique to spread
your ensemble across physical, network, and power failure domains.
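The drains in the next section only succeed while preserving quorum because the ensemble is protected by a PodDisruptionBudget; a sketch of such a budget for the `zk` StatefulSet (the object name is an assumption):
```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb                 # assumed name
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1            # never voluntarily take down more than one ZooKeeper server
```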
## Surviving Maintenance
## Surviving maintenance
**In this section you will cordon and drain nodes. If you are using this tutorial
on a shared cluster, be sure that this will not adversely affect other tenants.**
In this section you will cordon and drain nodes. If you are using this tutorial
on a shared cluster, be sure that this will not adversely affect other tenants.
The previous section showed you how to spread your Pods across nodes to survive
unplanned node failures, but you also need to plan for temporary node failures
@ -1017,6 +1017,7 @@ Continue to watch the Pods of the stateful set, and drain the node on which
```shell
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
```
```
node "kubernetes-node-i4c4" cordoned
@ -1059,6 +1060,7 @@ Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#unc
```shell
kubectl uncordon kubernetes-node-pb41
```
```
node "kubernetes-node-pb41" uncordoned
```
@ -1068,6 +1070,7 @@ node "kubernetes-node-pb41" uncordoned
```shell
kubectl get pods -w -l app=zk
```
```
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 2 1h
@ -1130,9 +1133,7 @@ You should always allocate additional capacity for critical services so that the
## {{% heading "cleanup" %}}
- Use `kubectl uncordon` to uncordon all the nodes in your cluster.
- You will need to delete the persistent storage media for the PersistentVolumes
used in this tutorial. Follow the necessary steps, based on your environment,
storage configuration, and provisioning method, to ensure that all storage is
reclaimed.
- You must delete the persistent storage media for the PersistentVolumes used in this tutorial.
Follow the necessary steps, based on your environment, storage configuration,
and provisioning method, to ensure that all storage is reclaimed.
@ -272,7 +272,7 @@ If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer` y
## Scale the Web Frontend
Scaling up or down is easy because your servers are defined as a Service that uses a Deployment controller.
You can scale up or down as needed because your servers are defined as a Service that uses a Deployment controller.
1. Run the following command to scale up the number of frontend Pods:
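A sketch of that command, assuming the Deployment from the guestbook example is named `frontend`:
```shell
kubectl scale deployment frontend --replicas=5
```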
@ -370,4 +370,3 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
* Read more about [connecting applications](/docs/concepts/services-networking/connect-applications-service/)
* Read more about [Managing Resources](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)
@ -21,7 +21,7 @@ rules:
- certificates.k8s.io
resources:
- signers
resourceName:
resourceNames:
- example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain
verbs:
- sign
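In context, that rule sits inside a ClusterRole; a minimal sketch, with a placeholder role name:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-signer   # placeholder
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - signers
  resourceNames:
  - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain
  verbs:
  - sign
```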
@ -40,7 +40,7 @@ For more information on creating a cluster with kubeadm, once you
## Verify that the MAC address and product_uuid are unique for every node {#verify-mac-address}
* You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a`
* The product_uuid can be checked by using the command `sudo cat/sys/class/dmi/id/product_uuid`
* The product_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid`
It is very likely that hardware devices have unique addresses, although some
virtual machines may have identical values. Kubernetes uses these values to uniquely identify the nodes of the cluster.
@ -114,7 +114,7 @@ cpu-demo 974m <something>
Remember that by setting `-cpu "2"`, you configured the container to attempt to use 2 CPUs, but the container can only use about 1 CPU. The container's CPU use is being throttled, because the container is attempting to use more CPU resources than its limit.
{{< note >}}
Another possible explanation for the CPU throttling is that the Node might not have enough CPU resources available. Recall that the prerequisites for this exercise require each of your Nodes to have at least 1 CPU.
If your container runs on a Node that has only 1 CPU, the container cannot use more than 1 CPU regardless of the CPU limit specified for the container.
{{< /note >}}
@ -25,7 +25,7 @@ CNCF Community Code of Conduct v1.0
If you experience abusive, harassing, or otherwise unacceptable behavior in Kubernetes, please contact the [Kubernetes Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct) at <conduct@kubernetes.io>. For other projects, please contact the CNCF project administrators or the mediator, <mishi@linux.com>.
This Code of Conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 1.2.0, available at http://contributor-covenant.org/version/1/2/0/
This Code of Conduct is adapted from the Contributor Covenant (https://contributor-covenant.org), version 1.2.0, available at https://contributor-covenant.org/version/1/2/0/
### CNCF Events Code of Conduct
@ -52,4 +52,4 @@ To avoid time skew in Kubernetes, all Node
* [Clean up Jobs automatically](/ja/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)
* [Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md)
* [Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md)
@ -0,0 +1,258 @@
---
title: Configure Minimum and Maximum Memory Constraints for a Namespace
content_type: task
weight: 30
---
<!-- overview -->
This page shows how to set minimum and maximum values for the memory used by containers running in a namespace.
You specify the minimum and maximum memory values in a [LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core) object.
If a Pod does not meet the constraints imposed by the LimitRange, it cannot be created in the namespace.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
Each node in your cluster must have at least 1 GiB of memory.
<!-- steps -->
## Create a namespace
Create a namespace so that the resources you create in this exercise are isolated from the rest of your cluster.
```shell
kubectl create namespace constraints-mem-example
```
## Create a LimitRange and a Pod
Here is the configuration file for a LimitRange:
{{< codenew file="admin/resource/memory-constraints.yaml" >}}
Create the LimitRange:
```shell
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints.yaml --namespace=constraints-mem-example
```
View detailed information about the LimitRange:
```shell
kubectl get limitrange mem-min-max-demo-lr --namespace=constraints-mem-example --output=yaml
```
The output shows the minimum and maximum memory constraints as expected. But
notice that even though you did not specify default values in the configuration
file for the LimitRange, they were created automatically.
```
limits:
- default:
memory: 1Gi
defaultRequest:
memory: 1Gi
max:
memory: 1Gi
min:
memory: 500Mi
type: Container
```
Whenever a container is created in the constraints-mem-example namespace, Kubernetes
performs these steps:
* If the container does not specify its own memory request and limit, assign the default memory request and limit to the container.
* Verify that the container has a memory request that is greater than or equal to 500 MiB.
* Verify that the container has a memory limit that is less than or equal to 1 GiB.
Here is the configuration file for a Pod that has one container. The container manifest specifies a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the minimum and maximum memory constraints imposed by the LimitRange.
{{< codenew file="admin/resource/memory-constraints-pod.yaml" >}}
Create the Pod:
```shell
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod.yaml --namespace=constraints-mem-example
```
Verify that the Pod's container is running:
```shell
kubectl get pod constraints-mem-demo --namespace=constraints-mem-example
```
View detailed information about the Pod:
```shell
kubectl get pod constraints-mem-demo --output=yaml --namespace=constraints-mem-example
```
The output shows that the container has a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the constraints imposed by the LimitRange.
```yaml
resources:
limits:
memory: 800Mi
requests:
memory: 600Mi
```
Delete the Pod:
```shell
kubectl delete pod constraints-mem-demo --namespace=constraints-mem-example
```
## Attempt to create a Pod that exceeds the maximum memory constraint
Here is the configuration file for a Pod that has one container. The container specifies a memory request of 800 MiB and a memory limit of 1.5 GiB.
{{< codenew file="admin/resource/memory-constraints-pod-2.yaml" >}}
Attempt to create the Pod:
```shell
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-2.yaml --namespace=constraints-mem-example
```
The output shows that the Pod is not created, because the container specifies a memory limit that is too large.
```
Error from server (Forbidden): error when creating "examples/admin/resource/memory-constraints-pod-2.yaml":
pods "constraints-mem-demo-2" is forbidden: maximum memory usage per Container is 1Gi, but limit is 1536Mi.
```
## Attempt to create a Pod that does not meet the minimum memory request
Here is the configuration file for a Pod that has one container. The container specifies a memory request of 100 MiB and a memory limit of 800 MiB.
{{< codenew file="admin/resource/memory-constraints-pod-3.yaml" >}}
Attempt to create the Pod:
```shell
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-3.yaml --namespace=constraints-mem-example
```
The output shows that the Pod is not created, because the container specifies a memory request that is too small.
```
Error from server (Forbidden): error when creating "examples/admin/resource/memory-constraints-pod-3.yaml":
pods "constraints-mem-demo-3" is forbidden: minimum memory usage per Container is 500Mi, but request is 100Mi.
```
## Create a Pod that does not specify any memory request or limit
Here is the configuration file for a Pod that has one container. The container does not specify a memory request, and it does not specify a memory limit.
{{< codenew file="admin/resource/memory-constraints-pod-4.yaml" >}}
Create the Pod:
```shell
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-4.yaml --namespace=constraints-mem-example
```
View detailed information about the Pod:
```
kubectl get pod constraints-mem-demo-4 --namespace=constraints-mem-example --output=yaml
```
The output shows that the Pod's container has a memory request of 1 GiB and a memory limit of 1 GiB.
How did the container get those values?
```
resources:
limits:
memory: 1Gi
requests:
memory: 1Gi
```
Because the container did not specify its own memory request and limit, it was given them by the LimitRange.
Because the container did not specify its own memory request and limit, it was given the [default memory request and limit](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) from the LimitRange.
At this point, your container might be running or it might not be running. Recall that a prerequisite for this task is that your nodes have at least 1 GiB of memory. If each of your nodes has only 1 GiB of memory, then there is not enough allocatable memory on any node to accommodate a memory request of 1 GiB. If you happen to be using nodes with 2 GiB of memory, then you probably have enough room to accommodate the 1 GiB request.
Delete the Pod:
```
kubectl delete pod constraints-mem-demo-4 --namespace=constraints-mem-example
```
## 最小および最大メモリー制約の強制
LimitRangeによってNamespaceに課される最大および最小のメモリー制約は、Podが作成または更新されたときにのみ適用されます。LimitRangeを変更しても、以前に作成されたPodには影響しません。
## 最小・最大メモリー制約の動機
クラスター管理者としては、Podが使用できるメモリー量に制限を課したいと思うかもしれません。
例:
* クラスター内の各ードは2GBのメモリーを持っています。クラスタ内のどのードもその要求をサポートできないため、2GB以上のメモリーを要求するPodは受け入れたくありません。
* クラスターは運用部門と開発部門で共有されています。本番用のワークロードは最大8GBのメモリーを消費する可能性がありますが、開発用のワークロードは512MBに制限したいとします。本番用と開発用に別々のNamespaceを作成し、それぞれのNamespaceにメモリー制約を適用します次にスケッチ例を示します
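たとえば開発用のNamespaceには、次のようなLimitRangeを適用することが考えられます(名前・Namespace名・値はあくまで説明用のスケッチです)。

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-mem-limit       # 説明用の名前
  namespace: development    # 説明用のNamespace名
spec:
  limits:
  - max:
      memory: 512Mi         # 開発用ワークロードの上限本番用のNamespaceでは8Giなどにする
    type: Container
```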
## クリーンアップ
Namespaceを削除します。
```shell
kubectl delete namespace constraints-mem-example
```
## {{% heading "whatsnext" %}}
### クラスター管理者向け
* [名前空間に対するデフォルトのメモリー要求と制限の構成](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
* [名前空間に対するデフォルトのCPU要求と制限の構成](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
* [名前空間に対する最小および最大CPU制約の構成](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
* [名前空間に対するメモリーとCPUのクォータの構成](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
* [名前空間に対するPodクォータの設定](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
* [APIオブジェクトのクォータの設定](/docs/tasks/administer-cluster/quota-api-object/)
### アプリケーション開発者向け
* [コンテナとPodへのメモリーリソースの割り当て](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [コンテナとPodへのCPUリソースの割り当て](/docs/tasks/configure-pod-container/assign-cpu-resource/)
* [PodのQoS(サービス品質)を設定](/docs/tasks/configure-pod-container/quality-service-pod/)

View File

@ -7,10 +7,10 @@ weight: 10
<!-- overview -->
쿠버네티스는 컨테이너를 파드내에 배치하고 _노드_ 에서 실행함으로 워크로드를 구동한다.
노드는 클러스터에 따라 가상 또는 물리적 머신일 수 있다. 각 노드
{{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}}이라는
노드는 클러스터에 따라 가상 또는 물리적 머신일 수 있다. 각 노드는
{{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}}에 의해 관리되며
{{< glossary_tooltip text="파드" term_id="pod" >}}를
실행하는데 필요한 서비스가 포함되어 있다.
실행하는데 필요한 서비스를 포함한다.
일반적으로 클러스터에는 여러 개의 노드가 있으며, 학습 또는 리소스가 제한되는
환경에서는 하나만 있을 수도 있다.

View File

@ -69,7 +69,7 @@ VXLAN/오버레이 네트워킹을 사용하는 경우 [KB4489899](https://suppo
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan",
"VNI" : 4096,
"VNI": 4096,
"Port": 4789
}
}

View File

@ -939,10 +939,10 @@ For background information on design details for API priority and fairness, see
[enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190228-priority-and-fairness.md).
You can make suggestions and feature requests via
[SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery)
or the feature's [slack channel](http://kubernetes.slack.com/messages/api-priority-and-fairness).
or the feature's [slack channel](https://kubernetes.slack.com/messages/api-priority-and-fairness).
-->
有关API优先级和公平性的设计细节的背景信息
请参阅[增强建议](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190228-priority-and-fairness.md)。
你可以通过 [SIG APIMachinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery)
或特性的 [Slack 频道](http://kubernetes.slack.com/messages/api-priority-and-fairness)
你可以通过 [SIG APIMachinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery/)
或特性的 [Slack 频道](https://kubernetes.slack.com/messages/api-priority-and-fairness/)
提出建议和特性请求。

View File

@ -153,6 +153,55 @@ List of components currently supporting JSON format:
* {{< glossary_tooltip term_id="kube-scheduler" text="kube-scheduler" >}}
* {{< glossary_tooltip term_id="kubelet" text="kubelet" >}}
<!--
### Log sanitization
{{< feature-state for_k8s_version="v1.20" state="alpha" >}}
{{<warning >}}
Log sanitization might incur significant computation overhead and therefore should not be enabled in production.
{{< /warning >}}
-->
### 日志清理
{{< feature-state for_k8s_version="v1.20" state="alpha" >}}
{{<warning >}}
日志清理可能会带来很大的计算开销,因此不应在生产环境中启用。
{{< /warning >}}
<!--
The `--experimental-logging-sanitization` flag enables the klog sanitization filter.
If enabled all log arguments are inspected for fields tagged as sensitive data (e.g. passwords, keys, tokens) and logging of these fields will be prevented.
-->
`--experimental-logging-sanitization` 参数可用来启用 klog 清理过滤器。
如果启用后,将检查所有日志参数中是否有标记为敏感数据的字段(比如:密码,密钥,令牌),并且将阻止这些字段的记录。
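例如,可以按类似下面的方式在 kube-apiserver 上启用该过滤器(仅为示意,其余参数从略):

```shell
kube-apiserver \
  --experimental-logging-sanitization \
  # ……其他参数照常
```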
<!--
List of components currently supporting log sanitization:
* kube-controller-manager
* kube-apiserver
* kube-scheduler
* kubelet
{{< note >}}
The Log sanitization filter does not prevent user workload logs from leaking sensitive data.
{{< /note >}}
-->
当前支持日志清理的组件列表:
* kube-controller-manager
* kube-apiserver
* kube-scheduler
* kubelet
{{< note >}}
日志清理过滤器不会阻止用户工作负载日志泄漏敏感数据。
{{< /note >}}
<!--
### Log verbosity level

View File

@ -204,7 +204,7 @@ kubelet 在驱动程序上保持打开状态。这意味着为了执行基础结
现在,收集加速器指标的责任属于供应商,而不是 kubelet。供应商必须提供一个收集指标的容器
并将其公开给指标服务(例如 Prometheus
[`DisableAcceleratorUsageMetrics` 特性门控](/zh/docs/references/command-line-tools-reference/feature-gates/)
[`DisableAcceleratorUsageMetrics` 特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
禁止由 kubelet 收集的指标。
关于[何时会在默认情况下启用此功能也有一定规划](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria)。

View File

@ -90,6 +90,6 @@ to disable the timeout restriction. This deprecated feature gate will be removed
了解如何在自己的环境中启用聚合器。
* 接下来,了解[安装扩展 API 服务器](/zh/docs/tasks/extend-kubernetes/setup-extension-api-server/)
开始使用聚合层。
* 也可以学习怎样[使用自定义资源定义扩展 Kubernetes API](zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)。
* 也可以学习怎样[使用自定义资源定义扩展 Kubernetes API](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)。
* 阅读 [APIService](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#apiservice-v1-apiregistration-k8s-io) 的规范

View File

@ -97,7 +97,7 @@ Extensions are software components that extend and deeply integrate with Kuberne
They adapt it to support new types and new kinds of hardware.
Most cluster administrators will use a hosted or distribution
instance of Kubernetes. As a result, most Kubernetes users will need to
instance of Kubernetes. As a result, most Kubernetes users will not need to
install extensions and fewer will need to author new ones.
-->
## 扩展程序 {#extension}
@ -105,7 +105,7 @@ install extensions and fewer will need to author new ones.
扩展程序是指对 Kubernetes 进行扩展和深度集成的软件组件。它们适合用于支持新的类型和新型硬件。
大多数集群管理员会使用托管的或统一分发的 Kubernetes 实例。
因此,大多数 Kubernetes 用户需要安装扩展程序,而且还有少部分用户甚至需要编写新的扩展程序。
因此,大多数 Kubernetes 用户不需要安装扩展程序,需要编写新扩展程序的用户就更少了。
<!--
## Extension Patterns
@ -145,21 +145,20 @@ failure.
<!--
In the webhook model, Kubernetes makes a network request to a remote service.
In the *Binary Plugin* model, Kubernetes executes a binary (program).
Binary plugins are used by the kubelet (e.g. [Flex Volume
Plugins](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md)
and [Network
Plugins](/docs/concepts/cluster-administration/network-plugins/))
Binary plugins are used by the kubelet (e.g.
[Flex Volume Plugins](/docs/concepts/storage/volumes/#flexvolume)
and [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/))
and by kubectl.
-->
在 webhook 模型里Kubernetes 向远程服务发送一个网络请求。
*可执行文件插件* 模型里Kubernetes 执行一个可执行文件(程序)。
可执行文件插件被 kubelet
[Flex 卷插件](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md)和
[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
`kubectl` 所使用。
[Flex 卷插件](/zh/docs/concepts/storage/volumes/#flexvolume)
[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
`kubectl` 所使用。
<!--
Below is a diagram showing how the extensions points interact with the
Below is a diagram showing how the extension points interact with the
Kubernetes control plane.
-->
下图显示了扩展点如何与 Kubernetes 控制平面进行交互。
@ -184,13 +183,14 @@ This diagram shows the extension points in a Kubernetes system.
<!-- image source diagrams: https://docs.google.com/drawings/d/1k2YdJgNTtNfW7_A8moIIkij-DmVgEhNrn3y2OODwqQQ/view -->
<!--
1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/extend-kubernetes/#api-access-extensions) section.
3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/extend-kubernetes/#user-defined-types) section. Custom Resources are often used with API Access Extensions.
4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section.
5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/overview/extending#network-plugins) allow for different implementations of pod networking.
7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/overview/extending#storage-plugins).
1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/extend-kubernetes/#api-access-extensions) section.
3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/extend-kubernetes/#user-defined-types) section. Custom Resources are often used with API Access Extensions.
4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/extend-kubernetes/#scheduler-extensions) section.
5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/extend-kubernetes/#network-plugins) allow for different implementations of pod networking.
7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/extend-kubernetes/#storage-plugins).
-->
1. 用户通常使用 `kubectl` 与 Kubernetes API 进行交互。
@ -209,9 +209,9 @@ This diagram shows the extension points in a Kubernetes system.
5. Kubernetes 的大部分行为都是由称为控制器Controllers的程序实现的这些程序是 API 服务器的客户端。
控制器通常与自定义资源一起使用。
6. `kubelet` 在主机上运行,并帮助 Pod 看起来就像在集群网络上拥有自己的 IP 的虚拟服务器。
[网络插件](/zh/docs/concepts/extend-kubernetes/#network-plugins/)让你可以实现不同的 pod 网络。
[网络插件](/zh/docs/concepts/extend-kubernetes/#network-plugins)让你可以实现不同的 pod 网络。
7. `kubelet` 也负责为容器挂载和卸载卷。新的存储类型可以通过
[存储插件](/zh/docs/concepts/extend-kubernetes/#storage-plugins/)支持。
[存储插件](/zh/docs/concepts/extend-kubernetes/#storage-plugins)支持。
<!--
If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions.
@ -230,7 +230,7 @@ Consider adding a Custom Resource to Kubernetes if you want to define new contro
Do not use a Custom Resource as data storage for application, user, or monitoring data.
For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/api-extension/custom-resources/).
For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
-->
## API 扩展 {#api-extensions}
@ -343,24 +343,23 @@ After a request is authorized, if it is a write operation, it also goes through
### Storage Plugins
[Flex Volumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md
) allow users to mount volume types without built-in support by having the
[Flex Volumes](/docs/concepts/storage/volumes/#flexvolume)
allow users to mount volume types without built-in support by having the
Kubelet call a Binary Plugin to mount the volume.
-->
## 基础设施扩展
### 存储插件
[Flex Volumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md
)
[Flex Volumes](/zh/docs/concepts/storage/volumes/#flexvolume)
允许用户挂载无内置插件支持的卷类型,它通过 Kubelet 调用一个可执行文件插件来挂载卷。
<!--
### Device Plugins
Device plugins allow a node to discover new Node resources (in addition to the
builtin ones like cpu and memory) via a [Device
Plugin](/docs/concepts/cluster-administration/device-plugins/).
builtin ones like cpu and memory) via a
[Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).
-->
### 设备插件 {#device-plugins}
@ -371,7 +370,8 @@ Plugin](/docs/concepts/cluster-administration/device-plugins/).
<!--
### Network Plugins
Different networking fabrics can be supported via node-level [Network Plugins](/docs/admin/network-plugins/).
Different networking fabrics can be supported via node-level
[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).
-->
### 网络插件 {#network-plugins}
@ -408,17 +408,21 @@ the nodes chosen for a pod.
[Webhook](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md)
它允许使用一个 Webhook 后端(调度器扩展程序)为 Pod 筛选节点和确定节点的优先级。
## {{% heading "whatsnext" %}}
<!--
* Learn more about [Custom Resources](/docs/concepts/api-extension/custom-resources/)
## {{% heading "whatsnext" %}}
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
* Learn more about Infrastructure extensions
* [Network Plugins](/docs/concepts/cluster-administration/network-plugins/)
* [Device Plugins](/docs/concepts/cluster-administration/device-plugins/)
* [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
* [Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
* Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/)
* Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/)
-->
## {{% heading "whatsnext" %}}
* 详细了解[自定义资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* 了解[动态准入控制](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/)
* 详细了解基础设施扩展

View File

@ -35,7 +35,7 @@ Here's the diagram of a Kubernetes cluster with all the components tied together
-->
<!-- overview -->
当你部署完 Kubernetes, 即拥有了一个完整的集群。
{{< glossary_definition term_id="cluster" length="all" prepend="一个 Kubernetes 集群包含">}}
{{< glossary_definition term_id="cluster" length="all" prepend="一个 Kubernetes">}}
本文档概述了交付正常运行的 Kubernetes 集群所需的各种组件。

View File

@ -1230,7 +1230,7 @@ By default, all safe sysctls are allowed.
- Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details.
-->
- 参阅[Pod 安全标准](zh/docs/concepts/security/pod-security-standards/)
- 参阅[Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)
了解策略建议。
- 阅读 [Pod 安全策略参考](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy)了解 API 细节。

View File

@ -316,7 +316,7 @@ The rest of this section will assume you have a Service with a long lived IP
所以可以通过标准做法,使在集群中的任何 Pod 都能与该 Service 通信(例如:`gethostbyname()`)。
如果 CoreDNS 没有在运行,你可以参照
[CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes) 或者
[安装 CoreDNS](/docs/tasks/administer-cluster/coredns/#installing-coredns) 来启用它。
[安装 CoreDNS](/zh/docs/tasks/administer-cluster/coredns/#installing-coredns) 来启用它。
让我们运行另一个 curl 应用来进行测试:
```shell

View File

@ -49,6 +49,7 @@ Kubernetes 作为一个项目,目前支持和维护
* The [Citrix ingress controller](https://github.com/citrix/citrix-k8s-ingress-controller#readme) works with
Citrix Application Delivery Controller.
* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller.
* [EnRoute](https://getenroute.io/) is an [Envoy](https://www.envoyproxy.io) based API gateway that can run as an ingress controller.
-->
* [AKS 应用程序网关 Ingress 控制器](https://azure.github.io/application-gateway-kubernetes-ingress/)
是一个配置 [Azure 应用程序网关](https://docs.microsoft.com/azure/application-gateway/overview)
@ -62,27 +63,29 @@ Kubernetes 作为一个项目,目前支持和维护
* [Citrix Ingress 控制器](https://github.com/citrix/citrix-k8s-ingress-controller#readme)
可以用来与 Citrix Application Delivery Controller 一起使用。
* [Contour](https://projectcontour.io/) 是一个基于 [Envoy](https://www.envoyproxy.io/) 的 Ingress 控制器。
* [EnRoute](https://getenroute.io/) 是一个基于 [Envoy](https://www.envoyproxy.io) API 网关,
可以作为 Ingress 控制器来执行。
<!--
* F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
lets you use an Ingress to configure F5 BIG-IP virtual servers.
* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io),
which offers API gateway functionality.
* [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for
[HAProxy](http://www.haproxy.org/#desc).
[HAProxy](https://www.haproxy.org/#desc).
* The [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress#readme)
is also an ingress controller for [HAProxy](http://www.haproxy.org/#desc).
is also an ingress controller for [HAProxy](https://www.haproxy.org/#desc).
* [Istio Ingress](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/)
is an [Istio](https://istio.io/) based ingress controller.
-->
* F5 BIG-IP 的
[用于 Kubernetes 的容器 Ingress 服务](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest)
[用于 Kubernetes 的容器 Ingress 服务](https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest)
让你能够使用 Ingress 来配置 F5 BIG-IP 虚拟服务器。
* [Gloo](https://gloo.solo.io) 是一个开源的、基于 [Envoy](https://www.envoyproxy.io) 的
Ingress 控制器,能够提供 API 网关功能,
* [HAProxy Ingress](https://haproxy-ingress.github.io/) 针对 [HAProxy](http://www.haproxy.org/#desc)
* [HAProxy Ingress](https://haproxy-ingress.github.io/) 针对 [HAProxy](https://www.haproxy.org/#desc)
的 Ingress 控制器。
* [用于 Kubernetes 的 HAProxy Ingress 控制器](https://github.com/haproxytech/kubernetes-ingress#readme)
也是一个针对 [HAProxy](http://www.haproxy.org/#desc) 的 Ingress 控制器。
也是一个针对 [HAProxy](https://www.haproxy.org/#desc) 的 Ingress 控制器。
* [Istio Ingress](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/)
是一个基于 [Istio](https://istio.io/) 的 Ingress 控制器。
<!--
@ -94,7 +97,7 @@ Kubernetes 作为一个项目,目前支持和维护
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
* [Voyager](https://appscode.com/products/voyager) is an ingress controller for
[HAProxy](http://www.haproxy.org/#desc).
[HAProxy](https://www.haproxy.org/#desc).
-->
* [用于 Kubernetes 的 Kong Ingress 控制器](https://github.com/Kong/kubernetes-ingress-controller#readme)
是一个用来驱动 [Kong Gateway](https://konghq.com/kong/) 的 Ingress 控制器。
@ -106,7 +109,7 @@ Kubernetes 作为一个项目,目前支持和维护
设计用来作为构造你自己的定制代理的库。
* [Traefik Kubernetes Ingress 提供程序](https://doc.traefik.io/traefik/providers/kubernetes-ingress/)
是一个用于 [Traefik](https://traefik.io/traefik/) 代理的 Ingress 控制器。
* [Voyager](https://appscode.com/products/voyager) 是一个针对 [HAProxy](http://www.haproxy.org/#desc)
* [Voyager](https://appscode.com/products/voyager) 是一个针对 [HAProxy](https://www.haproxy.org/#desc)
的 Ingress 控制器。
<!--

View File

@ -65,7 +65,7 @@ Pod 是非永久性资源。
每个 Pod 都有自己的 IP 地址,但是在 Deployment 中,在同一时刻运行的 Pod 集合可能与稍后运行该应用程序的 Pod 集合不同。
这导致了一个问题: 如果一组 Pod称为“后端”为集群内的其他 Pod称为“前端”提供功能
那么前端如何找出并跟踪要连接的 IP 地址,以便前端可以使用工作量的后端部分?
那么前端如何找出并跟踪要连接的 IP 地址,以便前端可以使用提供工作负载的后端部分?
进入 _Services_
@ -348,7 +348,7 @@ each Service port. The value of this field is mirrored by the corresponding
Endpoints and EndpointSlice objects.
This field follows standard Kubernetes label syntax. Values should either be
[IANA standard service names](http://www.iana.org/assignments/service-names) or
[IANA standard service names](https://www.iana.org/assignments/service-names) or
domain prefixed names such as `mycompany.com/my-custom-protocol`.
-->
### 应用程序协议 {#application-protocol}
@ -358,8 +358,8 @@ domain prefixed names such as `mycompany.com/my-custom-protocol`.
此字段的取值会被映射到对应的 Endpoints 和 EndpointSlices 对象。
该字段遵循标准的 Kubernetes 标签语法。
其值可以是 [IANA 标准服务名称](http://www.iana.org/assignments/service-names)或以域名前缀的名称,
`mycompany.com/my-custom-protocol`
其值可以是 [IANA 标准服务名称](https://www.iana.org/assignments/service-names)
或以域名为前缀的名称,`mycompany.com/my-custom-protocol`
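作为示意,下面的 Service 片段展示了如何在端口上设置该字段(字段名为 `appProtocol`,其中的 Service 名称和选择算符仅为示意):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # 示意名称
spec:
  selector:
    app: my-app           # 示意选择算符
  ports:
  - port: 443
    protocol: TCP
    appProtocol: mycompany.com/my-custom-protocol
```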
<!--
## Virtual IPs and service proxies

View File

@ -691,7 +691,7 @@ vSphere 存储类有两种制备器
[弃用](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi)。
更多关于 CSI 制备器的详情,请参阅
[Kubernetes vSphere CSI 驱动](https://vsphere-csi-driver.sigs.k8s.io/)
和 [vSphereVolume CSI 迁移](/docs/concepts/storage/volumes/#csi-migration-5)。
和 [vSphereVolume CSI 迁移](/zh/docs/concepts/storage/volumes/#csi-migration-5)。
<!--
#### CSI Provisioner {#vsphere-provisioner-csi}
@ -1356,4 +1356,4 @@ scheduling constraints when choosing an appropriate PersistentVolume for a
PersistentVolumeClaim.
-->
延迟卷绑定使得调度器在为 PersistentVolumeClaim 选择一个合适的
PersistentVolume 时能考虑到所有 Pod 的调度限制。
PersistentVolume 时能考虑到所有 Pod 的调度限制。

View File

@ -22,7 +22,7 @@ weight: 50
<!-- overview -->
<!--
A Job creates one or more Pods and ensures that a specified number of them successfully terminate.
A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate.
As pods successfully complete, the Job tracks the successful completions. When a specified number
of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up
the Pods it created.
@ -33,7 +33,8 @@ due to a node hardware failure or a node reboot).
You can also use a Job to run multiple Pods in parallel.
-->
Job 会创建一个或者多个 Pods并确保指定数量的 Pods 成功终止。
Job 会创建一个或者多个 Pods并将继续重试 Pods 的执行,直到指定数量的 Pods 成功终止。
随着 Pods 成功结束Job 跟踪记录成功完成的 Pods 个数。
当数量达到指定的成功个数阈值时,任务(即 Job结束。
删除 Job 的操作会清除所创建的全部 Pods。
@ -383,8 +384,8 @@ other Pods for the Job failing around that time.
可能意味着遇到了配置错误。
为了实现这点,可以将 `.spec.backoffLimit` 设置为视 Job 为失败之前的重试次数。
失效回退的限制值默认为 6。
与 Job 相关的失效的 Pod 会被 Job 控制器重建,并且以指数型回退计算重试延迟
(从 10 秒、20 秒到 40 秒,最多 6 分钟)
与 Job 相关的失效的 Pod 会被 Job 控制器重建,回退重试时间将会按指数增长
(从 10 秒、20 秒到 40 秒)最多至 6 分钟
当 Job 的 Pod 被删除时,或者 Pod 成功时没有其它 Pod 处于失败状态,失效回退的次数也会被重置(为 0
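作为示意,下面的 Job 片段展示了 `.spec.backoffLimit` 的设置位置(名称、镜像等仅为示意):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job        # 示意名称
spec:
  backoffLimit: 6          # 视 Job 为失败之前的重试次数(默认即为 6
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox     # 示意镜像
        command: ["sh", "-c", "exit 1"]   # 故意失败,以便观察回退重试
```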
<!--

View File

@ -765,7 +765,7 @@ subjects:
<!--
For all service accounts in the "dev" group in the "development" namespace:
-->
对于 "dev" 名称空间中 "development" 组中的所有服务帐户:
对于 "development" 名称空间中 "dev" 组中的所有服务帐户:
```yaml
subjects:

View File

@ -76,7 +76,7 @@ Here's a summary of each level:
-->
下面是每个级别的摘要:
<--
<!--
- Alpha:
- The version names contain `alpha` (for example, `v1alpha1`).
- The software may contain bugs. Enabling a feature may expose bugs. A

View File

@ -17,11 +17,12 @@ weight: 30
<!-- overview -->
<!--
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">Creating a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">Using `kubeadm`, you can create a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
`kubeadm` also supports other cluster
lifecycle functions, such as [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades.
-->
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">创建一个符合最佳实践的最小化 Kubernetes 集群。事实上,你可以使用 `kubeadm` 配置一个通过 [Kubernetes 一致性测试](https://kubernetes.io/blog/2017/10/software-conformance-certification) 的集群。
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">使用 `kubeadm`,你
能创建一个符合最佳实践的最小化 Kubernetes 集群。事实上,你可以使用 `kubeadm` 配置一个通过 [Kubernetes 一致性测试](https://kubernetes.io/blog/2017/10/software-conformance-certification) 的集群。
`kubeadm` 还支持其他集群生命周期功能,
例如 [启动引导令牌](/zh/docs/reference/access-authn-authz/bootstrap-tokens/) 和集群升级。
@ -265,9 +266,9 @@ kubeadm 不支持将没有 `--control-plane-endpoint` 参数的单个控制平
### 更多信息
<!--
For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm-init/).
For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/).
-->
有关 `kubeadm init` 参数的更多信息,请参见 [kubeadm 参考指南](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/)。
有关 `kubeadm init` 参数的更多信息,请参见 [kubeadm 参考指南](/zh/docs/reference/setup-tools/kubeadm/kubeadm/)。
<!--
To configure `kubeadm init` with a configuration file see [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).

View File

@ -320,6 +320,71 @@ To make such deployment secure, communication between etcd instances is authoriz
为了允许 etcd 组建集群,需开放 etcd 实例之间通信所需的端口(用于集群内部通信)。
为了使这种部署安全etcd 实例之间的通信使用 SSL 进行鉴权。
<!--
### API server identity
-->
### API 服务器标识
{{< feature-state state="alpha" for_k8s_version="v1.20" >}}
<!--
The API Server Identity feature is controlled by a
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
and is not enabled by default. You can activate API Server Identity by enabling
the feature gate named `APIServerIdentity` when you start the
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}:
-->
使用 API 服务器标识功能需要启用[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
该功能默认不启用。
你可以在启动 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}} 的时候启用特性门控 `APIServerIdentity` 来激活 API 服务器标识:
<!--
```shell
kube-apiserver \
--feature-gates=APIServerIdentity=true \
# …and other flags as usual
```
-->
```shell
kube-apiserver \
--feature-gates=APIServerIdentity=true \
# …其他标记照常
```
<!--
During bootstrap, each kube-apiserver assigns a unique ID to itself. The ID is
in the format of `kube-apiserver-{UUID}`. Each kube-apiserver creates a
[Lease](/docs/reference/generated/kubernetes-api/{{< param "version" >}}//#lease-v1-coordination-k8s-io)
in the _kube-system_ {{< glossary_tooltip text="namespaces" term_id="namespace">}}.
-->
在启动引导过程中,每个 kube-apiserver 会给自己分配一个唯一 ID。
该 ID 的格式是 `kube-apiserver-{UUID}`
每个 kube-apiserver 会在 _kube-system_ {{< glossary_tooltip text="名字空间" term_id="namespace">}} 里创建一个 [`Lease` 对象](/docs/reference/generated/kubernetes-api/{{< param "version" >}}//#lease-v1-coordination-k8s-io)。
<!--
The Lease name is the unique ID for the kube-apiserver. The Lease contains a
label `k8s.io/component=kube-apiserver`. Each kube-apiserver refreshes its
Lease every `IdentityLeaseRenewIntervalSeconds` (defaults to 10s). Each
kube-apiserver also checks all the kube-apiserver identity Leases every
`IdentityLeaseDurationSeconds` (defaults to 3600s), and deletes Leases that
hasn't got refreshed for more than `IdentityLeaseDurationSeconds`.
`IdentityLeaseRenewIntervalSeconds` and `IdentityLeaseDurationSeconds` can be
configured by kube-apiserver flags `identity-lease-renew-interval-seconds`
and `identity-lease-duration-seconds`.
-->
`Lease` 对象的名字是 kube-apiserver 的唯一 ID。
`Lease` 对象包含一个标签 `k8s.io/component=kube-apiserver`
每个 kube-apiserver 每过 `IdentityLeaseRenewIntervalSeconds`(默认是 10 秒)就会刷新它的 `Lease` 对象。
每个 kube-apiserver 每过 `IdentityLeaseDurationSeconds`(默认是 3600 秒)也会检查所有 kube-apiserver 的标识 `Lease` 对象,
并且会删除超过 `IdentityLeaseDurationSeconds` 时间还没被刷新的 `Lease` 对象。
可以在 kube-apiserver 的 `identity-lease-renew-interval-seconds`
`identity-lease-duration-seconds` 标记里配置 `IdentityLeaseRenewIntervalSeconds``IdentityLeaseDurationSeconds`
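作为参考,可以用类似下面的命令查看 kube-system 名字空间中带有上述标签的 Lease 对象(输出取决于集群中实际运行的 kube-apiserver 实例):

```shell
kubectl -n kube-system get lease -l k8s.io/component=kube-apiserver
```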
<!--
Enabling this feature is a prerequisite for using features that involve HA API
server coordination (for example, the `StorageVersionAPI` feature gate).
-->
启用该功能是使用 HA API 服务器协调相关功能(例如,`StorageVersionAPI` 特性门控)的前提条件。
<!--
## Additional reading

View File

@ -108,7 +108,7 @@ Once you have a Linux-based Kubernetes control-plane node you are ready to choos
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan",
"VNI" : 4096,
"VNI": 4096,
"Port": 4789
}
}
@ -136,7 +136,7 @@ Once you have a Linux-based Kubernetes control-plane node you are ready to choos
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan",
"VNI" : 4096,
"VNI": 4096,
"Port": 4789
}
}

View File

@ -0,0 +1,42 @@
---
title: "从 dockershim 迁移"
weight: 10
content_type: task
---
<!--
title: "Migrating from dockershim"
weight: 10
content_type: task
-->
<!-- overview -->
<!--
This section presents information you need to know when migrating from
dockershim to other container runtimes.
-->
本节提供从 dockershim 迁移到其他容器运行时的必备知识。
<!--
Since the announcement of [dockershim deprecation](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation)
in Kubernetes 1.20, there were questions on how this will affect various workloads and Kubernetes
installations. You can find this blog post useful to understand the problem better: [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/)
-->
自从 Kubernetes 1.20 宣布
[弃用 dockershim](/zh/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation)
各类疑问随之而来:这对各类工作负载和 Kubernetes 部署会产生什么影响。
你会发现这篇博文对于更好地理解此问题非常有用:
[弃用 Dockershim 常见问题](/zh/blog/2020/12/02/dockershim-faq/)
<!-- It is recommended to migrate from dockershim to alternative container runtimes.
Check out [container runtimes](/docs/setup/production-environment/container-runtimes/)
section to know your options. Make sure to
[report issues](https://github.com/kubernetes/kubernetes/issues) you encountered
with the migration. So the issue can be fixed in a timely manner and your cluster would be
ready for dockershim removal.
-->
建议从 dockershim 迁移到其他替代的容器运行时。
请参阅[容器运行时](/zh/docs/setup/production-environment/container-runtimes/)
一节以了解可用的备选项。
如果在迁移过程中遇到问题,请[上报问题](https://github.com/kubernetes/kubernetes/issues)。
这样问题就能得到及时修复,你的集群也能为移除 dockershim 做好准备。

View File

@ -219,25 +219,28 @@ The `kubelet` has the following default hard eviction threshold:
* `memory.available<100Mi`
* `nodefs.available<10%`
* `nodefs.inodesFree<5%`
* `imagefs.available<15%`
On a Linux node, the default value also includes `nodefs.inodesFree<5%`.
-->
#### 硬驱逐阈值
硬驱逐阈值没有宽限期,一旦察觉,`kubelet`将立即采取行动回收关联的短缺资源。
如果满足硬驱逐阈值,`kubelet`将立即结束 pod 而不是优雅终止
硬驱逐阈值没有宽限期,一旦察觉,`kubelet` 将立即采取行动回收关联的短缺资源。
如果满足硬驱逐阈值,`kubelet` 将立即结束 Pod 而不是体面地终止它们
硬驱逐阈值的配置支持下列标记:
* `eviction-hard` 描述了驱逐阈值的集合(例如 `memory.available<1Gi`),如果满足条件将触发 pod 驱逐。
* `eviction-hard` 描述了驱逐阈值的集合(例如 `memory.available<1Gi`),如果满足条件将触发 Pod 驱逐。
`kubelet` 有如下所示的默认硬驱逐阈值:
* `memory.available<100Mi`
* `nodefs.available<10%`
* `nodefs.inodesFree<5%`
* `imagefs.available<15%`
在 Linux 节点上,默认值还包括 `nodefs.inodesFree<5%`(下面给出通过 kubelet 标志设置这些阈值的示意)。
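上面提到的 `eviction-hard` 标志可以按类似下面的方式设置(仅为示意,省略了其他 kubelet 标志):

```shell
kubelet \
  --eviction-hard="memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<15%"
  # ……其他 kubelet 标志照常
```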
<!--
### Eviction Monitoring Interval
@ -273,6 +276,7 @@ The following node conditions are defined that correspond to the specified evict
|-------------------------|-------------------------------|--------------------------------------------|
| `MemoryPressure` | `memory.available` | Available memory on the node has satisfied an eviction threshold |
| `DiskPressure` | `nodefs.available`, `nodefs.inodesFree`, `imagefs.available`, or `imagefs.inodesFree` | Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold |
| `PIDPressure` | `pid.available` | Available processes identifiers on the (Linux) node has fallen below an eviction threshold |
The `kubelet` continues to report node status updates at the frequency specified by
`--node-status-update-frequency` which defaults to `10s`.
@ -283,6 +287,7 @@ The `kubelet` continues to report node status updates at the frequency specified
|-------------------------|-------------------------------|--------------------------------------------|
| `MemoryPressure` | `memory.available` | 节点上可用内存量达到逐出阈值 |
| `DiskPressure` | `nodefs.available`, `nodefs.inodesFree`, `imagefs.available`, 或 `imagefs.inodesFree` | 节点或者节点的根文件系统或镜像文件系统上可用磁盘空间和 i 节点个数达到逐出阈值 |
| `PIDPressure` | `pid.available` | 在Linux节点上的可用进程标识符已降至驱逐阈值以下 |
`kubelet` 将以 `--node-status-update-frequency` 指定的频率连续报告节点状态更新,其默认值为 `10s`

View File

@ -252,7 +252,7 @@ data:
删除你刚才创建的 Secret
```shell
kubectl delete secret db-user-pass
kubectl delete secret mysecret
```
## {{% heading "whatsnext" %}}

View File

@ -454,3 +454,124 @@ for more information.
-->
更多信息请参考 [kubernetes-sigs/cri-tools](https://github.com/kubernetes-sigs/cri-tools)。
<!--
## Mapping from docker cli to crictl
-->
## Docker CLI 和 crictl 的映射
<!--
The exact versions for below mapping table are for docker cli v1.40 and crictl v1.19.0. Please note that the list is not exhaustive. For example, it doesn't include experimental commands of docker cli.
-->
以下的映射表格只适用于 Docker CLI v1.40 和 crictl v1.19.0 版本。
请注意该表格并不详尽。例如,其中不包含 Docker CLI 的实验性命令。
<!--
{{< note >}}
The output format of CRICTL is similar to Docker CLI, despite some missing columns for some CLI. Make sure to check output for the specific command if your script output parsing.
{{< /note >}}
-->
{{< note >}}
尽管有些命令的输出缺少了一些数据列CRICTL 的输出格式与 Docker CLI 是类似的。
如果你的脚本程序需要解析命令的输出,请确认检查该特定命令的输出。
{{< /note >}}
<!--
### Retrieve Debugging Information
{{< table caption="mapping from docker cli to crictl - retrieve debugging information" >}}
-->
### 获取调试信息
{{< table caption="Docker CLI 和 crictl 的映射 - 获取调试信息" >}}
<!--
docker cli | crictl | Description | Unsupported Features
-- | -- | -- | --
`attach` | `attach` | Attach to a running container | `--detach-keys`, `--sig-proxy`
`exec` | `exec` | Run a command in a running container | `--privileged`, `--user`, `--detach-keys`
`images` | `images` | List images |  
`info` | `info` | Display system-wide information |  
`inspect` | `inspect`, `inspecti` | Return low-level information on a container, image or task |  
`logs` | `logs` | Fetch the logs of a container | `--details`
`ps` | `ps` | List containers |  
`stats` | `stats` | Display a live stream of container(s) resource usage statistics | Column: NET/BLOCK I/O, PIDs
`version` | `version` | Show the runtime (Docker, ContainerD, or others) version information |  
{{< /table >}}
-->
docker cli | crictl | 描述 | 不支持的功能
-- | -- | -- | --
`attach` | `attach` | 连接到一个运行中的容器 | `--detach-keys`, `--sig-proxy`
`exec` | `exec` | 在运行中的容器里运行一个命令 | `--privileged`, `--user`, `--detach-keys`
`images` | `images` | 列举镜像 |  
`info` | `info` | 显示系统级的信息 |  
`inspect` | `inspect`, `inspecti` | 返回容器、镜像或者任务的详细信息 |  
`logs` | `logs` | 获取容器的日志 | `--details`
`ps` | `ps` | 列举容器 |  
`stats` | `stats` | 实时显示容器的资源使用统计信息 | 列NET/BLOCK I/O, PIDs
`version` | `version` | 显示运行时Docker、ContainerD 或者其他)的版本信息 |  
{{< /table >}}
<!--
### Perform Changes
{{< table caption="mapping from docker cli to crictl - perform changes" >}}
-->
### 进行改动
{{< table caption="Docker CLI 和 crictl 的映射 - 进行改动" >}}
<!--
docker cli | crictl | Description | Unsupported Features
-- | -- | -- | --
`create` | `create` | Create a new container |  
`kill` | `stop` (timeout = 0) | Kill one or more running container | `--signal`
`pull` | `pull` | Pull an image or a repository from a registry | `--all-tags`, `--disable-content-trust`
`rm` | `rm` | Remove one or more containers |  
`rmi` | `rmi` | Remove one or more images |  
`run` | `run` | Run a command in a new container |  
`start` | `start` | Start one or more stopped containers | `--detach-keys`
`stop` | `stop` | Stop one or more running containers |  
`update` | `update` | Update configuration of one or more containers | `--restart`, `--blkio-weight` and some other resource limit not supported by CRI.
{{< /table >}}
-->
docker cli | crictl | 描述 | 不支持的功能
-- | -- | -- | --
`create` | `create` | 创建一个新的容器 |  
`kill` | `stop` (timeout=0) | 杀死一个或多个正在运行的容器 | `--signal`
`pull` | `pull` | 从镜像仓库拉取镜像或者代码仓库 | `--all-tags`, `--disable-content-trust`
`rm` | `rm` | 移除一个或多个容器 |  
`rmi` | `rmi` | 移除一个或多个镜像 |  
`run` | `run` | 在新容器里运行一个命令 |  
`start` | `start` | 启动一个或多个停止的容器 | `--detach-keys`
`stop` | `stop` | 停止一个或多个正运行的容器 |  
`update` | `update` | 更新一个或多个容器的配置 | CRI 不支持 `--restart`、`--blkio-weight` 以及一些其他的资源限制选项。
{{< /table >}}
<!--
### Supported only in crictl
{{< table caption="mapping from docker cli to crictl - supported only in crictl" >}}
-->
### 仅 crictl 支持
{{< table caption="Docker CLI 和 crictl 的映射 - 仅 crictl 支持" >}}
<!--
crictl | Description
-- | --
`imagefsinfo` | Return image filesystem info
`inspectp` | Display the status of one or more pods
`port-forward` | Forward local port to a pod
`pods` | List pods
`runp` | Run a new pod
`rmp` | Remove one or more pods
`stopp` | Stop one or more running pods
{{< /table >}}
-->
crictl | 描述
-- | --
`imagefsinfo` | 返回镜像的文件系统信息
`inspectp` | 显示一个或多个 Pod 的状态
`port-forward` | 转发本地端口到 Pod
`pods` | 列举 Pod
`runp` | 运行一个新的 Pod
`rmp` | 移除一个或多个 Pod
`stopp` | 停止一个或多个正运行的 Pod
{{< /table >}}
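下面是几个使用上述 crictl 命令的简单示例(其中的 ID 仅为示意占位符,请替换为实际值):

```shell
# 列举 Pod
crictl pods

# 查看某个 Pod 的状态ID 为示意占位符)
crictl inspectp 1234567890abc

# 获取某个容器的日志ID 为示意占位符)
crictl logs 0987654321def
```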

View File

@ -85,7 +85,7 @@ of deploying Redis scalably and redundantly.
<!--
You could also download the following files directly:
-->
你可以直接下载如下文件:
可以直接下载如下文件:
- [`redis-pod.yaml`](/examples/application/job/redis/redis-pod.yaml)
- [`redis-service.yaml`](/examples/application/job/redis/redis-service.yaml)
@ -121,7 +121,7 @@ Now hit enter, start the redis CLI, and create a list with some work items in it
-->
现在按回车键,启动 redis 命令行界面,然后创建一个存在若干个工作项的列表。
```
```shell
# redis-cli -h redis
redis:6379> rpush job2 "apple"
(integer) 1
@ -214,7 +214,7 @@ your username and push to the Hub with the below commands. Replace
### Push 镜像
对于 [Docker Hub](https://hub.docker.com/),请先用你的用户名给镜像打上标签,
然后使用下面的命令 push 你的镜像到仓库。请将 `<username>` 替换为你自己的用户名。
然后使用下面的命令 push 你的镜像到仓库。请将 `<username>` 替换为你自己的 Hub 用户名。
```shell
docker tag job-wq-2 <username>/job-wq-2
@ -270,7 +270,7 @@ too.
-->
在这个例子中,每个 pod 处理了队列中的多个项目,直到队列中没有项目时便退出。
因为是由工作程序自行检测工作队列是否为空,并且 Job 控制器不知道工作队列的存在,
所以依赖于工作程序在完成工作时发出信号。
依赖于工作程序在完成工作时发出信号。
工作程序以成功退出的形式发出信号表示工作队列已经为空。
所以,只要有任意一个工作程序成功退出,控制器就知道工作已经完成了,所有的 Pod 将很快会退出。
因此,我们将 Job 的完成计数Completion Count设置为 1 。
@ -360,11 +360,10 @@ want to consider one of the other [job patterns](/docs/concepts/jobs/run-to-comp
<!--
If you have a continuous stream of background processing work to run, then
consider running your background workers with a `replicationController` instead,
consider running your background workers with a `ReplicaSet` instead,
and consider running a background processing library such as
[https://github.com/resque/resque](https://github.com/resque/resque).
-->
如果你有连续的后台处理业务,那么可以考虑使用 `replicationController` 来运行你的后台业务,
如果你有持续的后台处理业务,那么可以考虑使用 `ReplicaSet` 来运行你的后台业务,
和运行一个类似 [https://github.com/resque/resque](https://github.com/resque/resque)
的后台处理库。

View File

@ -22,7 +22,7 @@ Kubernetes includes **experimental** support for managing AMD and NVIDIA GPUs
This page describes how users can consume GPUs across different Kubernetes versions
and the current limitations.
-->
Kubernetes 支持对节点上的 AMD 和 NVIDA GPU (图形处理单元)进行管理,目前处于**实验**状态。
Kubernetes 支持对节点上的 AMD 和 NVIDIA GPU (图形处理单元)进行管理,目前处于**实验**状态。
本页介绍用户如何在不同的 Kubernetes 版本中使用 GPU以及当前存在的一些限制。

View File

@ -21,11 +21,14 @@ weight: 30
<!--
This page shows how to run a replicated stateful application using a
[StatefulSet](/docs/concepts/workloads/controllers/statefulset/) controller.
The example is a MySQL single-master topology with multiple slaves running
asynchronous replication.
This application is a replicated MySQL database. The example topology has a
single primary server and multiple replicas, using asynchronous row-based
replication.
-->
本页展示如何使用 [StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/)
控制器运行一个有状态的应用程序。此例是一主多从、异步复制的 MySQL 集群。
控制器运行一个有状态的应用程序。此例是多副本的 MySQL 数据库。
示例应用的拓扑结构有一个主服务器和多个副本使用异步的基于行Row-Based
的数据复制。
<!--
**this is not a production configuration**.
@ -101,12 +104,12 @@ kubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml
<!--
This ConfigMap provides `my.cnf` overrides that let you independently control
configuration on the MySQL master and slaves.
In this case, you want the master to be able to serve replication logs to slaves
and you want slaves to reject any writes that don't come via replication.
configuration on the primary MySQL server and replicas.
In this case, you want the primary server to be able to serve replication logs to replicas
and you want repicas to reject any writes that don't come via replication.
-->
这个 ConfigMap 提供 `my.cnf` 覆盖设置,使你可以独立控制 MySQL 主服务器和从服务器的配置。
在这里,你希望主服务器能够将复制日志提供给从服务器,并且希望从服务器拒绝任何不是通过
在这里,你希望主服务器能够将复制日志提供给副本服务器,并且希望副本服务器拒绝任何不是通过
复制进行的写操作。
<!--
@ -147,17 +150,17 @@ cluster and namespace.
<!--
The Client Service, called `mysql-read`, is a normal Service with its own
cluster IP that distributes connections across all MySQL Pods that report
being Ready. The set of potential endpoints includes the MySQL master and all
slaves.
being Ready. The set of potential endpoints includes the primary MySQL server and all
replicas.
-->
客户端服务称为 `mysql-read`,是一种常规服务,具有其自己的集群 IP。
该集群 IP 在报告就绪的所有MySQL Pod 之间分配连接。
可能的端点集合包括 MySQL 主节点和所有节点。
可能的端点集合包括 MySQL 主节点和所有副本节点。
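作为示意,可以用类似下面的命令通过 `mysql-read` 服务发送一个只读查询(镜像与查询语句参考本文后面的步骤):

```shell
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-read -e "SELECT @@server_id"
```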
<!--
Note that only read queries can use the load-balanced Client Service.
Because there is only one MySQL master, clients should connect directly to the
MySQL master Pod (through its DNS entry within the Headless Service) to execute
Because there is only one primary MySQL server, clients should connect directly to the
primary MySQL Pod (through its DNS entry within the Headless Service) to execute
writes.
-->
请注意,只有读查询才能使用负载平衡的客户端服务。
@ -274,38 +277,37 @@ properties.
而这些 ID 也是需要唯一性、稳定性保证的。
<!--
The script in the `init-mysql` container also applies either `master.cnf` or
`slave.cnf` from the ConfigMap by copying the contents into `conf.d`.
Because the example topology consists of a single MySQL master and any number of
slaves, the script simply assigns ordinal `0` to be the master, and everyone
else to be slaves.
The script in the `init-mysql` container also applies either `primary.cnf` or
`replica.cnf` from the ConfigMap by copying the contents into `conf.d`.
Because the example topology consists of a single primary MySQL server and any number of
replicas, the script simply assigns ordinal `0` to be the primary server, and everyone
else to be replicas.
Combined with the StatefulSet controller's
[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees/),
this ensures the MySQL master is Ready before creating slaves, so they can begin
this ensures the primary MySQL server is Ready before creating replicas, so they can begin
replicating.
-->
通过将内容复制到 conf.d 中,`init-mysql` 容器中的脚本也可以应用 ConfigMap 中的
`master.cnf` 或 `slave.cnf`。
由于示例部署结构由单个 MySQL 主节点和任意数量的节点组成,因此脚本仅将序数
`0` 指定为主节点,而将其他所有节点指定为节点。
`primary.cnf` 或 `replica.cnf`。
由于示例部署结构由单个 MySQL 主节点和任意数量的副本节点组成,因此脚本仅将序数
`0` 指定为主节点,而将其他所有节点指定为副本节点。
与 StatefulSet 控制器的
[部署顺序保证](/zh/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees/)
相结合,
可以确保 MySQL 主服务器在创建服务器之前已准备就绪,以便它们可以开始复制。
可以确保 MySQL 主服务器在创建副本服务器之前已准备就绪,以便它们可以开始复制。
<!--
### Cloning existing data
In general, when a new Pod joins the set as a slave, it must assume the MySQL
master might already have data on it. It also must assume that the replication
In general, when a new Pod joins the set as a replica, it must assume the primary MySQL
server might already have data on it. It also must assume that the replication
logs might not go all the way back to the beginning of time.
-->
### 克隆现有数据
通常,当新 Pod 作为节点加入集合时,必须假定 MySQL 主节点可能已经有数据。
通常,当新 Pod 作为副本节点加入集合时,必须假定 MySQL 主节点可能已经有数据。
还必须假设复制日志可能不会一直追溯到时间的开始。
<!--
@ -316,11 +318,11 @@ to scale up and down over time, rather than being fixed at its initial size.
<!--
The second Init Container, named `clone-mysql`, performs a clone operation on
a slave Pod the first time it starts up on an empty PersistentVolume.
a replica Pod the first time it starts up on an empty PersistentVolume.
That means it copies all existing data from another running Pod,
so its local state is consistent enough to begin replicating from the master.
so its local state is consistent enough to begin replicating from the primary server.
-->
第二个名为 `clone-mysql` 的 Init 容器,第一次在带有空 PersistentVolume 的从属 Pod
第二个名为 `clone-mysql` 的 Init 容器,第一次在带有空 PersistentVolume 的副本 Pod
上启动时,会在从属 Pod 上执行克隆操作。
这意味着它将从另一个运行中的 Pod 复制所有现有数据,使此其本地状态足够一致,
从而可以开始从主服务器复制。
@ -329,14 +331,14 @@ so its local state is consistent enough to begin replicating from the master.
MySQL itself does not provide a mechanism to do this, so the example uses a
popular open-source tool called Percona XtraBackup.
During the clone, the source MySQL server might suffer reduced performance.
To minimize impact on the MySQL master, the script instructs each Pod to clone
To minimize impact on the primary MySQL server, the script instructs each Pod to clone
from the Pod whose ordinal index is one lower.
This works because the StatefulSet controller always ensures Pod `N` is
Ready before starting Pod `N+1`.
-->
MySQL 本身不提供执行此操作的机制,因此本示例使用了一种流行的开源工具 Percona XtraBackup。
在克隆期间,源 MySQL 服务器性能可能会受到影响。
为了最大程度地减少对 MySQL 主节点的影响,该脚本指示每个 Pod 从序号较低的 Pod 中克隆。
为了最大程度地减少对 MySQL 主服务器的影响,该脚本指示每个 Pod 从序号较低的 Pod 中克隆。
可以这样做的原因是 StatefulSet 控制器始终确保在启动 Pod N + 1 之前 Pod N 已准备就绪。
<!--
@ -356,25 +358,26 @@ MySQL Pod 由运行实际 `mysqld` 服务的 `mysql` 容器和充当
<!--
The `xtrabackup` sidecar looks at the cloned data files and determines if
it's necessary to initialize MySQL replication on the slave.
it's necessary to initialize MySQL replication on the replica.
If so, it waits for `mysqld` to be ready and then executes the
`CHANGE MASTER TO` and `START SLAVE` commands with replication parameters
extracted from the XtraBackup clone files.
-->
`xtrabackup` sidecar 容器查看克隆的数据文件,并确定是否有必要在服务器上初始化 MySQL 复制。
`xtrabackup` sidecar 容器查看克隆的数据文件,并确定是否有必要在副本服务器上初始化 MySQL 复制。
如果是这样,它将等待 `mysqld` 准备就绪,然后使用从 XtraBackup 克隆文件中提取的复制参数
执行 `CHANGE MASTER TO``START SLAVE` 命令。
<!--
Once a slave begins replication, it remembers its MySQL master and
Once a replica begins replication, it remembers its primary MySQL server and
reconnects automatically if the server restarts or the connection dies.
Also, because slaves look for the master at its stable DNS name
(`mysql-0.mysql`), they automatically find the master even if it gets a new
Also, because replicas look for the primary server at its stable DNS name
(`mysql-0.mysql`), they automatically find the primary server even if it gets a new
Pod IP due to being rescheduled.
-->
一旦从服务器开始复制后,它会记住其 MySQL 主服务器,并且如果服务器重新启动或连接中断也会自动重新连接。
另外,因为从服务器会以其稳定的 DNS 名称查找主服务器(`mysql-0.mysql`
即使由于重新调度而获得新的 Pod IP他们也会自动找到主服务器。
一旦副本服务器开始复制后,它会记住其 MySQL 主服务器,并且如果服务器重新启动或
连接中断也会自动重新连接。
另外,因为副本服务器会以其稳定的 DNS 名称查找主服务器(`mysql-0.mysql`
即使由于重新调度而获得新的 Pod IP它们也会自动找到主服务器。
<!--
Lastly, after starting replication, the `xtrabackup` container listens for
@ -389,7 +392,7 @@ case the next Pod loses its PersistentVolumeClaim and needs to redo the clone.
<!--
## Sending client traffic
You can send test queries to the MySQL master (hostname `mysql-0.mysql`)
You can send test queries to the primary MySQL server (hostname `mysql-0.mysql`)
by running a temporary container with the `mysql:5.7` image and running the
`mysql` client binary.
-->
@ -478,13 +481,13 @@ it running in another window so you can see the effects of the following steps.
<!--
## Simulating Pod and Node downtime
To demonstrate the increased availability of reading from the pool of slaves
To demonstrate the increased availability of reading from the pool of replicas
instead of a single server, keep the `SELECT @@server_id` loop from above
running while you force a Pod out of the Ready state.
-->
## 模拟 Pod 和 Node 的宕机时间
为了证明从节点缓存而不是单个服务器读取数据的可用性提高,请在使 Pod 退出 Ready
为了证明从副本节点缓存而不是单个服务器读取数据的可用性提高,请在使 Pod 退出 Ready
状态时,保持上述 `SELECT @@server_id` 循环一直运行。
<!--
@ -679,14 +682,14 @@ kubectl uncordon <节点名称>
```
<!--
## Scaling the number of slaves
## Scaling the number of replicas
With MySQL replication, you can scale your read query capacity by adding slaves.
With MySQL replication, you can scale your read query capacity by adding replicas.
With StatefulSet, you can do this with a single command:
-->
## 扩展节点数量
## 扩展副本节点数量
使用 MySQL 复制,你可以通过添加节点来扩展读取查询的能力。
使用 MySQL 复制,你可以通过添加副本节点来扩展读取查询的能力。
使用 StatefulSet你可以使用单个命令执行此操作
```shell

View File

@ -29,7 +29,7 @@ and view logs. For a complete list of kubectl operations, see
-->
使用 Kubernetes 命令行工具 [kubectl](/zh/docs/reference/kubectl/kubectl/)
你可以在 Kubernetes 上运行命令。
使用 kubectl你可以部署应用、检和管理集群资源、查看日志。
使用 kubectl你可以部署应用、检和管理集群资源、查看日志。
要了解 kubectl 操作的完整列表,请参阅
[kubectl 概览](/zh/docs/reference/kubectl/overview/)。
@ -60,44 +60,97 @@ Using the latest version of kubectl helps avoid unforeseen issues.
-->
1. 使用下面命令下载最新的发行版本:
```
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
```bash
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
```
<!--
To download a specific version, replace the `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` portion of the command with the specific version.
To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.
For example, to download version {{< param "fullversion" >}} on Linux, type:
-->
要下载特定版本, `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)`
要下载特定版本,将命令中的 `$(curl -L -s https://dl.k8s.io/release/stable.txt)`
部分替换为指定版本。
例如,要下载 Linux 上的版本 {{< param "fullversion" >}},输入:
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
```
<!--
2. Make the kubectl binary executable.
1. Validate the binary (optional)
-->
2. 标记 kubectl 文件为可执行
2. 验证可执行文件(可选步骤)
<!--
Download the kubectl checksum file:
-->
下载 kubectl 校验和文件:
```bash
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
```
chmod +x ./kubectl
<!--
Validate the kubectl binary against the checksum file:
-->
使用校验和文件检查 kubectl 可执行二进制文件:
```bash
echo "$(<kubectl.sha256) kubectl" | sha256sum --check
```
<!--
If valid, the output is:
-->
如果合法,则输出为:
```bash
kubectl: OK
```
<!--
If the check fails, `sha256` exits with nonzero status and prints output similar to:
-->
如果检查失败,则 `sha256` 退出且状态值非 0 并打印类似如下输出:
```bash
kubectl: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match
```
{{< note >}}
<!--
Download the same version of the binary and checksum.
-->
所下载的二进制可执行文件和校验和文件须是同一版本。
{{< /note >}}
<!--
3. Move the binary in to your PATH.
1. Install kubectl
-->
3. 将文件放到 PATH 路径下:
3. 安装 kubectl
```bash
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```
sudo mv ./kubectl /usr/local/bin/kubectl
<!--
If you do not have root access on the target system, you can still install kubectl to the `~/.local/bin` directory:
-->
如果你并不拥有目标系统的 root 访问权限,你仍可以将 kubectl 安装到
`~/.local/bin` 目录下:
```bash
mkdir -p ~/.local/bin/kubectl
mv ./kubectl ~/.local/bin/kubectl
# 之后将 ~/.local/bin/kubectl 添加到环境变量 $PATH 中
```
<!--
4. Test to ensure the version you installed is up-to-date:
1. Test to ensure the version you installed is up-to-date:
-->
4. 测试你所安装的版本是最新的:
@ -118,6 +171,7 @@ echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/
sudo apt-get update
sudo apt-get install -y kubectl
{{< /tab >}}
{{< tab name="CentOS、RHEL 或 Fedora" codelang="bash" >}}
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
@ -186,42 +240,95 @@ kubectl version --client
1. 下载最新发行版本:
```bash
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
```
<!--
To download a specific version, replace the `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` portion of the command with the specific version.
To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.
For example, to download version {{< param "fullversion" >}} on macOS, type:
-->
要下载特定版本,可将上面命令中的`$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)`
要下载特定版本,可将上面命令中的 `$(curl -L -s https://dl.k8s.io/release/stable.txt)`
部分替换成你想要的版本。
例如,要在 macOS 上安装版本 {{< param "fullversion" >}},输入:
```bash
curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
```
<!-- Make the kubectl binary executable. -->
将二进制文件标记为可执行:
<!--
1. Validate the binary (optional)
-->
2. 检查二进制可执行文件(可选操作)
<!--
Download the kubectl checksum file:
-->
下载 kubectl 校验和文件:
```bash
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256"
```
<!--
Validate the kubectl binary against the checksum file:
-->
使用校验和文件检查 kubectl 二进制可执行文件:
```bash
echo "$(<kubectl.sha256) kubectl" | shasum -a 256 --check
```
<!--
If valid, the output is:
-->
如果合法,则输出为:
```bash
kubectl: OK
```
<!--
If the check fails, `shasum` exits with nonzero status and prints output similar to:
-->
如果检查失败,则 `shasum` 退出且状态值为非 0并打印类似如下的输出
```bash
kubectl: FAILED
shasum: WARNING: 1 computed checksum did NOT match
```
{{< note >}}
<!--
Download the same version of the binary and checksum.
-->
下载的二进制可执行文件和校验和文件须为同一版本。
{{< /note >}}
<!--
1. Make the kubectl binary executable.
-->
3. 设置 kubectl 二进制文件为可执行模式
```bash
chmod +x ./kubectl
```
<!--
3. Move the binary in to your PATH.
1. Move the kubectl binary to a file location on your system `PATH`.
-->
3. 将二进制文件放入 PATH 目录下:
4. 将 kubectl 二进制文件移动到系统 `PATH` 环境变量中的某个位置
```bash
sudo mv ./kubectl /usr/local/bin/kubectl
sudo mv ./kubectl /usr/local/bin/kubectl && \
sudo chown root: /usr/local/bin/kubectl
```
<!--
4. Test to ensure the version you installed is up-to-date:
-->
4. 测试以确保所安装的版本是最新的:
5. 测试以确保所安装的版本是最新的:
```bash
kubectl version --client
@ -256,7 +363,7 @@ If you are on macOS and using [Homebrew](https://brew.sh/) package manager, you
```
<!--
2. Test to ensure the version you installed is sufficiently up-to-date:
1. Test to ensure the version you installed is sufficiently up-to-date:
-->
2. 测试以确保你安装的版本是最新的:
@ -271,7 +378,8 @@ If you are on macOS and using [Macports](https://macports.org/) package manager,
-->
### 在 macOS 上用 Macports 安装 kubectl
如果你使用的是 macOS 系统并使用 [Macports](https://macports.org/) 包管理器,你可以通过 Macports 安装 kubectl。
如果你使用的是 macOS 系统并使用 [Macports](https://macports.org/) 包管理器,
你可以通过 Macports 安装 kubectl。
<!--
1. Run the installation command:
@ -293,7 +401,7 @@ If you are on macOS and using [Macports](https://macports.org/) package manager,
```
<!--
## Install kubectl on Windows
## Install kubectl on Windows {#install-kubectl-on-windows}
### Install kubectl binary with curl on Windows
-->
@ -302,36 +410,74 @@ If you are on macOS and using [Macports](https://macports.org/) package manager,
### 在 Windows 上使用 curl 安装 kubectl 二进制文件
<!--
1. Download the latest release {{< param "fullversion" >}} from [this link](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
1. Download the [latest release {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
Or if you have `curl` installed, use this command:
-->
1. 从[此链接](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)
下载最新发行版本 {{< param "fullversion" >}}。
1. 下载[最新发行版本 {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)。
或者如果你安装了 `curl`,使用下面的命令:
```bash
curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
```
<!--
To find out the latest stable version (for example, for scripting), take a look at [https://storage.googleapis.com/kubernetes-release/release/stable.txt](https://storage.googleapis.com/kubernetes-release/release/stable.txt).
To find out the latest stable version (for example, for scripting), take a look at [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).
-->
要了解最新的稳定版本(例如,出于脚本编写目的),可查看
[https://storage.googleapis.com/kubernetes-release/release/stable.txt](https://storage.googleapis.com/kubernetes-release/release/stable.txt)。
要了解哪个是最新的稳定版本(例如,出于脚本编写目的),可查看
[https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt)。
<!--
2. Add the binary in to your PATH.
1. Validate the binary (optional)
-->
2. 将可执行文件放到 PATH 目录下。
2. 验证二进制可执行文件(可选操作)
<!--
Download the kubectl checksum file:
-->
下载 kubectl 校验和文件:
```powershell
curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256
```
<!--
Validate the kubectl binary against the checksum file:
-->
使用校验和文件验证 kubectl 可执行二进制文件:
<!--
- Using Command Prompt to manually compare `CertUtil`'s output to the checksum file downloaded:
-->
- 使用命令行提示符(Command Prompt)来手动比较 `CertUtil` 的输出与
所下载的校验和文件:
```cmd
CertUtil -hashfile kubectl.exe SHA256
type kubectl.exe.sha256
```
<!--
- Using PowerShell to automate the verification using the `-eq` operator to get a `True` or `False` result:
-->
- 使用 PowerShell 的 `-eq` 操作符来自动完成校验操作,获得 `True``False` 结果:
```powershell
$($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
```
<!--
3. Test to ensure the version of `kubectl` is the same as downloaded:
1. Add the binary into your PATH.
-->
3. 测试以确定所下载的 `kubectl` 版本是正确的:
3. 将可执行文件放到 PATH 目录下。
```bash
<!--
1. Test to ensure the version of `kubectl` is the same as downloaded:
-->
4. 测试以确定所下载的 `kubectl` 版本是正确的:
```cmd
kubectl version --client
```
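As an aside, the download-and-verify flow above can also be scripted. A minimal sketch, assuming a POSIX-style shell on Windows such as Git Bash, mirroring the URLs used above:

```bash
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256
# the hash printed by sha256sum should match the contents of the .sha256 file
sha256sum kubectl.exe
cat kubectl.exe.sha256
```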
@ -341,20 +487,20 @@ If you have installed Docker Desktop before, you may need to place your PATH ent
-->
{{< note >}}
[Docker Desktop for Windows](https://docs.docker.com/docker-for-windows/#kubernetes)
在 PATH 中添加自己的 `kubectl` 程序
将自己的 `kubectl` 程序添加到 PATH 中
如果你之前安装过 Docker Desktop你可能需要将新安装的 PATH 项放到 Docker Desktop
安装程序所添加的目录之前,或者干脆删除 Docker Desktop 所安装的 `kubectl`
{{< /note >}}
<!--
## Install with Powershell from PSGallery
## Install with PowerShell from PSGallery
If you are on Windows and using [Powershell Gallery](https://www.powershellgallery.com/) package manager, you can install and update kubectl with Powershell.
If you are on Windows and using [PowerShell Gallery](https://www.powershellgallery.com/) package manager, you can install and update kubectl with PowerShell.
-->
## 使用 PowerShell 从 PSGallery 安装 kubectl
如果你使用的是 Windows 系统并使用 [PowerShell Gallery](https://www.powershellgallery.com/)
软件包管理器,你可以使用 Powershell 安装和更新 kubectl。
软件包管理器,你可以使用 PowerShell 安装和更新 kubectl。
<!--
1. Run the installation commands (making sure to specify a `DownloadLocation`):
@ -363,14 +509,14 @@ If you are on Windows and using [Powershell Gallery](https://www.powershellgalle
```powershell
Install-Script -Name 'install-kubectl' -Scope CurrentUser -Force
install-kubectl.ps1 [-DownloadLocation <path>]
install-kubectl.ps1 [-DownloadLocation <路径名>]
```
<!--
If you do not specify a `DownloadLocation`, `kubectl` will be installed in the user's temp Directory.
If you do not specify a `DownloadLocation`, `kubectl` will be installed in the user's `temp` Directory.
-->
{{< note >}}
如果你没有指定 `DownloadLocation`,那么 `kubectl` 将安装在用户的临时目录中。
如果你没有指定 `DownloadLocation`,那么 `kubectl` 将安装在用户的 `temp` 目录中。
{{< /note >}}
<!--
@ -402,8 +548,6 @@ Updating the installation is performed by rerunning the two commands listed in s
<!--
1. To install kubectl on Windows you can use either [Chocolatey](https://chocolatey.org) package manager or [Scoop](https://scoop.sh) command-line installer.
-->
1. 要在 Windows 上用 [Chocolatey](https://chocolatey.org) 或者
[Scoop](https://scoop.sh) 命令行安装程序安装 kubectl
@ -421,7 +565,7 @@ Updating the installation is performed by rerunning the two commands listed in s
{{< /tabs >}}
<!--
2. Test to ensure the version you installed is up-to-date:
1. Test to ensure the version you installed is up-to-date:
-->
2. 测试以确保你安装的版本是最新的:
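A sketch of the package-manager route, assuming an elevated shell with either tool already installed (`kubernetes-cli` is the Chocolatey package name, `kubectl` the Scoop app name):

```bash
choco install kubernetes-cli   # Chocolatey
# or
scoop install kubectl          # Scoop
kubectl version --client
```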
@ -430,7 +574,7 @@ Updating the installation is performed by rerunning the two commands listed in s
```
<!--
3. Navigate to your home directory:
1. Navigate to your home directory:
-->
3. 切换到你的 HOME 目录:
@ -440,7 +584,7 @@ Updating the installation is performed by rerunning the two commands listed in s
```
<!--
4. Create the `.kube` directory:
1. Create the `.kube` directory:
-->
4. 创建 `.kube` 目录:
@ -449,7 +593,7 @@ Updating the installation is performed by rerunning the two commands listed in s
```
<!--
5. Change to the `.kube` directory you just created:
1. Change to the `.kube` directory you just created:
-->
5. 进入到刚刚创建的 `.kube` 目录:
@ -458,7 +602,7 @@ Updating the installation is performed by rerunning the two commands listed in s
```
<!--
6. Configure kubectl to use a remote Kubernetes cluster:
1. Configure kubectl to use a remote Kubernetes cluster:
-->
6. 配置 kubectl 以使用远程 Kubernetes 集群:
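Not part of the upstream text: a minimal sketch of pointing kubectl at a remote cluster by writing kubeconfig entries with `kubectl config`; the server address, token, and names below are placeholders:

```bash
kubectl config set-cluster remote --server=https://<api-server-host>:6443
kubectl config set-credentials remote-user --token=<bearer-token>
kubectl config set-context remote --cluster=remote --user=remote-user
kubectl config use-context remote
kubectl cluster-info            # should now reach the remote cluster
```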
@ -539,11 +683,13 @@ If you see a URL response, kubectl is correctly configured to access your cluste
If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster.
-->
如果你看到一个 URL 被返回,那么 kubectl 已经被正确配置,能够正常访问你的 Kubernetes 集群。
如果你看到一个 URL 被返回,那么 kubectl 已经被正确配置,
能够正常访问你的 Kubernetes 集群。
如果你看到类似以下的信息被返回,那么 kubectl 没有被正确配置,无法正常访问你的 Kubernetes 集群。
如果你看到类似以下的信息被返回,那么 kubectl 没有被正确配置,
无法正常访问你的 Kubernetes 集群。
```shell
```
The connection to the server <server-name:port> was refused - did you specify the right host or port?
```
@ -552,9 +698,11 @@ For example, if you are intending to run a Kubernetes cluster on your laptop (lo
If kubectl cluster-info returns the url response but you can't access your cluster, to check whether it is configured properly, use:
-->
例如,如果你打算在笔记本电脑(本地)上运行 Kubernetes 集群,则需要首先安装 minikube 等工具,然后重新运行上述命令。
例如,如果你打算在笔记本电脑(本地)上运行 Kubernetes 集群,则需要首先安装
minikube 等工具,然后重新运行上述命令。
如果 kubectl cluster-info 能够返回 URL 响应,但你无法访问你的集群,可以使用下面的命令检查配置是否正确:
如果 kubectl cluster-info 能够返回 URL 响应,但你无法访问你的集群,可以使用
下面的命令检查配置是否正确:
```shell
kubectl cluster-info dump
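# Editor's aside (sketch): the same dump can be written out to files for
# offline inspection; --output-directory is a standard kubectl flag.
kubectl cluster-info dump --output-directory=/tmp/cluster-state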

View File

@ -19,7 +19,7 @@ spec:
- wget
- "-O"
- "/work-dir/index.html"
- http://kubernetes.io
- http://info.cern.ch
volumeMounts:
- name: workdir
mountPath: "/work-dir"
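An editorial aside, not part of the manifest: a quick way to confirm the init step succeeded after applying this example (the manifest filename and Pod name below are placeholders):

```bash
kubectl apply -f <manifest>.yaml
kubectl get pod <pod-name>        # STATUS should reach Running once the init container finishes
kubectl describe pod <pod-name>   # the Init Containers section shows whether the wget step completed
```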

View File

@ -0,0 +1,28 @@
<script src="/js/popper-1.14.3.min.js" integrity="sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49" crossorigin="anonymous"></script>
<script src="/js/bootstrap-4.3.1.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
{{ if .Site.Params.mermaid.enable }}
<script src="/js//mermaid.min.js" crossorigin="anonymous"></script>
{{ end }}
{{ $jsBase := resources.Get "js/base.js" }}
{{ $jsAnchor := resources.Get "js/anchor.js" }}
{{ $jsSearch := resources.Get "js/search.js" | resources.ExecuteAsTemplate "js/search.js" .Site.Home }}
{{ $jsMermaid := resources.Get "js/mermaid.js" | resources.ExecuteAsTemplate "js/mermaid.js" . }}
{{ if .Site.Params.offlineSearch }}
{{ $jsSearch = resources.Get "js/offline-search.js" }}
{{ end }}
{{ $js := (slice $jsBase $jsAnchor $jsSearch $jsMermaid) | resources.Concat "js/main.js" }}
{{ if .Site.IsServer }}
<script src="{{ $js.RelPermalink }}"></script>
{{ else }}
{{ $js := $js | minify | fingerprint }}
<script src="{{ $js.RelPermalink }}" integrity="{{ $js.Data.Integrity }}" crossorigin="anonymous"></script>
{{ end }}
{{ with .Site.Params.prism_syntax_highlighting }}
<!-- scripts for prism -->
<script src='/js/prism.js'></script>
{{ end }}
{{ partial "hooks/body-end.html" . }}

View File

@ -375,9 +375,7 @@
/docs/user-guide/liveness/ /docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ 301
/docs/user-guide/load-balancer/ /docs/tasks/access-application-cluster/create-external-load-balancer/ 301
/docs/user-guide/logging/ /docs/concepts/cluster-administration/logging/ 301
/docs/user-guide/logging/elasticsearch/ /docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/ 301
/docs/user-guide/logging/overview/ /docs/concepts/cluster-administration/logging/ 301
/docs/user-guide/logging/stackdriver/ /docs/tasks/debug-application-cluster/logging-stackdriver/ 301
/docs/user-guide/managing-deployments/ /docs/concepts/cluster-administration/manage-deployment/ 301
/docs/user-guide/monitoring/ /docs/tasks/debug-application-cluster/resource-usage-monitoring/ 301
/docs/user-guide/namespaces/ /docs/concepts/overview/working-with-objects/namespaces/ 301
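Not part of the redirects file: one way to spot-check a rule after deployment, assuming the site is reachable (the path below matches one of the rules kept above):

```bash
# expect a 301 status and a Location header pointing at the new path
curl -sI https://kubernetes.io/docs/user-guide/logging/ | grep -iE '^(HTTP|location)'
```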

static/js/popper-1.14.3.min.js vendored Normal file

File diff suppressed because one or more lines are too long