Merge master into dev-1.21 to keep in sync

pull/27242/head
Victor Palade 2021-03-26 21:26:43 +01:00
commit ca046d9b1f
200 changed files with 4310 additions and 2439 deletions

View File

@ -30,6 +30,17 @@ El método recomendado para levantar una copia local del sitio web kubernetes.io
> If you prefer to run the website without using **Docker**, you can follow the instructions in the section [Running kubernetes.io locally with Hugo](#levantando-kubernetesio-en-local-con-hugo).
**`Note`: the following steps build a Docker image and start the server.**
The Kubernetes website uses the Docsy Hugo theme. If you have not already done so, install the **submodules** and the other development tool dependencies by running the following `git` command:
```bash
# pull the repository's submodules
git submodule update --init --recursive --depth 1
```
If `git` reports a large number of new changes in the project, the simplest way to fix it is to close and reopen the project in your editor. Submodules are detected automatically by `git`, but the plugins used by editors may have trouble loading them.
Once you have Docker [set up on your machine](https://www.docker.com/get-started), you can build the `kubernetes-hugo` Docker image locally by running the following command at the root of the repository:
```bash

View File

@ -144,6 +144,15 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
This solution works for both macOS Catalina and macOS Mojave.
### "Out of Memory" error
If you run the `make container-serve` command and it returns the following error:
```
make: *** [container-serve] Error 137
```
Check the amount of memory available to the container runtime. In the case of Docker Desktop for macOS, open the "Preferences..." -> "Resources..." menu and try to make more memory available.
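If it helps to check first, a quick way to see how much memory the Docker daemon currently has available (a sketch; the value is reported in bytes):
```bash
# Total memory available to the Docker daemon, in bytes.
docker info --format '{{.MemTotal}}'
```
After raising the limit, run `make container-serve` again.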
# Community, discussion, contribution, and support
Learn more about the Kubernetes SIG Docs community and its meetings on the [community page](http://kubernetes.io/community/).

View File

@ -11,20 +11,20 @@ aliases:
<!-- overview -->
This document catalogs the communication paths between the control plane (apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider).
<!-- body -->
## Node to Control Plane
Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminates at the apiserver. None of the other control plane components are designed to expose remote services. The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled.
One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed.
Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates.
Pods that wish to connect to the apiserver can do so securely by leveraging a service account so that Kubernetes will automatically inject the public root certificate and a valid bearer token into the pod when it is instantiated.
The `kubernetes` service (in the `default` namespace) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.
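For illustration, a minimal sketch of what that injected material lets a pod do (run from inside a pod that has the default service account mounted; the paths are the standard mount locations):
```bash
# Read the injected credentials and call the apiserver through the `kubernetes` service.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert "${SA_DIR}/ca.crt" \
     -H "Authorization: Bearer $(cat ${SA_DIR}/token)" \
     https://kubernetes.default.svc/api
```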
The control plane components also communicate with the cluster apiserver over the secure port.
@ -42,7 +42,7 @@ The connections from the apiserver to the kubelet are used for:
* Attaching (through kubectl) to running pods.
* Providing the kubelet's port-forwarding functionality.
These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks and **unsafe** to run over untrusted and/or public networks.
To verify this connection, use the `--kubelet-certificate-authority` flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate.
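A hedged sketch of the relevant kube-apiserver flags (the file paths are hypothetical; `--kubelet-client-certificate`/`--kubelet-client-key` are the companion flags that give the apiserver its own client credentials for the kubelet):
```bash
# Excerpt of a kube-apiserver invocation (paths are hypothetical):
kube-apiserver \
  --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```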
@ -53,20 +53,20 @@ Finally, [Kubelet authentication and/or authorization](/docs/reference/command-l
### apiserver to nodes, pods, and services
The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials. So while the connection will be encrypted, it will not provide any guarantees of integrity. These connections **are not currently safe** to run over untrusted or public networks.
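As a rough illustration of the URL form (the pod name and ports are hypothetical; the `https:` prefix selects HTTPS for the hop from the apiserver to the pod):
```bash
# Plain HTTP hop from the apiserver to the pod:
kubectl get --raw /api/v1/namespaces/default/pods/my-pod:8080/proxy/healthz
# HTTPS hop (still without certificate validation, as described above):
kubectl get --raw /api/v1/namespaces/default/pods/https:my-pod:8443/proxy/healthz
```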
### SSH tunnels
Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node in the cluster (connecting to the ssh server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or service through the tunnel.
This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are running.
SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel.
### Konnectivity service
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
As a replacement to the SSH tunnels, the Konnectivity service provides a TCP-level proxy for control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the Konnectivity agents in the nodes network. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections.
After enabling the Konnectivity service, all control plane to nodes traffic goes through these connections.
Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set up the Konnectivity service in your cluster.

View File

@ -17,7 +17,7 @@ and contains the services necessary to run
{{< glossary_tooltip text="Pods" term_id="pod" >}} {{< glossary_tooltip text="Pods" term_id="pod" >}}
Typically you have several nodes in a cluster; in a learning or resource-limited Typically you have several nodes in a cluster; in a learning or resource-limited
environment, you might have just one. environment, you might have only one node.
The [components](/docs/concepts/overview/components/#node-components) on a node include the The [components](/docs/concepts/overview/components/#node-components) on a node include the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, a {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, a
@ -237,6 +237,7 @@ responsible for:
- Evicting all the pods from the node using graceful termination if
the node continues to be unreachable. The default timeouts are 40s to start
reporting ConditionUnknown and 5m after that to start evicting pods.
The node controller checks the state of each node every `--node-monitor-period` seconds.
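A sketch of the kube-controller-manager flags behind those timings (the values shown are the documented defaults):
```bash
# Excerpt of a kube-controller-manager invocation; values shown are the defaults.
kube-controller-manager \
  --node-monitor-period=5s \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m0s
```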
#### Heartbeats
@ -278,6 +279,7 @@ the same time:
`--large-cluster-size-threshold` nodes - default 50), then evictions are stopped.
- Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate`
(default 0.01) per second.
The reason these policies are implemented per availability zone is because one
availability zone might become partitioned from the master while the others remain
connected. If your cluster does not span multiple cloud provider availability zones,

View File

@ -427,7 +427,7 @@ poorly-behaved workloads that may be harming system health.
histogram vector of queue lengths for the queues, broken down by
the labels `priority_level` and `flow_schema`, as sampled by the
enqueued requests. Each request that gets queued contributes one
sample to its histogram, reporting the length of the queue immediately
after the request was added. Note that this produces different
statistics than an unbiased survey would.
{{< note >}}

View File

@ -278,7 +278,7 @@ pod/my-nginx-2035384211-u3t6x labeled
```
This first filters all pods with the label "app=nginx", and then labels them with "tier=fe".
To see the pods you labeled, run:
```shell
kubectl get pods -l app=nginx -L tier
@ -411,7 +411,7 @@ and
## Disruptive updates
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:
```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force

View File

@ -39,7 +39,7 @@ There are several different proxies you may encounter when using Kubernetes:
- proxies UDP, TCP and SCTP
- does not understand HTTP
- provides load balancing
- is only used to reach services
1. A Proxy/Load-balancer in front of apiserver(s):

View File

@ -72,8 +72,7 @@ You cannot overcommit `hugepages-*` resources.
This is different from the `memory` and `cpu` resources.
{{< /note >}}
CPU and memory are collectively referred to as *compute resources*, or *resources*. Compute
resources are measurable quantities that can be requested, allocated, and
consumed. They are distinct from
[API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and
@ -554,7 +553,7 @@ extender.
### Consuming extended resources
Users can consume extended resources in Pod specs in the same way as CPU and memory.
The scheduler takes care of the resource accounting so that no more than the
available amount is simultaneously allocated to Pods.
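As a sketch, a Pod requesting a hypothetical extended resource `example.com/dongle` (a cluster operator would have to advertise this resource on a node beforehand; extended resource requests must be integers and requests must equal limits):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        example.com/dongle: 1
      limits:
        example.com/dongle: 1
EOF
```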

View File

@ -109,7 +109,7 @@ empty-secret Opaque 0 2m6s
```
The `DATA` column shows the number of data items stored in the Secret.
In this case, `0` means we have created an empty Secret.
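For reference, a command along these lines produces that empty Secret (a sketch; no data keys are supplied):
```bash
kubectl create secret generic empty-secret
kubectl get secret empty-secret
```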
### Service account token Secrets

View File

@ -50,10 +50,11 @@ A more detailed description of the termination behavior can be found in
### Hook handler implementations ### Hook handler implementations
Containers can access a hook by implementing and registering a handler for that hook.
There are three types of hook handlers that can be implemented for Containers:
* Exec - Executes a specific command, such as `pre-stop.sh`, inside the cgroups and namespaces of the Container.
Resources consumed by the command are counted against the Container.
* TCP - Opens a TCP connection against a specific port on the Container.
* HTTP - Executes an HTTP request against a specific endpoint on the Container.
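A minimal sketch combining two of these handler types on one container (the image, port, and commands are illustrative assumptions):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: app
    image: nginx
    lifecycle:
      postStart:
        httpGet:        # HTTP handler: request an endpoint on the container
          path: /
          port: 80
      preStop:
        exec:           # Exec handler: run a command inside the container
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]
EOF
```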
### Hook handler execution

View File

@ -135,7 +135,7 @@ Here are the recommended steps to configuring your nodes to use a private regist
example, run these on your desktop/laptop:
1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json` on your PC.
1. View `$HOME/.docker/config.json` in an editor to ensure it contains only the credentials you want to use.
1. Get a list of your nodes; for example:
- if you want the names: `nodes=$( kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}' )`
- if you want to get the IP addresses: `nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )`

View File

@ -145,7 +145,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat
### Authorization
[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, an [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
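For context, a hedged sketch of how an administrator typically selects authorizers on the API server (the webhook config path is hypothetical):
```bash
# Excerpt of a kube-apiserver invocation (the webhook config path is hypothetical):
kube-apiserver \
  --authorization-mode=Node,RBAC,Webhook \
  --authorization-webhook-config-file=/etc/kubernetes/authz-webhook.yaml
```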
### Dynamic Admission Control

View File

@ -146,7 +146,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat
### Authorization
[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, an [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
### Dynamic Admission Control

View File

@ -28,7 +28,7 @@ resource can only be in one namespace.
Namespaces are a way to divide cluster resources between multiple users (via [resource quota](/docs/concepts/policy/resource-quotas/)).
It is not necessary to use multiple namespaces to separate slightly different
resources, such as different versions of the same software: use
[labels](/docs/concepts/overview/working-with-objects/labels) to distinguish
resources within the same namespace.
@ -91,7 +91,7 @@ kubectl config view --minify | grep namespace:
When you create a [Service](/docs/concepts/services-networking/service/),
it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container only uses `<service-name>`, it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).
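For example (a sketch; `my-service` and the `prod` namespace are hypothetical, and the container image must provide `nslookup`):
```bash
# From a Pod in namespace "dev":
nslookup my-service                          # resolves my-service.dev.svc.cluster.local
nslookup my-service.prod.svc.cluster.local   # reaches the Service in the "prod" namespace
```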

View File

@ -120,12 +120,12 @@ pod is eligible to be scheduled on, based on labels on the node.
There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
`preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively,
in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (similar to
`nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler
will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar
to how `nodeSelector` works, if labels on a node change at runtime such that the affinity rules on a pod are no longer
met, the pod continues to run on the node. In the future we plan to offer
`requiredDuringSchedulingRequiredDuringExecution` which will be identical to `requiredDuringSchedulingIgnoredDuringExecution`
except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements.
Thus an example of `requiredDuringSchedulingIgnoredDuringExecution` would be "only run the pod on nodes with Intel CPUs"
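A sketch of how such a rule looks in a Pod spec (the `cpu-vendor=intel` node label is hypothetical; a real cluster would need nodes labeled accordingly):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cpu-vendor
            operator: In
            values:
            - intel
  containers:
  - name: app
    image: nginx
EOF
```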

View File

@ -43,7 +43,7 @@ Authenticators are described in more detail in
[Authentication](/docs/reference/access-authn-authz/authentication/).
The input to the authentication step is the entire HTTP request; however, it typically
examines the headers and/or client certificate.
Authentication modules include client certificates, password, and plain tokens,
bootstrap tokens, and JSON Web Tokens (used for service accounts).

View File

@ -387,7 +387,7 @@ $ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
<h1>Welcome to nginx!</h1>
```
Let's now recreate the Service to use a cloud load balancer. Change the `Type` of the `my-nginx` Service from `NodePort` to `LoadBalancer`:
```shell
kubectl edit svc my-nginx

View File

@ -260,7 +260,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service
{{< codenew file="service/networking/test-ingress.yaml" >}}
If you create it using `kubectl apply -f` you should be able to view the state
of the Ingress you added:
```bash
kubectl get ingress test-ingress

View File

@ -57,7 +57,7 @@ the first label matches the originating Node's value for that label. If there is
no backend for the Service on a matching Node, then the second label will be
considered, and so forth, until no labels remain.
If no match is found, the traffic will be rejected, as if there were no
backends for the Service at all. That is, endpoints are chosen based on the first
topology key with available backends. If this field is specified and all entries
have no backends that match the topology of the client, the service has no
@ -87,7 +87,7 @@ traffic as follows.
* Service topology is not compatible with `externalTrafficPolicy=Local`, and
therefore a Service cannot use both of these features. It is possible to use
both features in the same cluster on different Services, but not on the same
Service.
* Valid topology keys are currently limited to `kubernetes.io/hostname`,

View File

@ -527,7 +527,7 @@ for NodePort use.
Using a NodePort gives you the freedom to set up your own load balancing solution,
to configure environments that are not fully supported by Kubernetes, or even
to expose one or more nodes' IPs directly.
Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).)
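For instance, a quick way to read those values back for an existing NodePort Service (a sketch reusing the `my-nginx` Service name from the example above):
```bash
kubectl get svc my-nginx -o jsonpath='{.spec.clusterIP} {.spec.ports[0].nodePort}{"\n"}'
```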
@ -785,8 +785,7 @@ you can use the following annotations:
```
In the above example, if the Service contained three ports, `80`, `443`, and
`8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP.
From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
To see which policies are available for use, you can use the `aws` command line tool:
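A command along these lines lists the predefined policy names (a sketch for the classic ELB API; verify against the `aws` CLI version you have installed):
```bash
aws elb describe-load-balancer-policies --query 'PolicyDescriptions[].PolicyName'
```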
@ -958,7 +957,8 @@ groups are modified with the following IP rules:
| Rule | Protocol | Port(s) | IpRange(s) | IpRange Description |
|------|----------|---------|------------|---------------------|
| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | Subnet CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> |
| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> |
| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> |
@ -1107,7 +1107,7 @@ but the current API requires it.
## Virtual IP implementation {#the-gory-details-of-virtual-ips}
The previous information should be sufficient for many people who want to
use Services. However, there is a lot going on behind the scenes that may be
worth understanding.

View File

@ -29,7 +29,7 @@ A _PersistentVolume_ (PV) is a piece of storage in the cluster that has been pro
A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see [AccessModes](#access-modes)).
While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource.
See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/).
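For orientation, a minimal PVC sketch that requests a size and an access mode (the claim name and storage class are hypothetical):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
EOF
```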

View File

@ -37,7 +37,7 @@ request a particular class. Administrators set the name and other parameters
of a class when first creating StorageClass objects, and the objects cannot
be updated once they are created.
Administrators can specify a default StorageClass only for PVCs that don't
request any particular class to bind to: see the
[PersistentVolumeClaim section](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
for details.
@ -569,7 +569,7 @@ parameters:
`"http(s)://api-server:7860"` `"http(s)://api-server:7860"`
* `registry`: Quobyte registry to use to mount the volume. You can specify the * `registry`: Quobyte registry to use to mount the volume. You can specify the
registry as ``<host>:<port>`` pair or if you want to specify multiple registry as ``<host>:<port>`` pair or if you want to specify multiple
registries you just have to put a comma between them e.q. registries, put a comma between them.
``<host1>:<port>,<host2>:<port>,<host3>:<port>``. ``<host1>:<port>,<host2>:<port>,<host3>:<port>``.
The host can be an IP address or if you have a working DNS you can also The host can be an IP address or if you have a working DNS you can also
provide the DNS names. provide the DNS names.

View File

@ -40,7 +40,7 @@ Users need to be aware of the following when using this feature:
## Provisioning
Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace.
```yaml
apiVersion: v1

View File

@ -38,7 +38,7 @@ that run within the pod, and data is preserved across container restarts. When a
ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not
destroy persistent volumes.
At its core, a volume is a directory, possibly with some data in it, which
is accessible to the containers in a pod. How that directory comes to be, the
medium that backs it, and the contents of it are determined by the particular
volume type used.

View File

@ -708,7 +708,7 @@ nginx-deployment-618515232 11 11 11 7m
You can pause a Deployment before triggering one or more updates and then resume it. This allows you to
apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.
* For example, with a newly created Deployment:
Get the Deployment details:
```shell
kubectl get deploy

View File

@ -100,7 +100,7 @@ pi-5rwd7
```
Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression
that gets only the name from each Pod in the returned list.
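A command along these lines produces that list (a sketch; it assumes the Job from this example is named `pi`, whose Pods carry the `job-name` label):
```bash
kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}'
```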
View the standard output of one of the pods:

View File

@ -222,7 +222,7 @@ In this manner, a ReplicaSet can own a non-homogenous set of Pods
## Writing a ReplicaSet manifest
As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields.
For ReplicaSets, the `kind` is always ReplicaSet.
In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated.
Refer to the first lines of the `frontend.yaml` example for guidance.
@ -237,7 +237,7 @@ The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-temp
required to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`.
Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod.
For the template's [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) field,
`.spec.template.spec.restartPolicy`, the only allowed value is `Always`, which is the default.
### Pod Selector

View File

@ -110,8 +110,7 @@ nginx-3ntk0 nginx-4ok8v nginx-qrm3m
Here, the selector is the same as the selector for the ReplicationController (seen in the
`kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option
specifies an expression that gets only the name from each pod in the returned list.
## Writing a ReplicationController Spec

View File

@ -312,7 +312,7 @@ can specify a readiness probe that checks an endpoint specific to readiness that
is different from the liveness probe.
{{< note >}}
If you want to be able to drain requests when the Pod is deleted, you do not
necessarily need a readiness probe; on deletion, the Pod automatically puts itself
into an unready state regardless of whether the readiness probe exists.
The Pod remains in the unready state while it waits for the containers in the Pod

View File

@ -40,7 +40,7 @@ Anyone can write a blog post and submit it for review.
- Many CNCF projects have their own blog. These are often a better choice for posts. There are times when a major feature or milestone for a CNCF project would be of interest to readers of the Kubernetes blog.
- Blog posts should be original content
- The official blog is not for repurposing existing content from a third party as new content.
- The [license](https://github.com/kubernetes/website/blob/master/LICENSE) for the blog allows commercial use of the content, but not the other way around.
- Blog posts should aim to be future proof
- Given the development velocity of the project, we want evergreen content that won't require updates to stay accurate for the reader.
- It can be a better choice to add a tutorial or update official documentation than to write a high level overview as a blog post.

View File

@ -77,9 +77,8 @@ merged. Keep the following in mind:
Alpha features.
- It's hard to test (and therefore to document) a feature that hasn't been merged,
or is at least considered feature-complete in its PR.
- Determining whether a feature needs documentation is a manual process. Even if
a feature is not marked as needing docs, you may need to document the feature.
## For developers or other SIG members

View File

@ -19,7 +19,7 @@ Attribute-based access control (ABAC) defines an access control paradigm whereby
To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC` on startup.
The file format is [one JSON object per line](https://jsonlines.org/). There
should be no enclosing list or map, only one map per line.
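For illustration, one such line written to a policy file (a sketch; `alice` and the file path are hypothetical):
```bash
cat <<'EOF' > /etc/kubernetes/abac-policy.jsonl
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
EOF
```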
Each line is a "policy object", where each such object is a map with the following
properties:

View File

@ -193,7 +193,7 @@ This admission controller will deny exec and attach commands to pods that run wi
allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and
have access to the host PID namespace.
The DenyEscalatingExec admission plugin is deprecated.
Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin)
which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods
@ -206,7 +206,7 @@ is recommended instead.
This admission controller will intercept all requests to exec a command in a pod if that pod has a privileged container.
This functionality has been merged into [DenyEscalatingExec](#denyescalatingexec).
The DenyExecOnPrivileged admission plugin is deprecated.
Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin)
which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods

View File

@ -458,7 +458,7 @@ clusters:
  - name: name-of-remote-authn-service
    cluster:
      certificate-authority: /path/to/ca.pem # CA for verifying the remote service.
      server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production.
# users refers to the API server's webhook configuration.
users:

View File

@ -138,7 +138,7 @@ no
exposes the API server authorization to external services. Other resources in
this group include:
* `SubjectAccessReview` - Access review for any user, not only the current one. Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs.
* `LocalSubjectAccessReview` - Like `SubjectAccessReview` but restricted to a specific namespace.
* `SelfSubjectRulesReview` - A review which returns the set of actions a user can perform within a namespace. Useful for users to quickly summarize their own access, or for UIs to hide/show actions.

View File

@ -167,7 +167,7 @@ data:
users: []
```
The `kubeconfig` member of the ConfigMap is a config file with only the cluster
information filled out. The key thing being communicated here is the
`certificate-authority-data`. This may be expanded in the future.
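To inspect it on a running cluster (the ConfigMap is published in the `kube-public` namespace so that even unauthenticated clients can read it):
```bash
kubectl -n kube-public get configmap cluster-info -o yaml
```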

View File

@ -196,8 +196,8 @@ O is the group that this user will belong to. You can refer to
[RBAC](/docs/reference/access-authn-authz/rbac/) for standard groups.
```shell
openssl genrsa -out myuser.key 2048
openssl req -new -key myuser.key -out myuser.csr
```
### Create CertificateSigningRequest
@ -209,7 +209,7 @@ cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: myuser
spec:
  groups:
  - system:authenticated
@ -224,7 +224,7 @@ Some points to note:
- `usages` has to be '`client auth`'
- `request` is the base64 encoded value of the CSR file content.
You can get the content using this command: ```cat myuser.csr | base64 | tr -d "\n"```
### Approve certificate signing request ### Approve certificate signing request
@ -239,7 +239,7 @@ kubectl get csr
Approve the CSR:
```shell
kubectl certificate approve myuser
```
### Get the certificate
@ -247,11 +247,17 @@ kubectl certificate approve john
Retrieve the certificate from the CSR:
```shell
kubectl get csr/myuser -o yaml
```
The certificate value is in Base64-encoded format under `status.certificate`.
Export the issued certificate from the CertificateSigningRequest.
```
kubectl get csr myuser -o jsonpath='{.status.certificate}'| base64 -d > myuser.crt
```
### Create Role and RoleBinding
With the certificate created, it is time to define the Role and RoleBinding for
@ -266,31 +272,30 @@ kubectl create role developer --verb=create --verb=get --verb=list --verb=update
This is a sample command to create a RoleBinding for this new user:
```shell
kubectl create rolebinding developer-binding-myuser --role=developer --user=myuser
```
### Add to kubeconfig
The last step is to add this user into the kubeconfig file.
First, you need to add new credentials:
```
kubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true
```
Then, you need to add the context:
```
kubectl config set-context myuser --cluster=kubernetes --user=myuser
```
To test it, change the context to `myuser`:
```
kubectl config use-context myuser
```
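As a quick check with the new context (assuming the Role above was granted on Pods in the `default` namespace; adjust to whatever resource the Role actually covers):
```bash
kubectl auth can-i create pods --namespace default
```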
## Approval or rejection {#approval-rejection}
@ -363,7 +368,7 @@ status:
It's usual to set `status.conditions.reason` to a machine-friendly reason
code using TitleCase; this is a convention but you can set it to anything
you like. If you want to add a note for human consumption, use the
`status.conditions.message` field.
## Signing
@ -438,4 +443,3 @@ status:
* View the source code for the kube-controller-manager built in [approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go)
* For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1
* For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986)
@ -219,7 +219,7 @@ the role that is granted to those subjects.
1. A binding to a different role is a fundamentally different binding. 1. A binding to a different role is a fundamentally different binding.
Requiring a binding to be deleted/recreated in order to change the `roleRef` Requiring a binding to be deleted/recreated in order to change the `roleRef`
ensures the full list of subjects in the binding is intended to be granted ensures the full list of subjects in the binding is intended to be granted
the new role (as opposed to enabling accidentally modifying just the roleRef the new role (as opposed to enabling or accidentally modifying only the roleRef
without verifying all of the existing subjects should be given the new role's without verifying all of the existing subjects should be given the new role's
permissions). permissions).
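In practice, pointing an existing binding at a different role therefore means deleting and recreating it. A minimal sketch, with illustrative binding, role, and user names:

```shell
# roleRef is immutable, so recreate the binding to change the role it grants
kubectl delete rolebinding developer-binding-myuser
kubectl create rolebinding developer-binding-myuser --role=reviewer --user=myuser
```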
@ -333,7 +333,7 @@ as a cluster administrator, include rules for custom resources, such as those se
or aggregated API servers, to extend the default roles. or aggregated API servers, to extend the default roles.
For example: the following ClusterRoles let the "admin" and "edit" default roles manage the custom resource For example: the following ClusterRoles let the "admin" and "edit" default roles manage the custom resource
named CronTab, whereas the "view" role can perform just read actions on CronTab resources. named CronTab, whereas the "view" role can perform only read actions on CronTab resources.
You can assume that CronTab objects are named `"crontabs"` in URLs as seen by the API server. You can assume that CronTab objects are named `"crontabs"` in URLs as seen by the API server.
```yaml ```yaml
@ -185,9 +185,9 @@ systemd unit file perhaps) to enable the token file. See docs
further details. further details.
### Authorize kubelet to create CSR ### Authorize kubelet to create CSR
Now that the bootstrapping node is _authenticated_ as part of the `system:bootstrappers` group, it needs to be _authorized_ to create a certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and just these) permissions, `system:node-bootstrapper`. Now that the bootstrapping node is _authenticated_ as part of the `system:bootstrappers` group, it needs to be _authorized_ to create a certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and only these) permissions, `system:node-bootstrapper`.
To do this, you just need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`.
``` ```
# enable bootstrapping nodes to create CSR # enable bootstrapping nodes to create CSR
@ -345,7 +345,7 @@ The important elements to note are:
* `token`: the token to use * `token`: the token to use
The format of the token does not matter, as long as it matches what kube-apiserver expects. In the above example, we used a bootstrap token. The format of the token does not matter, as long as it matches what kube-apiserver expects. In the above example, we used a bootstrap token.
As stated earlier, _any_ valid authentication method can be used, not just tokens. As stated earlier, _any_ valid authentication method can be used, not only tokens.
Because the bootstrap `kubeconfig` _is_ a standard `kubeconfig`, you can use `kubectl` to generate it. To create the above example file: Because the bootstrap `kubeconfig` _is_ a standard `kubeconfig`, you can use `kubectl` to generate it. To create the above example file:
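A sketch of that generation; the server address, CA path, and token value below are placeholders:

```shell
# Build a minimal bootstrap kubeconfig with kubectl
kubectl config --kubeconfig=bootstrap-kubeconfig set-cluster bootstrap \
  --server='https://cluster-endpoint:6443' \
  --certificate-authority=/var/lib/kubernetes/ca.pem
kubectl config --kubeconfig=bootstrap-kubeconfig set-credentials kubelet-bootstrap \
  --token=07401b.f395accd246ae52d
kubectl config --kubeconfig=bootstrap-kubeconfig set-context bootstrap \
  --user=kubelet-bootstrap --cluster=bootstrap
kubectl config --kubeconfig=bootstrap-kubeconfig use-context bootstrap
```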
@ -909,7 +909,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
<td colspan="2">--pod-infra-container-image string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `k8s.gcr.io/pause:3.2`</td> <td colspan="2">--pod-infra-container-image string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `k8s.gcr.io/pause:3.2`</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The image whose network/IPC namespaces containers in each pod will use. This docker-specific flag only works when container-runtime is set to `docker`.</td> <td></td><td style="line-height: 130%; word-wrap: break-word;"> Specified image will not be pruned by the image garbage collector. When container-runtime is set to `docker`, all containers in each pod will use the network/ipc namespaces from this image. Other CRI implementations have their own configuration to set this image.</td>
</tr> </tr>
<tr> <tr>
@ -14,7 +14,7 @@ tags:
A Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}} component A Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}} component
that embeds cloud-specific control logic. The cloud controller manager lets you link your that embeds cloud-specific control logic. The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that just interact with your cluster. with that cloud platform from components that only interact with your cluster.
<!--more--> <!--more-->
@ -360,7 +360,7 @@ Other operations for exploring API resources:
```bash ```bash
kubectl api-resources --namespaced=true # All namespaced resources kubectl api-resources --namespaced=true # All namespaced resources
kubectl api-resources --namespaced=false # All non-namespaced resources kubectl api-resources --namespaced=false # All non-namespaced resources
kubectl api-resources -o name # All resources with simple output (just the resource name) kubectl api-resources -o name # All resources with simple output (only the resource name)
kubectl api-resources -o wide # All resources with expanded (aka "wide") output kubectl api-resources -o wide # All resources with expanded (aka "wide") output
kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs
kubectl api-resources --api-group=extensions # All resources in the "extensions" API group kubectl api-resources --api-group=extensions # All resources in the "extensions" API group
@ -387,6 +387,9 @@ Examples using `-o=custom-columns`:
# All images running in a cluster # All images running in a cluster
kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image' kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'
# All images running in namespace: default, grouped by Pod
kubectl get pods --namespace default --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"
# All images excluding "k8s.gcr.io/coredns:1.6.2" # All images excluding "k8s.gcr.io/coredns:1.6.2"
kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image' kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image'
@ -69,7 +69,7 @@ for example `create`, `get`, `describe`, `delete`.
Flags that you specify from the command line override default values and any corresponding environment variables. Flags that you specify from the command line override default values and any corresponding environment variables.
{{< /caution >}} {{< /caution >}}
If you need help, just run `kubectl help` from the terminal window. If you need help, run `kubectl help` from the terminal window.
## Operations ## Operations
@ -123,7 +123,7 @@ If your configuration is not using the latest version it is **recommended** that
the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command. the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command.
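A sketch of that migration (the file names are placeholders):

```shell
# Convert an older kubeadm configuration file to the latest supported API version
kubeadm config migrate --old-config old-kubeadm-config.yaml --new-config new-kubeadm-config.yaml
```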
For more information on the fields and usage of the configuration you can navigate to our API reference For more information on the fields and usage of the configuration you can navigate to our API reference
page and pick a version from [the list](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#pkg-subdirectories). page and pick a version from [the list](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#section-directories).
### Adding kube-proxy parameters {#kube-proxy} ### Adding kube-proxy parameters {#kube-proxy}
@ -116,7 +116,7 @@ The **certificates.k8s.io/v1beta1** API version of CertificateSigningRequest wil
* All existing persisted objects are accessible via the new API * All existing persisted objects are accessible via the new API
* Notable changes in `certificates.k8s.io/v1`: * Notable changes in `certificates.k8s.io/v1`:
* For API clients requesting certificates: * For API clients requesting certificates:
* `spec.signerName` is now required (see [known Kubernetes signers](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)), and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API * `spec.signerName` is now required (see [known Kubernetes signers](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)), and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API
* `spec.usages` is now required, may not contain duplicate values, and must only contain known usages * `spec.usages` is now required, may not contain duplicate values, and must only contain known usages
* For API clients approving or signing certificates: * For API clients approving or signing certificates:
* `status.conditions` may not contain duplicate types * `status.conditions` may not contain duplicate types
@ -327,7 +327,7 @@ supported in API v1 must exist and function until API v1 is removed.
### Component config structures ### Component config structures
Component configs are versioned and managed just like REST resources. Component configs are versioned and managed similarly to REST resources.
### Future work ### Future work
@ -209,9 +209,8 @@ would have failed due to conflicting ownership.
The merging strategy, implemented with Server Side Apply, provides a generally The merging strategy, implemented with Server Side Apply, provides a generally
more stable object lifecycle. Server Side Apply tries to merge fields based on more stable object lifecycle. Server Side Apply tries to merge fields based on
the fact who manages them instead of overruling just based on values. This way the actor who manages them instead of overruling based on values. This way
it is intended to make it easier and more stable for multiple actors updating multiple actors can update the same object without causing unexpected interference.
the same object by causing less unexpected interference.
When a user sends a "fully-specified intent" object to the Server Side Apply When a user sends a "fully-specified intent" object to the Server Side Apply
endpoint, the server merges it with the live object favoring the value in the endpoint, the server merges it with the live object favoring the value in the
@ -319,7 +318,7 @@ kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment-replic
``` ```
If the apply results in a conflict with the HPA controller, then do nothing. The If the apply results in a conflict with the HPA controller, then do nothing. The
conflict just indicates the controller has claimed the field earlier in the conflict indicates the controller has claimed the field earlier in the
process than it sometimes does. process than it sometimes does.
At this point the user may remove the `replicas` field from their configuration. At this point the user may remove the `replicas` field from their configuration.
@ -436,7 +435,7 @@ Data: [{"op": "replace", "path": "/metadata/managedFields", "value": [{}]}]
This will overwrite the managedFields with a list containing a single empty This will overwrite the managedFields with a list containing a single empty
entry that then results in the managedFields being stripped entirely from the entry that then results in the managedFields being stripped entirely from the
object. Note that just setting the managedFields to an empty list will not object. Note that setting the managedFields to an empty list will not
reset the field. This is on purpose, so managedFields never get stripped by reset the field. This is on purpose, so managedFields never get stripped by
clients not aware of the field. clients not aware of the field.
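As a sketch, such a patch can be sent with `kubectl patch`; the Deployment name `nginx-deployment` here is an assumption:

```shell
# Replace managedFields with a list containing a single empty entry,
# which causes managedFields to be stripped from the object
kubectl patch deployment nginx-deployment --type=json \
  -p='[{"op": "replace", "path": "/metadata/managedFields", "value": [{}]}]'
```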
@ -69,10 +69,9 @@ When creating a cluster, you can (using custom tooling):
## Addon resources ## Addon resources
Kubernetes [resource limits](/docs/concepts/configuration/manage-resources-containers/) Kubernetes [resource limits](/docs/concepts/configuration/manage-resources-containers/)
help to minimise the impact of memory leaks and other ways that pods and containers can help to minimize the impact of memory leaks and other ways that pods and containers can
impact on other components. These resource limits can and should apply to impact on other components. These resource limits apply to
{{< glossary_tooltip text="addon" term_id="addons" >}} just as they apply to application {{< glossary_tooltip text="addon" term_id="addons" >}} resources just as they apply to application workloads.
workloads.
For example, you can set CPU and memory limits for a logging component: For example, you can set CPU and memory limits for a logging component:
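A sketch of applying such limits imperatively; the DaemonSet name, namespace, and values are assumptions:

```shell
# Cap the CPU and memory available to a logging addon
kubectl --namespace kube-system set resources daemonset fluentd-gcp \
  --limits=cpu=100m,memory=200Mi
```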
@ -68,7 +68,7 @@ Kubespray provides the ability to customize many aspects of the deployment:
* {{< glossary_tooltip term_id="cri-o" >}} * {{< glossary_tooltip term_id="cri-o" >}}
* Certificate generation methods * Certificate generation methods
Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.
### (4/5) Deploy a Cluster ### (4/5) Deploy a Cluster
@ -333,7 +333,7 @@ These features were added in Kubernetes v1.15:
##### DNS {#dns-limitations} ##### DNS {#dns-limitations}
* ClusterFirstWithHostNet is not supported for DNS. Windows treats all names with a '.' as a FQDN and skips PQDN resolution * ClusterFirstWithHostNet is not supported for DNS. Windows treats all names with a '.' as a FQDN and skips PQDN resolution
* On Linux, you have a DNS suffix list, which is used when trying to resolve PQDNs. On Windows, we only have 1 DNS suffix, which is the DNS suffix associated with that pod's namespace (mydns.svc.cluster.local for example). Windows can resolve FQDNs and services or names resolvable with just that suffix. For example, a pod spawned in the default namespace, will have the DNS suffix **default.svc.cluster.local**. On a Windows pod, you can resolve both **kubernetes.default.svc.cluster.local** and **kubernetes**, but not the in-betweens, like **kubernetes.default** or **kubernetes.default.svc**. * On Linux, you have a DNS suffix list, which is used when trying to resolve PQDNs. On Windows, we only have 1 DNS suffix, which is the DNS suffix associated with that pod's namespace (mydns.svc.cluster.local for example). Windows can resolve FQDNs and services or names resolvable with only that suffix. For example, a pod spawned in the default namespace, will have the DNS suffix **default.svc.cluster.local**. On a Windows pod, you can resolve both **kubernetes.default.svc.cluster.local** and **kubernetes**, but not the in-betweens, like **kubernetes.default** or **kubernetes.default.svc**.
* On Windows, there are multiple DNS resolvers that can be used. As these come with slightly different behaviors, using the `Resolve-DNSName` utility for name query resolutions is recommended. * On Windows, there are multiple DNS resolvers that can be used. As these come with slightly different behaviors, using the `Resolve-DNSName` utility for name query resolutions is recommended.
##### IPv6 ##### IPv6
@ -363,9 +363,9 @@ There are no differences in how most of the Kubernetes APIs work for Windows. Th
At a high level, these OS concepts are different: At a high level, these OS concepts are different:
* Identity - Linux uses userID (UID) and groupID (GID) which are represented as integer types. User and group names are not canonical - they are just an alias in `/etc/groups` or `/etc/passwd` back to UID+GID. Windows uses a larger binary security identifier (SID) which is stored in the Windows Security Access Manager (SAM) database. This database is not shared between the host and containers, or between containers. * Identity - Linux uses userID (UID) and groupID (GID) which are represented as integer types. User and group names are not canonical - they are an alias in `/etc/groups` or `/etc/passwd` back to UID+GID. Windows uses a larger binary security identifier (SID) which is stored in the Windows Security Access Manager (SAM) database. This database is not shared between the host and containers, or between containers.
* File permissions - Windows uses an access control list based on SIDs, rather than a bitmask of permissions and UID+GID * File permissions - Windows uses an access control list based on SIDs, rather than a bitmask of permissions and UID+GID
* File paths - convention on Windows is to use `\` instead of `/`. The Go IO libraries typically accept both and just make it work, but when you're setting a path or command line that's interpreted inside a container, `\` may be needed. * File paths - convention on Windows is to use `\` instead of `/`. The Go IO libraries accept both types of file path separators. However, when you're setting a path or command line that's interpreted inside a container, `\` may be needed.
* Signals - Windows interactive apps handle termination differently, and can implement one or more of these: * Signals - Windows interactive apps handle termination differently, and can implement one or more of these:
* A UI thread handles well-defined messages including WM_CLOSE * A UI thread handles well-defined messages including WM_CLOSE
* Console apps handle ctrl-c or ctrl-break using a Control Handler * Console apps handle ctrl-c or ctrl-break using a Control Handler
@ -231,7 +231,7 @@ You have several options for connecting to nodes, pods and services from outside
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
the cluster. See the [services](/docs/concepts/services-networking/service/) and the cluster. See the [services](/docs/concepts/services-networking/service/) and
[kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation. [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation.
- Depending on your cluster environment, this may just expose the service to your corporate network, - Depending on your cluster environment, this may only expose the service to your corporate network,
or it may expose it to the internet. Think about whether the service being exposed is secure. or it may expose it to the internet. Think about whether the service being exposed is secure.
Does it do its own authentication? Does it do its own authentication?
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
@ -283,7 +283,7 @@ at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-l
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL: As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy` `http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. You can also use the port number in place of the *port_name* for both named and unnamed ports.
By default, the API server proxies to your service using http. To use https, prefix the service name with `https:`: By default, the API server proxies to your service using http. To use https, prefix the service name with `https:`:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`https:service_name:[port_name]`*`/proxy` `http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`https:service_name:[port_name]`*`/proxy`
@ -291,9 +291,9 @@ By default, the API server proxies to your service using http. To use https, pre
The supported formats for the name segment of the URL are: The supported formats for the name segment of the URL are:
* `<service_name>` - proxies to the default or unnamed port using http * `<service_name>` - proxies to the default or unnamed port using http
* `<service_name>:<port_name>` - proxies to the specified port using http * `<service_name>:<port_name>` - proxies to the specified port name or port number using http
* `https:<service_name>:` - proxies to the default or unnamed port using https (note the trailing colon) * `https:<service_name>:` - proxies to the default or unnamed port using https (note the trailing colon)
* `https:<service_name>:<port_name>` - proxies to the specified port using https * `https:<service_name>:<port_name>` - proxies to the specified port name or port number using https
##### Examples ##### Examples
@ -357,7 +357,7 @@ There are several different proxies you may encounter when using Kubernetes:
- proxies UDP and TCP - proxies UDP and TCP
- does not understand HTTP - does not understand HTTP
- provides load balancing - provides load balancing
- is just used to reach services - is only used to reach services
1. A Proxy/Load-balancer in front of apiserver(s): 1. A Proxy/Load-balancer in front of apiserver(s):
@ -7,7 +7,7 @@ min-kubernetes-server-version: v1.10
<!-- overview --> <!-- overview -->
This page shows how to use `kubectl port-forward` to connect to a Redis This page shows how to use `kubectl port-forward` to connect to a MongoDB
server running in a Kubernetes cluster. This type of connection can be useful server running in a Kubernetes cluster. This type of connection can be useful
for database debugging. for database debugging.
@ -19,25 +19,25 @@ for database debugging.
* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
* Install [redis-cli](http://redis.io/topics/rediscli). * Install [MongoDB Shell](https://www.mongodb.com/try/download/shell).
<!-- steps --> <!-- steps -->
## Creating Redis deployment and service ## Creating MongoDB deployment and service
1. Create a Deployment that runs Redis: 1. Create a Deployment that runs MongoDB:
```shell ```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml
``` ```
The output of a successful command verifies that the deployment was created: The output of a successful command verifies that the deployment was created:
``` ```
deployment.apps/redis-master created deployment.apps/mongo created
``` ```
View the pod status to check that it is ready: View the pod status to check that it is ready:
@ -50,7 +50,7 @@ for database debugging.
``` ```
NAME READY STATUS RESTARTS AGE NAME READY STATUS RESTARTS AGE
redis-master-765d459796-258hz 1/1 Running 0 50s mongo-75f59d57f4-4nd6q 1/1 Running 0 2m4s
``` ```
View the Deployment's status: View the Deployment's status:
@ -63,7 +63,7 @@ for database debugging.
``` ```
NAME READY UP-TO-DATE AVAILABLE AGE NAME READY UP-TO-DATE AVAILABLE AGE
redis-master 1/1 1 1 55s mongo 1/1 1 1 2m21s
``` ```
The Deployment automatically manages a ReplicaSet. The Deployment automatically manages a ReplicaSet.
@ -77,49 +77,49 @@ for database debugging.
``` ```
NAME DESIRED CURRENT READY AGE NAME DESIRED CURRENT READY AGE
redis-master-765d459796 1 1 1 1m mongo-75f59d57f4 1 1 1 3m12s
``` ```
2. Create a Service to expose Redis on the network: 2. Create a Service to expose MongoDB on the network:
```shell ```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml
``` ```
The output of a successful command verifies that the Service was created: The output of a successful command verifies that the Service was created:
``` ```
service/redis-master created service/mongo created
``` ```
Check the Service created: Check the Service created:
```shell ```shell
kubectl get service redis-master kubectl get service mongo
``` ```
The output displays the service created: The output displays the service created:
``` ```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-master ClusterIP 10.0.0.213 <none> 6379/TCP 27s mongo ClusterIP 10.96.41.183 <none> 27017/TCP 11s
``` ```
3. Verify that the Redis server is running in the Pod, and listening on port 6379: 3. Verify that the MongoDB server is running in the Pod, and listening on port 27017:
```shell ```shell
# Change redis-master-765d459796-258hz to the name of the Pod # Change mongo-75f59d57f4-4nd6q to the name of the Pod
kubectl get pod redis-master-765d459796-258hz --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' kubectl get pod mongo-75f59d57f4-4nd6q --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
``` ```
The output displays the port for Redis in that Pod: The output displays the port for MongoDB in that Pod:
``` ```
6379 27017
``` ```
(this is the TCP port allocated to Redis on the internet). (this is the TCP port allocated to MongoDB on the internet).
## Forward a local port to a port on the Pod ## Forward a local port to a port on the Pod
@ -127,39 +127,39 @@ for database debugging.
```shell ```shell
# Change redis-master-765d459796-258hz to the name of the Pod # Change mongo-75f59d57f4-4nd6q to the name of the Pod
kubectl port-forward redis-master-765d459796-258hz 7000:6379 kubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017
``` ```
which is the same as which is the same as
```shell ```shell
kubectl port-forward pods/redis-master-765d459796-258hz 7000:6379 kubectl port-forward pods/mongo-75f59d57f4-4nd6q 28015:27017
``` ```
or or
```shell ```shell
kubectl port-forward deployment/redis-master 7000:6379 kubectl port-forward deployment/mongo 28015:27017
``` ```
or or
```shell ```shell
kubectl port-forward replicaset/redis-master 7000:6379 kubectl port-forward replicaset/mongo-75f59d57f4 28015:27017
``` ```
or or
```shell ```shell
kubectl port-forward service/redis-master 7000:redis kubectl port-forward service/mongo 28015:27017
``` ```
Any of the above commands works. The output is similar to this: Any of the above commands works. The output is similar to this:
``` ```
Forwarding from 127.0.0.1:7000 -> 6379 Forwarding from 127.0.0.1:28015 -> 27017
Forwarding from [::1]:7000 -> 6379 Forwarding from [::1]:28015 -> 27017
``` ```
{{< note >}} {{< note >}}
@ -168,22 +168,22 @@ for database debugging.
{{< /note >}} {{< /note >}}
2. Start the Redis command line interface: 2. Start the MongoDB command line interface:
```shell ```shell
redis-cli -p 7000 mongosh --port 28015
``` ```
3. At the Redis command line prompt, enter the `ping` command: 3. At the MongoDB command line prompt, enter the `ping` command:
``` ```
ping db.runCommand( { ping: 1 } )
``` ```
A successful ping request returns: A successful ping request returns:
``` ```
PONG { ok: 1 }
``` ```
### Optionally let _kubectl_ choose the local port {#let-kubectl-choose-local-port} ### Optionally let _kubectl_ choose the local port {#let-kubectl-choose-local-port}
@ -193,15 +193,22 @@ the local port and thus relieve you from having to manage local port conflicts,
the slightly simpler syntax: the slightly simpler syntax:
```shell ```shell
kubectl port-forward deployment/redis-master :6379 kubectl port-forward deployment/mongo :27017
```
The output is similar to this:
```
Forwarding from 127.0.0.1:63753 -> 27017
Forwarding from [::1]:63753 -> 27017
``` ```
The `kubectl` tool finds a local port number that is not in use (avoiding low ports numbers, The `kubectl` tool finds a local port number that is not in use (avoiding low ports numbers,
because these might be used by other applications). The output is similar to: because these might be used by other applications). The output is similar to:
``` ```
Forwarding from 127.0.0.1:62162 -> 6379 Forwarding from 127.0.0.1:63753 -> 27017
Forwarding from [::1]:62162 -> 6379 Forwarding from [::1]:63753 -> 27017
``` ```
@ -209,8 +216,8 @@ Forwarding from [::1]:62162 -> 6379
## Discussion ## Discussion
Connections made to local port 7000 are forwarded to port 6379 of the Pod that Connections made to local port 28015 are forwarded to port 27017 of the Pod that
is running the Redis server. With this connection in place, you can use your is running the MongoDB server. With this connection in place, you can use your
local workstation to debug the database that is running in the Pod. local workstation to debug the database that is running in the Pod.
{{< note >}} {{< note >}}
@ -31,7 +31,7 @@ You have several options for connecting to nodes, pods and services from outside
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
the cluster. See the [services](/docs/concepts/services-networking/service/) and the cluster. See the [services](/docs/concepts/services-networking/service/) and
[kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation. [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation.
- Depending on your cluster environment, this may just expose the service to your corporate network, - Depending on your cluster environment, this may only expose the service to your corporate network,
or it may expose it to the internet. Think about whether the service being exposed is secure. or it may expose it to the internet. Think about whether the service being exposed is secure.
Does it do its own authentication? Does it do its own authentication?
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
@ -70,7 +70,7 @@ for details about addon manager and how to disable individual addons.
1. Mark a StorageClass as default: 1. Mark a StorageClass as default:
Similarly to the previous step, you need to add/set the annotation Similar to the previous step, you need to add/set the annotation
`storageclass.kubernetes.io/is-default-class=true`. `storageclass.kubernetes.io/is-default-class=true`.
```bash ```bash
@ -125,7 +125,16 @@ the URL schema.
Similarly, to configure etcd with secure client communication, specify flags Similarly, to configure etcd with secure client communication, specify flags
`--key-file=k8sclient.key` and `--cert-file=k8sclient.cert`, and use HTTPS as `--key-file=k8sclient.key` and `--cert-file=k8sclient.cert`, and use HTTPS as
the URL schema. the URL schema. Here is an example of a client command that uses secure
communication:
```
ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
member list
```
### Limiting access of etcd clusters ### Limiting access of etcd clusters
@ -269,6 +278,24 @@ If etcd is running on a storage volume that supports backup, such as Amazon
Elastic Block Store, back up etcd data by taking a snapshot of the storage Elastic Block Store, back up etcd data by taking a snapshot of the storage
volume. volume.
### Snapshot using etcdctl options
We can also take a snapshot using the various options offered by etcdctl. For example:
```shell
ETCDCTL_API=3 etcdctl -h
```
will list the options available from etcdctl. You can take a snapshot by specifying
the endpoint, certificates, and key, as shown below:
```shell
ETCDCTL_API=3 etcdctl --endpoints=[127.0.0.1:2379] \
--cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
snapshot save <backup-file-location>
```
where `trusted-ca-file`, `cert-file` and `key-file` can be obtained from the description of the etcd Pod.
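Optionally, verify the snapshot you saved; this sketch reuses the same placeholder file location:

```shell
# Print keyspace statistics (hash, revision, total keys, size) for the snapshot
ETCDCTL_API=3 etcdctl --write-out=table snapshot status <backup-file-location>
```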
## Scaling up etcd clusters ## Scaling up etcd clusters
Scaling up etcd clusters increases availability by trading off performance. Scaling up etcd clusters increases availability by trading off performance.
@ -293,6 +320,12 @@ employed to recover the data of a failed cluster.
Before starting the restore operation, a snapshot file must be present. It can Before starting the restore operation, a snapshot file must be present. It can
either be a snapshot file from a previous backup operation, or from a remaining either be a snapshot file from a previous backup operation, or from a remaining
[data directory]( https://etcd.io/docs/current/op-guide/configuration/#--data-dir). [data directory]( https://etcd.io/docs/current/op-guide/configuration/#--data-dir).
Here is an example:
```shell
ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore snapshotdb
```
For more information and examples on restoring a cluster from a snapshot file, see For more information and examples on restoring a cluster from a snapshot file, see
[etcd disaster recovery documentation](https://etcd.io/docs/current/op-guide/recovery/#restoring-a-cluster). [etcd disaster recovery documentation](https://etcd.io/docs/current/op-guide/recovery/#restoring-a-cluster).
@ -324,4 +357,3 @@ We also recommend restarting any components (e.g. `kube-scheduler`,
stale data. Note that in practice, the restore takes a bit of time. During the stale data. Note that in practice, the restore takes a bit of time. During the
restoration, critical components will lose leader lock and restart themselves. restoration, critical components will lose leader lock and restart themselves.
{{< /note >}} {{< /note >}}
@ -54,7 +54,7 @@ Host: k8s-master:8080
``` ```
Note that Kubernetes does not need to know what a dongle is or what a dongle is for. Note that Kubernetes does not need to know what a dongle is or what a dongle is for.
The preceding PATCH request just tells Kubernetes that your Node has four things that The preceding PATCH request tells Kubernetes that your Node has four things that
you call dongles. you call dongles.
Start a proxy, so that you can easily send requests to the Kubernetes API server: Start a proxy, so that you can easily send requests to the Kubernetes API server:
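A sketch of starting that proxy locally (the port number is arbitrary):

```shell
# Proxy the Kubernetes API to localhost in the background
kubectl proxy --port=8001 &
```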
@ -9,24 +9,17 @@ content_type: concept
<!-- overview --> <!-- overview -->
In addition to Kubernetes core components like api-server, scheduler, controller-manager running on a master machine Kubernetes core components such as the API server, scheduler, and controller-manager run on a control plane node. However, add-ons must run on a regular cluster node.
there are a number of add-ons which, for various reasons, must run on a regular cluster node (rather than the Kubernetes master).
Some of these add-ons are critical to a fully functional cluster, such as metrics-server, DNS, and UI. Some of these add-ons are critical to a fully functional cluster, such as metrics-server, DNS, and UI.
A cluster may stop working properly if a critical add-on is evicted (either manually or as a side effect of another operation like upgrade) A cluster may stop working properly if a critical add-on is evicted (either manually or as a side effect of another operation like upgrade)
and becomes pending (for example when the cluster is highly utilized and either there are other pending pods that schedule into the space and becomes pending (for example when the cluster is highly utilized and either there are other pending pods that schedule into the space
vacated by the evicted critical add-on pod or the amount of resources available on the node changed for some other reason). vacated by the evicted critical add-on pod or the amount of resources available on the node changed for some other reason).
Note that marking a pod as critical is not meant to prevent evictions entirely; it only prevents the pod from becoming permanently unavailable. Note that marking a pod as critical is not meant to prevent evictions entirely; it only prevents the pod from becoming permanently unavailable.
For static pods, this means it can't be evicted, but for non-static pods, it just means they will always be rescheduled. A static pod marked as critical can't be evicted. However, non-static pods marked as critical are always rescheduled.
<!-- body --> <!-- body -->
### Marking pod as critical ### Marking pod as critical
To mark a Pod as critical, set priorityClassName for that Pod to `system-cluster-critical` or `system-node-critical`. `system-node-critical` is the highest available priority, even higher than `system-cluster-critical`. To mark a Pod as critical, set priorityClassName for that Pod to `system-cluster-critical` or `system-node-critical`. `system-node-critical` is the highest available priority, even higher than `system-cluster-critical`.
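A minimal sketch of a Pod marked this way; the Pod name and image are illustrative:

```shell
# Create a Pod that uses the highest built-in priority class
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon-example
  namespace: kube-system
spec:
  priorityClassName: system-node-critical
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF
```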
@ -35,7 +35,7 @@ and kubeadm will use this CA for signing the rest of the certificates.
## External CA mode {#external-ca-mode} ## External CA mode {#external-ca-mode}
It is also possible to provide just the `ca.crt` file and not the It is also possible to provide only the `ca.crt` file and not the
`ca.key` file (this is only available for the root CA file, not other cert pairs). `ca.key` file (this is only available for the root CA file, not other cert pairs).
If all other certificates and kubeconfig files are in place, kubeadm recognizes If all other certificates and kubeconfig files are in place, kubeadm recognizes
this condition and activates the "External CA" mode. kubeadm will proceed without the this condition and activates the "External CA" mode. kubeadm will proceed without the
@ -170,7 +170,7 @@ controllerManager:
### Create certificate signing requests (CSR) ### Create certificate signing requests (CSR)
See [Create CertificateSigningRequest](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) for creating CSRs with the Kubernetes API. See [Create CertificateSigningRequest](/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) for creating CSRs with the Kubernetes API.
## Renew certificates with external CA ## Renew certificates with external CA
@ -37,7 +37,7 @@ The upgrade workflow at high level is the following:
### Additional information ### Additional information
- [Draining nodes](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) before kubelet MINOR version - [Draining nodes](/docs/tasks/administer-cluster/safely-drain-node/) before kubelet MINOR version
upgrades is required. In the case of control plane nodes, they could be running CoreDNS Pods or other critical workloads. upgrades is required. In the case of control plane nodes, they could be running CoreDNS Pods or other critical workloads.
- All containers are restarted after upgrade, because the container spec hash value is changed. - All containers are restarted after upgrade, because the container spec hash value is changed.
@ -50,7 +50,7 @@ and scheduling of Pods; on each node, the {{< glossary_tooltip text="kubelet" te
uses the container runtime interface as an abstraction so that you can use any compatible uses the container runtime interface as an abstraction so that you can use any compatible
container runtime. container runtime.
In its earliest releases, Kubernetes offered compatibility with just one container runtime: Docker. In its earliest releases, Kubernetes offered compatibility with one container runtime: Docker.
Later in the Kubernetes project's history, cluster operators wanted to adopt additional container runtimes. Later in the Kubernetes project's history, cluster operators wanted to adopt additional container runtimes.
The CRI was designed to allow this kind of flexibility - and the kubelet began supporting CRI. However, The CRI was designed to allow this kind of flexibility - and the kubelet began supporting CRI. However,
because Docker existed before the CRI specification was invented, the Kubernetes project created an because Docker existed before the CRI specification was invented, the Kubernetes project created an
@ -75,7 +75,7 @@ or execute something inside container using `docker exec`.
If you're running workloads via Kubernetes, the best way to stop a container is through If you're running workloads via Kubernetes, the best way to stop a container is through
the Kubernetes API rather than directly through the container runtime (this advice applies the Kubernetes API rather than directly through the container runtime (this advice applies
for all container runtimes, not just Docker). for all container runtimes, not only Docker).
{{< /note >}} {{< /note >}}
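For example, instead of `docker stop` on the node, a sketch of the API-driven equivalent (the Pod name is an assumption):

```shell
# Remove the Pod (and therefore its containers) through the Kubernetes API
kubectl delete pod my-pod
```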
@ -232,7 +232,7 @@ Apply the manifest to create a Deployment
```shell ```shell
kubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml kubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml
``` ```
We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. We have created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that serves the hostname.
```shell ```shell
kubectl get deployment kubectl get deployment
@ -196,7 +196,7 @@ This delete is asynchronous, so for a time you will see the namespace in the `Te
```shell ```shell
kubectl create deployment snowflake --image=k8s.gcr.io/serve_hostname -n=development --replicas=2 kubectl create deployment snowflake --image=k8s.gcr.io/serve_hostname -n=development --replicas=2
``` ```
We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. We have created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that serves the hostname.
```shell ```shell
kubectl get deployment -n=development kubectl get deployment -n=development
@ -302,7 +302,7 @@ Use cases include:
When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/). When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container just uses `<service-name>` it will resolve to the service which that if a container uses `<service-name>` it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN). across namespaces, you need to use the fully qualified domain name (FQDN).
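For instance, from inside a Pod running in the `development` namespace (the Service name `my-service` is an assumption):

```shell
# The short name resolves to the Service in the Pod's own namespace...
nslookup my-service
# ...while the FQDN reaches that Service from any namespace
nslookup my-service.development.svc.cluster.local
```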
@ -20,7 +20,7 @@ Decide whether you want to deploy a [cloud](#creating-a-calico-cluster-with-goog
**Prerequisite**: [gcloud](https://cloud.google.com/sdk/docs/quickstarts). **Prerequisite**: [gcloud](https://cloud.google.com/sdk/docs/quickstarts).
1. To launch a GKE cluster with Calico, just include the `--enable-network-policy` flag. 1. To launch a GKE cluster with Calico, include the `--enable-network-policy` flag.
**Syntax** **Syntax**
```shell ```shell
@ -128,8 +128,8 @@ curl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.ex
The API can respond in one of three ways: The API can respond in one of three ways:
- If the eviction is granted, then the Pod is deleted just as if you had sent - If the eviction is granted, then the Pod is deleted as if you sent
a `DELETE` request to the Pod's URL and you get back `200 OK`. a `DELETE` request to the Pod's URL and received back `200 OK`.
- If the current state of affairs wouldn't allow an eviction by the rules set - If the current state of affairs wouldn't allow an eviction by the rules set
forth in the budget, you get back `429 Too Many Requests`. This is forth in the budget, you get back `429 Too Many Requests`. This is
typically used for generic rate limiting of *any* requests, but here we mean typically used for generic rate limiting of *any* requests, but here we mean
@ -184,7 +184,7 @@ Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`.
## Clean Up ## Clean Up
To delete the Secret you have just created: To delete the Secret you have created:
```shell ```shell
kubectl delete secret mysecret kubectl delete secret mysecret
@ -115,8 +115,7 @@ accidentally to an onlooker, or from being stored in a terminal log.
## Decoding the Secret {#decoding-secret} ## Decoding the Secret {#decoding-secret}
To view the contents of the Secret we just created, you can run the following To view the contents of the Secret you created, run the following command:
command:
```shell ```shell
kubectl get secret db-user-pass -o jsonpath='{.data}' kubectl get secret db-user-pass -o jsonpath='{.data}'
@ -125,10 +124,10 @@ kubectl get secret db-user-pass -o jsonpath='{.data}'
The output is similar to: The output is similar to:
```json ```json
{"password.txt":"MWYyZDFlMmU2N2Rm","username.txt":"YWRtaW4="} {"password":"MWYyZDFlMmU2N2Rm","username":"YWRtaW4="}
``` ```
Now you can decode the `password.txt` data: Now you can decode the `password` data:
```shell ```shell
echo 'MWYyZDFlMmU2N2Rm' | base64 --decode echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
@ -142,7 +141,7 @@ The output is similar to:
## Clean Up ## Clean Up
To delete the Secret you have just created: To delete the Secret you have created:
```shell ```shell
kubectl delete secret db-user-pass kubectl delete secret db-user-pass
@ -113,7 +113,7 @@ To check the actual content of the encoded data, please refer to
## Clean Up ## Clean Up
To delete the Secret you have just created: To delete the Secret you have created:
```shell ```shell
kubectl delete secret db-user-pass-96mffmfh4k kubectl delete secret db-user-pass-96mffmfh4k
@ -112,7 +112,7 @@ kubectl top pod cpu-demo --namespace=cpu-example
``` ```
This example output shows that the Pod is using 974 milliCPU, which is This example output shows that the Pod is using 974 milliCPU, which is
just a bit less than the limit of 1 CPU specified in the Pod configuration. slightly less than the limit of 1 CPU specified in the Pod configuration.
``` ```
NAME CPU(cores) MEMORY(bytes) NAME CPU(cores) MEMORY(bytes)
@ -204,7 +204,7 @@ seconds.
In addition to the readiness probe, this configuration includes a liveness probe. In addition to the readiness probe, this configuration includes a liveness probe.
The kubelet will run the first liveness probe 15 seconds after the container The kubelet will run the first liveness probe 15 seconds after the container
starts. Just like the readiness probe, this will attempt to connect to the starts. Similar to the readiness probe, this will attempt to connect to the
`goproxy` container on port 8080. If the liveness probe fails, the container `goproxy` container on port 8080. If the liveness probe fails, the container
will be restarted. will be restarted.
@ -118,7 +118,7 @@ those secrets might also be visible to other users on your PC during the time th
## Inspecting the Secret `regcred` ## Inspecting the Secret `regcred`
To understand the contents of the `regcred` Secret you just created, start by viewing the Secret in YAML format: To understand the contents of the `regcred` Secret you created, start by viewing the Secret in YAML format:
```shell ```shell
kubectl get secret regcred --output=yaml kubectl get secret regcred --output=yaml
@ -67,7 +67,7 @@ sudo yum -y install kompose
{{% /tab %}} {{% /tab %}}
{{% tab name="Fedora package" %}} {{% tab name="Fedora package" %}}
Kompose is in Fedora 24, 25 and 26 repositories. You can install it just like any other package. Kompose is in Fedora 24, 25 and 26 repositories. You can install it like any other package.
```bash ```bash
sudo dnf -y install kompose sudo dnf -y install kompose
@ -87,7 +87,7 @@ brew install kompose
## Use Kompose ## Use Kompose
In just a few steps, we'll take you from Docker Compose to Kubernetes. All In a few steps, we'll take you from Docker Compose to Kubernetes. All
you need is an existing `docker-compose.yml` file. you need is an existing `docker-compose.yml` file.
1. Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one. 1. Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one.
@ -177,7 +177,7 @@ kubectl describe pod nginx-deployment-1370807587-fz9sd
Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `FailedScheduling` (and possibly others). The message tells us that there were not enough resources for the Pod on any of the nodes. Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `FailedScheduling` (and possibly others). The message tells us that there were not enough resources for the Pod on any of the nodes.
To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.) To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas. (Or you could leave the one Pod pending, which is harmless.)
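A sketch of that scale-down, using the Deployment from this example and an illustrative replica count:

```shell
# Reduce the Deployment to a number of replicas the cluster can schedule
kubectl scale deployment nginx-deployment --replicas=4
```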
Events such as the ones you saw at the end of `kubectl describe pod` are persisted in etcd and provide high-level information on what is happening in the cluster. To list all events you can use Events such as the ones you saw at the end of `kubectl describe pod` are persisted in etcd and provide high-level information on what is happening in the cluster. To list all events you can use
@ -57,7 +57,7 @@ case you can try several things:
will never be scheduled. will never be scheduled.
You can check node capacities with the `kubectl get nodes -o <format>` You can check node capacities with the `kubectl get nodes -o <format>`
command. Here are some example command lines that extract just the necessary command. Here are some example command lines that extract the necessary
information: information:
```shell ```shell
@ -178,7 +178,7 @@ kubectl expose deployment hostnames --port=80 --target-port=9376
service/hostnames exposed service/hostnames exposed
``` ```
And read it back, just to be sure: And read it back:
```shell ```shell
kubectl get svc hostnames kubectl get svc hostnames
@ -427,8 +427,7 @@ hostnames-632524106-ly40y 1/1 Running 0 1h
hostnames-632524106-tlaok 1/1 Running 0 1h hostnames-632524106-tlaok 1/1 Running 0 1h
``` ```
The `-l app=hostnames` argument is a label selector - just like our Service The `-l app=hostnames` argument is a label selector configured on the Service.
has.
The "AGE" column says that these Pods are about an hour old, which implies that The "AGE" column says that these Pods are about an hour old, which implies that
they are running fine and not crashing. they are running fine and not crashing.
@ -607,7 +606,7 @@ iptables-save | grep hostnames
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577 -A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
``` ```
There should be 2 rules for each port of your Service (just one in this There should be 2 rules for each port of your Service (only one in this
example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST".
Almost nobody should be using the "userspace" mode any more, so you won't spend Almost nobody should be using the "userspace" mode any more, so you won't spend
@ -294,9 +294,9 @@ a running cluster in the [Deploying section](#deploying).
### Changing `DaemonSet` parameters ### Changing `DaemonSet` parameters
When you have the Stackdriver Logging `DaemonSet` in your cluster, you can just modify the When you have the Stackdriver Logging `DaemonSet` in your cluster, you can modify the
`template` field in its spec, daemonset controller will update the pods for you. For example, `template` field in its spec. The DaemonSet controller manages the pods for you.
let's assume you've just installed the Stackdriver Logging as described above. Now you want to For example, assume you've installed the Stackdriver Logging as described above. Now you want to
change the memory limit to give fluentd more memory to safely process more logs. change the memory limit to give fluentd more memory to safely process more logs.
Get the spec of `DaemonSet` running in your cluster: Get the spec of `DaemonSet` running in your cluster:
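A sketch of that step; the DaemonSet name and namespace of the Stackdriver Logging agent may differ in your cluster:

```shell
# Save the current DaemonSet spec to a file so its template can be edited
kubectl get daemonset fluentd-gcp-v3.2.0 --namespace kube-system -o yaml > fluentd-gcp-ds.yaml
```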
@ -12,7 +12,7 @@ weight: 20
Kubernetes ships with a default scheduler that is described Kubernetes ships with a default scheduler that is described
[here](/docs/reference/command-line-tools-reference/kube-scheduler/). [here](/docs/reference/command-line-tools-reference/kube-scheduler/).
If the default scheduler does not suit your needs you can implement your own scheduler. If the default scheduler does not suit your needs you can implement your own scheduler.
Not just that, you can even run multiple schedulers simultaneously alongside the default Moreover, you can even run multiple schedulers simultaneously alongside the default
scheduler and instruct Kubernetes what scheduler to use for each of your pods. Let's scheduler and instruct Kubernetes what scheduler to use for each of your pods. Let's
learn how to run multiple schedulers in Kubernetes with an example. learn how to run multiple schedulers in Kubernetes with an example.
@ -30,7 +30,7 @@ in the Kubernetes source directory for a canonical example.
## Package the scheduler ## Package the scheduler
Package your scheduler binary into a container image. For the purposes of this example, Package your scheduler binary into a container image. For the purposes of this example,
let's just use the default scheduler (kube-scheduler) as our second scheduler as well. you can use the default scheduler (kube-scheduler) as your second scheduler.
Clone the [Kubernetes source code from GitHub](https://github.com/kubernetes/kubernetes) Clone the [Kubernetes source code from GitHub](https://github.com/kubernetes/kubernetes)
and build the source. and build the source.
@ -61,9 +61,9 @@ gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0
## Define a Kubernetes Deployment for the scheduler ## Define a Kubernetes Deployment for the scheduler
Now that we have our scheduler in a container image, we can just create a pod Now that you have your scheduler in a container image, create a pod
config for it and run it in our Kubernetes cluster. But instead of creating a pod configuration for it and run it in your Kubernetes cluster. But instead of creating a pod
directly in the cluster, let's use a [Deployment](/docs/concepts/workloads/controllers/deployment/) directly in the cluster, you can use a [Deployment](/docs/concepts/workloads/controllers/deployment/)
for this example. A [Deployment](/docs/concepts/workloads/controllers/deployment/) manages a for this example. A [Deployment](/docs/concepts/workloads/controllers/deployment/) manages a
[Replica Set](/docs/concepts/workloads/controllers/replicaset/) which in turn manages the pods, [Replica Set](/docs/concepts/workloads/controllers/replicaset/) which in turn manages the pods,
thereby making the scheduler resilient to failures. Here is the deployment thereby making the scheduler resilient to failures. Here is the deployment
@ -83,7 +83,7 @@ detailed description of other command line arguments.
## Run the second scheduler in the cluster ## Run the second scheduler in the cluster
In order to run your scheduler in a Kubernetes cluster, just create the deployment In order to run your scheduler in a Kubernetes cluster, create the deployment
specified in the config above in a Kubernetes cluster: specified in the config above in a Kubernetes cluster:
```shell ```shell
@ -132,9 +132,9 @@ kubectl edit clusterrole system:kube-scheduler
## Specify schedulers for pods ## Specify schedulers for pods
Now that our second scheduler is running, let's create some pods, and direct them Now that your second scheduler is running, create some pods, and direct them
to be scheduled by either the default scheduler or the one we just deployed. to be scheduled by either the default scheduler or the one you deployed.
In order to schedule a given pod using a specific scheduler, we specify the name of the In order to schedule a given pod using a specific scheduler, specify the name of the
scheduler in that pod spec. Let's look at three examples. scheduler in that pod spec. Let's look at three examples.
- Pod spec without any scheduler name - Pod spec without any scheduler name
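As a hedged sketch of the variant that does name a specific scheduler (the scheduler name `my-scheduler`, the Pod name, and the pause image are illustrative), the Pod sets `spec.schedulerName`:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-second-scheduler   # illustrative name
spec:
  schedulerName: my-scheduler       # must match the name your second scheduler runs with
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```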
@ -196,7 +196,7 @@ while the other two pods get scheduled. Once we submit the scheduler deployment
and our new scheduler starts running, the `annotation-second-scheduler` pod gets and our new scheduler starts running, the `annotation-second-scheduler` pod gets
scheduled as well. scheduled as well.
Alternatively, one could just look at the "Scheduled" entries in the event logs to Alternatively, you can look at the "Scheduled" entries in the event logs to
verify that the pods were scheduled by the desired schedulers. verify that the pods were scheduled by the desired schedulers.
```shell ```shell

View File

@ -404,7 +404,7 @@ how to [authenticate API servers](/docs/reference/access-authn-authz/extensible-
A conversion webhook must not mutate anything inside of `metadata` of the converted object A conversion webhook must not mutate anything inside of `metadata` of the converted object
other than `labels` and `annotations`. other than `labels` and `annotations`.
Attempted changes to `name`, `UID` and `namespace` are rejected and fail the request Attempted changes to `name`, `UID` and `namespace` are rejected and fail the request
which caused the conversion. All other changes are just ignored. which caused the conversion. All other changes are ignored.
### Deploy the conversion webhook service ### Deploy the conversion webhook service

View File

@ -520,7 +520,7 @@ CustomResourceDefinition and migrating your objects from one version to another.
### Finalizers ### Finalizers
*Finalizers* allow controllers to implement asynchronous pre-delete hooks. *Finalizers* allow controllers to implement asynchronous pre-delete hooks.
Custom objects support finalizers just like built-in objects. Custom objects support finalizers similar to built-in objects.
You can add a finalizer to a custom object like this: You can add a finalizer to a custom object like this:
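The YAML that follows on the full page is not visible in this hunk; a minimal sketch, assuming a `CronTab` custom resource and an illustrative finalizer name, looks like:
```yaml
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  finalizers:
  - stable.example.com/finalizer   # illustrative finalizer name
```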

View File

@ -41,7 +41,7 @@ Alternatively, you can use an existing 3rd party solution, such as [apiserver-bu
1. Make sure that your extension-apiserver loads those certs from that volume and that they are used in the HTTPS handshake. 1. Make sure that your extension-apiserver loads those certs from that volume and that they are used in the HTTPS handshake.
1. Create a Kubernetes service account in your namespace. 1. Create a Kubernetes service account in your namespace.
1. Create a Kubernetes cluster role for the operations you want to allow on your resources. 1. Create a Kubernetes cluster role for the operations you want to allow on your resources.
1. Create a Kubernetes cluster role binding from the service account in your namespace to the cluster role you just created. 1. Create a Kubernetes cluster role binding from the service account in your namespace to the cluster role you created.
1. Create a Kubernetes cluster role binding from the service account in your namespace to the `system:auth-delegator` cluster role to delegate auth decisions to the Kubernetes core API server. 1. Create a Kubernetes cluster role binding from the service account in your namespace to the `system:auth-delegator` cluster role to delegate auth decisions to the Kubernetes core API server.
1. Create a Kubernetes role binding from the service account in your namespace to the `extension-apiserver-authentication-reader` role. This allows your extension api-server to access the `extension-apiserver-authentication` configmap. 1. Create a Kubernetes role binding from the service account in your namespace to the `extension-apiserver-authentication-reader` role. This allows your extension api-server to access the `extension-apiserver-authentication` configmap.
1. Create a Kubernetes apiservice. The CA cert above should be base64 encoded, stripped of new lines and used as the spec.caBundle in the apiservice. This should not be namespaced. If using the [kube-aggregator API](https://github.com/kubernetes/kube-aggregator/), only pass in the PEM encoded CA bundle because the base 64 encoding is done for you. 1. Create a Kubernetes apiservice. The CA cert above should be base64 encoded, stripped of new lines and used as the spec.caBundle in the apiservice. This should not be namespaced. If using the [kube-aggregator API](https://github.com/kubernetes/kube-aggregator/), only pass in the PEM encoded CA bundle because the base 64 encoding is done for you.

View File

@ -19,7 +19,7 @@ Here is an overview of the steps in this example:
1. **Start a message queue service.** In this example, we use RabbitMQ, but you could use another 1. **Start a message queue service.** In this example, we use RabbitMQ, but you could use another
one. In practice you would set up a message queue service once and reuse it for many jobs. one. In practice you would set up a message queue service once and reuse it for many jobs.
1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In 1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In
this example, a message is just an integer that we will do a lengthy computation on. this example, a message is an integer that we will do a lengthy computation on.
1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes
one task from the message queue, processes it, and repeats until the end of the queue is reached. one task from the message queue, processes it, and repeats until the end of the queue is reached.
@ -141,13 +141,12 @@ root@temp-loe07:/#
``` ```
In the last command, the `amqp-consume` tool takes one message (`-c 1`) In the last command, the `amqp-consume` tool takes one message (`-c 1`)
from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program `cat` is just printing from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program `cat` prints out the characters read from standard input, and the echo adds a carriage
out what it gets on the standard input, and the echo is just to add a carriage
return so the example is readable. return so the example is readable.
## Filling the Queue with tasks ## Filling the Queue with tasks
Now let's fill the queue with some "tasks". In our example, our tasks are just strings to be Now let's fill the queue with some "tasks". In our example, our tasks are strings to be
printed. printed.
In practice, the content of the messages might be: In practice, the content of the messages might be:

View File

@ -21,7 +21,7 @@ Here is an overview of the steps in this example:
detect when a finite-length work queue is empty. In practice you would set up a store such detect when a finite-length work queue is empty. In practice you would set up a store such
as Redis once and reuse it for the work queues of many jobs, and other things. as Redis once and reuse it for the work queues of many jobs, and other things.
1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In 1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In
this example, a message is just an integer that we will do a lengthy computation on. this example, a message is an integer that we will do a lengthy computation on.
1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes
one task from the message queue, processes it, and repeats until the end of the queue is reached. one task from the message queue, processes it, and repeats until the end of the queue is reached.
@ -55,7 +55,7 @@ You could also download the following files directly:
## Filling the Queue with tasks ## Filling the Queue with tasks
Now let's fill the queue with some "tasks". In our example, our tasks are just strings to be Now let's fill the queue with some "tasks". In our example, our tasks are strings to be
printed. printed.
Start a temporary interactive pod for running the Redis CLI. Start a temporary interactive pod for running the Redis CLI.
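The command itself is cut off by this hunk; a sketch, assuming the stock `redis` image is acceptable for a throwaway interactive pod, would be:
```shell
# Starts a throwaway interactive pod with the Redis CLI available.
kubectl run -i --tty temp --image redis --command "/bin/sh"
```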

View File

@ -25,7 +25,7 @@ You should already know how to [perform a rolling update on a
### Step 1: Find the DaemonSet revision you want to roll back to ### Step 1: Find the DaemonSet revision you want to roll back to
You can skip this step if you just want to roll back to the last revision. You can skip this step if you only want to roll back to the last revision.
List all revisions of a DaemonSet: List all revisions of a DaemonSet:
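The listing command is cut off by this hunk; a sketch using a placeholder DaemonSet name:
```shell
kubectl rollout history daemonset <daemonset-name>
```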

View File

@ -111,7 +111,7 @@ kubectl edit ds/fluentd-elasticsearch -n kube-system
##### Updating only the container image ##### Updating only the container image
If you just need to update the container image in the DaemonSet template, i.e. If you only need to update the container image in the DaemonSet template, i.e.
`.spec.template.spec.containers[*].image`, use `kubectl set image`: `.spec.template.spec.containers[*].image`, use `kubectl set image`:
```shell ```shell
@ -167,7 +167,7 @@ If the recent DaemonSet template update is broken, for example, the container is
crash looping, or the container image doesn't exist (often due to a typo), crash looping, or the container image doesn't exist (often due to a typo),
DaemonSet rollout won't progress. DaemonSet rollout won't progress.
To fix this, just update the DaemonSet template again. New rollout won't be To fix this, update the DaemonSet template again. New rollout won't be
blocked by previous unhealthy rollouts. blocked by previous unhealthy rollouts.
#### Clock skew #### Clock skew

View File

@ -37,7 +37,7 @@ When the above conditions are true, Kubernetes will expose `amd.com/gpu` or
`nvidia.com/gpu` as a schedulable resource. `nvidia.com/gpu` as a schedulable resource.
You can consume these GPUs from your containers by requesting You can consume these GPUs from your containers by requesting
`<vendor>.com/gpu` just like you request `cpu` or `memory`. `<vendor>.com/gpu` the same way you request `cpu` or `memory`.
However, there are some limitations in how you specify the resource requirements However, there are some limitations in how you specify the resource requirements
when using GPUs: when using GPUs:
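As a hedged sketch of the basic request (the vendor resource name, image, and count are illustrative; the limitations listed on the full page still apply, notably that GPUs are specified in `limits`):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example              # illustrative name
spec:
  containers:
  - name: cuda-container
    image: k8s.gcr.io/cuda-vector-add:v0.1
    resources:
      limits:
        nvidia.com/gpu: 1        # request one GPU via limits
```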

View File

@ -43,8 +43,8 @@ You may need to delete the associated headless service separately after the Stat
kubectl delete service <service-name> kubectl delete service <service-name>
``` ```
Deleting a StatefulSet through kubectl will scale it down to 0, thereby deleting all pods that are a part of it. When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=false`.
If you want to delete just the StatefulSet and not the pods, use `--cascade=false`. For example:
```shell ```shell
kubectl delete -f <file.yaml> --cascade=false kubectl delete -f <file.yaml> --cascade=false

View File

@ -44,7 +44,7 @@ for StatefulSet Pods. Graceful deletion is safe and will ensure that the Pod
[shuts down gracefully](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) [shuts down gracefully](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
before the kubelet deletes the name from the apiserver. before the kubelet deletes the name from the apiserver.
Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. A Pod is not deleted automatically when a node is unreachable.
The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a
[timeout](/docs/concepts/architecture/nodes/#condition). [timeout](/docs/concepts/architecture/nodes/#condition).
Pods may also enter these states when the user attempts graceful deletion of a Pod Pods may also enter these states when the user attempts graceful deletion of a Pod

View File

@ -382,7 +382,7 @@ with *external metrics*.
Using external metrics requires knowledge of your monitoring system; the setup is Using external metrics requires knowledge of your monitoring system; the setup is
similar to that required when using custom metrics. External metrics allow you to autoscale your cluster similar to that required when using custom metrics. External metrics allow you to autoscale your cluster
based on any metric available in your monitoring system. Just provide a `metric` block with a based on any metric available in your monitoring system. Provide a `metric` block with a
`name` and `selector`, as above, and use the `External` metric type instead of `Object`. `name` and `selector`, as above, and use the `External` metric type instead of `Object`.
If multiple time series are matched by the `metricSelector`, If multiple time series are matched by the `metricSelector`,
the sum of their values is used by the HorizontalPodAutoscaler. the sum of their values is used by the HorizontalPodAutoscaler.
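A hedged sketch of such a block, placed under `spec.metrics` of an `autoscaling/v2beta2` HorizontalPodAutoscaler (the metric name, selector, and target value are assumptions):
```yaml
  - type: External
    external:
      metric:
        name: queue_messages_ready          # illustrative external metric
        selector:
          matchLabels:
            queue: "worker_tasks"           # illustrative selector
      target:
        type: AverageValue
        averageValue: "30"
```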

View File

@ -23,9 +23,7 @@ Pod Autoscaling does not apply to objects that can't be scaled, for example, Dae
The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller.
The resource determines the behavior of the controller. The resource determines the behavior of the controller.
The controller periodically adjusts the number of replicas in a replication controller or deployment The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed metrics such as average CPU utilisation, average memory utilisation or any other custom metric to the target specified by the user.
to match the observed average CPU utilization to the target specified by user.
@ -162,7 +160,7 @@ can be fetched, scaling is skipped. This means that the HPA is still capable
of scaling up if one or more metrics give a `desiredReplicas` greater than of scaling up if one or more metrics give a `desiredReplicas` greater than
the current value. the current value.
Finally, just before HPA scales the target, the scale recommendation is recorded. The Finally, right before HPA scales the target, the scale recommendation is recorded. The
controller considers all recommendations within a configurable window choosing the controller considers all recommendations within a configurable window choosing the
highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes. highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes.
This means that scaledowns will occur gradually, smoothing out the impact of rapidly This means that scaledowns will occur gradually, smoothing out the impact of rapidly

View File

@ -39,6 +39,7 @@ on general patterns for running stateful applications in Kubernetes.
[ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/). [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/).
* Some familiarity with MySQL helps, but this tutorial aims to present * Some familiarity with MySQL helps, but this tutorial aims to present
general patterns that should be useful for other systems. general patterns that should be useful for other systems.
* You are using the default namespace or another namespace that does not contain any conflicting objects.
@ -534,10 +535,9 @@ kubectl delete pvc data-mysql-4
* Learn more about [debugging a StatefulSet](/docs/tasks/debug-application-cluster/debug-stateful-set/). * Learn more about [debugging a StatefulSet](/docs/tasks/debug-application-cluster/debug-stateful-set/).
* Learn more about [deleting a StatefulSet](/docs/tasks/run-application/delete-stateful-set/). * Learn more about [deleting a StatefulSet](/docs/tasks/run-application/delete-stateful-set/).
* Learn more about [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/). * Learn more about [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/).
* Look in the [Helm Charts repository](https://github.com/kubernetes/charts) * Look in the [Helm Charts repository](https://artifacthub.io/)
for other stateful application examples. for other stateful application examples.

View File

@ -12,10 +12,7 @@ You can use the GCP [Service Catalog Installer](https://github.com/GoogleCloudPl
tool to easily install or uninstall Service Catalog on your Kubernetes cluster, linking it to tool to easily install or uninstall Service Catalog on your Kubernetes cluster, linking it to
Google Cloud projects. Google Cloud projects.
Service Catalog itself can work with any kind of managed service, not just Google Cloud. Service Catalog can work with any kind of managed service, not only Google Cloud.
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}

View File

@ -17,9 +17,9 @@ and view logs. For more information including a complete list of kubectl operati
kubectl is installable on a variety of Linux platforms, macOS and Windows. kubectl is installable on a variety of Linux platforms, macOS and Windows.
Find your preferred operating system below. Find your preferred operating system below.
- [Install kubectl on Linux](install-kubectl-linux) - [Install kubectl on Linux](/docs/tasks/tools/install-kubectl-linux)
- [Install kubectl on macOS](install-kubectl-macos) - [Install kubectl on macOS](/docs/tasks/tools/install-kubectl-macos)
- [Install kubectl on Windows](install-kubectl-windows) - [Install kubectl on Windows](/docs/tasks/tools/install-kubectl-windows)
## kind ## kind

View File

@ -113,7 +113,7 @@ mind:
two consecutive lists. **The HTML comment needs to be at the left margin.** two consecutive lists. **The HTML comment needs to be at the left margin.**
2. Numbered lists can have paragraphs or block elements within them. 2. Numbered lists can have paragraphs or block elements within them.
Just indent the content to be the same as the first line of the bullet Indent the content to be the same as the first line of the bullet
point. **This paragraph and the code block line up with the `N` in point. **This paragraph and the code block line up with the `N` in
`Numbered` above.** `Numbered` above.**

View File

@ -184,7 +184,7 @@ profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
``` ```
Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our
nodes. For this example we'll just use SSH to install the profiles, but other approaches are nodes. For this example we'll use SSH to install the profiles, but other approaches are
discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles). discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
```shell ```shell

View File

@ -67,8 +67,8 @@ into the cluster.
For simplicity, [kind](https://kind.sigs.k8s.io/) can be used to create a single For simplicity, [kind](https://kind.sigs.k8s.io/) can be used to create a single
node cluster with the seccomp profiles loaded. Kind runs Kubernetes in Docker, node cluster with the seccomp profiles loaded. Kind runs Kubernetes in Docker,
so each node of the cluster is actually just a container. This allows for files so each node of the cluster is a container. This allows for files
to be mounted in the filesystem of each container just as one might load files to be mounted in the filesystem of each container similar to loading files
onto a node. onto a node.
{{< codenew file="pods/security/seccomp/kind.yaml" >}} {{< codenew file="pods/security/seccomp/kind.yaml" >}}

View File

@ -46,7 +46,7 @@ This tutorial provides a container image that uses NGINX to echo back all the re
{{< kat-button >}} {{< kat-button >}}
{{< note >}} {{< note >}}
If you installed minikube locally, run `minikube start`. If you installed minikube locally, run `minikube start`. Before you run `minikube dashboard`, you should open a new terminal, start `minikube dashboard` there, and then switch back to the main terminal.
{{< /note >}} {{< /note >}}
2. Open the Kubernetes dashboard in a browser: 2. Open the Kubernetes dashboard in a browser:
@ -152,7 +152,7 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/).
The application code inside the image `k8s.gcr.io/echoserver` only listens on TCP port 8080. If you used The application code inside the image `k8s.gcr.io/echoserver` only listens on TCP port 8080. If you used
`kubectl expose` to expose a different port, clients could not connect to that other port. `kubectl expose` to expose a different port, clients could not connect to that other port.
2. View the Service you just created: 2. View the Service you created:
```shell ```shell
kubectl get services kubectl get services
@ -227,7 +227,7 @@ The minikube tool includes a set of built-in {{< glossary_tooltip text="addons"
metrics-server was successfully enabled metrics-server was successfully enabled
``` ```
3. View the Pod and Service you just created: 3. View the Pod and Service you created:
```shell ```shell
kubectl get pod,svc -n kube-system kubectl get pod,svc -n kube-system

View File

@ -1083,7 +1083,7 @@ above.
`Parallel` pod management tells the StatefulSet controller to launch or `Parallel` pod management tells the StatefulSet controller to launch or
terminate all Pods in parallel, and not to wait for Pods to become Running terminate all Pods in parallel, and not to wait for Pods to become Running
and Ready or completely terminated prior to launching or terminating another and Ready or completely terminated prior to launching or terminating another
Pod. Pod. This option only affects the behavior for scaling operations. Updates are not affected.
{{< codenew file="application/web/web-parallel.yaml" >}} {{< codenew file="application/web/web-parallel.yaml" >}}

View File

@ -0,0 +1,757 @@
---
title: Administrando los recursos de los contenedores
content_type: concept
weight: 40
feature:
title: Bin packing automático
description: >
Coloca los contenedores automáticamente en base a los recursos solicitados y otras limitaciones, mientras no se afecte la
disponibilidad. Combina cargas críticas y best-effort para mejorar el uso y ahorrar recursos.
---
<!-- overview -->
Cuando especificas un {{< glossary_tooltip term_id="pod" >}}, opcionalmente puedes especificar
los recursos que necesita un {{< glossary_tooltip text="Contenedor" term_id="container" >}}.
Los recursos que normalmente se definen son CPU y memoria (RAM); pero hay otros.
Cuando especificas el recurso _request_ para Contenedores en un {{< glossary_tooltip term_id="pod" >}},
el {{< glossary_tooltip text="Scheduler de Kubernetes " term_id="kube-scheduler" >}} usa esta información para decidir en qué nodo colocar el {{< glossary_tooltip term_id="pod" >}}.
Cuando especificas el recurso _limit_ para un Contenedor, Kubelet impone estos límites, así que el contenedor no
puede utilizar más recursos que el límite que le definimos. Kubelet también reserva al menos la cantidad
especificada en _request_ para el contenedor.
<!-- body -->
## Peticiones y límites
Si el nodo donde está corriendo un pod tiene suficientes recursos disponibles, es posible
(y válido) que el {{< glossary_tooltip text="contenedor" term_id="container" >}} utilice más recursos de los especificados en `request`.
Sin embargo, un {{< glossary_tooltip text="contenedor" term_id="container" >}} no está autorizado a utilizar más de lo especificado en `limit`.
Por ejemplo, si configuras una petición de `memory` de 256 MiB para un {{< glossary_tooltip text="contenedor" term_id="container" >}}, y ese contenedor está
en un {{< glossary_tooltip term_id="pod" >}} colocado en un nodo con 8GiB de memoria y no hay otros {{< glossary_tooltip term_id="pod" >}}, entonces el contenedor puede intentar usar
más RAM.
Si configuras un límite de `memory` de 4GiB para el contenedor, el {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} (y el {{< glossary_tooltip text="motor de ejecución del contenedor" term_id="container-runtime" >}}) imponen el límite.
El Runtime evita que el {{< glossary_tooltip text="contenedor" term_id="container" >}} use más recursos de los configurados en el límite. Por ejemplo:
cuando un proceso en el {{< glossary_tooltip text="contenedor" term_id="container" >}} intenta consumir más cantidad de memoria de la permitida,
el Kernel del sistema termina el proceso que intentó la utilización de la memoria, con un error de out of memory (OOM).
Los límites se pueden implementar de forma reactiva (el sistema interviene cuando detecta la violación) o por imposición (el sistema impide que el contenedor llegue a exceder el límite). Diferentes runtimes pueden implementar las mismas restricciones de maneras distintas.
{{< note >}}
Si un contenedor especifica su propio límite de memoria, pero no especifica la petición de memoria, Kubernetes
automáticamente asigna una petición de memoria igual a la del límite. De igual manera, si un contenedor especifica su propio límite de CPU, pero no especifica una petición de CPU, Kubernetes automáticamente asigna una petición de CPU igual a la especificada en el límite.
{{< /note >}}
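Como esbozo orientativo de esa regla (el nombre del Pod, la imagen y las cantidades son supuestos), un contenedor que solo declara límites recibe peticiones iguales a esos límites:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: solo-limites              # nombre ilustrativo
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      limits:
        memory: "128Mi"           # la petición de memoria se asume igual a este límite
        cpu: "500m"               # la petición de CPU se asume igual a este límite
```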
## Tipos de recursos
*CPU* y *memoria* son cada uno un *tipo de recurso*. Un tipo de recurso tiene una unidad base.
CPU representa procesos de computación y es especificada en unidades de [Kubernetes CPUs](#meaning-of-cpu).
Memoria es especificada en unidades de bytes.
Si estás usando Kubernetes v1.14 o posterior, puedes especificar recursos _huge page_.
Las huge pages son una característica específica de Linux en la que el kernel del nodo asigna bloques de memoria más grandes que el tamaño de página por defecto.
Por ejemplo, en un sistema donde el tamaño de página por defecto es de 4KiB, podrías especificar un límite, `hugepages-2Mi: 80Mi`. Si el contenedor intenta asignar más de 40 huge pages de 2MiB (un total de 80 MiB), la asignación fallará.
{{< note >}}
No se pueden sobreasignar recursos `hugepages-*`.
A diferencia de los recursos de `memoria` y `cpu`.
{{< /note >}}
CPU y memoria son colectivamente conocidos como *recursos de computación*, o simplemente como
*recursos*. Los recursos de computación son cantidades medibles que pueden ser solicitadas, asignadas
y consumidas. Son distintas a los [Recursos API](/docs/concepts/overview/kubernetes-api/). Los recursos API , como {{< glossary_tooltip text="Pods" term_id="pod" >}} y
[Services](/docs/concepts/services-networking/service/) son objetos que pueden ser leídos y modificados
a través de la API de Kubernetes.
## Peticiones y límites de recursos de Pods y Contenedores
Cada contenedor de un Pod puede especificar uno o más de los siguientes:
* `spec.containers[].resources.limits.cpu`
* `spec.containers[].resources.limits.memory`
* `spec.containers[].resources.limits.hugepages-<size>`
* `spec.containers[].resources.requests.cpu`
* `spec.containers[].resources.requests.memory`
* `spec.containers[].resources.requests.hugepages-<size>`
Aunque las peticiones y los límites solo pueden especificarse en contenedores individuales, es conveniente hablar de las peticiones y límites de recursos del Pod. Un *límite/petición de recursos de un Pod* para un tipo de recurso particular es la suma de los límites/peticiones de ese tipo de todos los contenedores del Pod.
## Unidades de recursos en Kubernetes
### Significado de CPU
Límites y peticiones para recursos de CPU son medidos en unidades de *cpu*.
Una cpu, en Kubernetes, es equivalente a **1 vCPU/Core** para proveedores de cloud y **1 hyperthread** en procesadores bare-metal Intel.
Las peticiones fraccionadas están permitidas. Un contenedor con `spec.containers[].resources.requests.cpu` de `0.5` tiene garantizada la mitad de CPU que uno que solicita 1 CPU. La expresión `0.1` es equivalente a la expresión `100m`, que puede leerse como "cien milicpus". Algunas personas dicen "cien milicores", y se entiende que significa lo mismo. Una petición con punto decimal, como `0.1`, es convertida a `100m` por la API, y no se permite una precisión mayor que `1m`. Por esta razón, la forma `100m` es la preferida.
CPU siempre se solicita como una cantidad absoluta, nunca como una cantidad relativa; 0.1 es la misma cantidad de CPU en una máquina de un solo núcleo, de dos núcleos o de 48 núcleos.
### Significado de memoria
Los límites y peticiones de `memoria` son medidos en bytes. Puedes expresar la memoria como
un número entero o como un número decimal usando alguno de estos sufijos:
E, P, T, G, M, K. También puedes usar los equivalentes en potencia de dos: Ei, Pi, Ti, Gi,
Mi, Ki. Por ejemplo, los siguientes valores representan lo mismo:
```shell
128974848, 129e6, 129M, 123Mi
```
Aquí un ejemplo.
El siguiente {{< glossary_tooltip text="Pod" term_id="pod" >}} tiene dos contenedores. Cada contenedor tiene una petición de 0.25 cpu y 64MiB (2<sup>26</sup> bytes) de memoria. Cada contenedor tiene un límite de 0.5 cpu y 128MiB de memoria. Se puede decir que el Pod tiene una petición de 0.5 cpu y 128MiB de memoria, y un límite de 1 cpu y 256MiB de memoria.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: frontend
spec:
containers:
- name: app
image: images.my-company.example/app:v4
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
- name: log-aggregator
image: images.my-company.example/log-aggregator:v6
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
```
## Cómo son programados los Pods con solicitudes de recursos
Cuando creas un {{< glossary_tooltip text="Pod" term_id="pod" >}}, el {{< glossary_tooltip text="planificador de Kubernetes " term_id="kube-scheduler" >}} determina el nodo para correr dicho {{< glossary_tooltip text="Pod" term_id="pod" >}}.
Cada nodo tiene una capacidad máxima para cada tipo de recurso:
la cantidad de CPU y memoria de la que dispone para los Pods. El {{< glossary_tooltip text="planificador de Kubernetes" term_id="kube-scheduler" >}} se asegura de que, para cada tipo de recurso, la suma de los recursos solicitados por los contenedores programados sea menor que la capacidad del nodo. Cabe mencionar que, aunque el uso real de memoria o CPU en los nodos sea muy bajo, el {{< glossary_tooltip text="planificador" term_id="kube-scheduler" >}} rechaza programar un {{< glossary_tooltip text="Pod" term_id="pod" >}} en un nodo si la comprobación de capacidad falla. Esto protege contra la escasez de recursos en un nodo cuando el uso de recursos crece más adelante, por ejemplo, durante un pico diario de solicitudes.
## Cómo corren los Pods con límites de recursos
Cuando el {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} inicia un {{< glossary_tooltip text="contenedor" term_id="container" >}} de un {{< glossary_tooltip text="Pod" term_id="pod" >}}, este pasa los límites de CPU y
memoria al {{< glossary_tooltip text="runtime del contenedor" term_id="container-runtime" >}}.
Cuando usas Docker:
- El `spec.containers[].resources.requests.cpu` se convierte a su valor interno, que es fraccional, y se multiplica por 1024. El mayor entre este número y 2 se usa como valor de [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#cpu-share-constraint) en el comando `docker run` (ver el esbozo tras esta lista).
- El `spec.containers[].resources.limits.cpu` se convierte a su valor en milicore y
multiplicado por 100. El resultado es el tiempo total de CPU que un contenedor puede usar
cada 100ms. Un contenedor no puede usar más tiempo de CPU que del solicitado durante este intervalo.
{{< note >}}
El período por defecto es de 100ms. La resolución mínima de la cuota de CPU es de 1ms.
{{</ note >}}
- El `spec.containers[].resources.limits.memory` se convierte a entero, y
se usa como valor de
[`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints)
del comando `docker run`.
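Como esbozo orientativo de estas conversiones (los valores son supuestos):
```shell
# requests.cpu:  250m  -> 0.25 * 1024 = 256       -> docker run --cpu-shares=256 ...
# limits.cpu:    500m  -> 500 * 100   = 50000 µs  -> como máximo 50ms de CPU por cada periodo de 100ms
# limits.memory: 128Mi -> 134217728 bytes         -> docker run --memory=134217728 ...
```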
Si el {{< glossary_tooltip text="contenedor" term_id="container" >}} excede su límite de memoria, puede ser terminado. Si es reiniciable, el {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} lo reiniciará, como con cualquier otro tipo de fallo en tiempo de ejecución.
Si un Contenedor excede su petición de memoria, es probable que ese Pod sea
desalojado en cualquier momento que el nodo se quede sin memoria.
Un Contenedor puede o no tener permitido exceder el límite de CPU por
algunos períodos de tiempo. Sin embargo, esto no lo destruirá por uso excesivo de CPU.
Para saber cuándo un Contenedor no puede ser programado o será terminado debido a límites de recursos, revisa la sección de [Solución de problemas](#troubleshooting).
### Monitorización del uso de recursos de computación y memoria.
El uso de recursos de un Pod es reportado como parte del estado del Pod.
Si [herramientas opcionales para monitorización](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
están disponibles en tu cluster, entonces el uso de recursos del Pod puede extraerse directamente de
[Métricas API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api)
o desde tus herramientas de monitorización.
## Almacenamiento local efímero
<!-- feature gate LocalStorageCapacityIsolation -->
{{< feature-state for_k8s_version="v1.10" state="beta" >}}
Los nodos tienen almacenamiento local efímero, respaldado por dispositivos de escritura conectados localmente o, a veces, por RAM.
"Efímero" significa que no se garantiza la durabilidad a largo plazo.
Los Pods usan el almacenamiento local efímero como espacio de trabajo temporal, para caché y para logs.
Kubelet puede proveer espacio de trabajo temporal a los Pods usando almacenamiento local efímero para montar [`emptyDir`](/docs/concepts/storage/volumes/#emptydir) {{< glossary_tooltip term_id="volume" text="volumes" >}} en los contenedores.
Kubelet también usa este tipo de almacenamiento para guardar
[logs de contenedores a nivel de nodo](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level),
imágenes de contenedores, y la capa de escritura de los contenedores.
{{< caution >}}
Si un nodo falla, los datos en el almacenamiento efímero se pueden perder.
Tus aplicaciones no pueden esperar ningún SLA de rendimiento (IOPS de disco, por ejemplo) del almacenamiento local efímero.
{{< /caution >}}
Como característica beta, Kubernetes te permite medir, reservar y limitar la cantidad de almacenamiento local efímero que un Pod puede consumir.
### Configuraciones para almacenamiento local efímero
Kubernetes soporta 2 maneras de configurar el almacenamiento local efímero en un nodo:
{{< tabs name="local_storage_configurations" >}}
{{% tab name="Single filesystem" %}}
En esta configuración, colocas todos los tipos de datos (`emptyDir` volúmenes, capa de escritura,
imágenes de contenedores, logs) en un solo sistema de ficheros.
La manera más efectiva de configurar Kubelet es dedicando este sistema de archivos para los datos de Kubernetes (kubelet).
Kubelet también escribe
[logs de contenedores a nivel de nodo](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
y trata estos de manera similar al almacenamiento efímero.
Kubelet escribe logs en ficheros dentro del directorio de logs (`/var/log` por defecto) y tiene un directorio base para otros datos almacenados localmente (`/var/lib/kubelet` por defecto).
Por lo general, `/var/lib/kubelet` y `/var/log` están en el sistema de archivos raíz, y Kubelet está diseñado con ese objetivo en mente.
Tu nodo puede tener tantos otros sistemas de archivos, no usados por Kubernetes, como quieras.
{{% /tab %}}
{{% tab name="Two filesystems" %}}
Tienes un sistema de archivos en el nodo que estás usando para datos efímeros que
provienen de los Pods corriendo: logs, y volúmenes `emptyDir`.
Puedes usar este sistema de archivos para otros datos (por ejemplo: logs del sistema no relacionados
con Kubernetes); estos pueden ser incluso del sistema de archivos root.
Kubelet también escribe
[logs de contenedores a nivel de nodo](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
en el primer sistema de archivos, y trata estos de manera similar al almacenamiento efímero.
También usas un sistema de archivos distinto, respaldado por un dispositivo de almacenamiento lógico diferente.
En esta configuración, el directorio donde le dices a Kubelet que coloque
las capas de imágenes de los contenedores y capas de escritura es este segundo sistema de archivos.
El primer sistema de archivos no guarda ninguna capa de imágenes o de escritura.
Tu nodo puede tener tantos sistemas de archivos, no usados por Kubernetes, como quieras.
{{% /tab %}}
{{< /tabs >}}
Kubelet puede medir la cantidad de almacenamiento local que se está usando. Esto es posible siempre que:
- el `LocalStorageCapacityIsolation` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) esté habilitado (esta característica está habilitada por defecto), y
- hayas configurado el nodo usando una de las configuraciones soportadas para almacenamiento local efímero.
Si tienes una configuración diferente, entonces Kubelet no aplica límites de recursos
para almacenamiento local efímero.
{{< note >}}
Kubelet rastrea `tmpfs` volúmenes emptyDir como uso de memoria de contenedor, en lugar de
almacenamiento local efímero.
{{< /note >}}
### Configurando solicitudes y límites para almacenamiento local efímero
Puedes usar _ephemeral-storage_ para manejar almacenamiento local efímero. Cada contenedor de un Pod puede especificar
uno o más de los siguientes:
* `spec.containers[].resources.limits.ephemeral-storage`
* `spec.containers[].resources.requests.ephemeral-storage`
Los límites y peticiones de `ephemeral-storage` se miden en bytes. Puedes expresar el almacenamiento como un número entero o decimal usando alguno de estos sufijos: E, P, T, G, M, K. También puedes usar los equivalentes en potencia de dos: Ei, Pi, Ti, Gi, Mi, Ki. Por ejemplo, los siguientes valores representan aproximadamente la misma cantidad:
```shell
128974848, 129e6, 129M, 123Mi
```
En el siguiente ejemplo, el Pod tiene dos contenedores. Cada contenedor tiene una petición de 2GiB de almacenamiento local efímero. Cada contenedor tiene un límite de 4GiB de almacenamiento local efímero. Por lo tanto, el Pod tiene una petición de 4GiB de almacenamiento local efímero y un límite de 8GiB de almacenamiento local efímero.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: frontend
spec:
containers:
- name: app
image: images.my-company.example/app:v4
resources:
requests:
ephemeral-storage: "2Gi"
limits:
ephemeral-storage: "4Gi"
- name: log-aggregator
image: images.my-company.example/log-aggregator:v6
resources:
requests:
ephemeral-storage: "2Gi"
limits:
ephemeral-storage: "4Gi"
```
### Cómo se programan los Pods con peticiones de almacenamiento efímero
Cuando creas un Pod, el planificador de Kubernetes selecciona un nodo donde será ejecutado.
Cada nodo tiene una cantidad máxima de almacenamiento local efímero que puede proveer a los Pods. Para
más información, mira [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable).
El planificador se asegura de que el total de los recursos solicitados para los contenedores sea menor que la capacidad del nodo.
### Manejo del consumo de almacenamiento efímero {#resource-emphemeralstorage-consumption}
Si Kubelet está manejando el almacenamiento efímero local como un recurso, entonces
Kubelet mide el uso de almacenamiento en:
- volúmenes `emptyDir`, excepto _tmpfs_ volúmenes`emptyDir`
- directorios que guardan logs de nivel de nodo
- capas de escritura de contenedores
Si un Pod está usando más almacenamiento efímero que el permitido, Kubelet
establece una señal de desalojo que desencadena el desalojo del Pod.
Para aislamiento a nivel de contenedor, si una capa de escritura del contenedor y
logs excede el límite de uso del almacenamiento, Kubelet marca el Pod para desalojo.
Para aislamiento a nivel de Pod, Kubelet calcula un límite de almacenamiento
general para el Pod sumando los límites de los contenedores de ese Pod.
En este caso, si la suma del uso de almacenamiento local efímero para todos los contenedores
y los volúmenes `emptyDir` de los Pods excede el límite de almacenamiento general del
Pod, Kubelet marca el Pod para desalojo.
{{< caution >}}
Si Kubelet no está midiendo el almacenamiento local efímero, entonces un Pod que exceda su límite de almacenamiento no será desalojado por incumplir los límites de este recurso.
Sin embargo, si el espacio del sistema de archivos para la capa de escritura del contenedor, los logs a nivel de nodo o los volúmenes `emptyDir` se agota, el nodo aplica un {{< glossary_tooltip text="taint" term_id="taint" >}} que desencadena el desalojo de cualquier Pod que no lo tolere.
Mira las [configuraciones soportadas](#configurations-for-local-ephemeral-storage)
para almacenamiento local efímero.
{{< /caution >}}
Kubelet soporta diferentes maneras de medir el uso de almacenamiento del Pod:
{{< tabs name="resource-emphemeralstorage-measurement" >}}
{{% tab name="Periodic scanning" %}}
Kubelet realiza con frecuencia comprobaciones programadas que revisan cada volumen `emptyDir`, cada directorio de logs de contenedor y cada capa de escritura de contenedor.
El escáner mide cuánto espacio está en uso.
{{< note >}}
En este modo, Kubelet no rastrea descriptores de archivo abiertos de archivos eliminados.
Si tú (o un contenedor) creas un archivo dentro de un volumen `emptyDir`, algo más abre ese archivo y tú lo borras mientras sigue abierto, el inodo del archivo borrado se mantiene hasta que se cierra el archivo, pero Kubelet no cataloga ese espacio como en uso.
{{< /note >}}
{{% /tab %}}
{{% tab name="Filesystem project quota" %}}
{{< feature-state for_k8s_version="v1.15" state="alpha" >}}
Las cuotas de proyecto se definen a nivel del sistema operativo para gestionar el uso de almacenamiento en sistemas de archivos. Con Kubernetes, puedes habilitar las cuotas de proyecto para monitorizar el uso del almacenamiento. Asegúrate de que el sistema de archivos que respalda los volúmenes `emptyDir` en el nodo ofrezca soporte de cuotas de proyecto.
Por ejemplo, XFS y ext4fs ofrecen cuotas de proyecto.
{{< note >}}
Las cuotas de proyecto te permiten monitorear el uso del almacenamiento; no
fuerzan los límites.
{{< /note >}}
Kubernetes usa IDs de proyecto empezando por `1048576`. Los IDs en uso
son registrados en `/etc/projects` y `/etc/projid`. Si los IDs de proyecto
en este rango son usados para otros propósitos en el sistema, esos IDs
de proyecto deben ser registrados en `/etc/projects` y `/etc/projid` para
que Kubernetes no los use.
Las cuotas son más rápidas y más precisas que el escaneo de directorios. Cuando un directorio se asigna a un proyecto, todos los ficheros creados bajo ese directorio pertenecen a ese proyecto, y el kernel solo tiene que llevar la cuenta de cuántos bloques usan los ficheros de ese proyecto. Si un fichero se crea y se borra, pero mantiene un descriptor de archivo abierto, continúa consumiendo espacio. El seguimiento por cuotas registra ese espacio con precisión, mientras que los escaneos de directorios pasan por alto el almacenamiento utilizado por los archivos eliminados.
Si quieres usar cuotas de proyecto, debes:
* Habilitar el `LocalStorageCapacityIsolationFSQuotaMonitoring=true`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
en la configuración del kubelet.
* Asegúrese de que el sistema de archivos raíz (o el sistema de archivos en tiempo de ejecución opcional)
tiene las cuotas de proyectos habilitadas. Todos los sistemas de archivos XFS admiten cuotas de proyectos.
Para los sistemas de archivos ext4, debe habilitar la función de seguimiento de cuotas del proyecto
mientras el sistema de archivos no está montado.
```bash
# For ext4, with /dev/block-device not mounted
sudo tune2fs -O project -Q prjquota /dev/block-device
```
* Asegúrese de que el sistema de archivos raíz (o el sistema de archivos de tiempo de ejecución opcional) esté
montado con cuotas de proyecto habilitadas. Tanto para XFS como para ext4fs, la opción de montaje
se llama `prjquota`.
{{% /tab %}}
{{< /tabs >}}
## Recursos extendidos
Los recursos extendidos son nombres de recurso totalmente cualificados fuera del dominio `kubernetes.io`. Permiten que los operadores de clúster anuncien, y los usuarios consuman, recursos que no están integrados en Kubernetes.
Hay dos pasos necesarios para utilizar los recursos extendidos. Primero, el operador del clúster
debe anunciar un Recurso Extendido. En segundo lugar, los usuarios deben solicitar
el Recurso Extendido en los Pods.
### Manejando recursos extendidos
#### Recursos extendido a nivel de nodo
Los recursos extendidos a nivel de nodo están vinculados a los nodos.
##### Recursos gestionados por plugins de dispositivos
Mira [Plugins de Dispositivos](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) para saber cómo los plugins de dispositivos anuncian y gestionan los recursos en cada nodo.
##### Otros recursos
Para anunciar un nuevo recurso extendido a nivel de nodo, el operador del clúster puede
enviar una solicitud HTTP `PATCH` al servidor API para especificar la cantidad
disponible en el `status.capacity` para un nodo en el clúster. Después de esta
operación, el `status.capacity` del nodo incluirá un nuevo recurso. El campo
`status.allocatable` se actualiza automáticamente con el nuevo recurso
de forma asíncrona por el {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}. Tenga en cuenta que debido a que el {{< glossary_tooltip text="planificador" term_id="kube-scheduler" >}}
utiliza el valor de `status.allocatable` del nodo cuando evalúa la aptitud del {{< glossary_tooltip text="Pod" term_id="pod" >}}, puede haber un breve
retraso entre parchear la capacidad del nodo con un nuevo recurso y el primer Pod
que solicita el recurso en ese nodo.
**Ejemplo:**
Aquí hay un ejemplo que muestra cómo usar `curl` para formar una solicitud HTTP que
anuncia cinco recursos "example.com/foo" en el nodo `k8s-node-1` cuyo nodo master
es `k8s-master`.
```shell
curl --header "Content-Type: application/json-patch+json" \
--request PATCH \
--data '[{"op": "add", "path": "/status/capacity/example.com~1foo", "value": "5"}]' \
http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
```
{{< note >}}
En la solicitud anterior, `~1` es la codificación del carácter `/` en la ruta del parche. El valor de la ruta de la operación en JSON-Patch se interpreta como un puntero JSON. Para más detalles, consulta [IETF RFC 6901, sección 3](https://tools.ietf.org/html/rfc6901#section-3).
{{< /note >}}
#### Recursos extendidos a nivel de Clúster
Los recursos extendidos a nivel de clúster no están vinculados a los nodos. Suelen estar gestionados
por extensores del scheduler, que manejan el consumo de recursos y la cuota de recursos.
Puedes especificar los recursos extendidos que son mantenidos por los extensores del scheduler en
[configuración de políticas del scheduler](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31).
**Ejemplo:**
La siguiente configuración para una política del scheduler indica que el
recurso extendido a nivel de clúster "example.com/foo" es mantenido
por el extensor del scheduler.
- El scheduler envía un Pod al extensor del scheduler solo si el Pod solicita "example.com/foo".
- El campo `ignoredByScheduler` especifica que el scheduler no comprueba el recurso "example.com/foo" en su predicado `PodFitsResources`.
```json
{
"kind": "Policy",
"apiVersion": "v1",
"extenders": [
{
"urlPrefix":"<extender-endpoint>",
"bindVerb": "bind",
"managedResources": [
{
"name": "example.com/foo",
"ignoredByScheduler": true
}
]
}
]
}
```
### Consumiendo recursos extendidos
Los usuarios pueden consumir recursos extendidos en las especificaciones del Pod, como la CPU y la memoria.
El {{< glossary_tooltip text="planificador" term_id="kube-scheduler" >}} se encarga de la contabilidad de recursos para que no más de
la cantidad disponible sea asignada simultáneamente a los Pods.
El servidor de API restringe las cantidades de recursos extendidos a números enteros. Ejemplos de cantidades _válidas_ son `3`, `3000m` y `3Ki`. Ejemplos de cantidades _no válidas_ son `0.5` y `1500m`.
{{< note >}}
Los recursos extendidos reemplazan a los Opaque Integer Resources.
Los usuarios pueden usar cualquier prefijo de dominio distinto de `kubernetes.io`, que está reservado.
{{< /note >}}
Para consumir un recurso extendido en un Pod, incluye un nombre de recurso
como clave en `spec.containers[].resources.limits` en las especificaciones del contenedor.
{{< note >}}
Los Recursos Extendidos no se pueden sobreasignar (overcommit), así que las peticiones y los límites deben ser iguales si ambos están presentes en la especificación de un contenedor.
{{< /note >}}
Un pod se programa solo si se satisfacen todas las solicitudes de recursos, incluidas
CPU, memoria y cualquier recurso extendido. El {{< glossary_tooltip text="Pod" term_id="pod" >}} permanece en estado `PENDING`
siempre que no se pueda satisfacer la solicitud de recursos.
**Ejemplo:**
El siguiente {{< glossary_tooltip text="Pod" term_id="pod" >}} solicita 2CPUs y 1 "example.com/foo" (un recurso extendido).
```yaml
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-container
image: myimage
resources:
requests:
cpu: 2
example.com/foo: 1
limits:
example.com/foo: 1
```
## Solución de problemas
### Mis Pods están en estado pendiente con un mensaje de failedScheduling
Si el {{< glossary_tooltip text="planificador" term_id="kube-scheduler" >}} no puede encontrar ningún nodo donde pueda colocar un {{< glossary_tooltip text="Pod" term_id="pod" >}}, el {{< glossary_tooltip text="Pod" term_id="pod" >}} permanece
no programado hasta que se pueda encontrar un lugar. Se produce un evento cada vez que
el {{< glossary_tooltip text="planificador" term_id="kube-scheduler" >}} no encuentra un lugar para el {{< glossary_tooltip text="Pod" term_id="pod" >}}, como este:
```shell
kubectl describe pod frontend | grep -A 3 Events
```
```
Events:
FirstSeen LastSeen Count From Subobject PathReason Message
36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
```
En el ejemplo anterior, el Pod llamado "frontend" no se puede programar debido a
recursos de CPU insuficientes en el nodo. Mensajes de error similares también pueden sugerir
fallo debido a memoria insuficiente (PodExceedsFreeMemory). En general, si un Pod
está pendiente con un mensaje de este tipo, hay varias cosas para probar:
- Añadir más nodos al clúster.
- Terminar Pods innecesarios para hacer hueco a los Pods en estado pendiente.
- Compruebe que el Pod no sea más grande que todos los nodos. Por ejemplo, si todos los
los nodos tienen una capacidad de `cpu: 1`, entonces un Pod con una solicitud de` cpu: 1.1`
nunca se programará.
Puedes comprobar las capacidades del nodo y cantidad utilizada con el comando
`kubectl describe nodes`. Por ejemplo:
```shell
kubectl describe nodes e2e-test-node-pool-4lw4
```
```
Name: e2e-test-node-pool-4lw4
[ ... lines removed for clarity ...]
Capacity:
cpu: 2
memory: 7679792Ki
pods: 110
Allocatable:
cpu: 1800m
memory: 7474992Ki
pods: 110
[ ... lines removed for clarity ...]
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)
kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)
kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)
kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)
kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)
```
En la salida anterior, puedes ver que un Pod que solicite más de 1120m de CPU o 6.23Gi de memoria no cabrá en el nodo.
Echando un vistazo a la sección `Pods`, puedes ver qué Pods están ocupando espacio
en el nodo.
La cantidad de recursos disponibles para los pods es menor que la capacidad del nodo, porque
los demonios del sistema utilizan una parte de los recursos disponibles. El campo `allocatable`
[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core)
indica la cantidad de recursos que están disponibles para los Pods. Para más información, mira
[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md).
La característica [resource quota](/docs/concepts/policy/resource-quotas/) se puede configurar
para limitar la cantidad total de recursos que se pueden consumir. Si se usa en conjunto
con espacios de nombres, puede evitar que un equipo acapare todos los recursos.
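Como esbozo orientativo (el namespace y las cantidades son supuestos), una ResourceQuota que limita los recursos de cómputo de un namespace podría verse así:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: equipo-a            # namespace ilustrativo
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
```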
### Mi contenedor está terminado
Es posible que tu contenedor sea terminado porque se queda sin recursos. Para verificar si un contenedor está siendo terminado porque está alcanzando un límite de recursos, ejecuta `kubectl describe pod` en el Pod de interés:
```shell
kubectl describe pod simmemleak-hra99
```
```
Name: simmemleak-hra99
Namespace: default
Image(s): saadali/simmemleak
Node: kubernetes-node-tf0f/10.240.216.66
Labels: name=simmemleak
Status: Running
Reason:
Message:
IP: 10.244.2.75
Replication Controllers: simmemleak (1/1 replicas created)
Containers:
simmemleak:
Image: saadali/simmemleak
Limits:
cpu: 100m
memory: 50Mi
State: Running
Started: Tue, 07 Jul 2015 12:54:41 -0700
Last Termination State: Terminated
Exit Code: 1
Started: Fri, 07 Jul 2015 12:54:30 -0700
Finished: Fri, 07 Jul 2015 12:54:33 -0700
Ready: False
Restart Count: 5
Conditions:
Type Status
Ready False
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
```
En el ejemplo anterior, `Restart Count: 5` indica que el contenedor `simmemleak`
del Pod se reinició cinco veces.
Puedes ejecutar `kubectl get pod` con la opción `-o go-template=...` para extraer el estado previo de los Contenedores terminados:
```shell
kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99
```
```
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
```
Puedes ver que el Contenedor fue terminado a causa de `reason:OOM Killed`, donde `OOM` indica que se quedó sin memoria (Out Of Memory).
## {{% heading "whatsnext" %}}
* Obtén experiencia práctica [assigning Memory resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/).
* Obtén experiencia práctica [assigning CPU resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
* Para más detalles sobre la diferencia entre solicitudes y límites, mira
[Resource QoS](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md).
* Lee la referencia de API de [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
* Lee la referencia de API de [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core)
* Lee sobre [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) en XFS

View File

@ -0,0 +1,5 @@
---
title: "Administrar Objetos en Kubernetes"
description: Interactuando con el API de Kubernetes aplicando paradigmas declarativo e imperativo.
weight: 25
---

File diff suppressed because it is too large Load Diff

View File

@ -49,7 +49,6 @@ Puedes correr una aplicación creando un `deployment` de Kubernetes, y puedes de
El resultado es similar a esto: El resultado es similar a esto:
user@computer:~/website$ kubectl describe deployment nginx-deployment
Name: nginx-deployment Name: nginx-deployment
Namespace: default Namespace: default
CreationTimestamp: Tue, 30 Aug 2016 18:11:37 -0700 CreationTimestamp: Tue, 30 Aug 2016 18:11:37 -0700

Some files were not shown because too many files have changed in this diff Show More