diff --git a/README-es.md b/README-es.md index f5b1e870dd..3f00746125 100644 --- a/README-es.md +++ b/README-es.md @@ -30,6 +30,17 @@ El método recomendado para levantar una copia local del sitio web kubernetes.io > Si prefiere levantar el sitio web sin utilizar **Docker**, puede seguir las instrucciones disponibles en la sección [Levantando kubernetes.io en local con Hugo](#levantando-kubernetesio-en-local-con-hugo). +**`Nota`: Para el procedimiento de construir una imagen de Docker e iniciar el servidor.** +El sitio web de Kubernetes utiliza Docsy Hugo theme. Se sugiere que se instale si aún no se ha hecho, los **submódulos** y otras dependencias de herramientas de desarrollo ejecutando el siguiente comando de `git`: + +```bash +# pull de los submódulos del repositorio +git submodule update --init --recursive --depth 1 + +``` + +Si identifica que `git` reconoce una cantidad innumerable de cambios nuevos en el proyecto, la forma más simple de solucionarlo es cerrando y volviendo a abrir el proyecto en el editor. Los submódulos son automáticamente detectados por `git`, pero los plugins usados por los editores pueden tener dificultades para ser cargados. + Una vez tenga Docker [configurado en su máquina](https://www.docker.com/get-started), puede construir la imagen de Docker `kubernetes-hugo` localmente ejecutando el siguiente comando en la raíz del repositorio: ```bash @@ -73,4 +84,4 @@ La participación en la comunidad de Kubernetes está regulada por el [Código d Kubernetes es posible gracias a la participación de la comunidad y la documentación es vital para facilitar el acceso al proyecto. -Agradecemos muchísimo sus contribuciones a nuestro sitio web y nuestra documentación. \ No newline at end of file +Agradecemos muchísimo sus contribuciones a nuestro sitio web y nuestra documentación. diff --git a/README-pt.md b/README-pt.md index 0992f6c045..e27bf544d1 100644 --- a/README-pt.md +++ b/README-pt.md @@ -144,6 +144,15 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist Esta solução funciona tanto para o MacOS Catalina quanto para o MacOS Mojave. +### Erro de "Out of Memory" + +Se você executar o comando `make container-serve` e retornar o seguinte erro: +``` +make: *** [container-serve] Error 137 +``` + +Verifique a quantidade de memória disponível para o agente de execução de contêiner. No caso do Docker Desktop para macOS, abra o menu "Preferences..." -> "Resources..." e tente disponibilizar mais memória. + # Comunidade, discussão, contribuição e apoio Saiba mais sobre a comunidade Kubernetes SIG Docs e reuniões na [página da comunidade](http://kubernetes.io/community/). diff --git a/content/en/docs/concepts/architecture/control-plane-node-communication.md b/content/en/docs/concepts/architecture/control-plane-node-communication.md index 2e7235a89f..a4814aab4b 100644 --- a/content/en/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/en/docs/concepts/architecture/control-plane-node-communication.md @@ -11,20 +11,20 @@ aliases: -This document catalogs the communication paths between the control plane (really the apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider). +This document catalogs the communication paths between the control plane (apiserver) and the Kubernetes cluster. 
The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider). ## Node to Control Plane -Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminate at the apiserver (none of the other control plane components are designed to expose remote services). The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled. +Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminates at the apiserver. None of the other control plane components are designed to expose remote services. The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled. One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed. Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates. Pods that wish to connect to the apiserver can do so securely by leveraging a service account so that Kubernetes will automatically inject the public root certificate and a valid bearer token into the pod when it is instantiated. -The `kubernetes` service (in all namespaces) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver. +The `kubernetes` service (in `default` namespace) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver. The control plane components also communicate with the cluster apiserver over the secure port. @@ -42,7 +42,7 @@ The connections from the apiserver to the kubelet are used for: * Attaching (through kubectl) to running pods. * Providing the kubelet's port-forwarding functionality. -These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks, and **unsafe** to run over untrusted and/or public networks. +These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks and **unsafe** to run over untrusted and/or public networks. To verify this connection, use the `--kubelet-certificate-authority` flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate. 
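For illustration, a minimal sketch of passing that flag to the apiserver; the CA bundle path shown is an assumed example, not a required location:

```shell
# assumed path to a CA bundle that can verify kubelet serving certificates
kube-apiserver --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt ...
```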
@@ -53,20 +53,20 @@ Finally, [Kubelet authentication and/or authorization](/docs/reference/command-l ### apiserver to nodes, pods, and services -The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials so while the connection will be encrypted, it will not provide any guarantees of integrity. These connections **are not currently safe** to run over untrusted and/or public networks. +The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials. So while the connection will be encrypted, it will not provide any guarantees of integrity. These connections **are not currently safe** to run over untrusted or public networks. ### SSH tunnels Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node in the cluster (connecting to the ssh server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or service through the tunnel. This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are running. -SSH tunnels are currently deprecated so you shouldn't opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel. +SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel. ### Konnectivity service {{< feature-state for_k8s_version="v1.18" state="beta" >}} -As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server and the Konnectivity agents, running in the control plane network and the nodes network respectively. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. +As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the Konnectivity agents in the nodes network. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. After enabling the Konnectivity service, all control plane to nodes traffic goes through these connections. Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set up the Konnectivity service in your cluster. 
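As a rough sketch of what enabling this involves (the selection name and socket path below are illustrative assumptions), the apiserver is started with an egress selector configuration along these lines and pointed at it via the `--egress-selector-config-file` flag; the Konnectivity task linked above covers the full setup:

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
  # route control plane to cluster traffic through the Konnectivity server
  - name: cluster
    connection:
      proxyProtocol: GRPC
      transport:
        uds:
          udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```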
diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index 7f1f55b91d..5e41e66c5a 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -17,7 +17,7 @@ and contains the services necessary to run {{< glossary_tooltip text="Pods" term_id="pod" >}} Typically you have several nodes in a cluster; in a learning or resource-limited -environment, you might have just one. +environment, you might have only one node. The [components](/docs/concepts/overview/components/#node-components) on a node include the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, a @@ -237,6 +237,7 @@ responsible for: - Evicting all the pods from the node using graceful termination if the node continues to be unreachable. The default timeouts are 40s to start reporting ConditionUnknown and 5m after that to start evicting pods. + The node controller checks the state of each node every `--node-monitor-period` seconds. #### Heartbeats @@ -278,6 +279,7 @@ the same time: `--large-cluster-size-threshold` nodes - default 50), then evictions are stopped. - Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate` (default 0.01) per second. + The reason these policies are implemented per availability zone is because one availability zone might become partitioned from the master while the others remain connected. If your cluster does not span multiple cloud provider availability zones, diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index e8fd3e4061..3e94277d93 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -427,7 +427,7 @@ poorly-behaved workloads that may be harming system health. histogram vector of queue lengths for the queues, broken down by the labels `priority_level` and `flow_schema`, as sampled by the enqueued requests. Each request that gets queued contributes one - sample to its histogram, reporting the length of the queue just + sample to its histogram, reporting the length of the queue immediately after the request was added. Note that this produces different statistics than an unbiased survey would. {{< note >}} diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index 40320e4285..f51911116d 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -278,7 +278,7 @@ pod/my-nginx-2035384211-u3t6x labeled ``` This first filters all pods with the label "app=nginx", and then labels them with the "tier=fe". -To see the pods you just labeled, run: +To see the pods you labeled, run: ```shell kubectl get pods -l app=nginx -L tier @@ -411,7 +411,7 @@ and ## Disruptive updates -In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. 
In this case, you can modify your original configuration file: +In some cases, you may need to update resource fields that cannot be updated once initialized, or you may want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file: ```shell kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force diff --git a/content/en/docs/concepts/cluster-administration/proxies.md b/content/en/docs/concepts/cluster-administration/proxies.md index 9bf204bd9f..ba86c969b8 100644 --- a/content/en/docs/concepts/cluster-administration/proxies.md +++ b/content/en/docs/concepts/cluster-administration/proxies.md @@ -39,7 +39,7 @@ There are several different proxies you may encounter when using Kubernetes: - proxies UDP, TCP and SCTP - does not understand HTTP - provides load balancing - - is just used to reach services + - is only used to reach services 1. A Proxy/Load-balancer in front of apiserver(s): diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md index 2668050d26..fa683e97f3 100644 --- a/content/en/docs/concepts/configuration/manage-resources-containers.md +++ b/content/en/docs/concepts/configuration/manage-resources-containers.md @@ -72,8 +72,7 @@ You cannot overcommit `hugepages-*` resources. This is different from the `memory` and `cpu` resources. {{< /note >}} -CPU and memory are collectively referred to as *compute resources*, or just -*resources*. Compute +CPU and memory are collectively referred to as *compute resources*, or *resources*. Compute resources are measurable quantities that can be requested, allocated, and consumed. They are distinct from [API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and @@ -554,7 +553,7 @@ extender. ### Consuming extended resources -Users can consume extended resources in Pod specs just like CPU and memory. +Users can consume extended resources in Pod specs like CPU and memory. The scheduler takes care of the resource accounting so that no more than the available amount is simultaneously allocated to Pods. diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index e011ad37b0..7bdba8dc5d 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -109,7 +109,7 @@ empty-secret Opaque 0 2m6s ``` The `DATA` column shows the number of data items stored in the Secret. -In this case, `0` means we have just created an empty Secret. +In this case, `0` means we have created an empty Secret. ### Service account token Secrets diff --git a/content/en/docs/concepts/containers/container-lifecycle-hooks.md b/content/en/docs/concepts/containers/container-lifecycle-hooks.md index 96569f9518..49cc25ffbd 100644 --- a/content/en/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/en/docs/concepts/containers/container-lifecycle-hooks.md @@ -50,10 +50,11 @@ A more detailed description of the termination behavior can be found in ### Hook handler implementations Containers can access a hook by implementing and registering a handler for that hook. 
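For instance, a container spec might register handlers of the types listed below; this is a minimal sketch, and the command and endpoint are placeholders:

```yaml
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "echo started > /tmp/started"]
  preStop:
    httpGet:
      path: /shutdown   # hypothetical endpoint served by the container
      port: 8080
```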
-There are two types of hook handlers that can be implemented for Containers: +There are three types of hook handlers that can be implemented for Containers: * Exec - Executes a specific command, such as `pre-stop.sh`, inside the cgroups and namespaces of the Container. Resources consumed by the command are counted against the Container. +* TCP - Opens a TCP connection against a specific port on the Container. * HTTP - Executes an HTTP request against a specific endpoint on the Container. ### Hook handler execution diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index 1166d4106a..6d0db16fe8 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -135,7 +135,7 @@ Here are the recommended steps to configuring your nodes to use a private regist example, run these on your desktop/laptop: 1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json` on your PC. - 1. View `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use. + 1. View `$HOME/.docker/config.json` in an editor to ensure it contains only the credentials you want to use. 1. Get a list of your nodes; for example: - if you want the names: `nodes=$( kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}' )` - if you want to get the IP addresses: `nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )` diff --git a/content/en/docs/concepts/extend-kubernetes/_index.md b/content/en/docs/concepts/extend-kubernetes/_index.md index 429912a9ed..cc5ba809ec 100644 --- a/content/en/docs/concepts/extend-kubernetes/_index.md +++ b/content/en/docs/concepts/extend-kubernetes/_index.md @@ -145,7 +145,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat ### Authorization -[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It just works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. +[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, an [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. ### Dynamic Admission Control diff --git a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md index 84d14cee3e..2bdc74e7e9 100644 --- a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md +++ b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md @@ -146,7 +146,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat ### Authorization -[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources.
It just works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. +[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, an [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. ### Dynamic Admission Control diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md index f078cb8636..b7ae176d7c 100644 --- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md @@ -28,7 +28,7 @@ resource can only be in one namespace. Namespaces are a way to divide cluster resources between multiple users (via [resource quota](/docs/concepts/policy/resource-quotas/)). -It is not necessary to use multiple namespaces just to separate slightly different +It is not necessary to use multiple namespaces to separate slightly different resources, such as different versions of the same software: use [labels](/docs/concepts/overview/working-with-objects/labels) to distinguish resources within the same namespace. @@ -91,7 +91,7 @@ kubectl config view --minify | grep namespace: When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/). This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means -that if a container just uses `<service-name>`, it will resolve to the service which +that if a container only uses `<service-name>`, it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN). diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index 41c4d0ceca..4ae128afa0 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -120,12 +120,12 @@ pod is eligible to be scheduled on, based on labels on the node. There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and `preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively, -in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (just like +in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (similar to `nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler will try to enforce but will not guarantee.
The "IgnoredDuringExecution" part of the names means that, similar to how `nodeSelector` works, if labels on a node change at runtime such that the affinity rules on a pod are no longer -met, the pod will still continue to run on the node. In the future we plan to offer -`requiredDuringSchedulingRequiredDuringExecution` which will be just like `requiredDuringSchedulingIgnoredDuringExecution` +met, the pod continues to run on the node. In the future we plan to offer +`requiredDuringSchedulingRequiredDuringExecution` which will be identical to `requiredDuringSchedulingIgnoredDuringExecution` except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements. Thus an example of `requiredDuringSchedulingIgnoredDuringExecution` would be "only run the pod on nodes with Intel CPUs" diff --git a/content/en/docs/concepts/security/controlling-access.md b/content/en/docs/concepts/security/controlling-access.md index e025ac10e3..9d6c2b9617 100644 --- a/content/en/docs/concepts/security/controlling-access.md +++ b/content/en/docs/concepts/security/controlling-access.md @@ -43,7 +43,7 @@ Authenticators are described in more detail in [Authentication](/docs/reference/access-authn-authz/authentication/). The input to the authentication step is the entire HTTP request; however, it typically -just examines the headers and/or client certificate. +examines the headers and/or client certificate. Authentication modules include client certificates, password, and plain tokens, bootstrap tokens, and JSON Web Tokens (used for service accounts). diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md index 402c3c57ca..14bc98101f 100644 --- a/content/en/docs/concepts/services-networking/connect-applications-service.md +++ b/content/en/docs/concepts/services-networking/connect-applications-service.md @@ -387,7 +387,7 @@ $ curl https://: -k
<title>Welcome to nginx!</title>
``` -Let's now recreate the Service to use a cloud load balancer, just change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: +Let's now recreate the Service to use a cloud load balancer. Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: ```shell kubectl edit svc my-nginx diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index 7a189a401b..b6be91cb9a 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -260,7 +260,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service {{< codenew file="service/networking/test-ingress.yaml" >}} If you create it using `kubectl apply -f` you should be able to view the state -of the Ingress you just added: +of the Ingress you added: ```bash kubectl get ingress test-ingress diff --git a/content/en/docs/concepts/services-networking/service-topology.md b/content/en/docs/concepts/services-networking/service-topology.md index d36b76f55f..66976b23fb 100644 --- a/content/en/docs/concepts/services-networking/service-topology.md +++ b/content/en/docs/concepts/services-networking/service-topology.md @@ -57,7 +57,7 @@ the first label matches the originating Node's value for that label. If there is no backend for the Service on a matching Node, then the second label will be considered, and so forth, until no labels remain. -If no match is found, the traffic will be rejected, just as if there were no +If no match is found, the traffic will be rejected, as if there were no backends for the Service at all. That is, endpoints are chosen based on the first topology key with available backends. If this field is specified and all entries have no backends that match the topology of the client, the service has no @@ -87,7 +87,7 @@ traffic as follows. * Service topology is not compatible with `externalTrafficPolicy=Local`, and therefore a Service cannot use both of these features. It is possible to use - both features in the same cluster on different Services, just not on the same + both features in the same cluster on different Services, only not on the same Service. * Valid topology keys are currently limited to `kubernetes.io/hostname`, diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index b7a7edcd38..c62d29ea51 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -527,7 +527,7 @@ for NodePort use. Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even -to just expose one or more nodes' IPs directly. +to expose one or more nodes' IPs directly. Note that this Service is visible as `:spec.ports[*].nodePort` and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, would be filtered NodeIP(s).) @@ -785,8 +785,7 @@ you can use the following annotations: ``` In the above example, if the Service contained three ports, `80`, `443`, and -`8443`, then `443` and `8443` would use the SSL certificate, but `80` would just -be proxied HTTP. +`8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP. 
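The annotations driving that behavior are not shown in this hunk; a hedged sketch of the relevant Service metadata (the ACM certificate ARN is a placeholder):

```yaml
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111122223333:certificate/example"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
```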
From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services. To see which policies are available for use, you can use the `aws` command line tool: @@ -958,7 +957,8 @@ groups are modified with the following IP rules: | Rule | Protocol | Port(s) | IpRange(s) | IpRange Description | |------|----------|---------|------------|---------------------| -| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\ | +| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | Subnet CIDR | kubernetes.io/rule/nlb/health=\ | + | Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\ | | MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\ | @@ -1107,7 +1107,7 @@ but the current API requires it. ## Virtual IP implementation {#the-gory-details-of-virtual-ips} -The previous information should be sufficient for many people who just want to +The previous information should be sufficient for many people who want to use Services. However, there is a lot going on behind the scenes that may be worth understanding. diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index ef46b7f99a..54e42bae9e 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -29,7 +29,7 @@ A _PersistentVolume_ (PV) is a piece of storage in the cluster that has been pro A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see [AccessModes](#access-modes)). -While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource. +While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource. See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/). 
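As a brief illustration of a claim that requests a particular class (the name, size, and class here are arbitrary examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
```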
diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index 6834977d70..0abdf6b545 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -37,7 +37,7 @@ request a particular class. Administrators set the name and other parameters of a class when first creating StorageClass objects, and the objects cannot be updated once they are created. -Administrators can specify a default StorageClass just for PVCs that don't +Administrators can specify a default StorageClass only for PVCs that don't request any particular class to bind to: see the [PersistentVolumeClaim section](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) for details. @@ -569,7 +569,7 @@ parameters: `"http(s)://api-server:7860"` * `registry`: Quobyte registry to use to mount the volume. You can specify the registry as ``:`` pair or if you want to specify multiple - registries you just have to put a comma between them e.q. + registries, put a comma between them. ``:,:,:``. The host can be an IP address or if you have a working DNS you can also provide the DNS names. diff --git a/content/en/docs/concepts/storage/volume-pvc-datasource.md b/content/en/docs/concepts/storage/volume-pvc-datasource.md index 8210df661c..9e59560d1d 100644 --- a/content/en/docs/concepts/storage/volume-pvc-datasource.md +++ b/content/en/docs/concepts/storage/volume-pvc-datasource.md @@ -40,7 +40,7 @@ Users need to be aware of the following when using this feature: ## Provisioning -Clones are provisioned just like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. +Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. ```yaml apiVersion: v1 diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index 28abe9ee8a..2301662882 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -38,7 +38,7 @@ that run within the pod, and data is preserved across container restarts. When a ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes. -At its core, a volume is just a directory, possibly with some data in it, which +At its core, a volume is a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used. diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index 0380889414..22b95255c5 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -708,7 +708,7 @@ nginx-deployment-618515232 11 11 11 7m You can pause a Deployment before triggering one or more updates and then resume it. This allows you to apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts. 
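In outline, that flow can be sketched as follows (the Deployment name and image tag are only examples; the detailed walkthrough follows below):

```shell
kubectl rollout pause deployment/nginx-deployment
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1   # one of possibly several updates
kubectl rollout resume deployment/nginx-deployment
```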
-* For example, with a Deployment that was just created: +* For example, with a Deployment that was created: Get the Deployment details: ```shell kubectl get deploy diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md index 97a865fc22..a23c37ad0d 100644 --- a/content/en/docs/concepts/workloads/controllers/job.md +++ b/content/en/docs/concepts/workloads/controllers/job.md @@ -100,7 +100,7 @@ pi-5rwd7 ``` Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression -that just gets the name from each Pod in the returned list. +with the name from each Pod in the returned list. View the standard output of one of the pods: diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 92d8fe7707..3f970ec1a0 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -222,7 +222,7 @@ In this manner, a ReplicaSet can own a non-homogenous set of Pods ## Writing a ReplicaSet manifest As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. -For ReplicaSets, the kind is always just ReplicaSet. +For ReplicaSets, the `kind` is always a ReplicaSet. In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated. Refer to the first lines of the `frontend.yaml` example for guidance. @@ -237,7 +237,7 @@ The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-temp required to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`. Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod. -For the template's [restart policy](/docs/concepts/workloads/Pods/pod-lifecycle/#restart-policy) field, +For the template's [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) field, `.spec.template.spec.restartPolicy`, the only allowed value is `Always`, which is the default. ### Pod Selector diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md index 23d87f81fd..9bfb1264a4 100644 --- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md @@ -110,8 +110,7 @@ nginx-3ntk0 nginx-4ok8v nginx-qrm3m Here, the selector is the same as the selector for the ReplicationController (seen in the `kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option -specifies an expression that just gets the name from each pod in the returned list. - +specifies an expression with the name from each pod in the returned list. ## Writing a ReplicationController Spec diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md index 832785923a..778bee6c02 100644 --- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md @@ -312,7 +312,7 @@ can specify a readiness probe that checks an endpoint specific to readiness that is different from the liveness probe. 
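For example, a container spec might declare a readiness-specific endpoint along these lines (the path and port are illustrative):

```yaml
readinessProbe:
  httpGet:
    path: /healthz/ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```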
{{< note >}} -If you just want to be able to drain requests when the Pod is deleted, you do not +If you want to be able to drain requests when the Pod is deleted, you do not necessarily need a readiness probe; on deletion, the Pod automatically puts itself into an unready state regardless of whether the readiness probe exists. The Pod remains in the unready state while it waits for the containers in the Pod diff --git a/content/en/docs/contribute/new-content/blogs-case-studies.md b/content/en/docs/contribute/new-content/blogs-case-studies.md index 6628925694..8f2f6baaf7 100644 --- a/content/en/docs/contribute/new-content/blogs-case-studies.md +++ b/content/en/docs/contribute/new-content/blogs-case-studies.md @@ -39,8 +39,8 @@ Anyone can write a blog post and submit it for review. - Posts about other CNCF projects may or may not be on topic. We recommend asking the blog team before submitting a draft. - Many CNCF projects have their own blog. These are often a better choice for posts. There are times of major feature or milestone for a CNCF project that users would be interested in reading on the Kubernetes blog. - Blog posts should be original content - - The official blog is not for repurposing existing content from a third party as new content. - - The [license](https://github.com/kubernetes/website/blob/master/LICENSE) for the blog does allow commercial use of the content for commercial purposes, just not the other way around. + - The official blog is not for repurposing existing content from a third party as new content. + - The [license](https://github.com/kubernetes/website/blob/master/LICENSE) for the blog allows commercial use of the content for commercial purposes, but not the other way around. - Blog posts should aim to be future proof - Given the development velocity of the project, we want evergreen content that won't require updates to stay accurate for the reader. - It can be a better choice to add a tutorial or update official documentation than to write a high level overview as a blog post. diff --git a/content/en/docs/contribute/new-content/new-features.md b/content/en/docs/contribute/new-content/new-features.md index a0e3600562..268c447402 100644 --- a/content/en/docs/contribute/new-content/new-features.md +++ b/content/en/docs/contribute/new-content/new-features.md @@ -77,9 +77,8 @@ merged. Keep the following in mind: Alpha features. - It's hard to test (and therefore to document) a feature that hasn't been merged, or is at least considered feature-complete in its PR. -- Determining whether a feature needs documentation is a manual process and - just because a feature is not marked as needing docs doesn't mean it doesn't - need them. +- Determining whether a feature needs documentation is a manual process. Even if + a feature is not marked as needing docs, you may need to document the feature. ## For developers or other SIG members diff --git a/content/en/docs/reference/access-authn-authz/abac.md b/content/en/docs/reference/access-authn-authz/abac.md index 99fce41aba..3e2aea6b36 100644 --- a/content/en/docs/reference/access-authn-authz/abac.md +++ b/content/en/docs/reference/access-authn-authz/abac.md @@ -19,7 +19,7 @@ Attribute-based access control (ABAC) defines an access control paradigm whereby To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC` on startup. The file format is [one JSON object per line](https://jsonlines.org/). There -should be no enclosing list or map, just one map per line. 
+should be no enclosing list or map, only one map per line. Each line is a "policy object", where each such object is a map with the following properties: diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index f66f72eaab..581c218755 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -193,7 +193,7 @@ This admission controller will deny exec and attach commands to pods that run wi allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and have access to the host PID namespace. -The DenyEscalatingExec admission plugin is deprecated and will be removed in v1.21. +The DenyEscalatingExec admission plugin is deprecated. Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin) which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods @@ -206,7 +206,7 @@ is recommended instead. This admission controller will intercept all requests to exec a command in a pod if that pod has a privileged container. This functionality has been merged into [DenyEscalatingExec](#denyescalatingexec). -The DenyExecOnPrivileged admission plugin is deprecated and will be removed in v1.21. +The DenyExecOnPrivileged admission plugin is deprecated. Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin) which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index 17702520e4..0a830586f0 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -458,7 +458,7 @@ clusters: - name: name-of-remote-authn-service cluster: certificate-authority: /path/to/ca.pem # CA for verifying the remote service. - server: https://authn.example.com/authenticate # URL of remote service to query. Must use 'https'. + server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production. # users refers to the API server's webhook configuration. users: diff --git a/content/en/docs/reference/access-authn-authz/authorization.md b/content/en/docs/reference/access-authn-authz/authorization.md index 04963e10ee..af73a23350 100644 --- a/content/en/docs/reference/access-authn-authz/authorization.md +++ b/content/en/docs/reference/access-authn-authz/authorization.md @@ -138,7 +138,7 @@ no exposes the API server authorization to external services. Other resources in this group include: -* `SubjectAccessReview` - Access review for any user, not just the current one. Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs. +* `SubjectAccessReview` - Access review for any user, not only the current one. Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs. * `LocalSubjectAccessReview` - Like `SubjectAccessReview` but restricted to a specific namespace. 
* `SelfSubjectRulesReview` - A review which returns the set of actions a user can perform within a namespace. Useful for users to quickly summarize their own access, or for UIs to hide/show actions. diff --git a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md index 856669a5d8..f128c14a7a 100644 --- a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md +++ b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md @@ -167,7 +167,7 @@ data: users: [] ``` -The `kubeconfig` member of the ConfigMap is a config file with just the cluster +The `kubeconfig` member of the ConfigMap is a config file with only the cluster information filled out. The key thing being communicated here is the `certificate-authority-data`. This may be expanded in the future. diff --git a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md index 6d05d0436a..450bedf541 100644 --- a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md +++ b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md @@ -196,8 +196,8 @@ O is the group that this user will belong to. You can refer to [RBAC](/docs/reference/access-authn-authz/rbac/) for standard groups. ```shell -openssl genrsa -out john.key 2048 -openssl req -new -key john.key -out john.csr +openssl genrsa -out myuser.key 2048 +openssl req -new -key myuser.key -out myuser.csr ``` ### Create CertificateSigningRequest @@ -209,7 +209,7 @@ cat < myuser.crt +``` + ### Create Role and RoleBinding With the certificate created. it is time to define the Role and RoleBinding for @@ -266,31 +272,30 @@ kubectl create role developer --verb=create --verb=get --verb=list --verb=update This is a sample command to create a RoleBinding for this new user: ```shell -kubectl create rolebinding developer-binding-john --role=developer --user=john +kubectl create rolebinding developer-binding-myuser --role=developer --user=myuser ``` ### Add to kubeconfig The last step is to add this user into the kubeconfig file. -This example assumes the key and certificate files are located at "/home/vagrant/work/". First, you need to add new credentials: ``` -kubectl config set-credentials john --client-key=/home/vagrant/work/john.key --client-certificate=/home/vagrant/work/john.crt --embed-certs=true +kubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true ``` Then, you need to add the context: ``` -kubectl config set-context john --cluster=kubernetes --user=john +kubectl config set-context myuser --cluster=kubernetes --user=myuser ``` -To test it, change the context to `john`: +To test it, change the context to `myuser`: ``` -kubectl config use-context john +kubectl config use-context myuser ``` ## Approval or rejection {#approval-rejection} @@ -363,7 +368,7 @@ status: It's usual to set `status.conditions.reason` to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything -you like. If you want to add a note just for human consumption, use the +you like. If you want to add a note for human consumption, use the `status.conditions.message` field. 
## Signing @@ -438,4 +443,3 @@ status: * View the source code for the kube-controller-manager built in [approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go) * For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1 * For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986) - diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index 4bc2b86dd6..bd9aba1aa8 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -219,7 +219,7 @@ the role that is granted to those subjects. 1. A binding to a different role is a fundamentally different binding. Requiring a binding to be deleted/recreated in order to change the `roleRef` ensures the full list of subjects in the binding is intended to be granted -the new role (as opposed to enabling accidentally modifying just the roleRef +the new role (as opposed to enabling or accidentally modifying only the roleRef without verifying all of the existing subjects should be given the new role's permissions). @@ -333,7 +333,7 @@ as a cluster administrator, include rules for custom resources, such as those se or aggregated API servers, to extend the default roles. For example: the following ClusterRoles let the "admin" and "edit" default roles manage the custom resource -named CronTab, whereas the "view" role can perform just read actions on CronTab resources. +named CronTab, whereas the "view" role can perform only read actions on CronTab resources. You can assume that CronTab objects are named `"crontabs"` in URLs as seen by the API server. ```yaml diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md index 89ad711a56..a441f8ce5b 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md @@ -185,9 +185,9 @@ systemd unit file perhaps) to enable the token file. See docs further details. ### Authorize kubelet to create CSR -Now that the bootstrapping node is _authenticated_ as part of the `system:bootstrappers` group, it needs to be _authorized_ to create a certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and just these) permissions, `system:node-bootstrapper`. +Now that the bootstrapping node is _authenticated_ as part of the `system:bootstrappers` group, it needs to be _authorized_ to create a certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and only these) permissions, `system:node-bootstrapper`. -To do this, you just need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. +To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. 
``` # enable bootstrapping nodes to create CSR @@ -345,7 +345,7 @@ The important elements to note are: * `token`: the token to use The format of the token does not matter, as long as it matches what kube-apiserver expects. In the above example, we used a bootstrap token. -As stated earlier, _any_ valid authentication method can be used, not just tokens. +As stated earlier, _any_ valid authentication method can be used, not only tokens. Because the bootstrap `kubeconfig` _is_ a standard `kubeconfig`, you can use `kubectl` to generate it. To create the above example file: diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md index b0b8c8bf19..66eb5785de 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md @@ -909,7 +909,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)
--pod-infra-container-image string     Default: `k8s.gcr.io/pause:3.2` -The image whose network/IPC namespaces containers in each pod will use. This docker-specific flag only works when container-runtime is set to `docker`. + Specified image will not be pruned by the image garbage collector. When container-runtime is set to `docker`, all containers in each pod will use the network/ipc namespaces from this image. Other CRI implementations have their own configuration to set this image. diff --git a/content/en/docs/reference/glossary/cloud-controller-manager.md b/content/en/docs/reference/glossary/cloud-controller-manager.md index c78bf393cb..874d0925cf 100755 --- a/content/en/docs/reference/glossary/cloud-controller-manager.md +++ b/content/en/docs/reference/glossary/cloud-controller-manager.md @@ -14,7 +14,7 @@ tags: A Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}} component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact -with that cloud platform from components that just interact with your cluster. +with that cloud platform from components that only interact with your cluster. diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index cbb579908b..f5a971d3bd 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -360,7 +360,7 @@ Other operations for exploring API resources: ```bash kubectl api-resources --namespaced=true # All namespaced resources kubectl api-resources --namespaced=false # All non-namespaced resources -kubectl api-resources -o name # All resources with simple output (just the resource name) +kubectl api-resources -o name # All resources with simple output (only the resource name) kubectl api-resources -o wide # All resources with expanded (aka "wide") output kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs kubectl api-resources --api-group=extensions # All resources in the "extensions" API group @@ -387,6 +387,9 @@ Examples using `-o=custom-columns`: # All images running in a cluster kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image' +# All images running in namespace: default, grouped by Pod +kubectl get pods --namespace default --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image" + # All images excluding "k8s.gcr.io/coredns:1.6.2" kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image' diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md index dbad6f5cf2..f8ec7e5603 100644 --- a/content/en/docs/reference/kubectl/overview.md +++ b/content/en/docs/reference/kubectl/overview.md @@ -69,7 +69,7 @@ for example `create`, `get`, `describe`, `delete`. Flags that you specify from the command line override default values and any corresponding environment variables. {{< /caution >}} -If you need help, just run `kubectl help` from the terminal window. +If you need help, run `kubectl help` from the terminal window. 
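For example:

```shell
kubectl help          # list the available commands
kubectl help get      # detailed help for a single command
```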
## Operations diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index eb5fb20943..0469145692 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -123,7 +123,7 @@ If your configuration is not using the latest version it is **recommended** that the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command. For more information on the fields and usage of the configuration you can navigate to our API reference -page and pick a version from [the list](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#pkg-subdirectories). +page and pick a version from [the list](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#section-directories). ### Adding kube-proxy parameters {#kube-proxy} diff --git a/content/en/docs/reference/using-api/deprecation-guide.md b/content/en/docs/reference/using-api/deprecation-guide.md index ee8328cbdd..a8ad3494bd 100755 --- a/content/en/docs/reference/using-api/deprecation-guide.md +++ b/content/en/docs/reference/using-api/deprecation-guide.md @@ -116,7 +116,7 @@ The **certificates.k8s.io/v1beta1** API version of CertificateSigningRequest wil * All existing persisted objects are accessible via the new API * Notable changes in `certificates.k8s.io/v1`: * For API clients requesting certificates: - * `spec.signerName` is now required (see [known Kubernetes signers](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)), and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API + * `spec.signerName` is now required (see [known Kubernetes signers](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)), and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API * `spec.usages` is now required, may not contain duplicate values, and must only contain known usages * For API clients approving or signing certificates: * `status.conditions` may not contain duplicate types diff --git a/content/en/docs/reference/using-api/deprecation-policy.md b/content/en/docs/reference/using-api/deprecation-policy.md index 17840f195b..4de09ee82a 100644 --- a/content/en/docs/reference/using-api/deprecation-policy.md +++ b/content/en/docs/reference/using-api/deprecation-policy.md @@ -327,7 +327,7 @@ supported in API v1 must exist and function until API v1 is removed. ### Component config structures -Component configs are versioned and managed just like REST resources. +Component configs are versioned and managed similar to REST resources. ### Future work diff --git a/content/en/docs/reference/using-api/server-side-apply.md b/content/en/docs/reference/using-api/server-side-apply.md index d91497f8f2..a6684adb24 100644 --- a/content/en/docs/reference/using-api/server-side-apply.md +++ b/content/en/docs/reference/using-api/server-side-apply.md @@ -209,9 +209,8 @@ would have failed due to conflicting ownership. The merging strategy, implemented with Server Side Apply, provides a generally more stable object lifecycle. Server Side Apply tries to merge fields based on -the fact who manages them instead of overruling just based on values. This way -it is intended to make it easier and more stable for multiple actors updating -the same object by causing less unexpected interference. 
+the actor who manages them instead of overruling based on values. This way +multiple actors can update the same object without causing unexpected interference. When a user sends a "fully-specified intent" object to the Server Side Apply endpoint, the server merges it with the live object favoring the value in the @@ -319,7 +318,7 @@ kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment-replic ``` If the apply results in a conflict with the HPA controller, then do nothing. The -conflict just indicates the controller has claimed the field earlier in the +conflict indicates the controller has claimed the field earlier in the process than it sometimes does. At this point the user may remove the `replicas` field from their configuration. @@ -436,7 +435,7 @@ Data: [{"op": "replace", "path": "/metadata/managedFields", "value": [{}]}] This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the -object. Note that just setting the managedFields to an empty list will not +object. Note that setting the managedFields to an empty list will not reset the field. This is on purpose, so managedFields never get stripped by clients not aware of the field. diff --git a/content/en/docs/setup/best-practices/cluster-large.md b/content/en/docs/setup/best-practices/cluster-large.md index ccb31fe108..a75499a811 100644 --- a/content/en/docs/setup/best-practices/cluster-large.md +++ b/content/en/docs/setup/best-practices/cluster-large.md @@ -69,10 +69,9 @@ When creating a cluster, you can (using custom tooling): ## Addon resources Kubernetes [resource limits](/docs/concepts/configuration/manage-resources-containers/) -help to minimise the impact of memory leaks and other ways that pods and containers can -impact on other components. These resource limits can and should apply to -{{< glossary_tooltip text="addon" term_id="addons" >}} just as they apply to application -workloads. +help to minimize the impact of memory leaks and other ways that pods and containers can +impact on other components. These resource limits apply to +{{< glossary_tooltip text="addon" term_id="addons" >}} resources just as they apply to application workloads. For example, you can set CPU and memory limits for a logging component: diff --git a/content/en/docs/setup/production-environment/tools/kubespray.md b/content/en/docs/setup/production-environment/tools/kubespray.md index 4fac65ab88..5ba4c995dd 100644 --- a/content/en/docs/setup/production-environment/tools/kubespray.md +++ b/content/en/docs/setup/production-environment/tools/kubespray.md @@ -68,7 +68,7 @@ Kubespray provides the ability to customize many aspects of the deployment: * {{< glossary_tooltip term_id="cri-o" >}} * Certificate generation methods -Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. +Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. 
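If you do decide to customize, a minimal sketch of overriding a single Kubespray variable is shown below; the inventory paths and the `kube_network_plugin` variable are assumptions based on the sample inventory layout, so check your own Kubespray checkout for the exact file names:

```shell
# copy the sample inventory shipped with Kubespray (paths are assumptions)
cp -r inventory/sample inventory/mycluster

# override one deployment variable, for example the network plugin
sed -i 's/^kube_network_plugin:.*/kube_network_plugin: calico/' \
  inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
```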
### (4/5) Deploy a Cluster diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md index 5c33b0a94b..446bf11d3c 100644 --- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md @@ -333,7 +333,7 @@ These features were added in Kubernetes v1.15: ##### DNS {#dns-limitations} * ClusterFirstWithHostNet is not supported for DNS. Windows treats all names with a '.' as a FQDN and skips PQDN resolution -* On Linux, you have a DNS suffix list, which is used when trying to resolve PQDNs. On Windows, we only have 1 DNS suffix, which is the DNS suffix associated with that pod's namespace (mydns.svc.cluster.local for example). Windows can resolve FQDNs and services or names resolvable with just that suffix. For example, a pod spawned in the default namespace, will have the DNS suffix **default.svc.cluster.local**. On a Windows pod, you can resolve both **kubernetes.default.svc.cluster.local** and **kubernetes**, but not the in-betweens, like **kubernetes.default** or **kubernetes.default.svc**. +* On Linux, you have a DNS suffix list, which is used when trying to resolve PQDNs. On Windows, we only have 1 DNS suffix, which is the DNS suffix associated with that pod's namespace (mydns.svc.cluster.local for example). Windows can resolve FQDNs and services or names resolvable with only that suffix. For example, a pod spawned in the default namespace, will have the DNS suffix **default.svc.cluster.local**. On a Windows pod, you can resolve both **kubernetes.default.svc.cluster.local** and **kubernetes**, but not the in-betweens, like **kubernetes.default** or **kubernetes.default.svc**. * On Windows, there are multiple DNS resolvers that can be used. As these come with slightly different behaviors, using the `Resolve-DNSName` utility for name query resolutions is recommended. ##### IPv6 @@ -363,9 +363,9 @@ There are no differences in how most of the Kubernetes APIs work for Windows. Th At a high level, these OS concepts are different: -* Identity - Linux uses userID (UID) and groupID (GID) which are represented as integer types. User and group names are not canonical - they are just an alias in `/etc/groups` or `/etc/passwd` back to UID+GID. Windows uses a larger binary security identifier (SID) which is stored in the Windows Security Access Manager (SAM) database. This database is not shared between the host and containers, or between containers. +* Identity - Linux uses userID (UID) and groupID (GID) which are represented as integer types. User and group names are not canonical - they are an alias in `/etc/groups` or `/etc/passwd` back to UID+GID. Windows uses a larger binary security identifier (SID) which is stored in the Windows Security Access Manager (SAM) database. This database is not shared between the host and containers, or between containers. * File permissions - Windows uses an access control list based on SIDs, rather than a bitmask of permissions and UID+GID -* File paths - convention on Windows is to use `\` instead of `/`. The Go IO libraries typically accept both and just make it work, but when you're setting a path or command line that's interpreted inside a container, `\` may be needed. +* File paths - convention on Windows is to use `\` instead of `/`. The Go IO libraries accept both types of file path separators. 
However, when you're setting a path or command line that's interpreted inside a container, `\` may be needed. * Signals - Windows interactive apps handle termination differently, and can implement one or more of these: * A UI thread handles well-defined messages including WM_CLOSE * Console apps handle ctrl-c or ctrl-break using a Control Handler diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster.md b/content/en/docs/tasks/access-application-cluster/access-cluster.md index 400a54ffb2..23d4f133a6 100644 --- a/content/en/docs/tasks/access-application-cluster/access-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/access-cluster.md @@ -231,7 +231,7 @@ You have several options for connecting to nodes, pods and services from outside - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside the cluster. See the [services](/docs/concepts/services-networking/service/) and [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation. - - Depending on your cluster environment, this may just expose the service to your corporate network, + - Depending on your cluster environment, this may only expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication? - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, @@ -283,7 +283,7 @@ at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-l As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL: `http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy` -If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. +If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. You can also use the port number in place of the *port_name* for both named and unnamed ports. By default, the API server proxies to your service using http. To use https, prefix the service name with `https:`: `http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`https:service_name:[port_name]`*`/proxy` @@ -291,9 +291,9 @@ By default, the API server proxies to your service using http. To use https, pre The supported formats for the name segment of the URL are: * `<service_name>` - proxies to the default or unnamed port using http -* `<service_name>:<port_name>` - proxies to the specified port using http +* `<service_name>:<port_name>` - proxies to the specified port name or port number using http * `https:<service_name>:` - proxies to the default or unnamed port using https (note the trailing colon) -* `https:<service_name>:<port_name>` - proxies to the specified port using https +* `https:<service_name>:<port_name>` - proxies to the specified port name or port number using https ##### Examples @@ -357,7 +357,7 @@ There are several different proxies you may encounter when using Kubernetes: - proxies UDP and TCP - does not understand HTTP - provides load balancing - - is just used to reach services + - is only used to reach services 1.
A Proxy/Load-balancer in front of apiserver(s): diff --git a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md index 62ddbdcbbc..0a6d352d2c 100644 --- a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md @@ -7,7 +7,7 @@ min-kubernetes-server-version: v1.10 -This page shows how to use `kubectl port-forward` to connect to a Redis +This page shows how to use `kubectl port-forward` to connect to a MongoDB server running in a Kubernetes cluster. This type of connection can be useful for database debugging. @@ -19,25 +19,25 @@ for database debugging. * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -* Install [redis-cli](http://redis.io/topics/rediscli). +* Install [MongoDB Shell](https://www.mongodb.com/try/download/shell). -## Creating Redis deployment and service +## Creating MongoDB deployment and service -1. Create a Deployment that runs Redis: +1. Create a Deployment that runs MongoDB: ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml ``` The output of a successful command verifies that the deployment was created: ``` - deployment.apps/redis-master created + deployment.apps/mongo created ``` View the pod status to check that it is ready: @@ -49,8 +49,8 @@ for database debugging. The output displays the pod created: ``` - NAME READY STATUS RESTARTS AGE - redis-master-765d459796-258hz 1/1 Running 0 50s + NAME READY STATUS RESTARTS AGE + mongo-75f59d57f4-4nd6q 1/1 Running 0 2m4s ``` View the Deployment's status: @@ -62,8 +62,8 @@ for database debugging. The output displays that the Deployment was created: ``` - NAME READY UP-TO-DATE AVAILABLE AGE - redis-master 1/1 1 1 55s + NAME READY UP-TO-DATE AVAILABLE AGE + mongo 1/1 1 1 2m21s ``` The Deployment automatically manages a ReplicaSet. @@ -76,50 +76,50 @@ for database debugging. The output displays that the ReplicaSet was created: ``` - NAME DESIRED CURRENT READY AGE - redis-master-765d459796 1 1 1 1m + NAME DESIRED CURRENT READY AGE + mongo-75f59d57f4 1 1 1 3m12s ``` -2. Create a Service to expose Redis on the network: +2. Create a Service to expose MongoDB on the network: ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml ``` The output of a successful command verifies that the Service was created: ``` - service/redis-master created + service/mongo created ``` Check the Service created: ```shell - kubectl get service redis-master + kubectl get service mongo ``` The output displays the service created: ``` - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - redis-master ClusterIP 10.0.0.213 6379/TCP 27s + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + mongo ClusterIP 10.96.41.183 27017/TCP 11s ``` -3. Verify that the Redis server is running in the Pod, and listening on port 6379: +3. 
Verify that the MongoDB server is running in the Pod, and listening on port 27017: ```shell - # Change redis-master-765d459796-258hz to the name of the Pod - kubectl get pod redis-master-765d459796-258hz --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' + # Change mongo-75f59d57f4-4nd6q to the name of the Pod + kubectl get pod mongo-75f59d57f4-4nd6q --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' ``` - The output displays the port for Redis in that Pod: + The output displays the port for MongoDB in that Pod: ``` - 6379 + 27017 ``` - (this is the TCP port allocated to Redis on the internet). + (this is the TCP port allocated to MongoDB on the internet). ## Forward a local port to a port on the Pod @@ -127,39 +127,39 @@ for database debugging. ```shell - # Change redis-master-765d459796-258hz to the name of the Pod - kubectl port-forward redis-master-765d459796-258hz 7000:6379 + # Change mongo-75f59d57f4-4nd6q to the name of the Pod + kubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017 ``` which is the same as ```shell - kubectl port-forward pods/redis-master-765d459796-258hz 7000:6379 + kubectl port-forward pods/mongo-75f59d57f4-4nd6q 28015:27017 ``` or ```shell - kubectl port-forward deployment/redis-master 7000:6379 + kubectl port-forward deployment/mongo 28015:27017 ``` or ```shell - kubectl port-forward replicaset/redis-master 7000:6379 + kubectl port-forward replicaset/mongo-75f59d57f4 28015:27017 ``` or ```shell - kubectl port-forward service/redis-master 7000:redis + kubectl port-forward service/mongo 28015:27017 ``` Any of the above commands works. The output is similar to this: ``` - Forwarding from 127.0.0.1:7000 -> 6379 - Forwarding from [::1]:7000 -> 6379 + Forwarding from 127.0.0.1:28015 -> 27017 + Forwarding from [::1]:28015 -> 27017 ``` {{< note >}} @@ -168,22 +168,22 @@ for database debugging. {{< /note >}} -2. Start the Redis command line interface: +2. Start the MongoDB command line interface: ```shell - redis-cli -p 7000 + mongosh --port 28015 ``` -3. At the Redis command line prompt, enter the `ping` command: +3. At the MongoDB command line prompt, enter the `ping` command: ``` - ping + db.runCommand( { ping: 1 } ) ``` A successful ping request returns: ``` - PONG + { ok: 1 } ``` ### Optionally let _kubectl_ choose the local port {#let-kubectl-choose-local-port} @@ -193,15 +193,22 @@ the local port and thus relieve you from having to manage local port conflicts, the slightly simpler syntax: ```shell -kubectl port-forward deployment/redis-master :6379 +kubectl port-forward deployment/mongo :27017 +``` + +The output is similar to this: + +``` +Forwarding from 127.0.0.1:63753 -> 27017 +Forwarding from [::1]:63753 -> 27017 ``` The `kubectl` tool finds a local port number that is not in use (avoiding low ports numbers, because these might be used by other applications). The output is similar to: ``` -Forwarding from 127.0.0.1:62162 -> 6379 -Forwarding from [::1]:62162 -> 6379 +Forwarding from 127.0.0.1:63753 -> 27017 +Forwarding from [::1]:63753 -> 27017 ``` @@ -209,8 +216,8 @@ Forwarding from [::1]:62162 -> 6379 ## Discussion -Connections made to local port 7000 are forwarded to port 6379 of the Pod that -is running the Redis server. With this connection in place, you can use your +Connections made to local port 28015 are forwarded to port 27017 of the Pod that +is running the MongoDB server. With this connection in place, you can use your local workstation to debug the database that is running in the Pod. 
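If the forwarded port also needs to be reachable from other machines, `kubectl port-forward` can bind to additional addresses; a short sketch (binding to all interfaces is only an example here, and it exposes the tunnel to your network):

```shell
# listen on all local addresses instead of only 127.0.0.1
kubectl port-forward --address 0.0.0.0 deployment/mongo 28015:27017
```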
{{< note >}} diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-services.md b/content/en/docs/tasks/administer-cluster/access-cluster-services.md index f6ba4e4fc0..927e05b77a 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-services.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-services.md @@ -31,7 +31,7 @@ You have several options for connecting to nodes, pods and services from outside - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside the cluster. See the [services](/docs/concepts/services-networking/service/) and [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation. - - Depending on your cluster environment, this may just expose the service to your corporate network, + - Depending on your cluster environment, this may only expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication? - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, diff --git a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md index 0c7c3c3ca1..a365fd4ffc 100644 --- a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md +++ b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md @@ -70,7 +70,7 @@ for details about addon manager and how to disable individual addons. 1. Mark a StorageClass as default: - Similarly to the previous step, you need to add/set the annotation + Similar to the previous step, you need to add/set the annotation `storageclass.kubernetes.io/is-default-class=true`. ```bash diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md index 25cd61a710..5dee2ae185 100644 --- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md +++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -125,7 +125,16 @@ the URL schema. Similarly, to configure etcd with secure client communication, specify flags `--key-file=k8sclient.key` and `--cert-file=k8sclient.cert`, and use HTTPS as -the URL schema. +the URL schema. Here is an example of a client command that uses secure +communication: + +``` +ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 \ + --cert=/etc/kubernetes/pki/etcd/server.crt \ + --key=/etc/kubernetes/pki/etcd/server.key \ + --cacert=/etc/kubernetes/pki/etcd/ca.crt \ + member list +``` ### Limiting access of etcd clusters @@ -269,6 +278,24 @@ If etcd is running on a storage volume that supports backup, such as Amazon Elastic Block Store, back up etcd data by taking a snapshot of the storage volume. +### Snapshot using etcdctl options + +We can also take the snapshot using various options given by etcdctl. For example, running + +```shell +ETCDCTL_API=3 etcdctl -h +``` + +will list the various options available from etcdctl. For instance, you can take a snapshot by specifying +the endpoint, certificates, and so on, as shown below: + +```shell +ETCDCTL_API=3 etcdctl --endpoints=[127.0.0.1:2379] \ + --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \ + snapshot save <backup-file-location> +``` +where `trusted-ca-file`, `cert-file` and `key-file` can be obtained from the description of the etcd Pod. + ## Scaling up etcd clusters Scaling up etcd clusters increases availability by trading off performance.
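For instance, new members are announced with `etcdctl member add`; a rough sketch in which the member name and peer URL are made up purely for illustration:

```shell
# announce the new member to the existing cluster before starting it
ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 \
  member add member4 --peer-urls=http://10.2.0.12:2380
```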
@@ -293,6 +320,12 @@ employed to recover the data of a failed cluster. Before starting the restore operation, a snapshot file must be present. It can either be a snapshot file from a previous backup operation, or from a remaining [data directory]( https://etcd.io/docs/current/op-guide/configuration/#--data-dir). +Here is an example: + +```shell +ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore snapshotdb +``` + For more information and examples on restoring a cluster from a snapshot file, see [etcd disaster recovery documentation](https://etcd.io/docs/current/op-guide/recovery/#restoring-a-cluster). @@ -324,4 +357,3 @@ We also recommend restarting any components (e.g. `kube-scheduler`, stale data. Note that in practice, the restore takes a bit of time. During the restoration, critical components will lose leader lock and restart themselves. {{< /note >}} - diff --git a/content/en/docs/tasks/administer-cluster/extended-resource-node.md b/content/en/docs/tasks/administer-cluster/extended-resource-node.md index a95a325d5d..797993f116 100644 --- a/content/en/docs/tasks/administer-cluster/extended-resource-node.md +++ b/content/en/docs/tasks/administer-cluster/extended-resource-node.md @@ -54,7 +54,7 @@ Host: k8s-master:8080 ``` Note that Kubernetes does not need to know what a dongle is or what a dongle is for. -The preceding PATCH request just tells Kubernetes that your Node has four things that +The preceding PATCH request tells Kubernetes that your Node has four things that you call dongles. Start a proxy, so that you can easily send requests to the Kubernetes API server: diff --git a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md index 0d5b6d4ebe..a9aaaacd46 100644 --- a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md +++ b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md @@ -9,24 +9,17 @@ content_type: concept -In addition to Kubernetes core components like api-server, scheduler, controller-manager running on a master machine -there are a number of add-ons which, for various reasons, must run on a regular cluster node (rather than the Kubernetes master). +Kubernetes core components such as the API server, scheduler, and controller-manager run on a control plane node. However, a number of add-ons must, for various reasons, run on a regular cluster node. Some of these add-ons are critical to a fully functional cluster, such as metrics-server, DNS, and UI. A cluster may stop working properly if a critical add-on is evicted (either manually or as a side effect of another operation like upgrade) and becomes pending (for example when the cluster is highly utilized and either there are other pending pods that schedule into the space vacated by the evicted critical add-on pod or the amount of resources available on the node changed for some other reason). Note that marking a pod as critical is not meant to prevent evictions entirely; it only prevents the pod from becoming permanently unavailable. -For static pods, this means it can't be evicted, but for non-static pods, it just means they will always be rescheduled. - - - +A static pod marked as critical can't be evicted. However, non-static pods marked as critical are always rescheduled. - ### Marking pod as critical To mark a Pod as critical, set priorityClassName for that Pod to `system-cluster-critical` or `system-node-critical`.
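As a minimal sketch of where the field goes (the pod below is hypothetical and uses the pause image purely as a stand-in):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon-example
  namespace: kube-system
spec:
  priorityClassName: system-cluster-critical
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF
```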
`system-node-critical` is the highest available priority, even higher than `system-cluster-critical`. - - diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index 56a6c25e9a..dc7af4a329 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -35,7 +35,7 @@ and kubeadm will use this CA for signing the rest of the certificates. ## External CA mode {#external-ca-mode} -It is also possible to provide just the `ca.crt` file and not the +It is also possible to provide only the `ca.crt` file and not the `ca.key` file (this is only available for the root CA file, not other cert pairs). If all other certificates and kubeconfig files are in place, kubeadm recognizes this condition and activates the "External CA" mode. kubeadm will proceed without the @@ -170,7 +170,7 @@ controllerManager: ### Create certificate signing requests (CSR) -See [Create CertificateSigningRequest](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) for creating CSRs with the Kubernetes API. +See [Create CertificateSigningRequest](/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) for creating CSRs with the Kubernetes API. ## Renew certificates with external CA diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index 45466a6b5c..da07ed672e 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -37,7 +37,7 @@ The upgrade workflow at high level is the following: ### Additional information -- [Draining nodes](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) before kubelet MINOR version +- [Draining nodes](/docs/tasks/administer-cluster/safely-drain-node/) before kubelet MINOR version upgrades is required. In the case of control plane nodes, they could be running CoreDNS Pods or other critical workloads. - All containers are restarted after upgrade, because the container spec hash value is changed. diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md index 766db38485..fe9fd8b0c4 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md @@ -50,7 +50,7 @@ and scheduling of Pods; on each node, the {{< glossary_tooltip text="kubelet" te uses the container runtime interface as an abstraction so that you can use any compatible container runtime. -In its earliest releases, Kubernetes offered compatibility with just one container runtime: Docker. +In its earliest releases, Kubernetes offered compatibility with one container runtime: Docker. Later in the Kubernetes project's history, cluster operators wanted to adopt additional container runtimes. The CRI was designed to allow this kind of flexibility - and the kubelet began supporting CRI. 
However, because Docker existed before the CRI specification was invented, the Kubernetes project created an @@ -75,7 +75,7 @@ or execute something inside container using `docker exec`. If you're running workloads via Kubernetes, the best way to stop a container is through the Kubernetes API rather than directly through the container runtime (this advice applies -for all container runtimes, not just Docker). +for all container runtimes, not only Docker). {{< /note >}} diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md index 1d1461ade7..5d99875527 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -232,7 +232,7 @@ Apply the manifest to create a Deployment ```shell kubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml ``` -We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. +We have created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that serves the hostname. ```shell kubectl get deployment diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md index 08b2868806..2934e1c0f7 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces.md +++ b/content/en/docs/tasks/administer-cluster/namespaces.md @@ -196,7 +196,7 @@ This delete is asynchronous, so for a time you will see the namespace in the `Te ```shell kubectl create deployment snowflake --image=k8s.gcr.io/serve_hostname -n=development --replicas=2 ``` - We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. + We have created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that serves the hostname. ```shell kubectl get deployment -n=development @@ -302,7 +302,7 @@ Use cases include: When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/). This entry is of the form `..svc.cluster.local`, which means -that if a container just uses `` it will resolve to the service which +that if a container uses `` it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN). diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md index 9efdccfb6e..40733c4c96 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md @@ -20,7 +20,7 @@ Decide whether you want to deploy a [cloud](#creating-a-calico-cluster-with-goog **Prerequisite**: [gcloud](https://cloud.google.com/sdk/docs/quickstarts). -1. To launch a GKE cluster with Calico, just include the `--enable-network-policy` flag. +1. To launch a GKE cluster with Calico, include the `--enable-network-policy` flag. 
**Syntax** ```shell diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md index db31cceb8b..0fc0a97ffc 100644 --- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md +++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md @@ -128,8 +128,8 @@ curl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.ex The API can respond in one of three ways: -- If the eviction is granted, then the Pod is deleted just as if you had sent - a `DELETE` request to the Pod's URL and you get back `200 OK`. +- If the eviction is granted, then the Pod is deleted as if you sent + a `DELETE` request to the Pod's URL and received back `200 OK`. - If the current state of affairs wouldn't allow an eviction by the rules set forth in the budget, you get back `429 Too Many Requests`. This is typically used for generic rate limiting of *any* requests, but here we mean diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 23f85f109b..b405d57baf 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -184,7 +184,7 @@ Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. ## Clean Up -To delete the Secret you have just created: +To delete the Secret you have created: ```shell kubectl delete secret mysecret diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 1e6d88ede4..293915736e 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -115,8 +115,7 @@ accidentally to an onlooker, or from being stored in a terminal log. 
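Another way to keep credentials out of your shell history is to read all of them from a single local file; a small sketch, where `./db-credentials.env` is a hypothetical file created only for this example:

```shell
# each KEY=VALUE line in the file becomes one key in the Secret
cat > ./db-credentials.env <<'EOF'
username=admin
password=1f2d1e2e67df
EOF

kubectl create secret generic db-user-pass --from-env-file=./db-credentials.env
```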
## Decoding the Secret {#decoding-secret} -To view the contents of the Secret we just created, you can run the following -command: +To view the contents of the Secret you created, run the following command: ```shell kubectl get secret db-user-pass -o jsonpath='{.data}' @@ -125,10 +124,10 @@ kubectl get secret db-user-pass -o jsonpath='{.data}' The output is similar to: ```json -{"password.txt":"MWYyZDFlMmU2N2Rm","username.txt":"YWRtaW4="} +{"password":"MWYyZDFlMmU2N2Rm","username":"YWRtaW4="} ``` -Now you can decode the `password.txt` data: +Now you can decode the `password` data: ```shell echo 'MWYyZDFlMmU2N2Rm' | base64 --decode @@ -142,7 +141,7 @@ The output is similar to: ## Clean Up -To delete the Secret you have just created: +To delete the Secret you have created: ```shell kubectl delete secret db-user-pass diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md index 5cbb30b99b..fb257a6026 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md @@ -113,7 +113,7 @@ To check the actual content of the encoded data, please refer to ## Clean Up -To delete the Secret you have just created: +To delete the Secret you have created: ```shell kubectl delete secret db-user-pass-96mffmfh4k diff --git a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md index 243072eff2..21b02cc000 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md +++ b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -112,7 +112,7 @@ kubectl top pod cpu-demo --namespace=cpu-example ``` This example output shows that the Pod is using 974 milliCPU, which is -just a bit less than the limit of 1 CPU specified in the Pod configuration. +slightly less than the limit of 1 CPU specified in the Pod configuration. ``` NAME CPU(cores) MEMORY(bytes) diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 0cdfd28258..918a5bf33e 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -204,7 +204,7 @@ seconds. In addition to the readiness probe, this configuration includes a liveness probe. The kubelet will run the first liveness probe 15 seconds after the container -starts. Just like the readiness probe, this will attempt to connect to the +starts. Similar to the readiness probe, this will attempt to connect to the `goproxy` container on port 8080. If the liveness probe fails, the container will be restarted. 
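To observe this behavior, you can apply the TCP probe example from this page and then inspect the probe configuration; a brief sketch (the manifest URL and the `goproxy` pod name come from that example and are assumptions here):

```shell
kubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml

# print the liveness probe that the kubelet will run against the container
kubectl get pod goproxy -o jsonpath='{.spec.containers[0].livenessProbe}'
```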
diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md index 32b857c156..697a4c6e0e 100644 --- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -118,7 +118,7 @@ those secrets might also be visible to other users on your PC during the time th ## Inspecting the Secret `regcred` -To understand the contents of the `regcred` Secret you just created, start by viewing the Secret in YAML format: +To understand the contents of the `regcred` Secret you created, start by viewing the Secret in YAML format: ```shell kubectl get secret regcred --output=yaml diff --git a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md index dc4fba3480..384b709720 100644 --- a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md +++ b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md @@ -67,7 +67,7 @@ sudo yum -y install kompose {{% /tab %}} {{% tab name="Fedora package" %}} -Kompose is in Fedora 24, 25 and 26 repositories. You can install it just like any other package. +Kompose is in Fedora 24, 25 and 26 repositories. You can install it like any other package. ```bash sudo dnf -y install kompose @@ -87,7 +87,7 @@ brew install kompose ## Use Kompose -In just a few steps, we'll take you from Docker Compose to Kubernetes. All +In a few steps, we'll take you from Docker Compose to Kubernetes. All you need is an existing `docker-compose.yml` file. 1. Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one. diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md index 730b9fb00c..03ba9d2c02 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md @@ -177,7 +177,7 @@ kubectl describe pod nginx-deployment-1370807587-fz9sd Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `FailedScheduling` (and possibly others). The message tells us that there were not enough resources for the Pod on any of the nodes. -To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.) +To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas. (Or you could leave the one Pod pending, which is harmless.) Events such as the ones you saw at the end of `kubectl describe pod` are persisted in etcd and provide high-level information on what is happening in the cluster. 
To list all events you can use diff --git a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md index 8a972e1365..c99182b854 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md @@ -57,7 +57,7 @@ case you can try several things: will never be scheduled. You can check node capacities with the `kubectl get nodes -o ` - command. Here are some example command lines that extract just the necessary + command. Here are some example command lines that extract the necessary information: ```shell diff --git a/content/en/docs/tasks/debug-application-cluster/debug-service.md b/content/en/docs/tasks/debug-application-cluster/debug-service.md index 3613e5b2cb..3b3b1c6081 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-service.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-service.md @@ -178,7 +178,7 @@ kubectl expose deployment hostnames --port=80 --target-port=9376 service/hostnames exposed ``` -And read it back, just to be sure: +And read it back: ```shell kubectl get svc hostnames @@ -427,8 +427,7 @@ hostnames-632524106-ly40y 1/1 Running 0 1h hostnames-632524106-tlaok 1/1 Running 0 1h ``` -The `-l app=hostnames` argument is a label selector - just like our Service -has. +The `-l app=hostnames` argument is a label selector configured on the Service. The "AGE" column says that these Pods are about an hour old, which implies that they are running fine and not crashing. @@ -607,7 +606,7 @@ iptables-save | grep hostnames -A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577 ``` -There should be 2 rules for each port of your Service (just one in this +There should be 2 rules for each port of your Service (only one in this example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". Almost nobody should be using the "userspace" mode any more, so you won't spend diff --git a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md index 1703bbbe42..29ace662f6 100644 --- a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md +++ b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md @@ -294,9 +294,9 @@ a running cluster in the [Deploying section](#deploying). ### Changing `DaemonSet` parameters -When you have the Stackdriver Logging `DaemonSet` in your cluster, you can just modify the -`template` field in its spec, daemonset controller will update the pods for you. For example, -let's assume you've just installed the Stackdriver Logging as described above. Now you want to +When you have the Stackdriver Logging `DaemonSet` in your cluster, you can modify the +`template` field in its spec. The DaemonSet controller manages the pods for you. +For example, assume you've installed the Stackdriver Logging as described above. Now you want to change the memory limit to give fluentd more memory to safely process more logs. 
Get the spec of `DaemonSet` running in your cluster: diff --git a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md index 96f55c3950..7ad7072fd7 100644 --- a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md +++ b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md @@ -12,7 +12,7 @@ weight: 20 Kubernetes ships with a default scheduler that is described [here](/docs/reference/command-line-tools-reference/kube-scheduler/). If the default scheduler does not suit your needs you can implement your own scheduler. -Not just that, you can even run multiple schedulers simultaneously alongside the default +Moreover, you can even run multiple schedulers simultaneously alongside the default scheduler and instruct Kubernetes what scheduler to use for each of your pods. Let's learn how to run multiple schedulers in Kubernetes with an example. @@ -30,7 +30,7 @@ in the Kubernetes source directory for a canonical example. ## Package the scheduler Package your scheduler binary into a container image. For the purposes of this example, -let's just use the default scheduler (kube-scheduler) as our second scheduler as well. +you can use the default scheduler (kube-scheduler) as your second scheduler. Clone the [Kubernetes source code from GitHub](https://github.com/kubernetes/kubernetes) and build the source. @@ -61,9 +61,9 @@ gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0 ## Define a Kubernetes Deployment for the scheduler -Now that we have our scheduler in a container image, we can just create a pod -config for it and run it in our Kubernetes cluster. But instead of creating a pod -directly in the cluster, let's use a [Deployment](/docs/concepts/workloads/controllers/deployment/) +Now that you have your scheduler in a container image, create a pod +configuration for it and run it in your Kubernetes cluster. But instead of creating a pod +directly in the cluster, you can use a [Deployment](/docs/concepts/workloads/controllers/deployment/) for this example. A [Deployment](/docs/concepts/workloads/controllers/deployment/) manages a [Replica Set](/docs/concepts/workloads/controllers/replicaset/) which in turn manages the pods, thereby making the scheduler resilient to failures. Here is the deployment @@ -83,7 +83,7 @@ detailed description of other command line arguments. ## Run the second scheduler in the cluster -In order to run your scheduler in a Kubernetes cluster, just create the deployment +In order to run your scheduler in a Kubernetes cluster, create the deployment specified in the config above in a Kubernetes cluster: ```shell @@ -132,9 +132,9 @@ kubectl edit clusterrole system:kube-scheduler ## Specify schedulers for pods -Now that our second scheduler is running, let's create some pods, and direct them -to be scheduled by either the default scheduler or the one we just deployed. -In order to schedule a given pod using a specific scheduler, we specify the name of the +Now that your second scheduler is running, create some pods, and direct them +to be scheduled by either the default scheduler or the one you deployed. +In order to schedule a given pod using a specific scheduler, specify the name of the scheduler in that pod spec. Let's look at three examples. - Pod spec without any scheduler name @@ -196,7 +196,7 @@ while the other two pods get scheduled. 
Once we submit the scheduler deployment and our new scheduler starts running, the `annotation-second-scheduler` pod gets scheduled as well. -Alternatively, one could just look at the "Scheduled" entries in the event logs to +Alternatively, you can look at the "Scheduled" entries in the event logs to verify that the pods were scheduled by the desired schedulers. ```shell diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md index b48d44a078..671637c084 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md @@ -404,7 +404,7 @@ how to [authenticate API servers](/docs/reference/access-authn-authz/extensible- A conversion webhook must not mutate anything inside of `metadata` of the converted object other than `labels` and `annotations`. Attempted changes to `name`, `UID` and `namespace` are rejected and fail the request -which caused the conversion. All other changes are just ignored. +which caused the conversion. All other changes are ignored. ### Deploy the conversion webhook service diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md index 62251eb222..3230b7b73a 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md @@ -520,7 +520,7 @@ CustomResourceDefinition and migrating your objects from one version to another. ### Finalizers *Finalizers* allow controllers to implement asynchronous pre-delete hooks. -Custom objects support finalizers just like built-in objects. +Custom objects support finalizers similar to built-in objects. You can add a finalizer to a custom object like this: diff --git a/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md b/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md index 626ddcab5c..64c41d9094 100644 --- a/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md +++ b/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md @@ -41,7 +41,7 @@ Alternatively, you can use an existing 3rd party solution, such as [apiserver-bu 1. Make sure that your extension-apiserver loads those certs from that volume and that they are used in the HTTPS handshake. 1. Create a Kubernetes service account in your namespace. 1. Create a Kubernetes cluster role for the operations you want to allow on your resources. -1. Create a Kubernetes cluster role binding from the service account in your namespace to the cluster role you just created. +1. Create a Kubernetes cluster role binding from the service account in your namespace to the cluster role you created. 1. Create a Kubernetes cluster role binding from the service account in your namespace to the `system:auth-delegator` cluster role to delegate auth decisions to the Kubernetes core API server. 1. Create a Kubernetes role binding from the service account in your namespace to the `extension-apiserver-authentication-reader` role. This allows your extension api-server to access the `extension-apiserver-authentication` configmap. 1. Create a Kubernetes apiservice. 
The CA cert above should be base64 encoded, stripped of new lines and used as the spec.caBundle in the apiservice. This should not be namespaced. If using the [kube-aggregator API](https://github.com/kubernetes/kube-aggregator/), only pass in the PEM encoded CA bundle because the base 64 encoding is done for you. diff --git a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md index 951848b1b4..2db5d3ecc3 100644 --- a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md @@ -19,7 +19,7 @@ Here is an overview of the steps in this example: 1. **Start a message queue service.** In this example, we use RabbitMQ, but you could use another one. In practice you would set up a message queue service once and reuse it for many jobs. 1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In - this example, a message is just an integer that we will do a lengthy computation on. + this example, a message is an integer that we will do a lengthy computation on. 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached. @@ -141,13 +141,12 @@ root@temp-loe07:/# ``` In the last command, the `amqp-consume` tool takes one message (`-c 1`) -from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program `cat` is just printing -out what it gets on the standard input, and the echo is just to add a carriage +from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program `cat` prints out the characters read from standard input, and the echo adds a carriage return so the example is readable. ## Filling the Queue with tasks -Now let's fill the queue with some "tasks". In our example, our tasks are just strings to be +Now let's fill the queue with some "tasks". In our example, our tasks are strings to be printed. In a practice, the content of the messages might be: diff --git a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md index b4cb4e641f..c5d1d0fa30 100644 --- a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md @@ -21,7 +21,7 @@ Here is an overview of the steps in this example: detect when a finite-length work queue is empty. In practice you would set up a store such as Redis once and reuse it for the work queues of many jobs, and other things. 1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In - this example, a message is just an integer that we will do a lengthy computation on. + this example, a message is an integer that we will do a lengthy computation on. 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached. @@ -55,7 +55,7 @@ You could also download the following files directly: ## Filling the Queue with tasks -Now let's fill the queue with some "tasks". In our example, our tasks are just strings to be +Now let's fill the queue with some "tasks". In our example, our tasks are strings to be printed. 
Start a temporary interactive pod for running the Redis CLI. diff --git a/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md b/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md index 05e8060cc9..704b01cc9a 100644 --- a/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md @@ -25,7 +25,7 @@ You should already know how to [perform a rolling update on a ### Step 1: Find the DaemonSet revision you want to roll back to -You can skip this step if you just want to roll back to the last revision. +You can skip this step if you only want to roll back to the last revision. List all revisions of a DaemonSet: diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md index f9e35cb0f5..2f3001da0f 100644 --- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md @@ -111,7 +111,7 @@ kubectl edit ds/fluentd-elasticsearch -n kube-system ##### Updating only the container image -If you just need to update the container image in the DaemonSet template, i.e. +If you only need to update the container image in the DaemonSet template, i.e. `.spec.template.spec.containers[*].image`, use `kubectl set image`: ```shell @@ -167,7 +167,7 @@ If the recent DaemonSet template update is broken, for example, the container is crash looping, or the container image doesn't exist (often due to a typo), DaemonSet rollout won't progress. -To fix this, just update the DaemonSet template again. New rollout won't be +To fix this, update the DaemonSet template again. New rollout won't be blocked by previous unhealthy rollouts. #### Clock skew diff --git a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md index 4f8fc434f9..997005e9ce 100644 --- a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md @@ -37,7 +37,7 @@ When the above conditions are true, Kubernetes will expose `amd.com/gpu` or `nvidia.com/gpu` as a schedulable resource. You can consume these GPUs from your containers by requesting -`.com/gpu` just like you request `cpu` or `memory`. +`.com/gpu` the same way you request `cpu` or `memory`. However, there are some limitations in how you specify the resource requirements when using GPUs: diff --git a/content/en/docs/tasks/run-application/delete-stateful-set.md b/content/en/docs/tasks/run-application/delete-stateful-set.md index a70e018b85..94b3c583eb 100644 --- a/content/en/docs/tasks/run-application/delete-stateful-set.md +++ b/content/en/docs/tasks/run-application/delete-stateful-set.md @@ -43,8 +43,8 @@ You may need to delete the associated headless service separately after the Stat kubectl delete service ``` -Deleting a StatefulSet through kubectl will scale it down to 0, thereby deleting all pods that are a part of it. -If you want to delete just the StatefulSet and not the pods, use `--cascade=false`. +When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=false`. 
+For example: ```shell kubectl delete -f --cascade=false diff --git a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md index 28de1865fd..0001f4c9f4 100644 --- a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md +++ b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md @@ -44,7 +44,7 @@ for StatefulSet Pods. Graceful deletion is safe and will ensure that the Pod [shuts down gracefully](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) before the kubelet deletes the name from the apiserver. -Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. +A Pod is not deleted automatically when a node is unreachable. The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a [timeout](/docs/concepts/architecture/nodes/#condition). Pods may also enter these states when the user attempts graceful deletion of a Pod diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 49009e1268..84ae1addd2 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -382,7 +382,7 @@ with *external metrics*. Using external metrics requires knowledge of your monitoring system; the setup is similar to that required when using custom metrics. External metrics allow you to autoscale your cluster -based on any metric available in your monitoring system. Just provide a `metric` block with a +based on any metric available in your monitoring system. Provide a `metric` block with a `name` and `selector`, as above, and use the `External` metric type instead of `Object`. If multiple time series are matched by the `metricSelector`, the sum of their values is used by the HorizontalPodAutoscaler. diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index 763d9ab996..8d5c06c244 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -23,9 +23,7 @@ Pod Autoscaling does not apply to objects that can't be scaled, for example, Dae The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. -The controller periodically adjusts the number of replicas in a replication controller or deployment -to match the observed average CPU utilization to the target specified by user. - +The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed metrics such as average CPU utilisation, average memory utilisation or any other custom metric to the target specified by the user. @@ -162,7 +160,7 @@ can be fetched, scaling is skipped. This means that the HPA is still capable of scaling up if one or more metrics give a `desiredReplicas` greater than the current value. -Finally, just before HPA scales the target, the scale recommendation is recorded. The +Finally, right before HPA scales the target, the scale recommendation is recorded. The controller considers all recommendations within a configurable window choosing the highest recommendation from within that window. 
This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes. This means that scaledowns will occur gradually, smoothing out the impact of rapidly diff --git a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md index 36e5334f3d..22f929c06f 100644 --- a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md @@ -39,6 +39,7 @@ on general patterns for running stateful applications in Kubernetes. [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/). * Some familiarity with MySQL helps, but this tutorial aims to present general patterns that should be useful for other systems. +* You are using the default namespace or another namespace that does not contain any conflicting objects. @@ -534,10 +535,9 @@ kubectl delete pvc data-mysql-4 * Learn more about [debugging a StatefulSet](/docs/tasks/debug-application-cluster/debug-stateful-set/). * Learn more about [deleting a StatefulSet](/docs/tasks/run-application/delete-stateful-set/). * Learn more about [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/). -* Look in the [Helm Charts repository](https://github.com/kubernetes/charts) +* Look in the [Helm Charts repository](https://artifacthub.io/) for other stateful application examples. - diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md index 0789997309..a724d5b17b 100644 --- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md +++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md @@ -12,10 +12,7 @@ You can use the GCP [Service Catalog Installer](https://github.com/GoogleCloudPl tool to easily install or uninstall Service Catalog on your Kubernetes cluster, linking it to Google Cloud projects. -Service Catalog itself can work with any kind of managed service, not just Google Cloud. - - - +Service Catalog can work with any kind of managed service, not only Google Cloud. ## {{% heading "prerequisites" %}} diff --git a/content/en/docs/tasks/tools/_index.md b/content/en/docs/tasks/tools/_index.md index 4f43a92af7..5f1b517141 100755 --- a/content/en/docs/tasks/tools/_index.md +++ b/content/en/docs/tasks/tools/_index.md @@ -17,9 +17,9 @@ and view logs. For more information including a complete list of kubectl operati kubectl is installable on a variety of Linux platforms, macOS and Windows. Find your preferred operating system below. -- [Install kubectl on Linux](install-kubectl-linux) -- [Install kubectl on macOS](install-kubectl-macos) -- [Install kubectl on Windows](install-kubectl-windows) +- [Install kubectl on Linux](/docs/tasks/tools/install-kubectl-linux) +- [Install kubectl on macOS](/docs/tasks/tools/install-kubectl-macos) +- [Install kubectl on Windows](/docs/tasks/tools/install-kubectl-windows) ## kind diff --git a/content/en/docs/test.md b/content/en/docs/test.md index aadfc9a9e3..ae5bb447f1 100644 --- a/content/en/docs/test.md +++ b/content/en/docs/test.md @@ -113,7 +113,7 @@ mind: two consecutive lists. **The HTML comment needs to be at the left margin.** 2. Numbered lists can have paragraphs or block elements within them. 
- Just indent the content to be the same as the first line of the bullet + Indent the content to be the same as the first line of the bullet point. **This paragraph and the code block line up with the `N` in `Numbered` above.** diff --git a/content/en/docs/tutorials/clusters/apparmor.md b/content/en/docs/tutorials/clusters/apparmor.md index 54c8a0f44c..b220647e62 100644 --- a/content/en/docs/tutorials/clusters/apparmor.md +++ b/content/en/docs/tutorials/clusters/apparmor.md @@ -184,7 +184,7 @@ profile k8s-apparmor-example-deny-write flags=(attach_disconnected) { ``` Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our -nodes. For this example we'll just use SSH to install the profiles, but other approaches are +nodes. For this example we'll use SSH to install the profiles, but other approaches are discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles). ```shell diff --git a/content/en/docs/tutorials/clusters/seccomp.md b/content/en/docs/tutorials/clusters/seccomp.md index 376c349f72..971618cf55 100644 --- a/content/en/docs/tutorials/clusters/seccomp.md +++ b/content/en/docs/tutorials/clusters/seccomp.md @@ -67,8 +67,8 @@ into the cluster. For simplicity, [kind](https://kind.sigs.k8s.io/) can be used to create a single node cluster with the seccomp profiles loaded. Kind runs Kubernetes in Docker, -so each node of the cluster is actually just a container. This allows for files -to be mounted in the filesystem of each container just as one might load files +so each node of the cluster is a container. This allows for files +to be mounted in the filesystem of each container similar to loading files onto a node. {{< codenew file="pods/security/seccomp/kind.yaml" >}} diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md index 4bd6a9a82c..234c455064 100644 --- a/content/en/docs/tutorials/hello-minikube.md +++ b/content/en/docs/tutorials/hello-minikube.md @@ -46,7 +46,7 @@ This tutorial provides a container image that uses NGINX to echo back all the re {{< kat-button >}} {{< note >}} -If you installed minikube locally, run `minikube start`. +If you installed minikube locally, run `minikube start`. Before you run `minikube dashboard`, you should open a new terminal, start `minikube dashboard` there, and then switch back to the main terminal. {{< /note >}} 2. Open the Kubernetes dashboard in a browser: @@ -152,7 +152,7 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/). The application code inside the image `k8s.gcr.io/echoserver` only listens on TCP port 8080. If you used `kubectl expose` to expose a different port, clients could not connect to that other port. -2. View the Service you just created: +2. View the Service you created: ```shell kubectl get services @@ -227,7 +227,7 @@ The minikube tool includes a set of built-in {{< glossary_tooltip text="addons" metrics-server was successfully enabled ``` -3. View the Pod and Service you just created: +3. View the Pod and Service you created: ```shell kubectl get pod,svc -n kube-system diff --git a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md index b8aaaabb5b..a44c4392ca 100644 --- a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md @@ -1083,7 +1083,7 @@ above. 
`Parallel` pod management tells the StatefulSet controller to launch or terminate all Pods in parallel, and not to wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another -Pod. +Pod. This option only affects the behavior for scaling operations. Updates are not affected. {{< codenew file="application/web/web-parallel.yaml" >}} diff --git a/content/es/docs/concepts/configuration/manage-resources-containers.md b/content/es/docs/concepts/configuration/manage-resources-containers.md new file mode 100644 index 0000000000..f936b9107e --- /dev/null +++ b/content/es/docs/concepts/configuration/manage-resources-containers.md @@ -0,0 +1,757 @@ +--- +title: Administrando los recursos de los contenedores +content_type: concept +weight: 40 +feature: + title: Bin packing automático + description: > + Coloca los contenedores automáticamente en base a los recursos solicitados y otras limitaciones, mientras no se afecte la + disponibilidad. Combina cargas críticas y best-effort para mejorar el uso y ahorrar recursos. +--- + + + +Cuando especificas un {{< glossary_tooltip term_id="pod" >}}, opcionalmente puedes especificar +los recursos que necesita un {{< glossary_tooltip text="Contenedor" term_id="container" >}}. +Los recursos que normalmente se definen son CPU y memoria (RAM); pero hay otros. + +Cuando especificas el recurso _request_ para Contenedores en un {{< glossary_tooltip term_id="pod" >}}, +el {{< glossary_tooltip text="Scheduler de Kubernetes " term_id="kube-scheduler" >}} usa esta información para decidir en qué nodo colocar el {{< glossary_tooltip term_id="pod" >}}. +Cuando especificas el recurso _limit_ para un Contenedor, Kubelet impone estos límites, así que el contenedor no +puede utilizar más recursos que el límite que le definimos. Kubelet también reserva al menos la cantidad +especificada en _request_ para el contenedor. + + + + + + +## Peticiones y límites + +Si el nodo donde está corriendo un pod tiene suficientes recursos disponibles, es posible +(y válido) que el {{< glossary_tooltip text="contenedor" term_id="container" >}} utilice más recursos de los especificados en `request`. +Sin embargo, un {{< glossary_tooltip text="contenedor" term_id="container" >}} no está autorizado a utilizar más de lo especificado en `limit`. + +Por ejemplo, si configuras una petición de `memory` de 256 MiB para un {{< glossary_tooltip text="contenedor" term_id="container" >}}, y ese contenedor está +en un {{< glossary_tooltip term_id="pod" >}} colocado en un nodo con 8GiB de memoria y no hay otros {{< glossary_tooltip term_id="pod" >}}, entonces el contenedor puede intentar usar +más RAM. + +Si configuras un límite de `memory` de 4GiB para el contenedor, {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}) + (y +{{< glossary_tooltip text="motor de ejecución del contenedor" term_id="container-runtime" >}}) impone el límite. +El Runtime evita que el {{< glossary_tooltip text="contenedor" term_id="container" >}} use más recursos de los configurados en el límite. Por ejemplo: +cuando un proceso en el {{< glossary_tooltip text="contenedor" term_id="container" >}} intenta consumir más cantidad de memoria de la permitida, +el Kernel del sistema termina el proceso que intentó la utilización de la memoria, con un error de out of memory (OOM). + +Los límites se pueden implementar de forma reactiva (el sistema interviene cuando ve la violación) +o por imposición (el sistema previene al contenedor de exceder el límite). 
Diferentes runtimes pueden implementar las mismas
+restricciones de maneras distintas.
+
+{{< note >}}
+Si un contenedor especifica su propio límite de memoria, pero no especifica una petición de memoria, Kubernetes
+asigna automáticamente una petición de memoria igual al límite. De igual manera, si un contenedor especifica su propio límite de CPU, pero no especifica una petición de CPU, Kubernetes le asigna automáticamente una petición de CPU igual al límite.
+{{< /note >}}
+
+## Tipos de recursos
+
+*CPU* y *memoria* son cada uno un *tipo de recurso*. Un tipo de recurso tiene una unidad base.
+CPU representa capacidad de procesamiento y se especifica en unidades de [CPUs de Kubernetes](#meaning-of-cpu).
+La memoria se especifica en unidades de bytes.
+Si estás usando Kubernetes v1.14 o posterior, puedes especificar recursos _huge page_.
+Las huge pages son una característica específica de Linux en la que el kernel del nodo asigna bloques
+de memoria más grandes que el tamaño de página por defecto.
+
+Por ejemplo, en un sistema donde el tamaño de página por defecto es de 4KiB, podrías
+especificar un límite `hugepages-2Mi: 80Mi`. Si el contenedor intenta asignar
+más de 40 huge pages de 2MiB (un total de 80 MiB), la asignación fallará.
+
+{{< note >}}
+Los recursos `hugepages-*` no se pueden sobreasignar, a diferencia de los recursos de `memory` y `cpu`.
+{{< /note >}}
+
+CPU y memoria se conocen colectivamente como *recursos de computación*, o simplemente como
+*recursos*. Los recursos de computación son cantidades medibles que pueden ser solicitadas, asignadas
+y consumidas. Son distintos de los [recursos de la API](/docs/concepts/overview/kubernetes-api/). Los recursos de la API, como los {{< glossary_tooltip text="Pods" term_id="pod" >}} y los
+[Services](/docs/concepts/services-networking/service/), son objetos que pueden leerse y modificarse
+a través de la API de Kubernetes.
+
+## Peticiones y límites de recursos de Pods y Contenedores
+
+Cada contenedor de un Pod puede especificar uno o más de los siguientes campos:
+
+* `spec.containers[].resources.limits.cpu`
+* `spec.containers[].resources.limits.memory`
+* `spec.containers[].resources.limits.hugepages-<size>`
+* `spec.containers[].resources.requests.cpu`
+* `spec.containers[].resources.requests.memory`
+* `spec.containers[].resources.requests.hugepages-<size>`
+
+Aunque las peticiones y los límites solo pueden especificarse en contenedores individuales, es conveniente hablar
+de las peticiones y los límites de recursos del Pod. Una *petición/límite de recursos de un Pod* para un tipo de recurso particular
+es la suma de las peticiones/límites de ese tipo de cada contenedor del Pod.
+
+## Unidades de recursos en Kubernetes
+
+### Significado de CPU
+
+Los límites y las peticiones de recursos de CPU se miden en unidades de *cpu*.
+Una cpu, en Kubernetes, es equivalente a **1 vCPU/Core** en proveedores de cloud y a **1 hyperthread** en procesadores bare-metal Intel.
+
+Las peticiones fraccionadas están permitidas. Un contenedor con `spec.containers[].resources.requests.cpu` de `0.5` tiene garantizada la mitad
+de CPU que uno que solicita 1 CPU. La expresión `0.1` es equivalente a la expresión `100m`, que puede leerse como "cien milicpus". Algunas personas dicen
+"cien milicores", y se entiende que significa lo mismo. Una solicitud con punto decimal, como `0.1`, es convertida a `100m` por la API, y no se permite
+una precisión mayor que `1m`. Por esta razón, la forma `100m` es la preferida.
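+
+A modo de ilustración, el siguiente fragmento (un esbozo mínimo; el nombre del Pod es hipotético y la imagen reutiliza el ejemplo de esta página) muestra una petición de una décima parte de CPU expresada en milicores:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: cpu-fraccional
+spec:
+  containers:
+  - name: app
+    image: images.my-company.example/app:v4
+    resources:
+      requests:
+        cpu: "100m"   # forma preferida; equivalente a 0.1 cpu
+      limits:
+        cpu: "500m"   # como máximo medio núcleo
+```
+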
+CPU es siempre solicitada como una cantidad absoluta, nunca como una cantidad relativa;
+0.1 es la misma cantidad de cpu en una máquina de un solo núcleo, de dos núcleos o de 48 núcleos.
+
+### Significado de memoria
+
+Los límites y las peticiones de `memoria` se miden en bytes. Puedes expresar la memoria como
+un número entero o como un número decimal usando alguno de estos sufijos:
+E, P, T, G, M, K. También puedes usar los equivalentes en potencia de dos: Ei, Pi, Ti, Gi,
+Mi, Ki. Por ejemplo, los siguientes valores representan lo mismo:
+
+```shell
+128974848, 129e6, 129M, 123Mi
+```
+
+Aquí hay un ejemplo.
+El siguiente {{< glossary_tooltip text="Pod" term_id="pod" >}} tiene dos contenedores. Cada contenedor tiene una petición de 0.25 cpu
+y 64MiB (2<sup>26</sup> bytes) de memoria. Cada contenedor tiene un límite de 0.5 cpu
+y 128MiB de memoria. Se puede decir que el Pod tiene una petición de 0.5 cpu y 128 MiB de memoria,
+y un límite de 1 cpu y 256 MiB de memoria.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: frontend
+spec:
+  containers:
+  - name: app
+    image: images.my-company.example/app:v4
+    resources:
+      requests:
+        memory: "64Mi"
+        cpu: "250m"
+      limits:
+        memory: "128Mi"
+        cpu: "500m"
+  - name: log-aggregator
+    image: images.my-company.example/log-aggregator:v6
+    resources:
+      requests:
+        memory: "64Mi"
+        cpu: "250m"
+      limits:
+        memory: "128Mi"
+        cpu: "500m"
+```
+
+## Cómo son programados los Pods con solicitudes de recursos
+
+Cuando creas un {{< glossary_tooltip text="Pod" term_id="pod" >}}, el {{< glossary_tooltip text="planificador de Kubernetes" term_id="kube-scheduler" >}} selecciona el nodo en el que se ejecutará dicho Pod.
+Cada nodo tiene una capacidad máxima para cada tipo de recurso:
+la cantidad de CPU y memoria de la que dispone para los Pods. El planificador se asegura de que,
+para cada tipo de recurso, la suma de los recursos solicitados por los contenedores programados sea menor que la capacidad del nodo. Cabe mencionar que, aunque el uso real de memoria o de CPU
+en los nodos sea muy bajo, el planificador seguirá rechazando programar un Pod en un nodo si
+la comprobación de capacidad falla. Esto protege contra la escasez de recursos en un nodo
+cuando el uso de recursos crece posteriormente, por ejemplo, durante un pico diario de
+solicitud de recursos.
+
+## Cómo corren los Pods con límites de recursos
+
+Cuando el {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} inicia un {{< glossary_tooltip text="contenedor" term_id="container" >}} de un {{< glossary_tooltip text="Pod" term_id="pod" >}}, pasa los límites de CPU y
+memoria al {{< glossary_tooltip text="runtime del contenedor" term_id="container-runtime" >}}.
+
+Cuando usas Docker:
+
+- El `spec.containers[].resources.requests.cpu` se convierte a su valor interno,
+  el cual es fraccionario, y se multiplica por 1024. El mayor entre este número y
+  2 se usa como valor de la opción
+  [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#cpu-share-constraint)
+  en el comando `docker run`.
+
+- El `spec.containers[].resources.limits.cpu` se convierte a su valor en milicores y
+  se multiplica por 100. El resultado es el tiempo total de CPU que un contenedor puede usar
+  cada 100ms. Un contenedor no puede usar más tiempo de CPU del asignado durante este intervalo.
+
+  {{< note >}}
+  El período por defecto es de 100ms. La resolución mínima de la cuota de CPU es de 1ms.
+  {{< /note >}}
+
+- El `spec.containers[].resources.limits.memory` se convierte a un entero y
+  se usa como valor de la opción
+  [`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints)
+  del comando `docker run`.
+
+Si el {{< glossary_tooltip text="contenedor" term_id="container" >}} excede su límite de memoria, es posible que sea terminado. Si se puede reiniciar,
+el {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} lo reiniciará, como con cualquier otro tipo de fallo en tiempo de ejecución.
+
+Si un contenedor excede su petición de memoria, es probable que su Pod sea
+desalojado cuando el nodo se quede sin memoria.
+
+A un contenedor se le puede permitir o no exceder su límite de CPU durante períodos
+prolongados de tiempo. Sin embargo, no será terminado por un uso excesivo de CPU.
+
+Para saber cuándo un contenedor no puede ser programado o será terminado debido a
+límites de recursos, revisa la sección de [Troubleshooting](#troubleshooting).
+
+### Monitorización del uso de recursos de computación y memoria
+
+El uso de recursos de un Pod se reporta como parte del estado del Pod.
+
+Si hay [herramientas opcionales de monitorización](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
+disponibles en tu clúster, el uso de recursos del Pod puede extraerse directamente de la
+[API de métricas](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api)
+o desde tus herramientas de monitorización.
+
+## Almacenamiento local efímero
+
+{{< feature-state for_k8s_version="v1.10" state="beta" >}}
+
+Los nodos tienen almacenamiento local efímero, respaldado por
+dispositivos de escritura conectados localmente o, a veces, por RAM.
+"Efímero" significa que no se garantiza la durabilidad a largo plazo.
+
+Los Pods usan el almacenamiento local efímero como espacio de trabajo temporal, para caché y para logs.
+El kubelet puede proporcionar espacio de trabajo temporal a los Pods usando el almacenamiento local efímero para
+montar {{< glossary_tooltip term_id="volume" text="volúmenes" >}} [`emptyDir`](/docs/concepts/storage/volumes/#emptydir) en los contenedores.
+
+El kubelet también usa este tipo de almacenamiento para guardar
+[logs de contenedores a nivel de nodo](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level),
+imágenes de contenedores y la capa de escritura de los contenedores.
+
+{{< caution >}}
+Si un nodo falla, los datos de su almacenamiento efímero se pueden perder.
+Tus aplicaciones no pueden esperar ningún SLA de rendimiento (IOPS de disco, por ejemplo)
+del almacenamiento local efímero.
+{{< /caution >}}
+
+Como característica beta, Kubernetes te permite rastrear, reservar y limitar la cantidad
+de almacenamiento local efímero que un Pod puede consumir.
+
+### Configuraciones para almacenamiento local efímero
+
+Kubernetes soporta dos maneras de configurar el almacenamiento local efímero en un nodo:
+{{< tabs name="local_storage_configurations" >}}
+{{% tab name="Single filesystem" %}}
+En esta configuración, colocas todos los tipos de datos (volúmenes `emptyDir`, capa de escritura,
+imágenes de contenedores, logs) en un solo sistema de archivos.
+La manera más efectiva de configurar el kubelet es dedicar este sistema de archivos a los datos de Kubernetes (kubelet).
+
+El kubelet también escribe
+[logs de contenedores a nivel de nodo](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
+y los trata de manera similar al almacenamiento efímero.
+ +Kubelet escribe logs en ficheros dentro del directorio de logs (por defecto `/var/log` +); y tiene un directorio base para otros datos almacenados localmente +(`/var/lib/kubelet` por defecto). + +Por lo general, `/var/lib/kubelet` y `/var/log` están en el sistema de archivos de root, +y Kubelet es diseñado con ese objetivo en mente. + +Tu nodo puede tener tantos otros sistema de archivos, no usados por Kubernetes, +como quieras. +{{% /tab %}} +{{% tab name="Two filesystems" %}} +Tienes un sistema de archivos en el nodo que estás usando para datos efímeros que +provienen de los Pods corriendo: logs, y volúmenes `emptyDir`. +Puedes usar este sistema de archivos para otros datos (por ejemplo: logs del sistema no relacionados + con Kubernetes); estos pueden ser incluso del sistema de archivos root. + +Kubelet también escribe +[logs de contenedores a nivel de nodo](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level) +en el primer sistema de archivos, y trata estos de manera similar al almacenamiento efímero. + +También usas un sistema de archivos distinto, respaldado por un dispositivo de almacenamiento lógico diferente. +En esta configuración, el directorio donde le dices a Kubelet que coloque +las capas de imágenes de los contenedores y capas de escritura es este segundo sistema de archivos. + +El primer sistema de archivos no guarda ninguna capa de imágenes o de escritura. + +Tu nodo puede tener tantos sistemas de archivos, no usados por Kubernetes, como quieras. +{{% /tab %}} +{{< /tabs >}} + +Kubelet puede medir la cantidad de almacenamiento local que se está usando. Esto es posible por: + +- el `LocalStorageCapacityIsolation` + [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) + está habilitado (esta caracterísitca está habilitada por defecto), y +- has configurado el nodo usando una de las configuraciones soportadas + para almacenamiento local efímero.. + +Si tienes una configuración diferente, entonces Kubelet no aplica límites de recursos +para almacenamiento local efímero. + +{{< note >}} +Kubelet rastrea `tmpfs` volúmenes emptyDir como uso de memoria de contenedor, en lugar de +almacenamiento local efímero. +{{< /note >}} + +### Configurando solicitudes y límites para almacenamiento local efímero + +Puedes usar _ephemeral-storage_ para manejar almacenamiento local efímero. Cada contenedor de un Pod puede especificar +uno o más de los siguientes: + +* `spec.containers[].resources.limits.ephemeral-storage` +* `spec.containers[].resources.requests.ephemeral-storage` + +Los límites y solicitudes para `almacenamiento-efímero` son medidos en bytes. Puedes expresar el almacenamiento +como un numero entero o flotante usando los siguientes sufijos: +E, P, T, G, M, K. También puedes usar las siguientes equivalencias: Ei, Pi, Ti, Gi, +Mi, Ki. Por ejemplo, los siguientes representan el mismo valor: + +```shell +128974848, 129e6, 129M, 123Mi +``` + +En el siguiente ejemplo, el Pod tiene dos contenedores. Cada contenedor tiene una petición de 2GiB de almacenamiento local efímero. Cada +contenedor tiene un límite de 4GiB de almacenamiento local efímero. Sin embargo, el Pod tiene una petición de 4GiB de almacenamiento efímero +, y un límite de 8GiB de almacenamiento local efímero. 
+ +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: frontend +spec: + containers: + - name: app + image: images.my-company.example/app:v4 + resources: + requests: + ephemeral-storage: "2Gi" + limits: + ephemeral-storage: "4Gi" + - name: log-aggregator + image: images.my-company.example/log-aggregator:v6 + resources: + requests: + ephemeral-storage: "2Gi" + limits: + ephemeral-storage: "4Gi" +``` + +### Como son programados los Pods con solicitudes de almacenamiento efímero + +Cuando creas un Pod, el planificador de Kubernetes selecciona un nodo para el Pod donde sera creado. +Cada nodo tiene una cantidad máxima de almacenamiento local efímero que puede proveer a los Pods. Para +más información, mira [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable). + +El planificador se asegura de que el total de los recursos solicitados para los contenedores sea menor que la capacidad del nodo. + +### Manejo del consumo de almacenamiento efímero {#resource-emphemeralstorage-consumption} + +Si Kubelet está manejando el almacenamiento efímero local como un recurso, entonces +Kubelet mide el uso de almacenamiento en: + +- volúmenes `emptyDir`, excepto _tmpfs_ volúmenes`emptyDir` +- directorios que guardan logs de nivel de nodo +- capas de escritura de contenedores + +Si un Pod está usando más almacenamiento efímero que el permitido, Kubelet +establece una señal de desalojo que desencadena el desalojo del Pod. + +Para aislamiento a nivel de contenedor, si una capa de escritura del contenedor y +logs excede el límite de uso del almacenamiento, Kubelet marca el Pod para desalojo. + +Para aislamiento a nivel de Pod, Kubelet calcula un límite de almacenamiento +general para el Pod sumando los límites de los contenedores de ese Pod. +En este caso, si la suma del uso de almacenamiento local efímero para todos los contenedores +y los volúmenes `emptyDir` de los Pods excede el límite de almacenamiento general del +Pod, Kubelet marca el Pod para desalojo. + +{{< caution >}} +Si Kubelet no está midiendo el almacenamiento local efímero, entonces el Pod +que excede este límite de almacenamiento, no será desalojado para liberar +el límite del recurso de almacenamiento. + +Sin embargo, si el espacio del sistema de archivos para la capa de escritura del contenedor, +logs a nivel de nodo o volúmenes `emptyDir` decae, el +{{< glossary_tooltip text="taints" term_id="taint" >}} del nodo lanza la desalojo para +cualquier Pod que no tolere dicho taint. + +Mira las [configuraciones soportadas](#configurations-for-local-ephemeral-storage) +para almacenamiento local efímero. +{{< /caution >}} + +Kubelet soporta diferentes maneras de medir el uso de almacenamiento del Pod: + + +{{< tabs name="resource-emphemeralstorage-measurement" >}} +{{% tab name="Periodic scanning" %}} +Kubelet realiza frecuentemente, verificaciones programadas que revisan cada +volumen `emptyDir`, directorio de logs del contenedor, y capa de escritura +del contenedor. + +El escáner mide cuanto espacio está en uso. + +{{< note >}} +En este modo, Kubelet no rastrea descriptores de archivos abiertos +para archivos eliminados. + +Si tú (o un contenedor) creas un archivo dentro de un volumen `emptyDir`, +y algo mas abre ese archivo, y tú lo borras mientras este está abierto, +entonces el inodo para este archivo borrado se mantiene hasta que cierras +el archivo, pero Kubelet no cataloga este espacio como en uso. 
+{{< /note >}} +{{% /tab %}} +{{% tab name="Filesystem project quota" %}} + +{{< feature-state for_k8s_version="v1.15" state="alpha" >}} + +Las cuotas de proyecto están definidas a nivel de sistema operativo +para el manejo de uso de almacenamiento en uso de sistema de archivos. +Con Kubernetes, puedes habilitar las cuotas de proyecto para el uso +de la monitorización del almacenamiento. Asegúrate que el respaldo del +Sistema de archivos de los volúmenes `emptyDir` , en el nodo, provee soporte de +cuotas de proyecto. +Por ejemplo, XFS y ext4fs ofrecen cuotas de proyecto. + +{{< note >}} +Las cuotas de proyecto te permiten monitorear el uso del almacenamiento; no +fuerzan los límites. +{{< /note >}} + +Kubernetes usa IDs de proyecto empezando por `1048576`. Los IDs en uso +son registrados en `/etc/projects` y `/etc/projid`. Si los IDs de proyecto +en este rango son usados para otros propósitos en el sistema, esos IDs +de proyecto deben ser registrados en `/etc/projects` y `/etc/projid` para +que Kubernetes no los use. + +Las cuotas son más rápidas y más precisas que el escáner de directorios. +Cuando un directorio es asignado a un proyecto, todos los ficheros creados +bajo un directorio son creados en ese proyecto, y el kernel simplemente +tiene que mantener rastreados cuántos bloques están en uso por ficheros +en ese proyecto. Si un fichero es creado y borrado, pero tiene un fichero abierto, +continúa consumiendo espacio. El seguimiento de cuotas registra ese espacio +con precisión mientras que los escaneos de directorios pasan por alto +el almacenamiento utilizado por los archivos eliminados + +Si quieres usar cuotas de proyecto, debes: + +* Habilitar el `LocalStorageCapacityIsolationFSQuotaMonitoring=true` + [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) + en la configuración del kubelet. + +* Asegúrese de que el sistema de archivos raíz (o el sistema de archivos en tiempo de ejecución opcional) + tiene las cuotas de proyectos habilitadas. Todos los sistemas de archivos XFS admiten cuotas de proyectos. + Para los sistemas de archivos ext4, debe habilitar la función de seguimiento de cuotas del proyecto + mientras el sistema de archivos no está montado. + + ```bash + # For ext4, with /dev/block-device not mounted + sudo tune2fs -O project -Q prjquota /dev/block-device + ``` + +* Asegúrese de que el sistema de archivos raíz (o el sistema de archivos de tiempo de ejecución opcional) esté + montado con cuotas de proyecto habilitadas. Tanto para XFS como para ext4fs, la opción de montaje + se llama `prjquota`. + +{{% /tab %}} +{{< /tabs >}} + +## Recursos extendidos + +Los recursos extendidos son nombres de recursos calificados fuera del +dominio `kubernetes.io`. Permiten que los operadores de clústers publiciten y los usuarios +consuman los recursos no integrados de Kubernetes. + +Hay dos pasos necesarios para utilizar los recursos extendidos. Primero, el operador del clúster +debe anunciar un Recurso Extendido. En segundo lugar, los usuarios deben solicitar +el Recurso Extendido en los Pods. + +### Manejando recursos extendidos + +#### Recursos extendido a nivel de nodo + +Los recursos extendidos a nivel de nodo están vinculados a los nodos + +##### Device plugin managed resources +Mira [Plugins de +Dispositivos](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) +para percibir como los plugins de dispositivos manejan los recursos +en cada nodo. 
+
+##### Otros recursos
+
+Para anunciar un nuevo recurso extendido a nivel de nodo, el operador del clúster puede
+enviar una solicitud HTTP `PATCH` al servidor de API para especificar la cantidad
+disponible en `status.capacity` para un nodo del clúster. Después de esta
+operación, el `status.capacity` del nodo incluirá el nuevo recurso. El campo
+`status.allocatable` es actualizado automáticamente y de forma asíncrona con el nuevo recurso
+por el {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}. Tenga en cuenta que, debido a que el {{< glossary_tooltip text="planificador" term_id="kube-scheduler" >}}
+utiliza el valor de `status.allocatable` del nodo cuando evalúa la aptitud de un {{< glossary_tooltip text="Pod" term_id="pod" >}}, puede haber un breve
+retraso entre parchear la capacidad del nodo con un nuevo recurso y el momento en que el primer Pod
+que solicita ese recurso pueda ser programado en ese nodo.
+
+**Ejemplo:**
+
+Aquí hay un ejemplo que muestra cómo usar `curl` para formar una solicitud HTTP que
+anuncia cinco recursos "example.com/foo" en el nodo `k8s-node-1`, cuyo nodo master
+es `k8s-master`.
+
+```shell
+curl --header "Content-Type: application/json-patch+json" \
+--request PATCH \
+--data '[{"op": "add", "path": "/status/capacity/example.com~1foo", "value": "5"}]' \
+http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
+```
+
+{{< note >}}
+En la solicitud anterior, `~1` es la codificación del carácter `/`
+en la ruta del parche. El valor de la ruta de la operación en JSON-Patch se interpreta como un
+puntero JSON. Para obtener más detalles, consulte
+[IETF RFC 6901, sección 3](https://tools.ietf.org/html/rfc6901#section-3).
+{{< /note >}}
+
+#### Recursos extendidos a nivel de clúster
+
+Los recursos extendidos a nivel de clúster no están vinculados a los nodos. Suelen estar gestionados
+por extensores del scheduler, que manejan el consumo y la cuota de estos recursos.
+
+Puedes especificar los recursos extendidos que son gestionados por extensores del scheduler en la
+[configuración de políticas del scheduler](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31).
+
+**Ejemplo:**
+
+La siguiente configuración de una política del scheduler indica que el
+recurso extendido a nivel de clúster "example.com/foo" es gestionado
+por el extensor del scheduler.
+
+- El scheduler envía un Pod al extensor del scheduler solo si el Pod solicita "example.com/foo".
+- El campo `ignoredByScheduler` especifica que el scheduler no comprueba el recurso
+  "example.com/foo" en su predicado `PodFitsResources`.
+
+```json
+{
+  "kind": "Policy",
+  "apiVersion": "v1",
+  "extenders": [
+    {
+      "urlPrefix":"<extender-endpoint>",
+      "bindVerb": "bind",
+      "managedResources": [
+        {
+          "name": "example.com/foo",
+          "ignoredByScheduler": true
+        }
+      ]
+    }
+  ]
+}
+```
+
+### Consumiendo recursos extendidos
+
+Los usuarios pueden consumir recursos extendidos en las especificaciones del Pod, igual que la CPU y la memoria.
+El {{< glossary_tooltip text="planificador" term_id="kube-scheduler" >}} se encarga de la contabilidad de recursos para que no se asigne
+simultáneamente a los Pods más de la cantidad disponible.
+
+El servidor de API restringe las cantidades de recursos extendidos a números enteros.
+Ejemplos de cantidades _válidas_ son `3`, `3000m` y `3Ki`. Ejemplos de
+_cantidades no válidas_ son `0.5` y `1500m`.
+
+{{< note >}}
+Los recursos extendidos reemplazan a los Opaque Integer Resources.
+Los usuarios pueden usar cualquier prefijo de dominio distinto de `kubernetes.io`, que está reservado.
+{{< /note >}} + +Para consumir un recurso extendido en un Pod, incluye un nombre de recurso +como clave en `spec.containers[].resources.limits` en las especificaciones del contenedor. + +{{< note >}} +Los Recursos Extendidos no pueden ser sobreescritos, así que solicitudes y límites +deben ser iguales si ambos están presentes en las especificaciones de un contenedor. +{{< /note >}} + +Un pod se programa solo si se satisfacen todas las solicitudes de recursos, incluidas +CPU, memoria y cualquier recurso extendido. El {{< glossary_tooltip text="Pod" term_id="pod" >}} permanece en estado `PENDING` +siempre que no se pueda satisfacer la solicitud de recursos. + +**Ejemplo:** + +El siguiente {{< glossary_tooltip text="Pod" term_id="pod" >}} solicita 2CPUs y 1 "example.com/foo" (un recurso extendido). + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: my-pod +spec: + containers: + - name: my-container + image: myimage + resources: + requests: + cpu: 2 + example.com/foo: 1 + limits: + example.com/foo: 1 +``` + +## Solución de problemas + +### Mis Pods están en estado pendiente con un mensaje de failedScheduling + +Si el {{< glossary_tooltip text="planificador" term_id="kube-scheduler" >}} no puede encontrar ningún nodo donde pueda colocar un {{< glossary_tooltip text="Pod" term_id="pod" >}}, el {{< glossary_tooltip text="Pod" term_id="pod" >}} permanece +no programado hasta que se pueda encontrar un lugar. Se produce un evento cada vez que +el {{< glossary_tooltip text="planificador" term_id="kube-scheduler" >}} no encuentra un lugar para el {{< glossary_tooltip text="Pod" term_id="pod" >}}, como este: + +```shell +kubectl describe pod frontend | grep -A 3 Events +``` +``` +Events: + FirstSeen LastSeen Count From Subobject PathReason Message + 36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others +``` +En el ejemplo anterior, el Pod llamado "frontend" no se puede programar debido a +recursos de CPU insuficientes en el nodo. Mensajes de error similares también pueden sugerir +fallo debido a memoria insuficiente (PodExceedsFreeMemory). En general, si un Pod +está pendiente con un mensaje de este tipo, hay varias cosas para probar: + +- Añadir más nodos al clúster. +- Terminar Pods innecesarios para hacer hueco a los Pods en estado pendiente. +- Compruebe que el Pod no sea más grande que todos los nodos. Por ejemplo, si todos los + los nodos tienen una capacidad de `cpu: 1`, entonces un Pod con una solicitud de` cpu: 1.1` + nunca se programará. + +Puedes comprobar las capacidades del nodo y cantidad utilizada con el comando +`kubectl describe nodes`. Por ejemplo: + +```shell +kubectl describe nodes e2e-test-node-pool-4lw4 +``` +``` +Name: e2e-test-node-pool-4lw4 +[ ... lines removed for clarity ...] +Capacity: + cpu: 2 + memory: 7679792Ki + pods: 110 +Allocatable: + cpu: 1800m + memory: 7474992Ki + pods: 110 +[ ... lines removed for clarity ...] +Non-terminated Pods: (5 in total) + Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits + --------- ---- ------------ ---------- --------------- ------------- + kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%) + kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%) + kube-system kube-proxy-e2e-test-... 
100m (5%) 0 (0%) 0 (0%) 0 (0%) + kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%) + kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) +Allocated resources: + (Total limits may be over 100 percent, i.e., overcommitted.) + CPU Requests CPU Limits Memory Requests Memory Limits + ------------ ---------- --------------- ------------- + 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%) +``` + +EN la salida anterior, puedes ver si una solicitud de Pod mayor que 1120m +CPUs o 6.23Gi de memoria, no cabrán en el nodo. + +Echando un vistazo a la sección `Pods`, puedes ver qué Pods están ocupando espacio +en el nodo. + +La cantidad de recursos disponibles para los pods es menor que la capacidad del nodo, porque +los demonios del sistema utilizan una parte de los recursos disponibles. El campo `allocatable` +[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core) +indica la cantidad de recursos que están disponibles para los Pods. Para más información, mira +[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md). + +La característica [resource quota](/docs/concepts/policy/resource-quotas/) se puede configurar +para limitar la cantidad total de recursos que se pueden consumir. Si se usa en conjunto +con espacios de nombres, puede evitar que un equipo acapare todos los recursos. + +### Mi contenedor está terminado + +Es posible que su contenedor se cancele porque carece de recursos. Para verificar +si un contenedor está siendo eliminado porque está alcanzando un límite de recursos, ejecute +`kubectl describe pod` en el Pod de interés: + +```shell +kubectl describe pod simmemleak-hra99 +``` +``` +Name: simmemleak-hra99 +Namespace: default +Image(s): saadali/simmemleak +Node: kubernetes-node-tf0f/10.240.216.66 +Labels: name=simmemleak +Status: Running +Reason: +Message: +IP: 10.244.2.75 +Replication Controllers: simmemleak (1/1 replicas created) +Containers: + simmemleak: + Image: saadali/simmemleak + Limits: + cpu: 100m + memory: 50Mi + State: Running + Started: Tue, 07 Jul 2015 12:54:41 -0700 + Last Termination State: Terminated + Exit Code: 1 + Started: Fri, 07 Jul 2015 12:54:30 -0700 + Finished: Fri, 07 Jul 2015 12:54:33 -0700 + Ready: False + Restart Count: 5 +Conditions: + Type Status + Ready False +Events: + FirstSeen LastSeen Count From SubobjectPath Reason Message + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a +``` + +En el ejemplo anterior, `Restart Count: 5` indica que el contenedor `simmemleak` +del Pod se reinició cinco veces. 
+ +Puedes ejecutar `kubectl get pod` con la opción `-o go-template=...` para extraer el estado +previos de los Contenedores terminados: + +```shell +kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99 +``` +``` +Container Name: simmemleak +LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]] +``` + +Puedes ver que el Contenedor fué terminado a causa de `reason:OOM Killed`, donde `OOM` indica una falta de memoria. + + + + + + +## {{% heading "whatsnext" %}} + + +* Obtén experiencia práctica [assigning Memory resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/). + +* Obtén experiencia práctica [assigning CPU resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/). + +* Para más detalles sobre la diferencia entre solicitudes y límites, mira + [Resource QoS](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md). + +* Lee [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) referencia de API + +* Lee [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) referencia de API + +* Lee sobre [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) en XFS diff --git a/content/es/docs/concepts/workloads/pods/ephemeral-containers.md b/content/es/docs/concepts/workloads/pods/ephemeral-containers.md index 1b939c969e..a880e1cbc8 100644 --- a/content/es/docs/concepts/workloads/pods/ephemeral-containers.md +++ b/content/es/docs/concepts/workloads/pods/ephemeral-containers.md @@ -100,7 +100,7 @@ efímero a añadir como una lista de `EphemeralContainers`: "apiVersion": "v1", "kind": "EphemeralContainers", "metadata": { - "name": "example-pod" + "name": "example-pod" }, "ephemeralContainers": [{ "command": [ diff --git a/content/es/docs/tasks/manage-kubernetes-objects/_index.md b/content/es/docs/tasks/manage-kubernetes-objects/_index.md new file mode 100644 index 0000000000..5815f0c69e --- /dev/null +++ b/content/es/docs/tasks/manage-kubernetes-objects/_index.md @@ -0,0 +1,5 @@ +--- +title: "Administrar Objetos en Kubernetes" +description: Interactuando con el API de Kubernetes aplicando paradigmas declarativo e imperativo. +weight: 25 +--- diff --git a/content/es/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/es/docs/tasks/manage-kubernetes-objects/declarative-config.md new file mode 100644 index 0000000000..cb1ef2e7e5 --- /dev/null +++ b/content/es/docs/tasks/manage-kubernetes-objects/declarative-config.md @@ -0,0 +1,1016 @@ +--- +title: Administración declarativa de Objetos en Kubernetes usando archivos de Configuración +content_type: task +weight: 10 +--- + + +Objetos en Kubernetes pueden ser creados, actualizados y eliminados utilizando +archivos de configuración almacenados en un directorio. Usando el comando +`kubectl apply` podrá crearlos o actualizarlos de manera recursiva según sea necesario. +Este método retiene cualquier escritura realizada contra objetos activos en el +sistema sin unirlos de regreso a los archivos de configuración. `kubectl diff` le +permite visualizar de manera previa los cambios que `apply` realizará. 
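+
+A modo de vista previa del flujo que se detalla más adelante, un uso típico tiene esta forma (el directorio `configs/` es solo un nombre hipotético de ejemplo):
+
+```shell
+# Visualizar los cambios que se aplicarían, sin aplicarlos todavía
+kubectl diff -f configs/
+
+# Crear o actualizar los objetos definidos en los archivos del directorio
+kubectl apply -f configs/
+```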
+ +## {{% heading "prerequisites" %}} + + +Instale [`kubectl`](/es/docs/tasks/tools/install-kubectl/). + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + + + + +## Modos de administración + +La herramienta `kubectl` soporta tres modos distintos para la administración de objetos: + +* Comandos imperativos +* Configuración de objetos imperativa +* Configuración de objetos declarativa + +Acceda [Administración de objetos de Kubernetes](/docs/concepts/overview/working-with-objects/object-management/) +para una discusión de las ventajas y desventajas de cada modo distinto de administración. + +## Visión general + +La configuración de objetos declarativa requiere una comprensión firme de la +definición y configuración de objetos de Kubernetes. Si aún no lo ha hecho, lea +y complete los siguientes documentos: + +* [Administración de Objetos de Kubernetes usando comandos imperativos](/docs/tasks/manage-kubernetes-objects/imperative-command/) +* [Administración imperativa de los Objetos de Kubernetes usando archivos de Configuración](/docs/tasks/manage-kubernetes-objects/imperative-config/) + +{{< comment >}} +TODO(lmurillo): Update the links above to the spanish versions of these documents once the +localizations become available +{{< /comment >}} + +A continuación la definición de términos usados en este documento: + +- *archivo de configuración de objeto / archivo de configuración*: Un archivo en el + que se define la configuración de un objeto de Kubernetes. Este tema muestra como + utilizar archivos de configuración con `kubectl apply`. Los archivos de configuración + por lo general se almacenan en un sistema de control de versiones, como Git. +- *configuración activa de objeto / configuración activa*: Los valores de configuración + activos de un objeto, según estén siendo observados por el Clúster. Esta configuración + se almacena en el sistema de almacenamiento de Kubernetes, usualmente etcd. +- *escritor de configuración declarativo / escritor declarativo*: Una persona o + componente de software que actualiza a un objeto activo. Los escritores activos a + los que se refiere este tema aplican cambios a los archivos de configuración de objetos + y ejecutan `kubectl apply` para aplicarlos. + +## Como crear objetos + +Utilice `kubectl apply` para crear todos los objetos definidos en los archivos +de configuración existentes en un directorio específico, con excepción de aquellos que +ya existen: + +```shell +kubectl apply -f / +``` + +Esto definirá la anotación `kubectl.kubernetes.io/last-applied-configuration: '{...}'` +en cada objeto. Esta anotación contiene el contenido del archivo de configuración +utilizado para la creación del objeto. + +{{< note >}} +Agregue la opción `-R` para procesar un directorio de manera recursiva. +{{< /note >}} + +El siguiente es un ejemplo de archivo de configuración para un objeto: + +{{< codenew file="application/simple_deployment.yaml" >}} + +Ejecute `kubectl diff` para visualizar el objeto que será creado: + +```shell +kubectl diff -f https://k8s.io/examples/application/simple_deployment.yaml +``` + +{{< note >}} +`diff` utiliza [server-side dry-run](/docs/reference/using-api/api-concepts/#dry-run), +que debe estar habilitado en el `kube-apiserver`. + +Dado que `diff` ejecuta una solicitud de `apply` en el servidor en modo de simulacro (dry-run), +requiere obtener permisos de `PATCH`, `CREATE`, y `UPDATE`. +Vea [Autorización Dry-Run](/docs/reference/using-api/api-concepts#dry-run-authorization) +para más detalles. 
+ +{{< /note >}} + +Cree el objeto usando `kubectl apply`: + +```shell +kubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml +``` + +Despliegue la configuración activa usando `kubectl get`: + +```shell +kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml +``` + +La salida le mostrará que la anotación `kubectl.kubernetes.io/last-applied-configuration` +fue escrita a la configuración activa, y es consistente con los contenidos del archivo +de configuración: + +```yaml +kind: Deployment +metadata: + annotations: + # ... + # Esta es la representación JSON de simple_deployment.yaml + # Fue escrita por kubectl apply cuando el objeto fue creado + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"apps/v1","kind":"Deployment", + "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, + "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, + "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", + "ports":[{"containerPort":80}]}]}}}} + # ... +spec: + # ... + minReadySeconds: 5 + selector: + matchLabels: + # ... + app: nginx + template: + metadata: + # ... + labels: + app: nginx + spec: + containers: + - image: nginx:1.14.2 + # ... + name: nginx + ports: + - containerPort: 80 + # ... + # ... + # ... + # ... +``` + +## Como actualizar objetos + +También puede usar `kubectl apply` para actualizar los objetos definidos en un directorio, +aún cuando esos objetos ya existan en la configuración activa. Con este enfoque logrará +lo siguiente: + +1. Definir los campos que aparecerán en la configuración activa. +2. Eliminar aquellos campos eliminados en el archivo de configuración, de la configuración activa. + +```shell +kubectl diff -f / +kubectl apply -f / +``` + +{{< note >}} +Agregue la opción `-R` para procesar directorios de manera recursiva. +{{< /note >}} + +Este es un ejemplo de archivo de configuración: + +{{< codenew file="application/simple_deployment.yaml" >}} + +Cree el objeto usando `kubectl apply`: + +```shell +kubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml +``` + +{{< note >}} +Con el propósito de ilustrar, el comando anterior se refiere a un único archivo +de configuración en vez de un directorio. +{{< /note >}} + +Despliegue la configuración activa usando `kubectl get`: + +```shell +kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml +``` + +La salida le mostrará que la anotación `kubectl.kubernetes.io/last-applied-configuration` +fue escrita a la configuración activa, y es consistente con los contenidos del archivo +de configuración: + +```yaml +kind: Deployment +metadata: + annotations: + # ... + # Esta es la representación JSON de simple_deployment.yaml + # Fue escrita por kubectl apply cuando el objeto fue creado + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"apps/v1","kind":"Deployment", + "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, + "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, + "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", + "ports":[{"containerPort":80}]}]}}}} + # ... +spec: + # ... + minReadySeconds: 5 + selector: + matchLabels: + # ... + app: nginx + template: + metadata: + # ... + labels: + app: nginx + spec: + containers: + - image: nginx:1.14.2 + # ... + name: nginx + ports: + - containerPort: 80 + # ... 
+      # ...
+    # ...
+  # ...
+```
+
+De manera directa, actualice el campo `replicas` en la configuración activa usando `kubectl scale`.
+En este caso no se usa `kubectl apply`:
+
+```shell
+kubectl scale deployment/nginx-deployment --replicas=2
+```
+
+Despliegue la configuración activa usando `kubectl get`:
+
+```shell
+kubectl get deployment nginx-deployment -o yaml
+```
+
+La salida le muestra que el campo `replicas` ha sido definido en 2, y que la
+anotación `last-applied-configuration` no contiene el campo `replicas`:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  annotations:
+    # ...
+    # Note que la anotación no contiene replicas,
+    # debido a que el objeto no fue actualizado usando apply
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"apps/v1","kind":"Deployment",
+      "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},
+      "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},
+      "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx",
+      "ports":[{"containerPort":80}]}]}}}}
+  # ...
+spec:
+  replicas: 2 # definido por scale
+  # ...
+  minReadySeconds: 5
+  selector:
+    matchLabels:
+      # ...
+      app: nginx
+  template:
+    metadata:
+      # ...
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - image: nginx:1.14.2
+        # ...
+        name: nginx
+        ports:
+        - containerPort: 80
+        # ...
+```
+
+Actualice el archivo de configuración `simple_deployment.yaml` para cambiar el campo `image`
+de `nginx:1.14.2` a `nginx:1.16.1`, y elimine el campo `minReadySeconds`:
+
+{{< codenew file="application/update_deployment.yaml" >}}
+
+Aplique los cambios realizados al archivo de configuración:
+
+```shell
+kubectl diff -f https://k8s.io/examples/application/update_deployment.yaml
+kubectl apply -f https://k8s.io/examples/application/update_deployment.yaml
+```
+
+Despliegue la configuración activa usando `kubectl get`:
+
+```shell
+kubectl get -f https://k8s.io/examples/application/update_deployment.yaml -o yaml
+```
+
+La salida le mostrará los siguientes cambios hechos a la configuración activa:
+
+* El campo `replicas` retiene el valor de 2 definido por `kubectl scale`.
+  Esto es posible ya que el campo fue omitido en el archivo de configuración.
+* El campo `image` ha sido actualizado de `nginx:1.14.2` a `nginx:1.16.1`.
+* La anotación `last-applied-configuration` ha sido actualizada con la nueva imagen.
+* El campo `minReadySeconds` ha sido despejado.
+* La anotación `last-applied-configuration` ya no contiene el campo `minReadySeconds`.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  annotations:
+    # ...
+    # La anotación contiene la imagen actualizada a nginx 1.16.1,
+    # pero no contiene la actualización de las réplicas a 2
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"apps/v1","kind":"Deployment",
+      "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},
+      "spec":{"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},
+      "spec":{"containers":[{"image":"nginx:1.16.1","name":"nginx",
+      "ports":[{"containerPort":80}]}]}}}}
+  # ...
+spec:
+  replicas: 2 # Definido por `kubectl scale`. Ignorado por `kubectl apply`.
+  # minReadySeconds fue despejado por `kubectl apply`
+  # ...
+  selector:
+    matchLabels:
+      # ...
+      app: nginx
+  template:
+    metadata:
+      # ...
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - image: nginx:1.16.1 # Definido por `kubectl apply`
+        # ...
+        name: nginx
+        ports:
+        - containerPort: 80
+        # ...
+      # ...
+    # ...
+  # ...
+```
+
+{{< warning >}}
+No se puede combinar `kubectl apply` con comandos de configuración imperativa de objetos
+como `create` y `replace`. Esto se debe a que `create`
+y `replace` no retienen la anotación `kubectl.kubernetes.io/last-applied-configuration`
+que `kubectl apply` utiliza para calcular los cambios por realizar.
+{{< /warning >}}
+
+## Cómo eliminar objetos
+
+Hay dos opciones diferentes para eliminar objetos gestionados por `kubectl apply`.
+
+### Manera recomendada: `kubectl delete -f <archivo>`
+
+La manera recomendada de eliminar objetos de manera manual es utilizando el comando
+imperativo, ya que es más explícito en relación a lo que será eliminado, y es
+menos probable que se elimine algo que el usuario no tenía la intención de eliminar.
+
+```shell
+kubectl delete -f <archivo>
+```
+
+### Manera alternativa: `kubectl apply -f <directorio/> --prune -l etiqueta=deseada`
+
+Únicamente utilice esta opción si está seguro de saber lo que está haciendo.
+
+{{< warning >}}
+`kubectl apply --prune` se encuentra aún en alpha, y cambios incompatibles con versiones previas
+podrían ser introducidos en lanzamientos futuros.
+{{< /warning >}}
+
+{{< warning >}}
+Sea cuidadoso(a) al usar este comando, para evitar eliminar objetos
+de manera no intencional.
+{{< /warning >}}
+
+Como una alternativa a `kubectl delete`, puede usar `kubectl apply` para identificar objetos a ser
+eliminados, luego de que sus archivos de configuración hayan sido eliminados del directorio. El comando `apply` con `--prune`
+consulta al servidor de API por todos los objetos que coincidan con un grupo de etiquetas, e intenta relacionar
+la configuración obtenida de los objetos activos contra los objetos según sus archivos de configuración.
+Si un objeto coincide con la consulta, no tiene un archivo de configuración en el directorio, pero sí
+tiene una anotación `last-applied-configuration`, entonces será eliminado.
+
+{{< comment >}}
+TODO(pwittrock): We need to change the behavior to prevent the user from running apply on subdirectories unintentionally.
+{{< /comment >}}
+
+```shell
+kubectl apply -f <directorio/> --prune -l <etiquetas>
+```
+
+{{< warning >}}
+`apply` con `--prune` debería ejecutarse únicamente contra el directorio
+raíz que contiene los archivos de configuración. Ejecutarlo contra subdirectorios
+podría causar que objetos sean eliminados de manera no intencional, si son retornados en la
+consulta por selección de etiqueta usando `-l <etiquetas>` y no existen en el subdirectorio.
+{{< /warning >}}
+
+## Cómo visualizar un objeto
+
+Puede usar `kubectl get` con `-o yaml` para ver la configuración de objetos activos:
+
+```shell
+kubectl get -f <archivo|url> -o yaml
+```
+
+## Cómo apply calcula las diferencias y une los cambios
+
+{{< caution >}}
+Un *patch* (parche) es una operación de actualización con alcance a campos específicos
+de un objeto, y no al objeto completo. Esto permite actualizar únicamente grupos de campos
+específicos en un objeto sin tener que leer el objeto primero.
+{{< /caution >}}
+
+Cuando `kubectl apply` actualiza la configuración activa para un objeto, lo hace enviando
+una solicitud de patch al servidor de API. El patch define actualizaciones para campos
+específicos en la configuración del objeto activo. El comando `kubectl apply` calcula esta solicitud
+de patch usando el archivo de configuración, la configuración activa, y la anotación `last-applied-configuration`
+almacenada en la configuración activa.
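+
+Si desea inspeccionar el contenido actual de la anotación `kubectl.kubernetes.io/last-applied-configuration`
+que `kubectl apply` usa como base para este cálculo, puede hacerlo con el subcomando `view-last-applied`
+de `kubectl apply`. El siguiente es solo un ejemplo ilustrativo; el nombre `nginx-deployment` corresponde
+al Deployment de los ejemplos anteriores:
+
+```shell
+# muestra la última configuración aplicada al objeto
+kubectl apply view-last-applied deployment/nginx-deployment -o yaml
+```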
+
+### Calculando la unión de un patch
+
+El comando `kubectl apply` escribe los contenidos del archivo de configuración en la anotación
+`kubectl.kubernetes.io/last-applied-configuration`. Esto es usado para identificar aquellos campos
+que han sido eliminados del archivo de configuración y deben ser limpiados. Los siguientes pasos
+son usados para calcular qué campos deben ser eliminados o definidos:
+
+1. Cálculo de campos por eliminar. Estos son los campos presentes en `last-applied-configuration` pero ausentes en el archivo de configuración.
+2. Cálculo de campos por agregar o definir. Estos son los campos presentes en el archivo de configuración, con valores inconsistentes con la configuración activa.
+
+A continuación, un ejemplo. Suponga que este es el archivo de configuración para un objeto de tipo Deployment:
+
+{{< codenew file="application/update_deployment.yaml" >}}
+
+También, suponga que esta es la configuración activa para ese mismo objeto de tipo Deployment:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  annotations:
+    # ...
+    # tome nota de que la anotación no contiene un valor para replicas
+    # dado que no fue actualizado usando el comando apply
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"apps/v1","kind":"Deployment",
+      "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},
+      "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},
+      "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx",
+      "ports":[{"containerPort":80}]}]}}}}
+  # ...
+spec:
+  replicas: 2 # definidas por scale
+  # ...
+  minReadySeconds: 5
+  selector:
+    matchLabels:
+      # ...
+      app: nginx
+  template:
+    metadata:
+      # ...
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - image: nginx:1.14.2
+        # ...
+        name: nginx
+        ports:
+        - containerPort: 80
+        # ...
+```
+
+Estos son los cálculos de unión que serían realizados por `kubectl apply`:
+
+1. Calcular los campos por eliminar, leyendo los valores de `last-applied-configuration`
+   y comparándolos con los valores en el archivo de configuración.
+   Limpiar los campos definidos explícitamente como null en el archivo de configuración
+   sin tomar en cuenta si se encuentran presentes en la anotación `last-applied-configuration`.
+   En este ejemplo, `minReadySeconds` aparece en la anotación
+   `last-applied-configuration` pero no aparece en el archivo de configuración.
+   **Acción:** Limpiar `minReadySeconds` de la configuración activa.
+2. Calcular los campos por definir, al leer los valores del archivo de configuración
+   y compararlos con los valores en la configuración activa. En este ejemplo, el valor `image`
+   en el archivo de configuración no coincide con el valor en la configuración activa.
+   **Acción:** Definir el campo `image` en la configuración activa.
+3. Definir el valor de la anotación `last-applied-configuration` para que sea consistente
+   con el archivo de configuración.
+4. Unir los resultados de 1, 2 y 3, en una única solicitud de patch para enviar al servidor de API.
+
+Esta es la configuración activa como resultado de esta unión:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  annotations:
+    # ...
+    # La anotación contiene la imagen actualizada a nginx 1.16.1,
+    # pero no contiene la actualización a 2 replicas
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"apps/v1","kind":"Deployment",
+      "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},
+      "spec":{"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},
+      "spec":{"containers":[{"image":"nginx:1.16.1","name":"nginx",
+      "ports":[{"containerPort":80}]}]}}}}
+  # ...
+spec:
+  selector:
+    matchLabels:
+      # ...
+      app: nginx
+  replicas: 2 # Definido por `kubectl scale`. Ignorado por `kubectl apply`.
+  # minReadySeconds eliminado por `kubectl apply`
+  # ...
+  template:
+    metadata:
+      # ...
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - image: nginx:1.16.1 # Definido por `kubectl apply`
+        # ...
+        name: nginx
+        ports:
+        - containerPort: 80
+        # ...
+      # ...
+    # ...
+  # ...
+```
+
+### Cómo se unen los diferentes tipos de campos
+
+La manera en la que los campos en un archivo de configuración son unidos con la
+configuración activa depende del tipo de campo. Existen varios tipos de campos:
+
+- *primitivo*: Campos de cadena de texto (string), enteros (integer), o lógicos (boolean).
+  Por ejemplo, `image` y `replicas` son campos de tipo primitivo. **Acción:** Reemplazarlos.
+
+- *mapa*, también llamado *objeto*: Campo de tipo mapa o un tipo
+  complejo que contiene sub-campos. Por ejemplo, `labels`,
+  `annotations`, `spec` y `metadata` son todos mapas. **Acción:** Unir los elementos o sub-campos.
+
+- *lista*: Campos que contienen una lista de elementos que pueden ser de tipo primitivo o mapa.
+  Como ejemplos, `containers`, `ports`, y `args` son listas. **Acción:** Varía.
+
+Cuando `kubectl apply` actualiza un campo de tipo mapa o lista, típicamente no reemplaza
+el campo completo, sino que actualiza los sub-elementos individuales.
+Por ejemplo, cuando se hace una unión del campo `spec` en un Deployment, el `spec`
+completo no es reemplazado, por el contrario, únicamente los sub-campos de `spec` como
+`replicas` son comparados y unidos.
+
+### Uniendo cambios en campos primitivos
+
+Los campos primitivos son limpiados o reemplazados.
+
+{{< note >}}
+`-` indica "no aplica" debido a que el valor no es utilizado.
+{{< /note >}}
+
+| Campo en el archivo de configuración | Campo en la configuración activa | Campo en last-applied-configuration | Acción |
+|--------------------------------------|----------------------------------|-------------------------------------|--------|
+| Sí | Sí | - | Define el valor activo al valor del archivo de configuración. |
+| Sí | No | - | Define el valor activo a la configuración local. |
+| No | - | Sí | Elimina el campo de la configuración activa. |
+| No | - | No | No hace nada. Mantiene el valor activo. |
+
+### Uniendo cambios en campos de un mapa
+
+Los campos que conjuntamente representan un mapa, son unidos al comparar cada uno de los subcampos o elementos del mapa:
+
+{{< note >}}
+`-` indica "no aplica" debido a que el valor no es utilizado.
+{{< /note >}}
+
+| Propiedad en archivo de configuración | Propiedad en configuración activa | Campo en last-applied-configuration | Acción |
+|---------------------------------------|-----------------------------------|-------------------------------------|--------|
+| Sí | Sí | - | Comparar valores de sub-propiedades. |
+| Sí | No | - | Usar la configuración local. |
+| No | - | Sí | Eliminar de la configuración activa. |
+| No | - | No | No hacer nada. Mantener el valor activo. |
+
+### Uniendo cambios en campos de tipo lista
+
+La unión de cambios en una lista utiliza una de tres posibles estrategias:
+
+* Reemplazar la lista si todos sus elementos son primitivos.
+* Unir elementos individuales en una lista de elementos complejos.
+* Unir una lista de elementos primitivos.
+
+La estrategia a utilizar se define con base en cada campo.
+
+#### Reemplazar una lista si todos sus elementos son primitivos
+
+Trata la lista como si fuese un campo primitivo. Reemplaza o elimina la lista completa.
+Esto preserva el orden de los elementos.
+
+
+**Ejemplo:** Use `kubectl apply` para actualizar el campo `args` de un contenedor en un Pod.
+Esto define el valor de `args` en la configuración activa al valor del archivo de configuración.
+Cualquier elemento de `args` que haya sido previamente agregado a la configuración activa se perderá.
+El orden de los elementos definidos en `args` en el archivo de configuración será conservado
+en la configuración activa.
+
+```yaml
+# valor en last-applied-configuration
+  args: ["a", "b"]
+
+# valores en archivo de configuración
+  args: ["a", "c"]
+
+# configuración activa
+  args: ["a", "b", "d"]
+
+# resultado posterior a la unión
+  args: ["a", "c"]
+```
+
+**Explicación:** La unión utilizó los valores del archivo de configuración para definir los nuevos valores de la lista.
+
+#### Unir elementos individuales en una lista de elementos complejos
+
+Trata la lista como un mapa, y trata cada campo específico de cada elemento como una llave.
+Agrega, elimina o actualiza elementos individuales. Esta operación no conserva el orden.
+
+Esta estrategia de unión utiliza una etiqueta especial en cada campo llamada `patchMergeKey`. La etiqueta
+`patchMergeKey` es definida para cada campo en el código fuente de Kubernetes:
+[types.go](https://github.com/kubernetes/api/blob/d04500c8c3dda9c980b668c57abc2ca61efcf5c4/core/v1/types.go#L2747).
+Al unir una lista de mapas, el campo especificado en `patchMergeKey` para el elemento dado
+se utiliza como la llave del mapa para ese elemento.
+
+**Ejemplo:** Utilice `kubectl apply` para actualizar el campo `containers` de un PodSpec.
+Esto une la lista como si fuese un mapa donde cada elemento utiliza `name` como llave.
+
+```yaml
+# valor en last-applied-configuration
+  containers:
+  - name: nginx
+    image: nginx:1.16
+  - name: nginx-helper-a # llave: nginx-helper-a; será eliminado en el resultado
+    image: helper:1.3
+  - name: nginx-helper-b # llave: nginx-helper-b; será conservado
+    image: helper:1.3
+
+# valor en archivo de configuración
+  containers:
+  - name: nginx
+    image: nginx:1.16
+  - name: nginx-helper-b
+    image: helper:1.3
+  - name: nginx-helper-c # llave: nginx-helper-c; será agregado en el resultado
+    image: helper:1.3
+
+# configuración activa
+  containers:
+  - name: nginx
+    image: nginx:1.16
+  - name: nginx-helper-a
+    image: helper:1.3
+  - name: nginx-helper-b
+    image: helper:1.3
+    args: ["run"] # Campo será conservado
+  - name: nginx-helper-d # llave: nginx-helper-d; será conservado
+    image: helper:1.3
+
+# resultado posterior a la unión
+  containers:
+  - name: nginx
+    image: nginx:1.16
+    # Elemento nginx-helper-a fue eliminado
+  - name: nginx-helper-b
+    image: helper:1.3
+    args: ["run"] # Campo fue conservado
+  - name: nginx-helper-c # Elemento fue agregado
+    image: helper:1.3
+  - name: nginx-helper-d # Elemento fue conservado
+    image: helper:1.3
+```
+
+**Explicación:**
+
+- El contenedor llamado "nginx-helper-a" fue eliminado al no aparecer ningún
+  contenedor llamado "nginx-helper-a" en el archivo de configuración.
+- El contenedor llamado "nginx-helper-b" mantiene los cambios existentes en `args`
+  en la configuración activa. `kubectl apply` pudo identificar que
+  el contenedor "nginx-helper-b" en la configuración activa es el mismo
+  "nginx-helper-b" que aparece en el archivo de configuración, aun teniendo diferentes
+  valores en los campos (no existe `args` en el archivo de configuración). Esto sucede
+  debido a que el valor del campo `patchMergeKey` (name) es idéntico en ambos.
+- El contenedor llamado "nginx-helper-c" fue agregado ya que no existe ningún contenedor
+  con ese nombre en la configuración activa, pero sí existe uno con ese nombre
+  en el archivo de configuración.
+- El contenedor llamado "nginx-helper-d" fue conservado debido a que no aparece
+  ningún elemento con ese nombre en last-applied-configuration.
+
+#### Unir una lista de elementos primitivos
+
+A partir de Kubernetes 1.5, la unión de listas de elementos primitivos no es soportada.
+
+{{< note >}}
+La etiqueta `patchStrategy` en [types.go](https://github.com/kubernetes/api/blob/d04500c8c3dda9c980b668c57abc2ca61efcf5c4/core/v1/types.go#L2748) es la que
+determina cuál de las estrategias aplica para cualquier campo en particular.
+Para campos de tipo lista, el campo será reemplazado cuando no exista una especificación de `patchStrategy`.
+{{< /note >}}
+
+{{< comment >}}
+TODO(pwittrock): Uncomment this for 1.6
+
+- Treat the list as a set of primitives.  Replace or delete individual
+  elements.  Does not preserve ordering.  Does not preserve duplicates.
+
+**Example:** Using apply to update the `finalizers` field of ObjectMeta
+keeps elements added to the live configuration.  Ordering of finalizers
+is lost.
+{{< /comment >}}
+
+## Valores de campo por defecto
+
+El servidor de API define algunos campos con sus valores por defecto si no son especificados
+al momento de crear un objeto.
+
+Aquí puede ver un archivo de configuración para un Deployment. Este archivo no especifica
+el campo `strategy`:
+
+{{< codenew file="application/simple_deployment.yaml" >}}
+
+Cree un nuevo objeto usando `kubectl apply`:
+
+```shell
+kubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml
+```
+
+Muestre la configuración activa usando `kubectl get`:
+
+```shell
+kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml
+```
+
+La salida muestra que el servidor de API definió varios campos con los valores por defecto
+en la configuración activa. Estos campos no fueron especificados en el archivo de
+configuración.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+# ...
+spec:
+  selector:
+    matchLabels:
+      app: nginx
+  minReadySeconds: 5
+  replicas: 1 # valor por defecto definido por apiserver
+  strategy:
+    rollingUpdate: # valor por defecto definido por apiserver - derivado de strategy.type
+      maxSurge: 1
+      maxUnavailable: 1
+    type: RollingUpdate # valor por defecto definido por apiserver
+  template:
+    metadata:
+      creationTimestamp: null
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - image: nginx:1.14.2
+        imagePullPolicy: IfNotPresent # valor por defecto definido por apiserver
+        name: nginx
+        ports:
+        - containerPort: 80
+          protocol: TCP # valor por defecto definido por apiserver
+        resources: {} # valor por defecto definido por apiserver
+        terminationMessagePath: /dev/termination-log # valor por defecto definido por apiserver
+      dnsPolicy: ClusterFirst # valor por defecto definido por apiserver
+      restartPolicy: Always # valor por defecto definido por apiserver
+      securityContext: {} # valor por defecto definido por apiserver
+      terminationGracePeriodSeconds: 30 # valor por defecto definido por apiserver
+# ...
+```
+
+En una solicitud de patch, los campos definidos a valores por defecto no son redefinidos,
+a excepción de cuando hayan sido limpiados de manera explícita como parte de la solicitud de patch. Esto puede
+causar comportamientos no esperados para campos cuyo valor por defecto se basa en los valores
+de otros campos. Cuando el otro campo cambia, el valor por defecto derivado de él no será actualizado
+de no ser que sea limpiado de manera explícita.
+
+Por esta razón, se recomienda que algunos campos que reciben un valor por defecto del
+servidor sean definidos de manera explícita en los archivos de configuración, aun cuando
+el valor definido sea idéntico al valor por defecto. Esto facilita la identificación
+de valores conflictivos que podrían no ser revertidos a valores por defecto por parte
+del servidor.
+
+**Ejemplo:**
+
+```yaml
+# last-applied-configuration
+spec:
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2
+        ports:
+        - containerPort: 80
+
+# archivo de configuración
+spec:
+  strategy:
+    type: Recreate # valor actualizado
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2
+        ports:
+        - containerPort: 80
+
+# configuración activa
+spec:
+  strategy:
+    type: RollingUpdate # valor por defecto
+    rollingUpdate: # valor por defecto derivado del campo type
+      maxSurge: 1
+      maxUnavailable: 1
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2
+        ports:
+        - containerPort: 80
+
+# resultado posterior a la unión - ERROR!
+spec:
+  strategy:
+    type: Recreate # valor actualizado: incompatible con RollingUpdate
+    rollingUpdate: # valor por defecto: incompatible con "type: Recreate"
+      maxSurge: 1
+      maxUnavailable: 1
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2
+        ports:
+        - containerPort: 80
+```
+
+**Explicación:**
+
+1. El usuario crea un Deployment sin definir `strategy.type`.
+2. El servidor define `strategy.type` a su valor por defecto de `RollingUpdate` y
+   agrega los valores por defecto a `strategy.rollingUpdate`.
+3. El usuario cambia `strategy.type` a `Recreate`. Los valores de `strategy.rollingUpdate`
+   se mantienen en su configuración por defecto; sin embargo, el servidor espera que se limpien.
+   Si los valores de `strategy.rollingUpdate` hubiesen sido definidos inicialmente en el archivo
+   de configuración, hubiese sido más claro que requerían ser eliminados.
+4. Apply fallará debido a que `strategy.rollingUpdate` no fue eliminado. El campo `strategy.rollingUpdate`
+   no puede estar definido si el valor de `strategy.type` es `Recreate`.
+
+Recomendación: Estos campos deberían ser definidos de manera explícita en el archivo de configuración:
+
+- Etiquetas de Selectors y PodTemplate en cargas de trabajo como Deployment, StatefulSet, Job, DaemonSet,
+  ReplicaSet, y ReplicationController
+- Estrategia de rollout para un Deployment
+
+### Cómo limpiar campos definidos a valores por defecto por el servidor, o definidos por otros escritores
+
+Los campos que no aparecen en el archivo de configuración pueden ser limpiados definiendo su valor
+como `null` y aplicando luego el archivo de configuración.
+Para los campos definidos a valores por defecto por el servidor, esto provoca que se restablezcan
+a sus valores por defecto.
+
+## Cómo cambiar el propietario de un campo entre un archivo de configuración y un escritor imperativo
+
+Estos son los únicos métodos que debe usar para cambiar un campo individual de un objeto:
+
+- Usando `kubectl apply`.
+- Escribiendo de manera directa en la configuración activa sin modificar el archivo de configuración:
+por ejemplo, usando `kubectl scale`.
+
+### Cambiando el propietario de un campo de un escritor imperativo a un archivo de configuración
+
+Añada el campo al archivo de configuración, y no realice nuevas actualizaciones a la configuración
+activa que no sucedan por medio de `kubectl apply`.
+
+### Cambiando el propietario de un campo de un archivo de configuración a un escritor imperativo
+
+A partir de Kubernetes 1.5, cambiar un campo que ha sido definido por medio de un
+archivo de configuración para que sea modificado por un escritor imperativo requiere
+pasos manuales:
+
+- Eliminar el campo del archivo de configuración.
+- Eliminar el campo de la anotación `kubectl.kubernetes.io/last-applied-configuration` en el objeto activo.
+
+## Cambiando los métodos de gestión
+
+Los objetos en Kubernetes deberían ser gestionados utilizando únicamente un método
+a la vez. Alternar de un método a otro es posible, pero es un proceso manual.
+
+{{< note >}}
+Está bien usar la eliminación imperativa junto con la gestión declarativa.
+{{< /note >}}
+
+{{< comment >}}
+TODO(pwittrock): We need to make using imperative commands with
+declarative object configuration work so that it doesn't write the
+fields to the annotation, and instead. Then add this bullet point.
+
+- using imperative commands with declarative configuration to manage where each manages different fields.
+{{< /comment >}}
+
+### Migrando de gestión imperativa con comandos a configuración declarativa de objetos
+
+Migrar de la gestión imperativa utilizando comandos a la gestión declarativa de objetos
+requiere varios pasos manuales:
+
+1. Exporte el objeto activo a un archivo local de configuración:
+
+   ```shell
+   kubectl get <tipo>/<nombre> -o yaml > <tipo>_<nombre>.yaml
+   ```
+
+1. Elimine de manera manual el campo `status` del archivo de configuración.
+
+   {{< note >}}
+   Este paso es opcional, ya que `kubectl apply` no actualiza el campo `status`
+   aunque esté presente en el archivo de configuración.
+   {{< /note >}}
+
+1. Defina la anotación `kubectl.kubernetes.io/last-applied-configuration` en el objeto:
+
+   ```shell
+   kubectl replace --save-config -f <tipo>_<nombre>.yaml
+   ```
+
+1. Modifique el proceso para usar `kubectl apply` para gestionar el objeto de manera exclusiva.
+
+{{< comment >}}
+TODO(pwittrock): Why doesn't export remove the status field?  Seems like it should.
+{{< /comment >}}
+
+### Migrando de gestión imperativa de la configuración de objetos a gestión declarativa
+
+1. Defina la anotación `kubectl.kubernetes.io/last-applied-configuration` en el objeto:
+
+   ```shell
+   kubectl replace --save-config -f <tipo>_<nombre>.yaml
+   ```
+
+1. Modifique el proceso para usar `kubectl apply` para gestionar el objeto de manera exclusiva.
+
+## Definiendo los selectores para el controlador y las etiquetas de PodTemplate
+
+{{< warning >}}
+Se desaconseja encarecidamente actualizar los selectores en controladores.
+{{< /warning >}}
+
+La forma recomendada es definir una etiqueta de PodTemplate única e inmutable, usada
+únicamente por el selector del controlador y sin ningún otro significado semántico.
+
+**Ejemplo:**
+
+```yaml
+selector:
+  matchLabels:
+    controller-selector: "apps/v1/deployment/nginx"
+template:
+  metadata:
+    labels:
+      controller-selector: "apps/v1/deployment/nginx"
+```
+
+## {{% heading "whatsnext" %}}
+
+
+* [Administración de Objetos de Kubernetes usando comandos imperativos](/docs/tasks/manage-kubernetes-objects/imperative-command/)
+* [Administración imperativa de los Objetos de Kubernetes usando archivos de configuración](/docs/tasks/manage-kubernetes-objects/imperative-config/)
+* [Referencia del comando Kubectl](/docs/reference/generated/kubectl/kubectl-commands/)
+* [Referencia de la API de Kubernetes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
+
+
diff --git a/content/es/docs/tasks/run-application/run-stateless-application-deployment.md b/content/es/docs/tasks/run-application/run-stateless-application-deployment.md
index 4bbe221adf..f6e82d85a9 100644
--- a/content/es/docs/tasks/run-application/run-stateless-application-deployment.md
+++ b/content/es/docs/tasks/run-application/run-stateless-application-deployment.md
@@ -49,7 +49,6 @@ Puedes correr una aplicación creando un `deployment` de Kubernetes, y puedes de
 El resultado es similar a esto:
-    user@computer:~/website$ kubectl describe deployment nginx-deployment
     Name:     nginx-deployment
     Namespace:    default
     CreationTimestamp:  Tue, 30 Aug 2016 18:11:37 -0700
diff --git a/content/es/examples/application/simple_deployment.yaml b/content/es/examples/application/simple_deployment.yaml
new file mode 100644
index 0000000000..d9c74af8c5
--- /dev/null
+++ b/content/es/examples/application/simple_deployment.yaml
@@ -0,0 +1,19 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+spec:
+  selector:
+    matchLabels:
+      app: nginx
+  minReadySeconds: 5
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - 
name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 diff --git a/content/es/examples/application/update_deployment.yaml b/content/es/examples/application/update_deployment.yaml new file mode 100644 index 0000000000..7230cc4323 --- /dev/null +++ b/content/es/examples/application/update_deployment.yaml @@ -0,0 +1,18 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment +spec: + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.16.1 # actualice el valor de image + ports: + - containerPort: 80 diff --git a/content/id/docs/concepts/cluster-administration/cloud-providers.md b/content/id/docs/concepts/cluster-administration/cloud-providers.md index 9a32af1eb8..4eb6a474b8 100644 --- a/content/id/docs/concepts/cluster-administration/cloud-providers.md +++ b/content/id/docs/concepts/cluster-administration/cloud-providers.md @@ -11,7 +11,7 @@ Laman ini akan menjelaskan bagaimana cara mengelola Kubernetes yang berjalan pad ### Kubeadm -[Kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) merupakan salah satu cara yang banyak digunakan untuk membuat klaster Kubernetes. +[Kubeadm](/docs/reference/setup-tools/kubeadm/) merupakan salah satu cara yang banyak digunakan untuk membuat klaster Kubernetes. Kubeadm memiliki beragam opsi untuk mengatur konfigurasi spesifik untuk penyedia layanan cloud. Salah satu contoh yang biasa digunakan pada penyedia cloud *in-tree* yang dapat diatur dengan kubeadm adalah sebagai berikut: ```yaml @@ -347,4 +347,4 @@ Penyedia layanan IBM Cloud Kubernetes Service memanfaatkan Kubernetes-native *pe ### Nama Node Penyedia layanan cloud Baidu menggunakan alamat IP privat dari *node* (yang ditentukan oleh kubelet atau menggunakan `--hostname-override`) sebagai nama dari objek Kubernetes Node. -Perlu diperhatikan bahwa nama Kubernetes Node harus sesuai dengan alamat IP privat dari Baidu VM. \ No newline at end of file +Perlu diperhatikan bahwa nama Kubernetes Node harus sesuai dengan alamat IP privat dari Baidu VM. diff --git a/content/id/docs/concepts/workloads/pods/ephemeral-containers.md b/content/id/docs/concepts/workloads/pods/ephemeral-containers.md index e952bdd19b..2d0c515859 100644 --- a/content/id/docs/concepts/workloads/pods/ephemeral-containers.md +++ b/content/id/docs/concepts/workloads/pods/ephemeral-containers.md @@ -106,7 +106,7 @@ deskripsikan kontainer sementara untuk ditambahkan dalam daftar "apiVersion": "v1", "kind": "EphemeralContainers", "metadata": { - "name": "example-pod" + "name": "example-pod" }, "ephemeralContainers": [{ "command": [ diff --git a/content/id/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/id/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index 8a345296a3..6bbf23b53e 100644 --- a/content/id/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/id/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -134,7 +134,7 @@ tidak didukung oleh kubeadm. ### Informasi lebih lanjut -Untuk informasi lebih lanjut mengenai argumen-argumen `kubeadm init`, lihat [panduan referensi kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/). +Untuk informasi lebih lanjut mengenai argumen-argumen `kubeadm init`, lihat [panduan referensi kubeadm](/docs/reference/setup-tools/kubeadm/). 
Untuk daftar pengaturan konfigurasi yang lengkap, lihat [dokumentasi berkas konfigurasi](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file). @@ -569,7 +569,7 @@ opsinya. * Pastikan klaster berjalan dengan benar menggunakan [Sonobuoy](https://github.com/heptio/sonobuoy) * Lihat [Memperbaharui klaster kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) untuk detail mengenai pembaruan klaster menggunakan `kubeadm`. -* Pelajari penggunaan `kubeadm` lebih lanjut pada [dokumentasi referensi kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm) +* Pelajari penggunaan `kubeadm` lebih lanjut pada [dokumentasi referensi kubeadm](/docs/reference/setup-tools/kubeadm) * Pelajari lebih lanjut mengenai [konsep-konsep](/docs/concepts/) Kubernetes dan [`kubectl`](/docs/user-guide/kubectl-overview/). * Lihat halaman [Cluster Networking](/id/docs/concepts/cluster-administration/networking/) untuk daftar _add-on_ jaringan Pod yang lebih banyak. diff --git a/content/id/docs/sitemap.md b/content/id/docs/sitemap.md deleted file mode 100644 index 56b0ac30af..0000000000 --- a/content/id/docs/sitemap.md +++ /dev/null @@ -1,114 +0,0 @@ ---- ---- - - - -Pilih tag atau gunakan drop down untuk melakukan filter. Pilih header pada tabel untuk mengurutkan. - -

-Filter berdasarkan Konsep:
-Filter berdasarkan Obyek:
-Filter berdasarkan Perintah: -

- -
diff --git a/content/id/docs/tasks/tools/_index.md b/content/id/docs/tasks/tools/_index.md index 9bbd67d8fb..8d9056c50f 100755 --- a/content/id/docs/tasks/tools/_index.md +++ b/content/id/docs/tasks/tools/_index.md @@ -1,5 +1,67 @@ --- title: "Menginstal Peralatan" +description: Peralatan untuk melakukan instalasi Kubernetes dalam komputer kamu. weight: 10 +no_list: true --- +## kubectl + + + +Perangkat baris perintah Kubernetes, [kubectl](/id/docs/reference/kubectl/kubectl/), +memungkinkan kamu untuk menjalankan perintah pada klaster Kubernetes. +Kamu dapat menggunakan kubectl untuk menerapkan aplikasi, memeriksa dan mengelola sumber daya klaster, +dan melihat *log* (catatan). Untuk informasi lebih lanjut termasuk daftar lengkap operasi kubectl, lihat +[referensi dokumentasi `kubectl`](/id/docs/reference/kubectl/). + +kubectl dapat diinstal pada berbagai platform Linux, macOS dan Windows. +Pilihlah sistem operasi pilihan kamu di bawah ini. + +- [Instalasi kubectl pada Linux](/en/docs/tasks/tools/install-kubectl-linux) +- [Instalasi kubectl pada macOS](/en/docs/tasks/tools/install-kubectl-macos) +- [Instalasi kubectl pada Windows](/en/docs/tasks/tools/install-kubectl-windows) + +## kind + +[`kind`](https://kind.sigs.k8s.io/docs/) memberikan kamu kemampuan untuk +menjalankan Kubernetes pada komputer lokal kamu. Perangkat ini membutuhkan +[Docker](https://docs.docker.com/get-docker/) yang sudah diinstal dan +terkonfigurasi. + +Halaman [Memulai Cepat](https://kind.sigs.k8s.io/docs/user/quick-start/) `kind` +memperlihatkan kepada kamu tentang apa yang perlu kamu lakukan untuk `kind` +berjalan dan bekerja. + +Melihat Memulai Cepat Kind + +## minikube + +Seperti halnya dengan `kind`, [`minikube`](https://minikube.sigs.k8s.io/) +merupakan perangkat yang memungkinkan kamu untuk menjalankan Kubernetes +secara lokal. `minikube` menjalankan sebuah klaster Kubernetes dengan +satu node saja dalam komputer pribadi (termasuk Windows, macOS dan Linux) +sehingga kamu dapat mencoba Kubernetes atau untuk pekerjaan pengembangan +sehari-hari. + +Kamu bisa mengikuti petunjuk resmi +[Memulai!](https://minikube.sigs.k8s.io/docs/start/) +`minikube` jika kamu ingin fokus agar perangkat ini terinstal. + +Lihat Panduan Memulai! Minikube + +Setelah kamu memiliki `minikube` yang bekerja, kamu bisa menggunakannya +untuk [menjalankan aplikasi contoh](/id/docs/tutorials/hello-minikube/). + +## kubeadm + +Kamu dapat menggunakan {{< glossary_tooltip term_id="kubeadm" text="kubeadm" >}} +untuk membuat dan mengatur klaster Kubernetes. +`kubeadm` menjalankan langkah-langkah yang diperlukan untuk mendapatkan klaster +dengan kelaikan dan keamanan minimum, aktif dan berjalan dengan cara yang mudah +bagi pengguna. + +[Instalasi kubeadm](/id/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) memperlihatkan tentang bagaimana melakukan instalasi kubeadm. +Setelah terinstal, kamu dapat menggunakannya untuk [membuat klaster](/id/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/). 
+ +Lihat panduan instalasi kubeadm diff --git a/content/id/docs/tasks/tools/install-minikube.md b/content/id/docs/tasks/tools/install-minikube.md deleted file mode 100644 index b674c52b6d..0000000000 --- a/content/id/docs/tasks/tools/install-minikube.md +++ /dev/null @@ -1,254 +0,0 @@ ---- -title: Menginstal Minikube -content_type: task -weight: 20 -card: - name: tasks - weight: 10 ---- - - - -Halaman ini menunjukkan cara instalasi [Minikube](/id/docs/tutorials/hello-minikube), sebuah alat untuk menjalankan sebuah klaster Kubernetes dengan satu Node pada mesin virtual yang ada di komputer kamu. - - - -## {{% heading "prerequisites" %}} - - -{{< tabs name="minikube_before_you_begin" >}} -{{% tab name="Linux" %}} -Untuk mengecek jika virtualisasi didukung pada Linux, jalankan perintah berikut dan pastikan keluarannya tidak kosong: -``` -grep -E --color 'vmx|svm' /proc/cpuinfo -``` -{{% /tab %}} - -{{% tab name="macOS" %}} -Untuk mengecek jika virtualisasi didukung di macOS, jalankan perintah berikut di terminal kamu. -``` -sysctl -a | grep -E --color 'machdep.cpu.features|VMX' -``` -Jika kamu melihat `VMX` pada hasil keluaran (seharusnya berwarna), artinya fitur VT-x sudah diaktifkan di mesin kamu. -{{% /tab %}} - -{{% tab name="Windows" %}} -Untuk mengecek jika virtualisasi didukung di Windows 8 ke atas, jalankan perintah berikut di terminal Windows atau _command prompt_ kamu. - -``` -systeminfo -``` -Jika kamu melihat keluaran berikut, maka virtualisasi didukung di Windows kamu. -``` -Hyper-V Requirements: VM Monitor Mode Extensions: Yes - Virtualization Enabled In Firmware: Yes - Second Level Address Translation: Yes - Data Execution Prevention Available: Yes -``` -Jika kamu melihat keluaran berikut, sistem kamu sudah memiliki sebuah Hypervisor yang terinstal dan kamu bisa melewati langkah berikutnya. -``` -Hyper-V Requirements: A hypervisor has been detected. Features required for Hyper-V will not be displayed. -``` - - -{{% /tab %}} -{{< /tabs >}} - - - - - -## Menginstal minikube - -{{< tabs name="tab_with_md" >}} -{{% tab name="Linux" %}} - -### Instalasi kubectl - -Pastikan kamu sudah menginstal kubectl. Kamu bisa menginstal kubectl dengan mengikuti instruksi pada halaman [Menginstal dan Menyiapkan kubectl](/id/docs/tasks/tools/install-kubectl/#menginstal-kubectl-pada-linux). - -### Menginstal sebuah Hypervisor - -Jika kamu belum menginstal sebuah Hypervisor, silakan instal salah satu dari: - -• [KVM](https://www.linux-kvm.org/), yang juga menggunakan QEMU - -• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) - -Minikube juga mendukung sebuah opsi `--driver=none` untuk menjalankan komponen-komponen Kubernetes pada _host_, bukan di dalam VM. Untuk menggunakan _driver_ ini maka diperlukan [Docker](https://www.docker.com/products/docker-desktop) dan sebuah lingkungan Linux, bukan sebuah hypervisor. - -Jika kamu menggunakan _driver_ `none` pada Debian atau turunannya, gunakan paket (_package_) `.deb` untuk Docker daripada menggunakan paket _snap_-nya, karena paket _snap_ tidak berfungsi dengan Minikube. -Kamu bisa mengunduh paket `.deb` dari [Docker](https://www.docker.com/products/docker-desktop). - -{{< caution >}} -*Driver* VM `none` dapat menyebabkan masalah pada keamanan dan kehilangan data. Sebelum menggunakan opsi `--driver=none`, periksa [dokumentasi ini](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) untuk informasi lebih lanjut. -{{< /caution >}} - -Minikube juga mendukung opsi `vm-driver=podman` yang mirip dengan _driver_ Docker. 
Podman yang berjalan dengan hak istimewa _superuser_ (pengguna _root_) adalah cara terbaik untuk memastikan kontainer-kontainer kamu memiliki akses penuh ke semua fitur yang ada pada sistem kamu. - -{{< caution >}} -_Driver_ `podman` memerlukan kontainer yang berjalan dengan akses _root_ karena akun pengguna biasa tidak memiliki akses penuh ke semua fitur sistem operasi yang mungkin diperlukan oleh kontainer. -{{< /caution >}} - -### Menginstal Minikube menggunakan sebuah paket - -Tersedia paket uji coba untuk Minikube, kamu bisa menemukan paket untuk Linux (AMD64) di laman [rilisnya](https://github.com/kubernetes/minikube/releases) Minikube di GitHub. - -Gunakan alat instalasi paket pada distribusi Linux kamu untuk menginstal paket yang sesuai. - -### Menginstal Minikube melalui pengunduhan langsung - -Jika kamu tidak menginstal melalui sebuah paket, kamu bisa mengunduh sebuah _stand-alone binary_ dan menggunakannya. - - -```shell -curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \ - && chmod +x minikube -``` - -Berikut adalah cara mudah untuk menambahkan program Minikube ke _path_ kamu. - -```shell -sudo mkdir -p /usr/local/bin/ -sudo install minikube /usr/local/bin/ -``` - -### Menginstal Minikube menggunakan Homebrew - -Sebagai alternatif, kamu bisa menginstal Minikube menggunakan Linux [Homebrew](https://docs.brew.sh/Homebrew-on-Linux): - -```shell -brew install minikube -``` - -{{% /tab %}} -{{% tab name="macOS" %}} -### Instalasi kubectl - -Pastikan kamu sudah menginstal kubectl. Kamu bisa menginstal kubectl dengan mengikuti instruksi pada halaman [Menginstal dan Menyiapkan kubectl](/id/docs/tasks/tools/install-kubectl/#menginstal-kubectl-pada-macos). - -### Instalasi sebuah Hypervisor - -Jika kamu belum menginstal sebuah Hypervisor, silakan instal salah satu dari: - -• [HyperKit](https://github.com/moby/hyperkit) - -• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) - -• [VMware Fusion](https://www.vmware.com/products/fusion) - -### Instalasi Minikube -Cara paling mudah untuk menginstal Minikube pada macOS adalah menggunakan [Homebrew](https://brew.sh): - -```shell -brew install minikube -``` - -Kamu juga bisa menginstalnya dengan mengunduh _stand-alone binary_-nya: - -```shell -curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \ - && chmod +x minikube -``` - -Berikut adalah cara mudah untuk menambahkan program Minikube ke _path_ kamu. - -```shell -sudo mv minikube /usr/local/bin -``` - -{{% /tab %}} -{{% tab name="Windows" %}} -### Instalasi kubectl - -Pastikan kamu sudah menginstal kubectl. Kamu bisa menginstal kubectl dengan mengikuti instruksi pada halaman [Menginstal dan Menyiapkan kubectl](/id/docs/tasks/tools/install-kubectl/#menginstal-kubectl-pada-windows). - -### Menginstal sebuah Hypervisor - -Jika kamu belum menginstal sebuah Hypervisor, silakan instal salah satu dari: - -• [Hyper-V](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install) - -• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) - -{{< note >}} -Hyper-V hanya dapat berjalan pada tiga versi dari Windows 10: Windows 10 Enterprise, Windows 10 Professional, dan Windows 10 Education. 
-{{< /note >}} - -### Menginstal Minikube menggunakan Chocolatey - -Cara paling mudah untuk menginstal Minikube pada Windows adalah menggunakan [Chocolatey](https://chocolatey.org/) (jalankan sebagai administrator): - -```shell -choco install minikube -``` - -Setelah Minikube telah selesai diinstal, tutup sesi CLI dan hidupkan ulang CLI-nya. Minikube akan ditambahkan ke _path_ kamu secara otomatis. - -### Menginstal Minikube menggunakan sebuah program penginstal - -Untuk menginstal Minikube secara manual pada Windows menggunakan [Windows Installer](https://docs.microsoft.com/en-us/windows/desktop/msi/windows-installer-portal), unduh [`minikube-installer.exe`](https://github.com/kubernetes/minikube/releases/latest/download/minikube-installer.exe) dan jalankan program penginstal tersebut. - -### Menginstal Minikube melalui pengunduhan langsung - -Untuk menginstal Minikube secara manual pada Windows, unduh [`minikube-windows-amd64`](https://github.com/kubernetes/minikube/releases/latest), ubah nama menjadi `minikube.exe`, dan tambahkan ke _path_ kamu. - -{{% /tab %}} -{{< /tabs >}} - - -## Memastikan instalasi - -Untuk memastikan keberhasilan kedua instalasi hypervisor dan Minikube, kamu bisa menjalankan perintah berikut untuk memulai sebuah klaster Kubernetes lokal: -{{< note >}} - -Untuk pengaturan `--driver` dengan `minikube start`, masukkan nama hypervisor `` yang kamu instal dengan huruf kecil seperti yang ditunjukan dibawah. Daftar lengkap nilai `--driver` tersedia di [dokumentasi menentukan *driver* VM](/docs/setup/learning-environment/minikube/#specifying-the-vm-driver). - -{{< /note >}} - -```shell -minikube start --driver= -``` - -Setelah `minikube start` selesai, jalankan perintah di bawah untuk mengecek status klaster: - -```shell -minikube status -``` - -Jika klasternya berjalan, keluaran dari `minikube status` akan mirip seperti ini: - -``` -host: Running -kubelet: Running -apiserver: Running -kubeconfig: Configured -``` - -Setelah kamu memastikan bahwa Minikube berjalan sesuai dengan hypervisor yang telah kamu pilih, kamu dapat melanjutkan untuk menggunakan Minikube atau menghentikan klaster kamu. Untuk menghentikan klaster, jalankan: - -```shell -minikube stop -``` - -## Membersihkan *state* lokal {#cleanup-local-state} - -Jika sebelumnya kamu pernah menginstal Minikube, dan menjalankan: -```shell -minikube start -``` - -dan `minikube start` memberikan pesan kesalahan: -``` -machine does not exist -``` - -maka kamu perlu membersihkan _state_ lokal Minikube: -```shell -minikube delete -``` - -## {{% heading "whatsnext" %}} - - -* [Menjalanakan Kubernetes secara lokal dengan Minikube](/docs/setup/learning-environment/minikube/) diff --git a/content/id/docs/templates/feature-state-alpha.txt b/content/id/docs/templates/feature-state-alpha.txt deleted file mode 100644 index 35689778fa..0000000000 --- a/content/id/docs/templates/feature-state-alpha.txt +++ /dev/null @@ -1,7 +0,0 @@ -Fitur ini berada di dalam tingkatan *Alpha*, yang artinya: - -* Nama dari versi ini mengandung string `alpha` (misalnya, `v1alpha1`). -* Bisa jadi terdapat *bug*. Secara *default* fitur ini tidak diekspos. -* Ketersediaan untuk fitur yang ada bisa saja dihilangkan pada suatu waktu tanpa pemberitahuan sebelumnya. -* API yang ada mungkin saja berubah tanpa memperhatikan kompatibilitas dengan versi perangkat lunak sebelumnya. -* Hanya direkomendasikan untuk klaster yang digunakan untuk tujuan *testing*. 
diff --git a/content/id/docs/templates/feature-state-beta.txt b/content/id/docs/templates/feature-state-beta.txt deleted file mode 100644 index a70034e056..0000000000 --- a/content/id/docs/templates/feature-state-beta.txt +++ /dev/null @@ -1,10 +0,0 @@ -Fitur ini berada dalam tingkatan beta, yang artinya: - -* Nama dari versi ini mengandung string `beta` (misalnya `v2beta3`). -* Kode yang ada sudah melalui mekanisme *testing* yang cukup baik. Menggunakan fitur ini dianggap cukup aman. Fitur ini diekspos secara *default*. -* Ketersediaan untuk fitur secara menyeluruh tidak akan dihapus, meskipun begitu detail untuk suatu fitur bisa saja berubah. -* Skema dan/atau semantik dari suatu obyek mungkin saja berubah tanpa memerhatikan kompatibilitas pada rilis *beta* selanjutnya. - Jika hal ini terjadi, kami akan menyediakan suatu instruksi untuk melakukan migrasi di versi rilis selanjutnya. Hal ini bisa saja terdiri dari penghapusan, pengubahan, ataupun pembuatan - obyek API. Proses pengubahan mungkin saja membutuhkan pemikiran yang matang. Dampak proses ini bisa saja menyebabkan *downtime* aplikasi yang bergantung pada fitur ini. -* **Kami mohon untuk mencoba versi *beta* yang kami sediakan dan berikan masukan terhadap fitur yang kamu pakai! Apabila fitur tersebut sudah tidak lagi berada di dalam tingkatan *beta* perubahan yang kami buat terhadap fitur tersebut bisa jadi tidak lagi dapat digunakan** - diff --git a/content/id/docs/templates/feature-state-deprecated.txt b/content/id/docs/templates/feature-state-deprecated.txt deleted file mode 100644 index 599fe098cd..0000000000 --- a/content/id/docs/templates/feature-state-deprecated.txt +++ /dev/null @@ -1,2 +0,0 @@ - -Fitur ini *deprecated*. Untuk informasi lebih lanjut mengenai tingkatan ini, silahkan merujuk pada [Kubernetes Deprecation Policy](/docs/reference/deprecation-policy/) diff --git a/content/id/docs/templates/feaure-state-stable.txt b/content/id/docs/templates/feaure-state-stable.txt deleted file mode 100644 index ee4e17373f..0000000000 --- a/content/id/docs/templates/feaure-state-stable.txt +++ /dev/null @@ -1,4 +0,0 @@ -Fitur ini berada di dalam tingkatan stabil, yang artinya: - -* Versi ini mengandung string `vX` dimana `X` merupakan bilangan bulat. -* Fitur yang ada pada tingkatan ini akan selalu muncul di rilis berikutnya. diff --git a/content/id/docs/templates/index.md b/content/id/docs/templates/index.md deleted file mode 100644 index 9d7bccd143..0000000000 --- a/content/id/docs/templates/index.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -headless: true - -resources: -- src: "*alpha*" - title: "alpha" -- src: "*beta*" - title: "beta" -- src: "*deprecated*" - title: "deprecated" -- src: "*stable*" - title: "stable" ---- diff --git a/content/id/docs/search.md b/content/id/search.md similarity index 100% rename from content/id/docs/search.md rename to content/id/search.md diff --git a/content/ko/docs/concepts/architecture/controller.md b/content/ko/docs/concepts/architecture/controller.md index 29926d1435..e516dd9cc5 100644 --- a/content/ko/docs/concepts/architecture/controller.md +++ b/content/ko/docs/concepts/architecture/controller.md @@ -102,7 +102,7 @@ weight: 30 온도 조절기 예에서 방이 매우 추우면 다른 컨트롤러가 서리 방지 히터를 켤 수도 있다. 쿠버네티스 클러스터에서는 [쿠버네티스 확장](/ko/docs/concepts/extend-kubernetes/)을 통해 -IP 주소 관리 도구, 스토리지 서비스, 클라우드 제공자의 API들 및 +IP 주소 관리 도구, 스토리지 서비스, 클라우드 제공자의 API 및 기타 서비스 등과 간접적으로 연동하여 이를 구현한다. 
## 의도한 상태와 현재 상태 {#desired-vs-current} diff --git a/content/ko/docs/concepts/architecture/nodes.md b/content/ko/docs/concepts/architecture/nodes.md index 2bcdd9faaf..4ad2a7aaaa 100644 --- a/content/ko/docs/concepts/architecture/nodes.md +++ b/content/ko/docs/concepts/architecture/nodes.md @@ -57,7 +57,7 @@ kubelet이 노드의 `metadata.name` 필드와 일치하는 API 서버에 등록 정상적인지 확인한다. 상태 확인을 중지하려면 사용자 또는 {{< glossary_tooltip term_id="controller" text="컨트롤러">}}에서 -노드 오브젝트를 명시적으로 삭제해야한다. +노드 오브젝트를 명시적으로 삭제해야 한다. {{< /note >}} 노드 오브젝트의 이름은 유효한 diff --git a/content/ko/docs/concepts/cluster-administration/_index.md b/content/ko/docs/concepts/cluster-administration/_index.md index 3ec34e9eac..9870704596 100755 --- a/content/ko/docs/concepts/cluster-administration/_index.md +++ b/content/ko/docs/concepts/cluster-administration/_index.md @@ -1,5 +1,8 @@ --- title: 클러스터 관리 + + + weight: 100 content_type: concept description: > @@ -11,6 +14,7 @@ no_list: true 클러스터 관리 개요는 쿠버네티스 클러스터를 생성하거나 관리하는 모든 사람들을 위한 것이다. 핵심 쿠버네티스 [개념](/ko/docs/concepts/)에 어느 정도 익숙하다고 가정한다. + ## 클러스터 계획 @@ -41,7 +45,7 @@ no_list: true ## 클러스터 보안 -* [인증서](/ko/docs/concepts/cluster-administration/certificates/)는 다른 툴 체인을 사용하여 인증서를 생성하는 단계를 설명한다. +* [인증서 생성](/ko/docs/tasks/administer-cluster/certificates/)는 다른 툴 체인을 사용하여 인증서를 생성하는 단계를 설명한다. * [쿠버네티스 컨테이너 환경](/ko/docs/concepts/containers/container-environment/)은 쿠버네티스 노드에서 Kubelet으로 관리하는 컨테이너에 대한 환경을 설명한다. diff --git a/content/ko/docs/concepts/cluster-administration/certificates.md b/content/ko/docs/concepts/cluster-administration/certificates.md index 7b71b9c344..5acb75ea80 100644 --- a/content/ko/docs/concepts/cluster-administration/certificates.md +++ b/content/ko/docs/concepts/cluster-administration/certificates.md @@ -4,247 +4,6 @@ content_type: concept weight: 20 --- - -클라이언트 인증서로 인증을 사용하는 경우 `easyrsa`, `openssl` 또는 `cfssl` -을 통해 인증서를 수동으로 생성할 수 있다. - - - - - - -### easyrsa - -**easyrsa** 는 클러스터 인증서를 수동으로 생성할 수 있다. - -1. easyrsa3의 패치 버전을 다운로드하여 압축을 풀고, 초기화한다. - - curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz - tar xzf easy-rsa.tar.gz - cd easy-rsa-master/easyrsa3 - ./easyrsa init-pki -1. 새로운 인증 기관(CA)을 생성한다. `--batch` 는 자동 모드를 설정한다. - `--req-cn` 는 CA의 새 루트 인증서에 대한 일반 이름(Common Name (CN))을 지정한다. - - ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass -1. 서버 인증서와 키를 생성한다. - `--subject-alt-name` 인수는 API 서버에 접근이 가능한 IP와 DNS - 이름을 설정한다. `MASTER_CLUSTER_IP` 는 일반적으로 API 서버와 - 컨트롤러 관리자 컴포넌트에 대해 `--service-cluster-ip-range` 인수로 - 지정된 서비스 CIDR의 첫 번째 IP이다. `--days` 인수는 인증서가 만료되는 - 일 수를 설정하는데 사용된다. - 또한, 아래 샘플은 기본 DNS 이름으로 `cluster.local` 을 - 사용한다고 가정한다. - - ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\ - "IP:${MASTER_CLUSTER_IP},"\ - "DNS:kubernetes,"\ - "DNS:kubernetes.default,"\ - "DNS:kubernetes.default.svc,"\ - "DNS:kubernetes.default.svc.cluster,"\ - "DNS:kubernetes.default.svc.cluster.local" \ - --days=10000 \ - build-server-full server nopass -1. `pki/ca.crt`, `pki/issued/server.crt` 그리고 `pki/private/server.key` 를 디렉터리에 복사한다. -1. API 서버 시작 파라미터에 다음 파라미터를 채우고 추가한다. - - --client-ca-file=/yourdirectory/ca.crt - --tls-cert-file=/yourdirectory/server.crt - --tls-private-key-file=/yourdirectory/server.key - -### openssl - -**openssl** 은 클러스터 인증서를 수동으로 생성할 수 있다. - -1. ca.key를 2048bit로 생성한다. - - openssl genrsa -out ca.key 2048 -1. ca.key에 따라 ca.crt를 생성한다(인증서 유효 기간을 사용하려면 -days를 사용한다). - - openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt -1. server.key를 2048bit로 생성한다. 
- - openssl genrsa -out server.key 2048 -1. 인증서 서명 요청(Certificate Signing Request (CSR))을 생성하기 위한 설정 파일을 생성한다. - 파일에 저장하기 전에 꺾쇠 괄호(예: ``)로 - 표시된 값을 실제 값으로 대체한다(예: `csr.conf`). - `MASTER_CLUSTER_IP` 의 값은 이전 하위 섹션에서 - 설명한 대로 API 서버의 서비스 클러스터 IP이다. - 또한, 아래 샘플에서는 `cluster.local` 을 기본 DNS 도메인 - 이름으로 사용하고 있다고 가정한다. - - [ req ] - default_bits = 2048 - prompt = no - default_md = sha256 - req_extensions = req_ext - distinguished_name = dn - - [ dn ] - C = <국가(country)> - ST = <도(state)> - L = <시(city)> - O = <조직(organization)> - OU = <조직 단위(organization unit)> - CN = - - [ req_ext ] - subjectAltName = @alt_names - - [ alt_names ] - DNS.1 = kubernetes - DNS.2 = kubernetes.default - DNS.3 = kubernetes.default.svc - DNS.4 = kubernetes.default.svc.cluster - DNS.5 = kubernetes.default.svc.cluster.local - IP.1 = - IP.2 = - - [ v3_ext ] - authorityKeyIdentifier=keyid,issuer:always - basicConstraints=CA:FALSE - keyUsage=keyEncipherment,dataEncipherment - extendedKeyUsage=serverAuth,clientAuth - subjectAltName=@alt_names -1. 설정 파일을 기반으로 인증서 서명 요청을 생성한다. - - openssl req -new -key server.key -out server.csr -config csr.conf -1. ca.key, ca.crt 그리고 server.csr을 사용해서 서버 인증서를 생성한다. - - openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ - -CAcreateserial -out server.crt -days 10000 \ - -extensions v3_ext -extfile csr.conf -1. 인증서를 본다. - - openssl x509 -noout -text -in ./server.crt - -마지막으로, API 서버 시작 파라미터에 동일한 파라미터를 추가한다. - -### cfssl - -**cfssl** 은 인증서 생성을 위한 또 다른 도구이다. - -1. 아래에 표시된 대로 커맨드 라인 도구를 다운로드하여 압축을 풀고 준비한다. - 사용 중인 하드웨어 아키텍처 및 cfssl 버전에 따라 샘플 - 명령을 조정해야 할 수도 있다. - - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl - chmod +x cfssl - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson - chmod +x cfssljson - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo - chmod +x cfssl-certinfo -1. 아티팩트(artifact)를 보유할 디렉터리를 생성하고 cfssl을 초기화한다. - - mkdir cert - cd cert - ../cfssl print-defaults config > config.json - ../cfssl print-defaults csr > csr.json -1. CA 파일을 생성하기 위한 JSON 설정 파일을 `ca-config.json` 예시와 같이 생성한다. - - { - "signing": { - "default": { - "expiry": "8760h" - }, - "profiles": { - "kubernetes": { - "usages": [ - "signing", - "key encipherment", - "server auth", - "client auth" - ], - "expiry": "8760h" - } - } - } - } -1. CA 인증서 서명 요청(CSR)을 위한 JSON 설정 파일을 - `ca-csr.json` 예시와 같이 생성한다. 꺾쇠 괄호로 표시된 - 값을 사용하려는 실제 값으로 변경한다. - - { - "CN": "kubernetes", - "key": { - "algo": "rsa", - "size": 2048 - }, - "names":[{ - "C": "<국가(country)>", - "ST": "<도(state)>", - "L": "<시(city)>", - "O": "<조직(organization)>", - "OU": "<조직 단위(organization unit)>" - }] - } -1. CA 키(`ca-key.pem`)와 인증서(`ca.pem`)을 생성한다. - - ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca -1. API 서버의 키와 인증서를 생성하기 위한 JSON 구성파일을 - `server-csr.json` 예시와 같이 생성한다. 꺾쇠 괄호 안의 값을 - 사용하려는 실제 값으로 변경한다. `MASTER_CLUSTER_IP` 는 - 이전 하위 섹션에서 설명한 API 서버의 클러스터 IP이다. - 아래 샘플은 기본 DNS 도메인 이름으로 `cluster.local` 을 - 사용한다고 가정한다. - - { - "CN": "kubernetes", - "hosts": [ - "127.0.0.1", - "", - "", - "kubernetes", - "kubernetes.default", - "kubernetes.default.svc", - "kubernetes.default.svc.cluster", - "kubernetes.default.svc.cluster.local" - ], - "key": { - "algo": "rsa", - "size": 2048 - }, - "names": [{ - "C": "<국가(country)>", - "ST": "<도(state)>", - "L": "<시(city)>", - "O": "<조직(organization)>", - "OU": "<조직 단위(organization unit)>" - }] - } -1. 
API 서버 키와 인증서를 생성하면, 기본적으로 - `server-key.pem` 과 `server.pem` 파일에 각각 저장된다. - - ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ - --config=ca-config.json -profile=kubernetes \ - server-csr.json | ../cfssljson -bare server - - -## 자체 서명된 CA 인증서의 배포 - -클라이언트 노드는 자체 서명된 CA 인증서를 유효한 것으로 인식하지 않을 수 있다. -비-프로덕션 디플로이먼트 또는 회사 방화벽 뒤에서 실행되는 -디플로이먼트의 경우, 자체 서명된 CA 인증서를 모든 클라이언트에 -배포하고 유효한 인증서의 로컬 목록을 새로 고칠 수 있다. - -각 클라이언트에서, 다음 작업을 수행한다. - -```bash -sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt -sudo update-ca-certificates -``` - -``` -Updating certificates in /etc/ssl/certs... -1 added, 0 removed; done. -Running hooks in /etc/ca-certificates/update.d.... -done. -``` - -## 인증서 API - -`certificates.k8s.io` API를 사용해서 -[여기](/docs/tasks/tls/managing-tls-in-a-cluster)에 -설명된 대로 인증에 사용할 x509 인증서를 프로비전 할 수 있다. +클러스터를 위한 인증서를 생성하기 위해서는, [인증서](/ko/docs/tasks/administer-cluster/certificates/)를 참고한다. diff --git a/content/ko/docs/concepts/containers/container-environment.md b/content/ko/docs/concepts/containers/container-environment.md index c6cb09965a..58c106fdce 100644 --- a/content/ko/docs/concepts/containers/container-environment.md +++ b/content/ko/docs/concepts/containers/container-environment.md @@ -1,4 +1,7 @@ --- + + + title: 컨테이너 환경 변수 content_type: concept weight: 20 @@ -24,11 +27,11 @@ weight: 20 ### 컨테이너 정보 컨테이너의 *호스트네임* 은 컨테이너가 동작 중인 파드의 이름과 같다. -그것은 `hostname` 커맨드 또는 libc의 -[`gethostname`](https://man7.org/linux/man-pages/man2/gethostname.2.html) +그것은 `hostname` 커맨드 또는 libc의 +[`gethostname`](https://man7.org/linux/man-pages/man2/gethostname.2.html) 함수 호출을 통해서 구할 수 있다. -파드 이름과 네임스페이스는 +파드 이름과 네임스페이스는 [다운워드(Downward) API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/)를 통해 환경 변수로 구할 수 있다. Docker 이미지에 정적으로 명시된 환경 변수와 마찬가지로, @@ -36,11 +39,12 @@ Docker 이미지에 정적으로 명시된 환경 변수와 마찬가지로, ### 클러스터 정보 -컨테이너가 생성될 때 실행 중이던 모든 서비스의 목록은 환경 변수로 해당 컨테이너에서 사용할 수 +컨테이너가 생성될 때 실행 중이던 모든 서비스의 목록은 환경 변수로 해당 컨테이너에서 사용할 수 있다. +이 목록은 새로운 컨테이너의 파드 및 쿠버네티스 컨트롤 플레인 서비스와 동일한 네임스페이스 내에 있는 서비스로 한정된다. 이러한 환경 변수는 Docker 링크 구문과 일치한다. -*bar* 라는 이름의 컨테이너에 매핑되는 *foo* 라는 이름의 서비스에 대해서는, +*bar* 라는 이름의 컨테이너에 매핑되는 *foo* 라는 이름의 서비스에 대해서는, 다음의 형태로 변수가 정의된다. ```shell @@ -58,5 +62,3 @@ FOO_SERVICE_PORT=<서비스가 동작 중인 포트> * [컨테이너 라이프사이클 훅(hooks)](/ko/docs/concepts/containers/container-lifecycle-hooks/)에 대해 더 배워 보기. * [컨테이너 라이프사이클 이벤트에 핸들러 부착](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/) 실제 경험 얻기. - - diff --git a/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md index 7e8a10e8b1..a2326c71dd 100644 --- a/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md +++ b/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md @@ -1,5 +1,9 @@ --- title: 애그리게이션 레이어(aggregation layer)로 쿠버네티스 API 확장하기 + + + + content_type: concept weight: 10 --- @@ -25,8 +29,6 @@ Extension-apiserver는 kube-apiserver로 오가는 연결의 레이턴시가 낮 kube-apiserver로 부터의 디스커버리 요청은 왕복 레이턴시가 5초 이내여야 한다. extention API server가 레이턴시 요구 사항을 달성할 수 없는 경우 이를 충족할 수 있도록 변경하는 것을 고려한다. -`EnableAggregatedDiscoveryTimeout=false` [기능 게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)를 설정해서 타임아웃 -제한을 비활성화 할 수 있다. 이 사용 중단(deprecated)된 기능 게이트는 향후 릴리스에서 제거될 예정이다. 
## {{% heading "whatsnext" %}} diff --git a/content/ko/docs/concepts/extend-kubernetes/operator.md b/content/ko/docs/concepts/extend-kubernetes/operator.md index f6c80d8067..57a4f3d9d2 100644 --- a/content/ko/docs/concepts/extend-kubernetes/operator.md +++ b/content/ko/docs/concepts/extend-kubernetes/operator.md @@ -124,5 +124,5 @@ kubectl edit SampleDB/example-database # 일부 설정을 수동으로 변경하 사용하여 직접 구현하기 * [오퍼레이터 프레임워크](https://operatorframework.io) 사용하기 * 다른 사람들이 사용할 수 있도록 자신의 오퍼레이터를 [게시](https://operatorhub.io/)하기 -* 오퍼레이터 패턴을 소개한 [CoreOS 원본 기사](https://coreos.com/blog/introducing-operators.html) 읽기 +* 오퍼레이터 패턴을 소개한 [CoreOS 원본 글](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html) 읽기 (이 링크는 원본 글에 대한 보관 버전임) * 오퍼레이터 구축을 위한 모범 사례에 대한 구글 클라우드(Google Cloud)의 [기사](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) 읽기 diff --git a/content/ko/docs/concepts/overview/what-is-kubernetes.md b/content/ko/docs/concepts/overview/what-is-kubernetes.md index 4086e27cda..344c266d1e 100644 --- a/content/ko/docs/concepts/overview/what-is-kubernetes.md +++ b/content/ko/docs/concepts/overview/what-is-kubernetes.md @@ -55,7 +55,7 @@ sitemap: ## 쿠버네티스가 왜 필요하고 무엇을 할 수 있나 {#why-you-need-kubernetes-and-what-can-it-do} -컨테이너는 애플리케이션을 포장하고 실행하는 좋은 방법이다. 프로덕션 환경에서는 애플리케이션을 실행하는 컨테이너를 관리하고 가동 중지 시간이 없는지 확인해야한다. 예를 들어 컨테이너가 다운되면 다른 컨테이너를 다시 시작해야한다. 이 문제를 시스템에 의해 처리한다면 더 쉽지 않을까? +컨테이너는 애플리케이션을 포장하고 실행하는 좋은 방법이다. 프로덕션 환경에서는 애플리케이션을 실행하는 컨테이너를 관리하고 가동 중지 시간이 없는지 확인해야 한다. 예를 들어 컨테이너가 다운되면 다른 컨테이너를 다시 시작해야 한다. 이 문제를 시스템에 의해 처리한다면 더 쉽지 않을까? 그것이 쿠버네티스가 필요한 이유이다! 쿠버네티스는 분산 시스템을 탄력적으로 실행하기 위한 프레임 워크를 제공한다. 애플리케이션의 확장과 장애 조치를 처리하고, 배포 패턴 등을 제공한다. 예를 들어, 쿠버네티스는 시스템의 카나리아 배포를 쉽게 관리 할 수 있다. diff --git a/content/ko/docs/concepts/overview/working-with-objects/labels.md b/content/ko/docs/concepts/overview/working-with-objects/labels.md index 12f02ccee4..2044cf5135 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/labels.md +++ b/content/ko/docs/concepts/overview/working-with-objects/labels.md @@ -52,6 +52,11 @@ _레이블_ 은 키와 값의 쌍이다. 유효한 레이블 키에는 슬래시 `kubernetes.io/`와 `k8s.io/` 접두사는 쿠버네티스의 핵심 컴포넌트로 예약되어있다. +유효한 레이블 값은 다음과 같다. +* 63 자 이하 여야 하고(공백이면 안 됨), +* 시작과 끝은 알파벳과 숫자(`[a-z0-9A-Z]`)이며, +* 알파벳과 숫자, 대시(`-`), 밑줄(`_`), 점(`.`)를 중간에 포함할 수 있다. + 유효한 레이블 값은 63자 미만 또는 공백이며 시작과 끝은 알파벳과 숫자(`[a-z0-9A-Z]`)이며, 대시(`-`), 밑줄(`_`), 점(`.`)과 함께 사용할 수 있다. 다음의 예시는 파드에 `environment: production` 과 `app: nginx` 2개의 레이블이 있는 구성 파일이다. diff --git a/content/ko/docs/concepts/policy/resource-quotas.md b/content/ko/docs/concepts/policy/resource-quotas.md index ff1f4a75c8..df23da5d57 100644 --- a/content/ko/docs/concepts/policy/resource-quotas.md +++ b/content/ko/docs/concepts/policy/resource-quotas.md @@ -58,7 +58,7 @@ weight: 20 ## 리소스 쿼터 활성화 많은 쿠버네티스 배포판에 기본적으로 리소스 쿼터 지원이 활성화되어 있다. -API 서버 `--enable-admission-plugins=` 플래그의 인수 중 하나로 +{{< glossary_tooltip text="API 서버" term_id="kube-apiserver" >}} `--enable-admission-plugins=` 플래그의 인수 중 하나로 `ResourceQuota`가 있는 경우 활성화된다. 
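참고로, 아래는 이 플래그를 지정하는 방식을 보여주는 최소한의 예시이다. 함께 나열한 `NamespaceLifecycle` 플러그인과 생략된 나머지 kube-apiserver 플래그는 설명을 위한 가정이다.

```bash
# ResourceQuota 어드미션 플러그인을 활성화하는 kube-apiserver 플래그 예시
# (함께 나열한 플러그인과 생략된 다른 필수 플래그는 설명을 위한 가정이다)
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,ResourceQuota
```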
해당 네임스페이스에 리소스쿼터가 있는 경우 특정 네임스페이스에 diff --git a/content/ko/docs/concepts/services-networking/dns-pod-service.md b/content/ko/docs/concepts/services-networking/dns-pod-service.md index fc1074a86c..006ffba99c 100644 --- a/content/ko/docs/concepts/services-networking/dns-pod-service.md +++ b/content/ko/docs/concepts/services-networking/dns-pod-service.md @@ -1,11 +1,14 @@ --- + + + title: 서비스 및 파드용 DNS content_type: concept weight: 20 --- -이 페이지는 쿠버네티스의 DNS 지원에 대한 개요를 설명한다. - +쿠버네티스는 파드와 서비스를 위한 DNS 레코드를 생성한다. 사용자는 IP 주소 대신에 +일관된 DNS 네임을 통해서 서비스에 접속할 수 있다. @@ -15,23 +18,51 @@ weight: 20 개별 컨테이너들이 DNS 네임을 해석할 때 DNS 서비스의 IP를 사용하도록 kubelets를 구성한다. -### DNS 네임이 할당되는 것들 - 클러스터 내의 모든 서비스(DNS 서버 자신도 포함하여)에는 DNS 네임이 할당된다. 기본적으로 클라이언트 파드의 DNS 검색 리스트는 파드 자체의 네임스페이스와 클러스터의 기본 도메인을 포함한다. -이 예시는 다음과 같다. -쿠버네티스 네임스페이스 `bar`에 `foo`라는 서비스가 있다. 네임스페이스 `bar`에서 running 상태인 파드는 -단순하게 `foo`를 조회하는 DNS 쿼리를 통해서 서비스 `foo`를 찾을 수 있다. -네임스페이스 `quux`에서 실행 중인 파드는 -`foo.bar`를 조회하는 DNS 쿼리를 통해서 이 서비스를 찾을 수 있다. +### 서비스의 네임스페이스 -다음 절에서는 쿠버네티스 DNS에서 지원하는 레코드 유형과 레이아웃을 자세히 설명한다. -이 외에 동작하는 레이아웃, 네임 또는 쿼리는 구현 세부 정보로 간주하며 -경고 없이 변경될 수 있다. -최신 업데이트에 대한 자세한 설명은 다음 링크를 통해 참조할 수 있다. -[쿠버네티스 DNS 기반 서비스 디스커버리](https://github.com/kubernetes/dns/blob/master/docs/specification.md). +DNS 쿼리는 그것을 생성하는 파드의 네임스페이스에 따라 다른 결과를 반환할 수 +있다. 네임스페이스를 지정하지 않은 DNS 쿼리는 파드의 네임스페이스에 +국한된다. DNS 쿼리에 네임스페이스를 명시하여 다른 네임스페이스에 있는 서비스에 접속한다. + +예를 들어, `test` 네임스페이스에 있는 파드를 생각해보자. `data` 서비스는 +`prod` 네임스페이스에 있다. + +이 경우, `data` 에 대한 쿼리는 파드의 `test` 네임스페이스를 사용하기 때문에 결과를 반환하지 않을 것이다. + +`data.prod` 로 쿼리하면 의도한 결과를 반환할 것이다. 왜냐하면 +네임스페이스가 명시되어 있기 때문이다. + +DNS 쿼리는 파드의 `/etc/resolv.conf` 를 사용하여 확장될 수 있을 것이다. Kubelet은 +각 파드에 대해서 파일을 설정한다. 예를 들어, `data` 만을 위한 쿼리는 +`data.test.cluster.local` 로 확장된다. `search` 옵션의 값은 +쿼리를 확장하기 위해서 사용된다. DNS 쿼리에 대해 더 자세히 알고 싶은 경우, +[`resolv.conf` 설명 페이지.](https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html)를 참고한다. + +``` +nameserver 10.32.0.10 +search .svc.cluster.local svc.cluster.local cluster.local +options ndots:5 +``` + +요약하면, _test_ 네임스페이스에 있는 파드는 `data.prod` 또는 +`data.prod.cluster.local` 중 하나를 통해 성공적으로 해석될 수 있다. + +### DNS 레코드 + +어떤 오브젝트가 DNS 레코드를 가지는가? + +1. 서비스 +2. 파드 + +다음 섹션은 지원되는 DNS 레코드의 종류 및 레이아웃에 대한 상세 +내용이다. 혹시 동작시킬 필요가 있는 다른 레이아웃, 네임, 또는 쿼리는 +구현 세부 사항으로 간주되며 경고 없이 변경될 수 있다. +최신 명세 확인을 위해서는, +[쿠버네티스 DNS-기반 서비스 디스커버리](https://github.com/kubernetes/dns/blob/master/docs/specification.md)를 본다. ## 서비스 diff --git a/content/ko/docs/concepts/services-networking/ingress-controllers.md b/content/ko/docs/concepts/services-networking/ingress-controllers.md index 8554d9a87d..41524039f0 100644 --- a/content/ko/docs/concepts/services-networking/ingress-controllers.md +++ b/content/ko/docs/concepts/services-networking/ingress-controllers.md @@ -66,7 +66,7 @@ weight: 40 다양한 인그레스 컨트롤러는 약간 다르게 작동한다. {{< note >}} -인그레스 컨트롤러의 설명서를 검토하여 선택 시 주의 사항을 이해해야한다. +인그레스 컨트롤러의 설명서를 검토하여 선택 시 주의 사항을 이해해야 한다. 
{{< /note >}} diff --git a/content/ko/docs/concepts/services-networking/ingress.md b/content/ko/docs/concepts/services-networking/ingress.md index a55302f059..ec705e6a7c 100644 --- a/content/ko/docs/concepts/services-networking/ingress.md +++ b/content/ko/docs/concepts/services-networking/ingress.md @@ -167,7 +167,7 @@ Events: ### 예제 -| 종류 | 경로 | 요청 경로 | 일치 여부 | +| 종류 | 경로 | 요청 경로 | 일치 여부 | |--------|---------------------------------|-------------------------------|------------------------------------| | Prefix | `/` | (모든 경로) | 예 | | Exact | `/foo` | `/foo` | 예 | diff --git a/content/ko/docs/concepts/services-networking/service.md b/content/ko/docs/concepts/services-networking/service.md index be1f3ecaae..b01a971cff 100644 --- a/content/ko/docs/concepts/services-networking/service.md +++ b/content/ko/docs/concepts/services-networking/service.md @@ -311,7 +311,7 @@ IPVS는 트래픽을 백엔드 파드로 밸런싱하기 위한 추가 옵션을 {{< note >}} IPVS 모드에서 kube-proxy를 실행하려면, kube-proxy를 시작하기 전에 노드에서 IPVS를 -사용 가능하도록 해야한다. +사용 가능하도록 해야 한다. kube-proxy가 IPVS 프록시 모드에서 시작될 때, IPVS 커널 모듈을 사용할 수 있는지 확인한다. IPVS 커널 모듈이 감지되지 않으면, kube-proxy는 @@ -1120,7 +1120,7 @@ VIP용 유저스페이스 프록시를 사용하면 중소 규모의 스케일 않아도 된다. 그것은 격리 실패이다. 서비스에 대한 포트 번호를 선택할 수 있도록 하기 위해, 두 개의 -서비스가 충돌하지 않도록 해야한다. 쿠버네티스는 각 서비스에 고유한 IP 주소를 +서비스가 충돌하지 않도록 해야 한다. 쿠버네티스는 각 서비스에 고유한 IP 주소를 할당하여 이를 수행한다. 각 서비스가 고유한 IP를 받도록 하기 위해, 내부 할당기는 diff --git a/content/ko/docs/concepts/storage/volume-snapshot-classes.md b/content/ko/docs/concepts/storage/volume-snapshot-classes.md index e5b6002e6e..862c900fee 100644 --- a/content/ko/docs/concepts/storage/volume-snapshot-classes.md +++ b/content/ko/docs/concepts/storage/volume-snapshot-classes.md @@ -68,7 +68,7 @@ parameters: ### 드라이버 볼륨 스냅샷 클래스에는 볼륨스냅샷의 프로비저닝에 사용되는 CSI 볼륨 플러그인을 -결정하는 드라이버를 가지고 있다. 이 필드는 반드시 지정해야한다. +결정하는 드라이버를 가지고 있다. 이 필드는 반드시 지정해야 한다. ### 삭제정책(DeletionPolicy) diff --git a/content/ko/docs/concepts/storage/volumes.md b/content/ko/docs/concepts/storage/volumes.md index 7e1646820e..c9b3ac80d9 100644 --- a/content/ko/docs/concepts/storage/volumes.md +++ b/content/ko/docs/concepts/storage/volumes.md @@ -924,7 +924,7 @@ CSI 는 쿠버네티스 내에서 Quobyte 볼륨을 사용하기 위해 권장 ### rbd `rbd` 볼륨을 사용하면 -[Rados Block Device](https://ceph.com/docs/master/rbd/rbd/)(RBD) 볼륨을 파드에 마운트할 수 +[Rados Block Device](https://docs.ceph.com/en/latest/rbd/)(RBD) 볼륨을 파드에 마운트할 수 있다. 파드를 제거할 때 지워지는 `emptyDir` 와는 다르게 `rbd` 볼륨의 내용은 유지되고, 볼륨은 마운트 해제만 된다. 이 의미는 RBD 볼륨에 데이터를 미리 채울 수 있으며, 데이터를 @@ -1332,7 +1332,7 @@ CSI 호환 볼륨 드라이버가 쿠버네티스 클러스터에 배포되면 * `controllerPublishSecretRef`: CSI의 `ControllerPublishVolume` 그리고 `ControllerUnpublishVolume` 호출을 완료하기 위해 CSI 드라이버에 전달하려는 민감한 정보가 포함된 시크릿 오브젝트에 대한 참조이다. 이 필드는 - 선택사항이며, 시크릿이 필요하지 않은 경우 비어있을 수 있다. 만약 시크릿에 + 선택 사항이며, 시크릿이 필요하지 않은 경우 비어있을 수 있다. 만약 시크릿에 둘 이상의 시크릿이 포함된 경우에도 모든 시크릿이 전달된다. * `nodeStageSecretRef`: CSI의 `NodeStageVolume` 호출을 완료하기위해 CSI 드라이버에 전달하려는 민감한 정보가 포함 된 시크릿 diff --git a/content/ko/docs/concepts/workloads/controllers/deployment.md b/content/ko/docs/concepts/workloads/controllers/deployment.md index 1407bb28f0..f6b9979d47 100644 --- a/content/ko/docs/concepts/workloads/controllers/deployment.md +++ b/content/ko/docs/concepts/workloads/controllers/deployment.md @@ -341,7 +341,7 @@ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 이후에는 변경할 수 없다. {{< /note >}} -* 셀렉터 추가 시 디플로이먼트의 사양에 있는 파드 템플릿 레이블도 새 레이블로 업데이트 해야한다. +* 셀렉터 추가 시 디플로이먼트의 사양에 있는 파드 템플릿 레이블도 새 레이블로 업데이트해야 한다. 그렇지 않으면 유효성 검사 오류가 반환된다. 
이 변경은 겹치지 않는 변경으로 새 셀렉터가 이전 셀렉터로 만든 레플리카셋과 파드를 선택하지 않게 되고, 그 결과로 모든 기존 레플리카셋은 고아가 되며, 새로운 레플리카셋을 생성하게 된다. @@ -1060,7 +1060,7 @@ echo $? 이것은 {{< glossary_tooltip text="파드" term_id="pod" >}}와 정확하게 동일한 스키마를 가지고 있고, 중첩된 것을 제외하면 `apiVersion` 과 `kind` 를 가지고 있지 않는다. 파드에 필요한 필드 외에 디플로이먼트 파드 템플릿은 적절한 레이블과 적절한 재시작 정책을 명시해야 한다. -레이블의 경우 다른 컨트롤러와 겹치지 않도록 해야한다. 자세한 것은 [셀렉터](#셀렉터)를 참조한다. +레이블의 경우 다른 컨트롤러와 겹치지 않도록 해야 한다. 자세한 것은 [셀렉터](#셀렉터)를 참조한다. [`.spec.template.spec.restartPolicy`](/ko/docs/concepts/workloads/pods/pod-lifecycle/#재시작-정책) 에는 오직 `Always` 만 허용되고, 명시되지 않으면 기본값이 된다. diff --git a/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md b/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md index ff48730a13..06ce543012 100644 --- a/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md @@ -49,7 +49,7 @@ kubectl 명령에서 숏컷으로 사용된다. {{< codenew file="controllers/replication.yaml" >}} -예제 파일을 다운로드 한 후 다음 명령을 실행하여 예제 작업을 실행하라. +예제 파일을 다운로드한 후 다음 명령을 실행하여 예제 작업을 실행하라. ```shell kubectl apply -f https://k8s.io/examples/controllers/replication.yaml diff --git a/content/ko/docs/concepts/workloads/controllers/statefulset.md b/content/ko/docs/concepts/workloads/controllers/statefulset.md index 6b8299a0c3..3a1f784259 100644 --- a/content/ko/docs/concepts/workloads/controllers/statefulset.md +++ b/content/ko/docs/concepts/workloads/controllers/statefulset.md @@ -107,7 +107,7 @@ spec: ## 파드 셀렉터 -스테이트풀셋의 `.spec.selector` 필드는 `.spec.template.metadata.labels` 레이블과 일치하도록 설정 해야 한다. 쿠버네티스 1.8 이전에서는 생략시에 `.spec.selector` 필드가 기본 설정 되었다. 1.8 과 이후 버전에서는 파드 셀렉터를 명시하지 않으면 스테이트풀셋 생성시 유효성 검증 오류가 발생하는 결과가 나오게 된다. +스테이트풀셋의 `.spec.selector` 필드는 `.spec.template.metadata.labels` 레이블과 일치하도록 설정해야 한다. 쿠버네티스 1.8 이전에서는 생략시에 `.spec.selector` 필드가 기본 설정 되었다. 1.8 과 이후 버전에서는 파드 셀렉터를 명시하지 않으면 스테이트풀셋 생성시 유효성 검증 오류가 발생하는 결과가 나오게 된다. ## 파드 신원 @@ -173,7 +173,7 @@ N개의 레플리카가 있는 스테이트풀셋은 스테이트풀셋에 있 파드의 `volumeMounts` 는 퍼시스턴트 볼륨 클레임과 관련된 퍼시스턴트 볼륨이 마운트 된다. 참고로, 파드 퍼시스턴트 볼륨 클레임과 관련된 퍼시스턴트 볼륨은 파드 또는 스테이트풀셋이 삭제되더라도 삭제되지 않는다. -이것은 반드시 수동으로 해야한다. +이것은 반드시 수동으로 해야 한다. ### 파드 이름 레이블 diff --git a/content/ko/docs/concepts/workloads/pods/disruptions.md b/content/ko/docs/concepts/workloads/pods/disruptions.md index 02647adb70..bcfde559cb 100644 --- a/content/ko/docs/concepts/workloads/pods/disruptions.md +++ b/content/ko/docs/concepts/workloads/pods/disruptions.md @@ -103,7 +103,7 @@ PDB는 자발적 중단으로 일정 비율 이하로 떨어지지 않도록 보장할 수 있다. 클러스터 관리자와 호스팅 공급자는 직접적으로 파드나 디플로이먼트를 제거하는 대신 -[Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api)로 +[Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#eviction-api)로 불리는 PodDisruptionBudget을 준수하는 도구를 이용해야 한다. 예를 들어, `kubectl drain` 하위 명령을 사용하면 노드를 서비스 중단으로 표시할 수 diff --git a/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md b/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md index 1066e3eb83..aa154a4b42 100644 --- a/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md @@ -38,8 +38,7 @@ ID([UID](/ko/docs/concepts/overview/working-with-objects/names/#uids))가 타임아웃 기간 후에 [삭제되도록 스케줄된다](#pod-garbage-collection). 파드는 자체적으로 자가 치유되지 않는다. 파드가 -{{< glossary_tooltip text="노드" term_id="node" >}}에 스케줄된 후에 실패하거나, -스케줄 작업 자체가 실패하면, 파드는 삭제된다. 마찬가지로, 파드는 +{{< glossary_tooltip text="노드" term_id="node" >}}에 스케줄된 후에 해당 노드가 실패하면, 파드는 삭제된다. 마찬가지로, 파드는 리소스 부족 또는 노드 유지 관리 작업으로 인해 축출되지 않는다. 
쿠버네티스는 {{< glossary_tooltip term_id="controller" text="컨트롤러" >}}라 부르는 하이-레벨 추상화를 사용하여 diff --git a/content/ko/docs/reference/command-line-tools-reference/feature-gates.md b/content/ko/docs/reference/command-line-tools-reference/feature-gates.md index cece4022ef..93683ae9b1 100644 --- a/content/ko/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/ko/docs/reference/command-line-tools-reference/feature-gates.md @@ -351,7 +351,7 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 | `VolumeScheduling` | `false` | 알파 | 1.9 | 1.9 | | `VolumeScheduling` | `true` | 베타 | 1.10 | 1.12 | | `VolumeScheduling` | `true` | GA | 1.13 | - | -| `VolumeSubpath` | `true` | GA | 1.13 | - | +| `VolumeSubpath` | `true` | GA | 1.10 | - | | `VolumeSubpathEnvExpansion` | `false` | 알파 | 1.14 | 1.14 | | `VolumeSubpathEnvExpansion` | `true` | 베타 | 1.15 | 1.16 | | `VolumeSubpathEnvExpansion` | `true` | GA | 1.17 | - | diff --git a/content/ko/docs/reference/glossary/persistent-volume-claim.md b/content/ko/docs/reference/glossary/persistent-volume-claim.md new file mode 100644 index 0000000000..122b754d23 --- /dev/null +++ b/content/ko/docs/reference/glossary/persistent-volume-claim.md @@ -0,0 +1,18 @@ +--- +title: 퍼시스턴트 볼륨 클레임(Persistent Volume Claim) +id: persistent-volume-claim +date: 2018-04-12 +full_link: /ko/docs/concepts/storage/persistent-volumes/ +short_description: > + 컨테이너의 볼륨으로 마운트될 수 있도록 퍼시스턴트볼륨(PersistentVolume)에 정의된 스토리지 리소스를 요청한다. + +aka: +tags: +- core-object +- storage +--- + {{< glossary_tooltip text="컨테이너" term_id="container" >}}의 볼륨으로 마운트될 수 있도록 {{< glossary_tooltip text="퍼시스턴트볼륨(PersistentVolume)" term_id="persistent-volume" >}}에 정의된 스토리지 리소스를 요청한다. + + + +스토리지의 양, 스토리지에 엑세스하는 방법(읽기 전용, 읽기 그리고/또는 쓰기) 및 재확보(보존, 재활용 혹은 삭제) 방법을 지정한다. 스토리지 자체에 관한 내용은 퍼시스턴트볼륨 오브젝트에 설명되어 있다. 
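위 용어집 항목을 보완하기 위해, 스토리지 양과 접근 모드를 지정하는 최소한의 퍼시스턴트볼륨클레임 예시를 아래에 덧붙인다. 오브젝트 이름과 요청 용량은 설명을 위한 가정이다.

```bash
# 최소한의 퍼시스턴트볼륨클레임을 생성하는 예시 (이름과 용량은 가정)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```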
diff --git a/content/ko/docs/reference/kubectl/cheatsheet.md b/content/ko/docs/reference/kubectl/cheatsheet.md index 0020d7b574..d5870bba30 100644 --- a/content/ko/docs/reference/kubectl/cheatsheet.md +++ b/content/ko/docs/reference/kubectl/cheatsheet.md @@ -293,12 +293,12 @@ kubectl get pods -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{pr ## 실행 중인 파드와 상호 작용 ```bash -kubectl logs my-pod # 파드 로그(stdout) 덤프 +kubectl logs my-pod # 파드 로그 덤프 (stdout) kubectl logs -l name=myLabel # name이 myLabel인 파드 로그 덤프 (stdout) -kubectl logs my-pod --previous # 컨테이너의 이전 인스턴스 생성에 대한 파드 로그(stdout) 덤프 -kubectl logs my-pod -c my-container # 파드 로그(stdout, 멀티-컨테이너 경우) 덤프 +kubectl logs my-pod --previous # 컨테이너의 이전 인스턴스 생성에 대한 파드 로그 덤프 (stdout) +kubectl logs my-pod -c my-container # 파드 로그 덤프 (stdout, 멀티-컨테이너 경우) kubectl logs -l name=myLabel -c my-container # name이 myLabel인 파드 로그 덤프 (stdout) -kubectl logs my-pod -c my-container --previous # 컨테이너의 이전 인스턴스 생성에 대한 파드 로그(stdout, 멀티-컨테이너 경우) 덤프 +kubectl logs my-pod -c my-container --previous # 컨테이너의 이전 인스턴스 생성에 대한 파드 로그 덤프 (stdout, 멀티-컨테이너 경우) kubectl logs -f my-pod # 실시간 스트림 파드 로그(stdout) kubectl logs -f my-pod -c my-container # 실시간 스트림 파드 로그(stdout, 멀티-컨테이너 경우) kubectl logs -f -l name=myLabel --all-containers # name이 myLabel인 모든 파드의 로그 스트리밍 (stdout) @@ -317,6 +317,18 @@ kubectl top pod POD_NAME --containers # 특정 파드와 해당 kubectl top pod POD_NAME --sort-by=cpu # 지정한 파드에 대한 메트릭을 표시하고 'cpu' 또는 'memory'별로 정렬 ``` +## 디플로이먼트, 서비스와 상호 작용 +```bash +kubectl logs deploy/my-deployment # 디플로이먼트에 대한 파드 로그 덤프 (단일-컨테이너 경우) +kubectl logs deploy/my-deployment -c my-container # 디플로이먼트에 대한 파드 로그 덤프 (멀티-컨테이너 경우) + +kubectl port-forward svc/my-service 5000 # 로컬 머신의 5000번 포트를 리스닝하고, my-service의 동일한(5000번) 포트로 전달 +kubectl port-forward svc/my-service 5000:my-service-port # 로컬 머신의 5000번 포트를 리스닝하고, my-service의 라는 이름을 가진 포트로 전달 + +kubectl port-forward deploy/my-deployment 5000:6000 # 로컬 머신의 5000번 포트를 리스닝하고, 에 의해 생성된 파드의 6000번 포트로 전달 +kubectl exec deploy/my-deployment -- ls # 에 의해 생성된 첫번째 파드의 첫번째 컨테이너에 명령어 실행 (단일- 또는 다중-컨테이너 경우) +``` + ## 노드, 클러스터와 상호 작용 ```bash diff --git a/content/ko/docs/reference/kubectl/docker-cli-to-kubectl.md b/content/ko/docs/reference/kubectl/docker-cli-to-kubectl.md index 1679059871..12b41b1d98 100644 --- a/content/ko/docs/reference/kubectl/docker-cli-to-kubectl.md +++ b/content/ko/docs/reference/kubectl/docker-cli-to-kubectl.md @@ -7,7 +7,7 @@ content_type: concept --- -당신은 쿠버네티스 커맨드 라인 도구인 kubectl을 사용하여 API 서버와 상호 작용할 수 있다. 만약 도커 커맨드 라인 도구에 익숙하다면 kubectl을 사용하는 것은 간단하다. 다음 섹션에서는 도커의 하위 명령을 보여주고 kubectl과 같은 명령어를 설명한다. +당신은 쿠버네티스 커맨드 라인 도구인 `kubectl`을 사용하여 API 서버와 상호 작용할 수 있다. 만약 도커 커맨드 라인 도구에 익숙하다면 `kubectl`을 사용하는 것은 간단하다. 다음 섹션에서는 도커의 하위 명령을 보여주고 `kubectl`과 같은 명령어를 설명한다. diff --git a/content/ko/docs/reference/using-api/client-libraries.md b/content/ko/docs/reference/using-api/client-libraries.md index ae0404239d..f8c1cb91c8 100644 --- a/content/ko/docs/reference/using-api/client-libraries.md +++ b/content/ko/docs/reference/using-api/client-libraries.md @@ -65,12 +65,13 @@ API 호출 또는 요청/응답 타입을 직접 구현할 필요는 없다. 
| Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) | | Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) | | Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) | +| Python | [github.com/Frankkkkk/pykorm](https://github.com/Frankkkkk/pykorm) | | Ruby | [github.com/abonas/kubeclient](https://github.com/abonas/kubeclient) | | Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) | | Ruby | [github.com/kontena/k8s-client](https://github.com/kontena/k8s-client) | | Rust | [github.com/clux/kube-rs](https://github.com/clux/kube-rs) | | Rust | [github.com/ynqa/kubernetes-rust](https://github.com/ynqa/kubernetes-rust) | -| Scala | [github.com/doriordan/skuber](https://github.com/doriordan/skuber) | +| Scala | [github.com/hagay3/skuber](https://github.com/hagay3/skuber) | | Scala | [github.com/joan38/kubernetes-client](https://github.com/joan38/kubernetes-client) | | DotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) | | Swift | [github.com/swiftkube/client](https://github.com/swiftkube/client) | diff --git a/content/ko/docs/setup/best-practices/certificates.md b/content/ko/docs/setup/best-practices/certificates.md index 5595e0ac3d..71e16b7675 100644 --- a/content/ko/docs/setup/best-practices/certificates.md +++ b/content/ko/docs/setup/best-practices/certificates.md @@ -7,7 +7,7 @@ weight: 40 쿠버네티스는 TLS 위에 인증을 위해 PKI 인증서가 필요하다. -만약 [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)으로 쿠버네티스를 설치했다면, 클러스터에 필요한 인증서는 자동으로 생성된다. +만약 [kubeadm](/docs/reference/setup-tools/kubeadm/)으로 쿠버네티스를 설치했다면, 클러스터에 필요한 인증서는 자동으로 생성된다. 또한 더 안전하게 자신이 소유한 인증서를 생성할 수 있다. 이를 테면, 개인키를 API 서버에 저장하지 않으므로 더 안전하게 보관할 수 있다. 이 페이지는 클러스터에 필요한 인증서를 설명한다. @@ -72,7 +72,7 @@ etcd 역시 클라이언트와 피어 간에 상호 TLS 인증을 구현한다. | kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | | | front-proxy-client | kubernetes-front-proxy-ca | | client | | -[1]: 클러스터에 접속한 다른 IP 또는 DNS 이름([kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) 이 사용하는 로드 밸런서 안정 IP 또는 DNS 이름, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, +[1]: 클러스터에 접속한 다른 IP 또는 DNS 이름([kubeadm](/docs/reference/setup-tools/kubeadm/) 이 사용하는 로드 밸런서 안정 IP 또는 DNS 이름, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, `kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`) `kind`는 하나 이상의 [x509 키 사용](https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage) 종류를 가진다. @@ -97,7 +97,7 @@ kubeadm 사용자만 해당: ### 인증서 파일 경로 -인증서는 권고하는 파일 경로에 존재해야 한다([kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)에서 사용되는 것처럼). 경로는 위치에 관계없이 주어진 파라미터를 사용하여 지정되야 한다. +인증서는 권고하는 파일 경로에 존재해야 한다([kubeadm](/docs/reference/setup-tools/kubeadm/)에서 사용되는 것처럼). 경로는 위치에 관계없이 주어진 파라미터를 사용하여 지정되야 한다. | 기본 CN | 권고되는 키 파일 경로 | 권고하는 인증서 파일 경로 | 명령어 | 키 파라미터 | 인증서 파라미터 | |------------------------------|------------------------------|-----------------------------|----------------|------------------------------|-------------------------------------------| @@ -155,5 +155,5 @@ KUBECONFIG= kubectl config use-context default-system |-------------------------|-------------------------|-----------------------------------------------------------------------| | admin.conf | kubectl | 클러스터 관리자를 설정한다. | | kubelet.conf | kubelet | 클러스터 각 노드를 위해 필요하다. | -| controller-manager.conf | kube-controller-manager | 반드시 매니페스트를 `manifests/kube-controller-manager.yaml`에 추가해야한다. 
| -| scheduler.conf | kube-scheduler | 반드시 매니페스트를 `manifests/kube-scheduler.yaml`에 추가해야한다. | +| controller-manager.conf | kube-controller-manager | 반드시 매니페스트를 `manifests/kube-controller-manager.yaml`에 추가해야 한다. | +| scheduler.conf | kube-scheduler | 반드시 매니페스트를 `manifests/kube-scheduler.yaml`에 추가해야 한다. | diff --git a/content/ko/docs/setup/production-environment/container-runtimes.md b/content/ko/docs/setup/production-environment/container-runtimes.md index 42c8bd8095..827b407b84 100644 --- a/content/ko/docs/setup/production-environment/container-runtimes.md +++ b/content/ko/docs/setup/production-environment/container-runtimes.md @@ -219,11 +219,16 @@ sudo systemctl restart containerd ``` {{% /tab %}} {{% tab name="Windows (PowerShell)" %}} + +
+Powershell 세션을 띄우고, `$Version` 환경 변수를 원하는 버전으로 설정(예: `$Version=1.4.3`)한 뒤, 다음 명령어를 실행한다. +
+ ```powershell # (containerd 설치) # containerd 다운로드 -cmd /c curl -OL https://github.com/containerd/containerd/releases/download/v1.4.1/containerd-1.4.1-windows-amd64.tar.gz -cmd /c tar xvf .\containerd-1.4.1-windows-amd64.tar.gz +curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz +tar.exe xvf .\containerd-windows-amd64.tar.gz ``` ```powershell @@ -236,7 +241,9 @@ cd $Env:ProgramFiles\containerd\ # - sandbox_image (쿠버네티스 pause 이미지) # - cni bin_dir 및 conf_dir locations Get-Content config.toml -``` + +# (선택 사항이지만, 강력히 권장됨) containerd를 Windows Defender 검사 예외에 추가 +Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe" ``` ```powershell # containerd 시작 diff --git a/content/ko/docs/setup/production-environment/tools/kops.md b/content/ko/docs/setup/production-environment/tools/kops.md index 05fc21f32a..dbea03b735 100644 --- a/content/ko/docs/setup/production-environment/tools/kops.md +++ b/content/ko/docs/setup/production-environment/tools/kops.md @@ -39,7 +39,7 @@ kops는 자동화된 프로비저닝 시스템인데, #### 설치 -[releases page](https://github.com/kubernetes/kops/releases)에서 kops를 다운로드 한다(소스 코드로부터 빌드하는 것도 역시 편리하다). +[releases page](https://github.com/kubernetes/kops/releases)에서 kops를 다운로드한다(소스 코드로부터 빌드하는 것도 역시 편리하다). {{< tabs name="kops_installation" >}} {{% tab name="macOS" %}} diff --git a/content/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 9e28b5b509..39f31dd8af 100644 --- a/content/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -18,14 +18,7 @@ card: ## {{% heading "prerequisites" %}} -* 다음 중 하나를 실행하는 하나 이상의 머신이 필요하다. - - Ubuntu 16.04+ - - Debian 9+ - - CentOS 7+ - - Red Hat Enterprise Linux (RHEL) 7+ - - Fedora 25+ - - HypriotOS v1.0.1+ - - Flatcar Container Linux (2512.3.0으로 테스트됨) +* 호환되는 리눅스 머신. 쿠버네티스 프로젝트는 데비안 기반 배포판, 레드햇 기반 배포판, 그리고 패키지 매니저를 사용하지 않는 경우에 대한 일반적인 가이드를 제공한다. * 2 GB 이상의 램을 장착한 머신. (이 보다 작으면 사용자의 앱을 위한 공간이 거의 남지 않음) * 2 이상의 CPU. * 클러스터의 모든 머신에 걸친 전체 네트워크 연결. 
(공용 또는 사설 네트워크면 괜찮음) @@ -121,7 +114,7 @@ etcd 포트가 컨트롤 플레인 노드에 포함되어 있지만, 외부 또 {{< table caption = "컨테이너 런타임과 소켓 경로" >}} | 런타임 | 유닉스 도메인 소켓 경로 | |------------|-----------------------------------| -| 도커 | `/var/run/docker.sock` | +| 도커 | `/var/run/dockershim.sock` | | containerd | `/run/containerd/containerd.sock` | | CRI-O | `/var/run/crio/crio.sock` | {{< /table >}} @@ -180,7 +173,7 @@ kubeadm은 `kubelet` 또는 `kubectl` 을 설치하거나 관리하지 **않으 * Kubeadm 관련 [버전 차이 정책](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#version-skew-policy) {{< tabs name="k8s_install" >}} -{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}} +{{% tab name="데비안 기반 배포판" %}} ```bash sudo apt-get update && sudo apt-get install -y apt-transport-https curl curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - @@ -192,7 +185,7 @@ sudo apt-get install -y kubelet kubeadm kubectl sudo apt-mark hold kubelet kubeadm kubectl ``` {{% /tab %}} -{{% tab name="CentOS, RHEL 또는 Fedora" %}} +{{% tab name="레드햇 기반 배포판" %}} ```bash cat < diff --git a/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md index 67283774af..bd4a3503e8 100644 --- a/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ b/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md @@ -303,8 +303,9 @@ CSI 노드 플러그인(특히 블록 디바이스 또는 공유 파일시스템 다음 네트워킹 기능은 윈도우 노드에서 지원되지 않는다. * 윈도우 파드에서는 호스트 네트워킹 모드를 사용할 수 없다. -* 노드 자체에서 로컬 NodePort 접근은 실패한다. (다른 노드 또는 외부 클라이언트에서 작동) +* 노드 자체에서 로컬 NodePort 접근은 실패한다. (다른 노드 또는 외부 클라이언트에서는 가능) * 노드에서 서비스 VIP에 접근하는 것은 향후 윈도우 서버 릴리스에서 사용할 수 있다. +* 한 서비스는 최대 64개의 백엔드 파드 또는 고유한 목적지 IP를 지원할 수 있다. * kube-proxy의 오버레이 네트워킹 지원은 알파 릴리스이다. 또한 윈도우 서버 2019에 [KB4482887](https://support.microsoft.com/ko-kr/help/4482887/windows-10-update-kb4482887)을 설치해야 한다. * 로컬 트래픽 정책 및 DSR 모드 * l2bridge, l2tunnel 또는 오버레이 네트워크에 연결된 윈도우 컨테이너는 IPv6 스택을 통한 통신을 지원하지 않는다. 이러한 네트워크 드라이버가 IPv6 주소를 사용하고 kubelet, kube-proxy 및 CNI 플러그인에서 후속 쿠버네티스 작업을 사용할 수 있도록 하는데 필요한 뛰어난 윈도우 플랫폼 작업이 있다. diff --git a/content/ko/docs/tasks/administer-cluster/access-cluster-api.md b/content/ko/docs/tasks/administer-cluster/access-cluster-api.md index d6e567e674..b2ea227718 100644 --- a/content/ko/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/ko/docs/tasks/administer-cluster/access-cluster-api.md @@ -353,99 +353,6 @@ exampleWithKubeConfig = do >>= print ``` +## {{% heading "whatsnext" %}} -### 파드 내에서 API에 접근 {#accessing-the-api-from-within-a-pod} - -파드 내에서 API에 접근할 때, API 서버를 찾아 인증하는 것은 -위에서 설명한 외부 클라이언트 사례와 약간 다르다. - -파드에서 쿠버네티스 API를 사용하는 가장 쉬운 방법은 -공식 [클라이언트 라이브러리](/ko/docs/reference/using-api/client-libraries/) 중 하나를 사용하는 것이다. 이러한 -라이브러리는 API 서버를 자동으로 감지하고 인증할 수 있다. - -#### 공식 클라이언트 라이브러리 사용 - -파드 내에서, 쿠버네티스 API에 연결하는 권장 방법은 다음과 같다. - - - Go 클라이언트의 경우, 공식 [Go 클라이언트 라이브러리](https://github.com/kubernetes/client-go/)를 사용한다. - `rest.InClusterConfig()` 기능은 API 호스트 검색과 인증을 자동으로 처리한다. - [여기 예제](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)를 참고한다. - - - Python 클라이언트의 경우, 공식 [Python 클라이언트 라이브러리](https://github.com/kubernetes-client/python/)를 사용한다. - `config.load_incluster_config()` 기능은 API 호스트 검색과 인증을 자동으로 처리한다. - [여기 예제](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py)를 참고한다. - - - 사용할 수 있는 다른 라이브러리가 많이 있다. [클라이언트 라이브러리](/ko/docs/reference/using-api/client-libraries/) 페이지를 참고한다. 
- -각각의 경우, 파드의 서비스 어카운트 자격 증명은 API 서버와 -안전하게 통신하는 데 사용된다. - -#### REST API에 직접 접근 - -파드에서 실행되는 동안, 쿠버네티스 apiserver는 `default` 네임스페이스에서 `kubernetes`라는 -서비스를 통해 접근할 수 있다. 따라서, 파드는 `kubernetes.default.svc` -호스트 이름을 사용하여 API 서버를 쿼리할 수 있다. 공식 클라이언트 라이브러리는 -이를 자동으로 수행한다. - -API 서버를 인증하는 권장 방법은 [서비스 어카운트](/docs/tasks/configure-pod-container/configure-service-account/) -자격 증명을 사용하는 것이다. 기본적으로, 파드는 -서비스 어카운트와 연결되어 있으며, 해당 서비스 어카운트에 대한 자격 증명(토큰)은 -해당 파드에 있는 각 컨테이너의 파일시스템 트리의 -`/var/run/secrets/kubernetes.io/serviceaccount/token` 에 있다. - -사용 가능한 경우, 인증서 번들은 각 컨테이너의 -파일시스템 트리의 `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` 에 배치되며, -API 서버의 제공 인증서를 확인하는 데 사용해야 한다. - -마지막으로, 네임스페이스가 지정된 API 작업에 사용되는 기본 네임스페이스는 각 컨테이너의 -`/var/run/secrets/kubernetes.io/serviceaccount/namespace` 에 있는 파일에 배치된다. - -#### kubectl 프록시 사용 - -공식 클라이언트 라이브러리 없이 API를 쿼리하려면, 파드에서 -새 사이드카 컨테이너의 [명령](/ko/docs/tasks/inject-data-application/define-command-argument-container/)으로 -`kubectl proxy` 를 실행할 수 있다. 이런 식으로, `kubectl proxy` 는 -API를 인증하고 이를 파드의 `localhost` 인터페이스에 노출시켜서, 파드의 -다른 컨테이너가 직접 사용할 수 있도록 한다. - -#### 프록시를 사용하지 않고 접근 - -인증 토큰을 API 서버에 직접 전달하여 kubectl 프록시 사용을 -피할 수 있다. 내부 인증서는 연결을 보호한다. - -```shell -# 내부 API 서버 호스트 이름을 가리킨다 -APISERVER=https://kubernetes.default.svc - -# ServiceAccount 토큰 경로 -SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount - -# 이 파드의 네임스페이스를 읽는다 -NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace) - -# ServiceAccount 베어러 토큰을 읽는다 -TOKEN=$(cat ${SERVICEACCOUNT}/token) - -# 내부 인증 기관(CA)을 참조한다 -CACERT=${SERVICEACCOUNT}/ca.crt - -# TOKEN으로 API를 탐색한다 -curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api -``` - -출력은 다음과 비슷하다. - -```json -{ - "kind": "APIVersions", - "versions": [ - "v1" - ], - "serverAddressByClientCIDRs": [ - { - "clientCIDR": "0.0.0.0/0", - "serverAddress": "10.0.1.149:443" - } - ] -} -``` +* [파드 내에서 쿠버네티스 API에 접근](/ko/docs/tasks/run-application/access-api-from-pod/) diff --git a/content/ko/docs/tasks/administer-cluster/certificates.md b/content/ko/docs/tasks/administer-cluster/certificates.md new file mode 100644 index 0000000000..8c8f6a148b --- /dev/null +++ b/content/ko/docs/tasks/administer-cluster/certificates.md @@ -0,0 +1,250 @@ +--- +title: 인증서 +content_type: task +weight: 20 +--- + + + + +클라이언트 인증서로 인증을 사용하는 경우 `easyrsa`, `openssl` 또는 `cfssl` +을 통해 인증서를 수동으로 생성할 수 있다. + + + + + + +### easyrsa + +**easyrsa** 는 클러스터 인증서를 수동으로 생성할 수 있다. + +1. easyrsa3의 패치 버전을 다운로드하여 압축을 풀고, 초기화한다. + + curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz + tar xzf easy-rsa.tar.gz + cd easy-rsa-master/easyrsa3 + ./easyrsa init-pki +1. 새로운 인증 기관(CA)을 생성한다. `--batch` 는 자동 모드를 설정한다. + `--req-cn` 는 CA의 새 루트 인증서에 대한 일반 이름(Common Name (CN))을 지정한다. + + ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass +1. 서버 인증서와 키를 생성한다. + `--subject-alt-name` 인수는 API 서버에 접근이 가능한 IP와 DNS + 이름을 설정한다. `MASTER_CLUSTER_IP` 는 일반적으로 API 서버와 + 컨트롤러 관리자 컴포넌트에 대해 `--service-cluster-ip-range` 인수로 + 지정된 서비스 CIDR의 첫 번째 IP이다. `--days` 인수는 인증서가 만료되는 + 일 수를 설정하는데 사용된다. + 또한, 아래 샘플은 기본 DNS 이름으로 `cluster.local` 을 + 사용한다고 가정한다. + + ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\ + "IP:${MASTER_CLUSTER_IP},"\ + "DNS:kubernetes,"\ + "DNS:kubernetes.default,"\ + "DNS:kubernetes.default.svc,"\ + "DNS:kubernetes.default.svc.cluster,"\ + "DNS:kubernetes.default.svc.cluster.local" \ + --days=10000 \ + build-server-full server nopass +1. `pki/ca.crt`, `pki/issued/server.crt` 그리고 `pki/private/server.key` 를 디렉터리에 복사한다. +1. API 서버 시작 파라미터에 다음 파라미터를 채우고 추가한다. 
+ + --client-ca-file=/yourdirectory/ca.crt + --tls-cert-file=/yourdirectory/server.crt + --tls-private-key-file=/yourdirectory/server.key + +### openssl + +**openssl** 은 클러스터 인증서를 수동으로 생성할 수 있다. + +1. ca.key를 2048bit로 생성한다. + + openssl genrsa -out ca.key 2048 +1. ca.key에 따라 ca.crt를 생성한다(인증서 유효 기간을 사용하려면 -days를 사용한다). + + openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt +1. server.key를 2048bit로 생성한다. + + openssl genrsa -out server.key 2048 +1. 인증서 서명 요청(Certificate Signing Request (CSR))을 생성하기 위한 설정 파일을 생성한다. + 파일에 저장하기 전에 꺾쇠 괄호(예: ``)로 + 표시된 값을 실제 값으로 대체한다(예: `csr.conf`). + `MASTER_CLUSTER_IP` 의 값은 이전 하위 섹션에서 + 설명한 대로 API 서버의 서비스 클러스터 IP이다. + 또한, 아래 샘플에서는 `cluster.local` 을 기본 DNS 도메인 + 이름으로 사용하고 있다고 가정한다. + + [ req ] + default_bits = 2048 + prompt = no + default_md = sha256 + req_extensions = req_ext + distinguished_name = dn + + [ dn ] + C = <국가(country)> + ST = <도(state)> + L = <시(city)> + O = <조직(organization)> + OU = <조직 단위(organization unit)> + CN = + + [ req_ext ] + subjectAltName = @alt_names + + [ alt_names ] + DNS.1 = kubernetes + DNS.2 = kubernetes.default + DNS.3 = kubernetes.default.svc + DNS.4 = kubernetes.default.svc.cluster + DNS.5 = kubernetes.default.svc.cluster.local + IP.1 = + IP.2 = + + [ v3_ext ] + authorityKeyIdentifier=keyid,issuer:always + basicConstraints=CA:FALSE + keyUsage=keyEncipherment,dataEncipherment + extendedKeyUsage=serverAuth,clientAuth + subjectAltName=@alt_names +1. 설정 파일을 기반으로 인증서 서명 요청을 생성한다. + + openssl req -new -key server.key -out server.csr -config csr.conf +1. ca.key, ca.crt 그리고 server.csr을 사용해서 서버 인증서를 생성한다. + + openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ + -CAcreateserial -out server.crt -days 10000 \ + -extensions v3_ext -extfile csr.conf +1. 인증서를 본다. + + openssl x509 -noout -text -in ./server.crt + +마지막으로, API 서버 시작 파라미터에 동일한 파라미터를 추가한다. + +### cfssl + +**cfssl** 은 인증서 생성을 위한 또 다른 도구이다. + +1. 아래에 표시된 대로 커맨드 라인 도구를 다운로드하여 압축을 풀고 준비한다. + 사용 중인 하드웨어 아키텍처 및 cfssl 버전에 따라 샘플 + 명령을 조정해야 할 수도 있다. + + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl + chmod +x cfssl + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson + chmod +x cfssljson + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo + chmod +x cfssl-certinfo +1. 아티팩트(artifact)를 보유할 디렉터리를 생성하고 cfssl을 초기화한다. + + mkdir cert + cd cert + ../cfssl print-defaults config > config.json + ../cfssl print-defaults csr > csr.json +1. CA 파일을 생성하기 위한 JSON 설정 파일을 `ca-config.json` 예시와 같이 생성한다. + + { + "signing": { + "default": { + "expiry": "8760h" + }, + "profiles": { + "kubernetes": { + "usages": [ + "signing", + "key encipherment", + "server auth", + "client auth" + ], + "expiry": "8760h" + } + } + } + } +1. CA 인증서 서명 요청(CSR)을 위한 JSON 설정 파일을 + `ca-csr.json` 예시와 같이 생성한다. 꺾쇠 괄호로 표시된 + 값을 사용하려는 실제 값으로 변경한다. + + { + "CN": "kubernetes", + "key": { + "algo": "rsa", + "size": 2048 + }, + "names":[{ + "C": "<국가(country)>", + "ST": "<도(state)>", + "L": "<시(city)>", + "O": "<조직(organization)>", + "OU": "<조직 단위(organization unit)>" + }] + } +1. CA 키(`ca-key.pem`)와 인증서(`ca.pem`)을 생성한다. + + ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca +1. API 서버의 키와 인증서를 생성하기 위한 JSON 구성파일을 + `server-csr.json` 예시와 같이 생성한다. 꺾쇠 괄호 안의 값을 + 사용하려는 실제 값으로 변경한다. `MASTER_CLUSTER_IP` 는 + 이전 하위 섹션에서 설명한 API 서버의 클러스터 IP이다. + 아래 샘플은 기본 DNS 도메인 이름으로 `cluster.local` 을 + 사용한다고 가정한다. 
+ + { + "CN": "kubernetes", + "hosts": [ + "127.0.0.1", + "", + "", + "kubernetes", + "kubernetes.default", + "kubernetes.default.svc", + "kubernetes.default.svc.cluster", + "kubernetes.default.svc.cluster.local" + ], + "key": { + "algo": "rsa", + "size": 2048 + }, + "names": [{ + "C": "<국가(country)>", + "ST": "<도(state)>", + "L": "<시(city)>", + "O": "<조직(organization)>", + "OU": "<조직 단위(organization unit)>" + }] + } +1. API 서버 키와 인증서를 생성하면, 기본적으로 + `server-key.pem` 과 `server.pem` 파일에 각각 저장된다. + + ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ + --config=ca-config.json -profile=kubernetes \ + server-csr.json | ../cfssljson -bare server + + +## 자체 서명된 CA 인증서의 배포 + +클라이언트 노드는 자체 서명된 CA 인증서를 유효한 것으로 인식하지 않을 수 있다. +비-프로덕션 디플로이먼트 또는 회사 방화벽 뒤에서 실행되는 +디플로이먼트의 경우, 자체 서명된 CA 인증서를 모든 클라이언트에 +배포하고 유효한 인증서의 로컬 목록을 새로 고칠 수 있다. + +각 클라이언트에서, 다음 작업을 수행한다. + +```bash +sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt +sudo update-ca-certificates +``` + +``` +Updating certificates in /etc/ssl/certs... +1 added, 0 removed; done. +Running hooks in /etc/ca-certificates/update.d.... +done. +``` + +## 인증서 API + +`certificates.k8s.io` API를 사용해서 +[여기](/docs/tasks/tls/managing-tls-in-a-cluster)에 +설명된 대로 인증에 사용할 x509 인증서를 프로비전 할 수 있다. diff --git a/content/ko/docs/tasks/administer-cluster/change-default-storage-class.md b/content/ko/docs/tasks/administer-cluster/change-default-storage-class.md index 08c2e2cef4..ff6379ee1f 100644 --- a/content/ko/docs/tasks/administer-cluster/change-default-storage-class.md +++ b/content/ko/docs/tasks/administer-cluster/change-default-storage-class.md @@ -70,7 +70,7 @@ content_type: task 1. 스토리지클래스를 기본값으로 표시한다. - 이전 과정과 유사하게, 어노테이션을 추가/설정 해야 한다. + 이전 과정과 유사하게, 어노테이션을 추가/설정해야 한다. `storageclass.kubernetes.io/is-default-class=true`. ```bash diff --git a/content/ko/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/ko/docs/tasks/inject-data-application/define-environment-variable-container.md index 22372813e9..5e23ba831e 100644 --- a/content/ko/docs/tasks/inject-data-application/define-environment-variable-container.md +++ b/content/ko/docs/tasks/inject-data-application/define-environment-variable-container.md @@ -70,8 +70,9 @@ weight: 20 {{< /note >}} {{< note >}} -환경 변수는 서로를 참조할 수 있으며 사이클이 가능하다. -사용하기 전에 순서에 주의한다. +환경 변수는 서로를 참조할 수 있는데, 이 때 순서에 주의해야 한다. +동일한 컨텍스트에서 정의된 다른 변수를 참조하는 변수는 목록의 뒤쪽에 나와야 한다. +또한, 순환 참조는 피해야 한다. {{< /note >}} ## 설정 안에서 환경 변수 사용하기 diff --git a/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md index c9ed347c56..16b2cd05f0 100644 --- a/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md +++ b/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md @@ -7,7 +7,7 @@ weight: 20 [Kustomize](https://github.com/kubernetes-sigs/kustomize)는 -[kustomization 파일](https://kubernetes-sigs.github.io/kustomize/api-reference/glossary/#kustomization)을 +[kustomization 파일](https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kustomization)을 통해 쿠버네티스 오브젝트를 사용자가 원하는 대로 변경하는(customize) 독립형 도구이다. 1.14 이후로, kubectl도 diff --git a/content/ko/docs/tasks/run-application/access-api-from-pod.md b/content/ko/docs/tasks/run-application/access-api-from-pod.md new file mode 100644 index 0000000000..d12f3b2f00 --- /dev/null +++ b/content/ko/docs/tasks/run-application/access-api-from-pod.md @@ -0,0 +1,111 @@ +--- +title: 파드 내에서 쿠버네티스 API에 접근 +content_type: task +weight: 120 +--- + + + +이 페이지는 파드 내에서 쿠버네티스 API에 접근하는 방법을 보여준다. 
+ +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + + + +## 파드 내에서 API에 접근 {#accessing-the-api-from-within-a-pod} + +파드 내에서 API에 접근할 때, API 서버를 찾아 인증하는 것은 +위에서 설명한 외부 클라이언트 사례와 약간 다르다. + +파드에서 쿠버네티스 API를 사용하는 가장 쉬운 방법은 +공식 [클라이언트 라이브러리](/ko/docs/reference/using-api/client-libraries/) 중 하나를 사용하는 것이다. 이러한 +라이브러리는 API 서버를 자동으로 감지하고 인증할 수 있다. + +### 공식 클라이언트 라이브러리 사용 + +파드 내에서, 쿠버네티스 API에 연결하는 권장 방법은 다음과 같다. + + - Go 클라이언트의 경우, 공식 [Go 클라이언트 라이브러리](https://github.com/kubernetes/client-go/)를 사용한다. + `rest.InClusterConfig()` 기능은 API 호스트 검색과 인증을 자동으로 처리한다. + [여기 예제](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)를 참고한다. + + - Python 클라이언트의 경우, 공식 [Python 클라이언트 라이브러리](https://github.com/kubernetes-client/python/)를 사용한다. + `config.load_incluster_config()` 기능은 API 호스트 검색과 인증을 자동으로 처리한다. + [여기 예제](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py)를 참고한다. + + - 사용할 수 있는 다른 라이브러리가 많이 있다. [클라이언트 라이브러리](/ko/docs/reference/using-api/client-libraries/) 페이지를 참고한다. + +각각의 경우, 파드의 서비스 어카운트 자격 증명은 API 서버와 +안전하게 통신하는 데 사용된다. + +### REST API에 직접 접근 + +파드에서 실행되는 동안, 쿠버네티스 apiserver는 `default` 네임스페이스에서 `kubernetes`라는 +서비스를 통해 접근할 수 있다. 따라서, 파드는 `kubernetes.default.svc` +호스트 이름을 사용하여 API 서버를 쿼리할 수 있다. 공식 클라이언트 라이브러리는 +이를 자동으로 수행한다. + +API 서버를 인증하는 권장 방법은 [서비스 어카운트](/docs/tasks/configure-pod-container/configure-service-account/) +자격 증명을 사용하는 것이다. 기본적으로, 파드는 +서비스 어카운트와 연결되어 있으며, 해당 서비스 어카운트에 대한 자격 증명(토큰)은 +해당 파드에 있는 각 컨테이너의 파일시스템 트리의 +`/var/run/secrets/kubernetes.io/serviceaccount/token` 에 있다. + +사용 가능한 경우, 인증서 번들은 각 컨테이너의 +파일시스템 트리의 `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` 에 배치되며, +API 서버의 제공 인증서를 확인하는 데 사용해야 한다. + +마지막으로, 네임스페이스가 지정된 API 작업에 사용되는 기본 네임스페이스는 각 컨테이너의 +`/var/run/secrets/kubernetes.io/serviceaccount/namespace` 에 있는 파일에 배치된다. + +### kubectl 프록시 사용 + +공식 클라이언트 라이브러리 없이 API를 쿼리하려면, 파드에서 +새 사이드카 컨테이너의 [명령](/ko/docs/tasks/inject-data-application/define-command-argument-container/)으로 +`kubectl proxy` 를 실행할 수 있다. 이런 식으로, `kubectl proxy` 는 +API를 인증하고 이를 파드의 `localhost` 인터페이스에 노출시켜서, 파드의 +다른 컨테이너가 직접 사용할 수 있도록 한다. + +### 프록시를 사용하지 않고 접근 + +인증 토큰을 API 서버에 직접 전달하여 kubectl 프록시 사용을 +피할 수 있다. 내부 인증서는 연결을 보호한다. + +```shell +# 내부 API 서버 호스트 이름을 가리킨다 +APISERVER=https://kubernetes.default.svc + +# 서비스어카운트(ServiceAccount) 토큰 경로 +SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount + +# 이 파드의 네임스페이스를 읽는다 +NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace) + +# 서비스어카운트 베어러 토큰을 읽는다 +TOKEN=$(cat ${SERVICEACCOUNT}/token) + +# 내부 인증 기관(CA)을 참조한다 +CACERT=${SERVICEACCOUNT}/ca.crt + +# TOKEN으로 API를 탐색한다 +curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api +``` + +출력은 다음과 비슷하다. + +```json +{ + "kind": "APIVersions", + "versions": [ + "v1" + ], + "serverAddressByClientCIDRs": [ + { + "clientCIDR": "0.0.0.0/0", + "serverAddress": "10.0.1.149:443" + } + ] +} +``` diff --git a/content/ko/docs/tasks/tls/certificate-rotation.md b/content/ko/docs/tasks/tls/certificate-rotation.md index b23bf2a600..037f99d87a 100644 --- a/content/ko/docs/tasks/tls/certificate-rotation.md +++ b/content/ko/docs/tasks/tls/certificate-rotation.md @@ -70,6 +70,7 @@ kubelet은 쿠버네티스 API로 서명된 인증서를 가져와서 서명된 인증서의 만료가 다가오면 kubelet은 쿠버네티스 API를 사용하여 새로운 인증서 서명 요청을 자동으로 발행한다. +이는 인증서 유효 기간이 30%-10% 남은 시점에 언제든지 실행될 수 있다. 또한, 컨트롤러 관리자는 인증서 요청을 자동으로 승인하고 서명된 인증서를 인증서 서명 요청에 첨부한다. kubelet은 쿠버네티스 API로 서명된 새로운 인증서를 가져와서 디스크에 쓴다. 
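예를 들어, kubelet이 자동으로 발행한 인증서 서명 요청과 그 승인 상태는 다음 명령으로 확인해 볼 수 있다.

```bash
# 클러스터에 존재하는 인증서 서명 요청(CSR)과 승인/발급 상태 조회
kubectl get csr
```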
diff --git a/content/ko/docs/tasks/tools/_index.md b/content/ko/docs/tasks/tools/_index.md index 74abf8d981..47c90ec7fd 100755 --- a/content/ko/docs/tasks/tools/_index.md +++ b/content/ko/docs/tasks/tools/_index.md @@ -7,18 +7,19 @@ no_list: true ## kubectl -쿠버네티스 커맨드 라인 도구인 `kubectl` 사용하면 쿠버네티스 클러스터에 대해 명령을 -실행할 수 있다. `kubectl` 을 사용하여 애플리케이션을 배포하고, 클러스터 리소스를 검사 및 -관리하고, 로그를 볼 수 있다. + +쿠버네티스 커맨드 라인 도구인 [`kubectl`](/ko/docs/reference/kubectl/kubectl/)을 사용하면 +쿠버네티스 클러스터에 대해 명령을 실행할 수 있다. +`kubectl` 을 사용하여 애플리케이션을 배포하고, 클러스터 리소스를 검사 및 관리하고, +로그를 볼 수 있다. kubectl 전체 명령어를 포함한 추가 정보는 +[`kubectl` 레퍼런스 문서](/ko/docs/reference/kubectl/)에서 확인할 수 있다. -클러스터에 접근하기 위해 `kubectl` 을 다운로드 및 설치하고 설정하는 방법에 대한 정보는 -[`kubectl` 설치 및 설정](/ko/docs/tasks/tools/install-kubectl/)을 -참고한다. +`kubectl` 은 다양한 리눅스 플랫폼, macOS, 그리고 윈도우에 설치할 수 있다. +각각에 대한 설치 가이드는 다음과 같다. -kubectl 설치 및 설정 가이드 보기 - -[`kubectl` 레퍼런스 문서](/ko/docs/reference/kubectl/)를 -읽어볼 수도 있다. +- [리눅스에 `kubectl` 설치하기](install-kubectl-linux) +- [macOS에 `kubectl` 설치하기](install-kubectl-macos) +- [윈도우에 `kubectl` 설치하기](install-kubectl-windows) ## kind diff --git a/content/ko/docs/tasks/tools/included/_index.md b/content/ko/docs/tasks/tools/included/_index.md new file mode 100644 index 0000000000..4ba9445002 --- /dev/null +++ b/content/ko/docs/tasks/tools/included/_index.md @@ -0,0 +1,6 @@ +--- +title: "포함된 도구들" +description: "메인 kubectl-installs-*.md 페이지에 포함될 스니펫." +headless: true +toc_hide: true +--- \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/included/install-kubectl-gcloud.md b/content/ko/docs/tasks/tools/included/install-kubectl-gcloud.md new file mode 100644 index 0000000000..f3deae981c --- /dev/null +++ b/content/ko/docs/tasks/tools/included/install-kubectl-gcloud.md @@ -0,0 +1,21 @@ +--- +title: "gcloud kubectl install" +description: "gcloud를 이용하여 kubectl을 설치하는 방법을 각 OS별 탭에 포함하기 위한 스니펫." +headless: true +--- + +Google Cloud SDK를 사용하여 kubectl을 설치할 수 있다. + +1. [Google Cloud SDK](https://cloud.google.com/sdk/)를 설치한다. + +1. `kubectl` 설치 명령을 실행한다. + + ```shell + gcloud components install kubectl + ``` + +1. 설치한 버전이 최신 버전인지 확인한다. + + ```shell + kubectl version --client + ``` \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/included/kubectl-whats-next.md b/content/ko/docs/tasks/tools/included/kubectl-whats-next.md new file mode 100644 index 0000000000..70532cd2eb --- /dev/null +++ b/content/ko/docs/tasks/tools/included/kubectl-whats-next.md @@ -0,0 +1,12 @@ +--- +title: "다음 단계는 무엇인가?" +description: "kubectl을 설치한 다음 해야 하는 것에 대해 설명한다." +headless: true +--- + +* [Minikube 설치](https://minikube.sigs.k8s.io/docs/start/) +* 클러스터 생성에 대한 자세한 내용은 [시작하기](/ko/docs/setup/)를 참고한다. +* [애플리케이션을 시작하고 노출하는 방법에 대해 배운다.](/ko/docs/tasks/access-application-cluster/service-access-application-cluster/) +* 직접 생성하지 않은 클러스터에 접근해야 하는 경우, + [클러스터 접근 공유 문서](/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)를 참고한다. +* [kubectl 레퍼런스 문서](/ko/docs/reference/kubectl/kubectl/) 읽기 diff --git a/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md new file mode 100644 index 0000000000..b9597857bb --- /dev/null +++ b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md @@ -0,0 +1,54 @@ +--- +title: "리눅스에서 bash 자동 완성 사용하기" +description: "리눅스에서 bash 자동 완성을 위한 몇 가지 선택적 구성에 대해 설명한다." +headless: true +--- + +### 소개 + +Bash의 kubectl 자동 완성 스크립트는 `kubectl completion bash` 명령으로 생성할 수 있다. 
셸에서 자동 완성 스크립트를 소싱(sourcing)하면 kubectl 자동 완성 기능이 활성화된다. + +그러나, 자동 완성 스크립트는 [**bash-completion**](https://github.com/scop/bash-completion)에 의존하고 있으며, 이 소프트웨어를 먼저 설치해야 한다(`type _init_completion` 을 실행하여 bash-completion이 이미 설치되어 있는지 확인할 수 있음). + +### bash-completion 설치 + +bash-completion은 많은 패키지 관리자에 의해 제공된다([여기](https://github.com/scop/bash-completion#installation) 참고). `apt-get install bash-completion` 또는 `yum install bash-completion` 등으로 설치할 수 있다. + +위의 명령은 bash-completion의 기본 스크립트인 `/usr/share/bash-completion/bash_completion` 을 생성한다. 패키지 관리자에 따라, `~/.bashrc` 파일에서 이 파일을 수동으로 소스(source)해야 한다. + +확인하려면, 셸을 다시 로드하고 `type _init_completion` 을 실행한다. 명령이 성공하면, 이미 설정된 상태이고, 그렇지 않으면 `~/.bashrc` 파일에 다음을 추가한다. + +```bash +source /usr/share/bash-completion/bash_completion +``` + +셸을 다시 로드하고 `type _init_completion` 을 입력하여 bash-completion이 올바르게 설치되었는지 확인한다. + +### kubectl 자동 완성 활성화 + +이제 kubectl 자동 완성 스크립트가 모든 셸 세션에서 제공되도록 해야 한다. 이를 수행할 수 있는 두 가지 방법이 있다. + +- `~/.bashrc` 파일에서 자동 완성 스크립트를 소싱한다. + + ```bash + echo 'source <(kubectl completion bash)' >>~/.bashrc + ``` + +- 자동 완성 스크립트를 `/etc/bash_completion.d` 디렉터리에 추가한다. + + ```bash + kubectl completion bash >/etc/bash_completion.d/kubectl + ``` + +kubectl에 대한 앨리어스(alias)가 있는 경우, 해당 앨리어스로 작업하도록 셸 자동 완성을 확장할 수 있다. + +```bash +echo 'alias k=kubectl' >>~/.bashrc +echo 'complete -F __start_kubectl k' >>~/.bashrc +``` + +{{< note >}} +bash-completion은 `/etc/bash_completion.d` 에 있는 모든 자동 완성 스크립트를 소싱한다. +{{< /note >}} + +두 방법 모두 동일하다. 셸을 다시 로드하면, kubectl 자동 완성 기능이 작동할 것이다. diff --git a/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md new file mode 100644 index 0000000000..7acb5d3621 --- /dev/null +++ b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md @@ -0,0 +1,89 @@ +--- +title: "macOS에서 bash 자동 완성 사용하기" +description: "macOS에서 bash 자동 완성을 위한 몇 가지 선택적 구성에 대해 설명한다." +headless: true +--- + +### 소개 + +Bash의 kubectl 자동 완성 스크립트는 `kubectl completion bash` 로 생성할 수 있다. 이 스크립트를 셸에 소싱하면 kubectl 자동 완성이 가능하다. + +그러나 kubectl 자동 완성 스크립트는 미리 [**bash-completion**](https://github.com/scop/bash-completion)을 설치해야 동작한다. + +{{< warning>}} +bash-completion에는 v1과 v2 두 가지 버전이 있다. v1은 Bash 3.2(macOS의 기본 설치 버전) 버전용이고, v2는 Bash 4.1 이상 버전용이다. kubectl 자동 완성 스크립트는 bash-completion v1과 Bash 3.2 버전에서는 **작동하지 않는다**. **bash-completion v2** 와 **Bash 4.1 이상 버전** 이 필요하다. 따라서, macOS에서 kubectl 자동 완성 기능을 올바르게 사용하려면, Bash 4.1 이상을 설치하고 사용해야 한다([*지침*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). 다음의 내용에서는 Bash 4.1 이상(즉, 모든 Bash 버전 4.1 이상)을 사용한다고 가정한다. +{{< /warning >}} + +### Bash 업그레이드 + +여기의 지침에서는 Bash 4.1 이상을 사용한다고 가정한다. 다음을 실행하여 Bash 버전을 확인할 수 있다. + +```bash +echo $BASH_VERSION +``` + +너무 오래된 버전인 경우, Homebrew를 사용하여 설치/업그레이드할 수 있다. + +```bash +brew install bash +``` + +셸을 다시 로드하고 원하는 버전을 사용 중인지 확인한다. + +```bash +echo $BASH_VERSION $SHELL +``` + +Homebrew는 보통 `/usr/local/bin/bash` 에 설치한다. + +### bash-completion 설치 + +{{< note >}} +언급한 바와 같이, 이 지침에서는 Bash 4.1 이상을 사용한다고 가정한다. 이는 bash-completion v2를 설치한다는 것을 의미한다(Bash 3.2 및 bash-completion v1의 경우, kubectl 자동 완성이 작동하지 않음). +{{< /note >}} + +bash-completion v2가 이미 설치되어 있는지 `type_init_completion` 으로 확인할 수 있다. 그렇지 않은 경우, Homebrew로 설치할 수 있다. + +```bash +brew install bash-completion@2 +``` + +이 명령의 출력에 명시된 바와 같이, `~/.bash_profile` 파일에 다음을 추가한다. + +```bash +export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d" +[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . 
"/usr/local/etc/profile.d/bash_completion.sh" +``` + +셸을 다시 로드하고 bash-completion v2가 올바르게 설치되었는지 `type _init_completion` 으로 확인한다. + +### kubectl 자동 완성 활성화 + +이제 kubectl 자동 완성 스크립트가 모든 셸 세션에서 제공되도록 해야 한다. 이를 수행하는 방법에는 여러 가지가 있다. + +- 자동 완성 스크립트를 `~/.bash_profile` 파일에서 소싱한다. + + ```bash + echo 'source <(kubectl completion bash)' >>~/.bash_profile + ``` + +- 자동 완성 스크립트를 `/usr/local/etc/bash_completion.d` 디렉터리에 추가한다. + + ```bash + kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl + ``` + +- kubectl에 대한 앨리어스가 있는 경우, 해당 앨리어스로 작업하기 위해 셸 자동 완성을 확장할 수 있다. + + ```bash + echo 'alias k=kubectl' >>~/.bash_profile + echo 'complete -F __start_kubectl k' >>~/.bash_profile + ``` + +- Homebrew로 kubectl을 설치한 경우([여기](/ko/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)의 설명을 참고), kubectl 자동 완성 스크립트가 이미 `/usr/local/etc/bash_completion.d/kubectl` 에 있을 것이다. 이 경우, 아무 것도 할 필요가 없다. + + {{< note >}} + bash-completion v2의 Homebrew 설치는 `BASH_COMPLETION_COMPAT_DIR` 디렉터리의 모든 파일을 소싱하므로, 후자의 두 가지 방법이 적용된다. + {{< /note >}} + +어떤 경우든, 셸을 다시 로드하면, kubectl 자동 완성 기능이 작동할 것이다. \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/included/optional-kubectl-configs-zsh.md b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-zsh.md new file mode 100644 index 0000000000..e81403300b --- /dev/null +++ b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-zsh.md @@ -0,0 +1,29 @@ +--- +title: "zsh 자동 완성" +description: "zsh 자동 완성을 위한 몇 가지 선택적 구성에 대해 설명한다." +headless: true +--- + +Zsh용 kubectl 자동 완성 스크립트는 `kubectl completion zsh` 명령으로 생성할 수 있다. 셸에서 자동 완성 스크립트를 소싱하면 kubectl 자동 완성 기능이 활성화된다. + +모든 셸 세션에서 사용하려면, `~/.zshrc` 파일에 다음을 추가한다. + +```zsh +source <(kubectl completion zsh) +``` + +kubectl에 대한 앨리어스가 있는 경우, 해당 앨리어스로 작업하도록 셸 자동 완성을 확장할 수 있다. + +```zsh +echo 'alias k=kubectl' >>~/.zshrc +echo 'complete -F __start_kubectl k' >>~/.zshrc +``` + +셸을 다시 로드하면, kubectl 자동 완성 기능이 작동할 것이다. + +`complete:13: command not found: compdef` 와 같은 오류가 발생하면, `~/.zshrc` 파일의 시작 부분에 다음을 추가한다. + +```zsh +autoload -Uz compinit +compinit +``` \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/included/verify-kubectl.md b/content/ko/docs/tasks/tools/included/verify-kubectl.md new file mode 100644 index 0000000000..b935582b7a --- /dev/null +++ b/content/ko/docs/tasks/tools/included/verify-kubectl.md @@ -0,0 +1,34 @@ +--- +title: "kubectl 설치 검증하기" +description: "kubectl을 검증하는 방법에 대해 설명한다." +headless: true +--- + +kubectl이 쿠버네티스 클러스터를 찾아 접근하려면, +[kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh)를 +사용하여 클러스터를 생성하거나 Minikube 클러스터를 성공적으로 배포할 때 자동으로 생성되는 +[kubeconfig 파일](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/)이 +필요하다. +기본적으로, kubectl 구성은 `~/.kube/config` 에 있다. + +클러스터 상태를 가져와서 kubectl이 올바르게 구성되어 있는지 확인한다. + +```shell +kubectl cluster-info +``` + +URL 응답이 표시되면, kubectl이 클러스터에 접근하도록 올바르게 구성된 것이다. + +다음과 비슷한 메시지가 표시되면, kubectl이 올바르게 구성되지 않았거나 쿠버네티스 클러스터에 연결할 수 없다. + +``` +The connection to the server was refused - did you specify the right host or port? +``` + +예를 들어, 랩톱에서 로컬로 쿠버네티스 클러스터를 실행하려면, Minikube와 같은 도구를 먼저 설치한 다음 위에서 언급한 명령을 다시 실행해야 한다. + +kubectl cluster-info가 URL 응답을 반환하지만 클러스터에 접근할 수 없는 경우, 올바르게 구성되었는지 확인하려면 다음을 사용한다. 
+ +```shell +kubectl cluster-info dump +``` \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/install-kubectl-linux.md b/content/ko/docs/tasks/tools/install-kubectl-linux.md new file mode 100644 index 0000000000..0e8a6ac6ee --- /dev/null +++ b/content/ko/docs/tasks/tools/install-kubectl-linux.md @@ -0,0 +1,174 @@ +--- + + +title: 리눅스에 kubectl 설치 및 설정 +content_type: task +weight: 10 +card: + name: tasks + weight: 20 + title: 리눅스에 kubectl 설치하기 +--- + +## {{% heading "prerequisites" %}} + +클러스터의 마이너(minor) 버전 차이 내에 있는 kubectl 버전을 사용해야 한다. +예를 들어, v1.2 클라이언트는 v1.1, v1.2 및 v1.3의 마스터와 함께 작동해야 한다. +최신 버전의 kubectl을 사용하면 예기치 않은 문제를 피할 수 있다. + +## 리눅스에 kubectl 설치 + +다음과 같은 방법으로 리눅스에 kubectl을 설치할 수 있다. + +- [리눅스에서 curl을 사용하여 kubectl 바이너리 설치](#install-kubectl-binary-with-curl-on-linux) +- [기본 패키지 관리 도구를 사용하여 설치](#install-using-native-package-management) +- [다른 패키지 관리 도구를 사용하여 설치](#install-using-other-package-management) +- [Google Cloud SDK를 사용하여 설치](#install-on-linux-as-part-of-the-google-cloud-sdk) + +### 리눅스에서 curl을 사용하여 kubectl 바이너리 설치 {#install-kubectl-binary-with-curl-on-linux} + +1. 다음 명령으로 최신 릴리스를 다운로드한다. + + ```bash + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" + ``` + + {{< note >}} +특정 버전을 다운로드하려면, `$(curl -L -s https://dl.k8s.io/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다. + +예를 들어, 리눅스에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다. + + ```bash + curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl + ``` + {{< /note >}} + +1. 바이너리를 검증한다. (선택 사항) + + kubectl 체크섬(checksum) 파일을 다운로드한다. + + ```bash + curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" + ``` + + kubectl 바이너리를 체크섬 파일을 통해 검증한다. + + ```bash + echo "$(}} + 동일한 버전의 바이너리와 체크섬을 다운로드한다. + {{< /note >}} + +1. kubectl 설치 + + ```bash + sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl + ``` + + {{< note >}} + 대상 시스템에 root 접근 권한을 가지고 있지 않더라도, `~/.local/bin` 디렉터리에 kubectl을 설치할 수 있다. + + ```bash + mkdir -p ~/.local/bin/kubectl + mv ./kubectl ~/.local/bin/kubectl + # 그리고 ~/.local/bin/kubectl을 $PATH에 추가 + ``` + + {{< /note >}} + +1. 설치한 버전이 최신인지 확인한다. + + ```bash + kubectl version --client + ``` + +### 기본 패키지 관리 도구를 사용하여 설치 {#install-using-native-package-management} + +{{< tabs name="kubectl_install" >}} +{{< tab name="Ubuntu, Debian 또는 HypriotOS" codelang="bash" >}} +sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl +curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - +echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list +sudo apt-get update +sudo apt-get install -y kubectl +{{< /tab >}} + +{{< tab name="CentOS, RHEL 또는 Fedora" codelang="bash" >}}cat < /etc/yum.repos.d/kubernetes.repo +[kubernetes] +name=Kubernetes +baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 +enabled=1 +gpgcheck=1 +repo_gpgcheck=1 +gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg +EOF +yum install -y kubectl +{{< /tab >}} +{{< /tabs >}} + +### 다른 패키지 관리 도구를 사용하여 설치 {#install-using-other-package-management} + +{{< tabs name="other_kubectl_install" >}} +{{% tab name="Snap" %}} +[snap](https://snapcraft.io/docs/core/install) 패키지 관리자를 지원하는 Ubuntu 또는 다른 리눅스 배포판을 사용하는 경우, kubectl을 [snap](https://snapcraft.io/) 애플리케이션으로 설치할 수 있다. 
+ +```shell +snap install kubectl --classic + +kubectl version --client +``` + +{{% /tab %}} + +{{% tab name="Homebrew" %}} +리눅스 상에서 [Homebrew](https://docs.brew.sh/Homebrew-on-Linux) 패키지 관리자를 사용한다면, [설치](https://docs.brew.sh/Homebrew-on-Linux#install)를 통해 kubectl을 사용할 수 있다. + +```shell +brew install kubectl + +kubectl version --client +``` + +{{% /tab %}} + +{{< /tabs >}} + +### Google Cloud SDK를 사용하여 설치 {#install-on-linux-as-part-of-the-google-cloud-sdk} + +{{< include "included/install-kubectl-gcloud.md" >}} + +## kubectl 구성 확인 + +{{< include "included/verify-kubectl.md" >}} + +## 선택적 kubectl 구성 + +### 셸 자동 완성 활성화 + +kubectl은 Bash 및 Zsh에 대한 자동 완성 지원을 제공하므로 입력을 위한 타이핑을 많이 절약할 수 있다. + +다음은 Bash 및 Zsh에 대한 자동 완성을 설정하는 절차이다. + +{{< tabs name="kubectl_autocompletion" >}} +{{< tab name="Bash" include="included/optional-kubectl-configs-bash-linux.md" />}} +{{< tab name="Zsh" include="included/optional-kubectl-configs-zsh.md" />}} +{{< /tabs >}} + +## {{% heading "whatsnext" %}} + +{{< include "included/kubectl-whats-next.md" >}} diff --git a/content/ko/docs/tasks/tools/install-kubectl-macos.md b/content/ko/docs/tasks/tools/install-kubectl-macos.md new file mode 100644 index 0000000000..b0747f8a1c --- /dev/null +++ b/content/ko/docs/tasks/tools/install-kubectl-macos.md @@ -0,0 +1,160 @@ +--- + + +title: macOS에 kubectl 설치 및 설정 +content_type: task +weight: 10 +card: + name: tasks + weight: 20 + title: macOS에 kubectl 설치하기 +--- + +## {{% heading "prerequisites" %}} + +클러스터의 마이너(minor) 버전 차이 내에 있는 kubectl 버전을 사용해야 한다. +예를 들어, v1.2 클라이언트는 v1.1, v1.2 및 v1.3의 마스터와 함께 작동해야 한다. +최신 버전의 kubectl을 사용하면 예기치 않은 문제를 피할 수 있다. + +## macOS에 kubectl 설치 + +다음과 같은 방법으로 macOS에 kubectl을 설치할 수 있다. + +- [macOS에서 curl을 사용하여 kubectl 바이너리 설치](#install-kubectl-binary-with-curl-on-macos) +- [macOS에서 Homebrew를 사용하여 설치](#install-with-homebrew-on-macos) +- [macOS에서 Macports를 사용하여 설치](#install-with-macports-on-macos) +- [Google Cloud SDK를 사용하여 설치](#install-on-macos-as-part-of-the-google-cloud-sdk) + +### macOS에서 curl을 사용하여 kubectl 바이너리 설치 {#install-kubectl-binary-with-curl-on-macos} + +1. 최신 릴리스를 다운로드한다. + + ```bash + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl" + ``` + + {{< note >}} + 특정 버전을 다운로드하려면, `$(curl -L -s https://dl.k8s.io/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다. + + 예를 들어, macOS에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다. + + ```bash + curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl + ``` + + {{< /note >}} + +1. 바이너리를 검증한다. (선택 사항) + + kubectl 체크섬 파일을 다운로드한다. + + ```bash + curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256" + ``` + + kubectl 바이너리를 체크섬 파일을 통해 검증한다. + + ```bash + echo "$(}} + 동일한 버전의 바이너리와 체크섬을 다운로드한다. + {{< /note >}} + +1. kubectl 바이너리를 실행 가능하게 한다. + + ```bash + chmod +x ./kubectl + ``` + +1. kubectl 바이너리를 시스템 `PATH` 의 파일 위치로 옮긴다. + + ```bash + sudo mv ./kubectl /usr/local/bin/kubectl && \ + sudo chown root: /usr/local/bin/kubectl + ``` + +1. 설치한 버전이 최신 버전인지 확인한다. + + ```bash + kubectl version --client + ``` + +### macOS에서 Homebrew를 사용하여 설치 {#install-with-homebrew-on-macos} + +macOS에서 [Homebrew](https://brew.sh/) 패키지 관리자를 사용하는 경우, Homebrew로 kubectl을 설치할 수 있다. + +1. 설치 명령을 실행한다. + + ```bash + brew install kubectl + ``` + + 또는 + + ```bash + brew install kubernetes-cli + ``` + +1. 설치한 버전이 최신 버전인지 확인한다. 
+ + ```bash + kubectl version --client + ``` + +### macOS에서 Macports를 사용하여 설치 {#install-with-macports-on-macos} + +macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하는 경우, Macports로 kubectl을 설치할 수 있다. + +1. 설치 명령을 실행한다. + + ```bash + sudo port selfupdate + sudo port install kubectl + ``` + +1. 설치한 버전이 최신 버전인지 확인한다. + + ```bash + kubectl version --client + ``` + + +### Google Cloud SDK를 사용하여 설치 {#install-on-macos-as-part-of-the-google-cloud-sdk} + +{{< include "included/install-kubectl-gcloud.md" >}} + +## kubectl 구성 확인 + +{{< include "included/verify-kubectl.md" >}} + +## 선택적 kubectl 구성 + +### 셸 자동 완성 활성화 + +kubectl은 Bash 및 Zsh에 대한 자동 완성 지원을 제공하므로 입력을 위한 타이핑을 많이 절약할 수 있다. + +다음은 Bash 및 Zsh에 대한 자동 완성을 설정하는 절차이다. + +{{< tabs name="kubectl_autocompletion" >}} +{{< tab name="Bash" include="included/optional-kubectl-configs-bash-mac.md" />}} +{{< tab name="Zsh" include="included/optional-kubectl-configs-zsh.md" />}} +{{< /tabs >}} + +## {{% heading "whatsnext" %}} + +{{< include "included/kubectl-whats-next.md" >}} \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/install-kubectl-windows.md b/content/ko/docs/tasks/tools/install-kubectl-windows.md new file mode 100644 index 0000000000..e1c67af9ce --- /dev/null +++ b/content/ko/docs/tasks/tools/install-kubectl-windows.md @@ -0,0 +1,179 @@ +--- + + +title: 윈도우에 kubectl 설치 및 설정 +content_type: task +weight: 10 +card: + name: tasks + weight: 20 + title: 윈도우에 kubectl 설치하기 +--- + +## {{% heading "prerequisites" %}} + +클러스터의 마이너(minor) 버전 차이 내에 있는 kubectl 버전을 사용해야 한다. +예를 들어, v1.2 클라이언트는 v1.1, v1.2 및 v1.3의 마스터와 함께 작동해야 한다. +최신 버전의 kubectl을 사용하면 예기치 않은 문제를 피할 수 있다. + +## 윈도우에 kubectl 설치 + +다음과 같은 방법으로 윈도우에 kubectl을 설치할 수 있다. + +- [윈도우에서 curl을 사용하여 kubectl 바이너리 설치](#install-kubectl-binary-with-curl-on-windows) +- [PSGallery에서 PowerShell로 설치](#install-with-powershell-from-psgallery) +- [Chocolatey 또는 Scoop을 사용하여 윈도우에 설치](#install-on-windows-using-chocolatey-or-scoop) +- [Google Cloud SDK를 사용하여 설치](#install-on-windows-as-part-of-the-google-cloud-sdk) + + +### 윈도우에서 curl을 사용하여 kubectl 바이너리 설치 {#install-kubectl-binary-with-curl-on-windows} + +1. [최신 릴리스 {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)를 다운로드한다. + + 또는 `curl` 을 설치한 경우, 다음 명령을 사용한다. + + ```powershell + curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe + ``` + + {{< note >}} + 최신의 안정 버전(예: 스크립팅을 위한)을 찾으려면, [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt)를 참고한다. + {{< /note >}} + +1. 바이너리를 검증한다. (선택 사항) + + kubectl 체크섬 파일을 다운로드한다. + + ```powershell + curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256 + ``` + + kubectl 바이너리를 체크섬 파일을 통해 검증한다. + + - 수동으로 `CertUtil` 의 출력과 다운로드한 체크섬 파일을 비교하기 위해서 커맨드 프롬프트를 사용한다. + + ```cmd + CertUtil -hashfile kubectl.exe SHA256 + type kubectl.exe.sha256 + ``` + + - `-eq` 연산자를 통해 `True` 또는 `False` 결과를 얻는 자동 검증을 위해서 PowerShell을 사용한다. + + ```powershell + $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256) + ``` + +1. 바이너리를 `PATH` 가 설정된 디렉터리에 추가한다. + +1. `kubectl` 의 버전이 다운로드한 버전과 같은지 확인한다. + + ```cmd + kubectl version --client + ``` + +{{< note >}} +[윈도우용 도커 데스크톱](https://docs.docker.com/docker-for-windows/#kubernetes)은 자체 버전의 `kubectl` 을 `PATH` 에 추가한다. +도커 데스크톱을 이전에 설치한 경우, 도커 데스크톱 설치 프로그램에서 추가한 `PATH` 항목 앞에 `PATH` 항목을 배치하거나 도커 데스크톱의 `kubectl` 을 제거해야 할 수도 있다. 
+{{< /note >}} + +### PSGallery에서 PowerShell로 설치 {#install-with-powershell-from-psgallery} + +윈도우에서 [Powershell Gallery](https://www.powershellgallery.com/) 패키지 관리자를 사용하는 경우, Powershell로 kubectl을 설치하고 업데이트할 수 있다. + +1. 설치 명령을 실행한다(`DownloadLocation` 을 지정해야 한다). + + ```powershell + Install-Script -Name install-kubectl -Scope CurrentUser -Force + install-kubectl.ps1 [-DownloadLocation ] + ``` + + {{< note >}} + `DownloadLocation` 을 지정하지 않으면, `kubectl` 은 사용자의 `temp` 디렉터리에 설치된다. + {{< /note >}} + + 설치 프로그램은 `$HOME/.kube` 를 생성하고 구성 파일을 작성하도록 지시한다. + +1. 설치한 버전이 최신 버전인지 확인한다. + + ```powershell + kubectl version --client + ``` + +{{< note >}} +설치 업데이트는 1 단계에서 나열한 두 명령을 다시 실행하여 수행한다. +{{< /note >}} + +### Chocolatey 또는 Scoop을 사용하여 윈도우에 설치 {#install-on-windows-using-chocolatey-or-scoop} + +1. 윈도우에 kubectl을 설치하기 위해서 [Chocolatey](https://chocolatey.org) 패키지 관리자나 [Scoop](https://scoop.sh) 커맨드 라인 설치 프로그램을 사용할 수 있다. + + {{< tabs name="kubectl_win_install" >}} + {{% tab name="choco" %}} + ```powershell + choco install kubernetes-cli + ``` + {{% /tab %}} + {{% tab name="scoop" %}} + ```powershell + scoop install kubectl + ``` + {{% /tab %}} + {{< /tabs >}} + + +1. 설치한 버전이 최신 버전인지 확인한다. + + ```powershell + kubectl version --client + ``` + +1. 홈 디렉터리로 이동한다. + + ```powershell + # cmd.exe를 사용한다면, 다음을 실행한다. cd %USERPROFILE% + cd ~ + ``` + +1. `.kube` 디렉터리를 생성한다. + + ```powershell + mkdir .kube + ``` + +1. 금방 생성한 `.kube` 디렉터리로 이동한다. + + ```powershell + cd .kube + ``` + +1. 원격 쿠버네티스 클러스터를 사용하도록 kubectl을 구성한다. + + ```powershell + New-Item config -type file + ``` + +{{< note >}} +메모장과 같은 텍스트 편집기를 선택하여 구성 파일을 편집한다. +{{< /note >}} + +### Google Cloud SDK를 사용하여 설치 {#install-on-windows-as-part-of-the-google-cloud-sdk} + +{{< include "included/install-kubectl-gcloud.md" >}} + +## kubectl 구성 확인 + +{{< include "included/verify-kubectl.md" >}} + +## 선택적 kubectl 구성 + +### 셸 자동 완성 활성화 + +kubectl은 Bash 및 Zsh에 대한 자동 완성 지원을 제공하므로 입력을 위한 타이핑을 많이 절약할 수 있다. + +다음은 Zsh에 대한 자동 완성을 설정하는 절차이다. + +{{< include "included/optional-kubectl-configs-zsh.md" >}} + +## {{% heading "whatsnext" %}} + +{{< include "included/kubectl-whats-next.md" >}} \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/install-kubectl.md b/content/ko/docs/tasks/tools/install-kubectl.md deleted file mode 100644 index 70a6a60409..0000000000 --- a/content/ko/docs/tasks/tools/install-kubectl.md +++ /dev/null @@ -1,633 +0,0 @@ ---- - - -title: kubectl 설치 및 설정 -content_type: task -weight: 10 -card: - name: tasks - weight: 20 - title: kubectl 설치 ---- - - -쿠버네티스 커맨드 라인 도구인 [kubectl](/ko/docs/reference/kubectl/kubectl/)을 사용하면, -쿠버네티스 클러스터에 대해 명령을 실행할 수 있다. -kubectl을 사용하여 애플리케이션을 배포하고, 클러스터 리소스를 검사 및 관리하며 -로그를 볼 수 있다. kubectl 작업의 전체 목록에 대해서는, -[kubectl 개요](/ko/docs/reference/kubectl/overview/)를 참고한다. - - -## {{% heading "prerequisites" %}} - -클러스터의 마이너(minor) 버전 차이 내에 있는 kubectl 버전을 사용해야 한다. -예를 들어, v1.2 클라이언트는 v1.1, v1.2 및 v1.3의 마스터와 함께 작동해야 한다. -최신 버전의 kubectl을 사용하면 예기치 않은 문제를 피할 수 있다. - - - -## 리눅스에 kubectl 설치 - -### 리눅스에서 curl을 사용하여 kubectl 바이너리 설치 - -1. 다음 명령으로 최신 릴리스를 다운로드한다. - - ```bash - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" - ``` - - {{< note >}} -특정 버전을 다운로드하려면, `$(curl -L -s https://dl.k8s.io/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다. - -예를 들어, 리눅스에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다. - - ```bash - curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl - ``` - {{< /note >}} - -1. 바이너리를 검증한다. 
(선택 사항) - - kubectl 체크섬(checksum) 파일을 다운로드한다. - - ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" - ``` - - kubectl 바이너리를 체크섬 파일을 통해 검증한다. - - ```bash - echo "$(}} - 동일한 버전의 바이너리와 체크섬을 다운로드한다. - {{< /note >}} - -1. kubectl 설치 - - ```bash - sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl - ``` - - {{< note >}} - 대상 시스템에 root 접근 권한을 가지고 있지 않더라도, `~/.local/bin` 디렉터리에 kubectl을 설치할 수 있다. - - ```bash - mkdir -p ~/.local/bin/kubectl - mv ./kubectl ~/.local/bin/kubectl - # 그리고 ~/.local/bin/kubectl을 $PATH에 추가 - ``` - - {{< /note >}} - -1. 설치한 버전이 최신인지 확인한다. - - ```bash - kubectl version --client - ``` - -### 기본 패키지 관리 도구를 사용하여 설치 - -{{< tabs name="kubectl_install" >}} -{{< tab name="Ubuntu, Debian 또는 HypriotOS" codelang="bash" >}} -sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl -curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - -echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list -sudo apt-get update -sudo apt-get install -y kubectl -{{< /tab >}} - -{{< tab name="CentOS, RHEL 또는 Fedora" codelang="bash" >}}cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -EOF -yum install -y kubectl -{{< /tab >}} -{{< /tabs >}} - -### 다른 패키지 관리 도구를 사용하여 설치 - -{{< tabs name="other_kubectl_install" >}} -{{% tab name="Snap" %}} -[snap](https://snapcraft.io/docs/core/install) 패키지 관리자를 지원하는 Ubuntu 또는 다른 리눅스 배포판을 사용하는 경우, kubectl을 [snap](https://snapcraft.io/) 애플리케이션으로 설치할 수 있다. - -```shell -snap install kubectl --classic - -kubectl version --client -``` - -{{% /tab %}} - -{{% tab name="Homebrew" %}} -리눅스 상에서 [Homebrew](https://docs.brew.sh/Homebrew-on-Linux) 패키지 관리자를 사용한다면, [설치](https://docs.brew.sh/Homebrew-on-Linux#install)를 통해 kubectl을 사용할 수 있다. - -```shell -brew install kubectl - -kubectl version --client -``` - -{{% /tab %}} - -{{< /tabs >}} - - -## macOS에 kubectl 설치 - -### macOS에서 curl을 사용하여 kubectl 바이너리 설치 - -1. 최신 릴리스를 다운로드한다. - - ```bash - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl" - ``` - - {{< note >}} - 특정 버전을 다운로드하려면, `$(curl -L -s https://dl.k8s.io/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다. - - 예를 들어, macOS에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다. - - ```bash - curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl - ``` - - {{< /note >}} - -1. 바이너리를 검증한다. (선택 사항) - - kubectl 체크섬 파일을 다운로드한다. - - ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256" - ``` - - kubectl 바이너리를 체크섬 파일을 통해 검증한다. - - ```bash - echo "$(}} - 동일한 버전의 바이너리와 체크섬을 다운로드한다. - {{< /note >}} - -1. kubectl 바이너리를 실행 가능하게 한다. - - ```bash - chmod +x ./kubectl - ``` - -1. kubectl 바이너리를 시스템 `PATH` 의 파일 위치로 옮긴다. - - ```bash - sudo mv ./kubectl /usr/local/bin/kubectl && \ - sudo chown root: /usr/local/bin/kubectl - ``` - -1. 설치한 버전이 최신 버전인지 확인한다. - - ```bash - kubectl version --client - ``` - -### macOS에서 Homebrew를 사용하여 설치 - -macOS에서 [Homebrew](https://brew.sh/) 패키지 관리자를 사용하는 경우, Homebrew로 kubectl을 설치할 수 있다. - -1. 설치 명령을 실행한다. 
- - ```bash - brew install kubectl - ``` - - 또는 - - ```bash - brew install kubernetes-cli - ``` - -1. 설치한 버전이 최신 버전인지 확인한다. - - ```bash - kubectl version --client - ``` - -### macOS에서 Macports를 사용하여 설치 - -macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하는 경우, Macports로 kubectl을 설치할 수 있다. - -1. 설치 명령을 실행한다. - - ```bash - sudo port selfupdate - sudo port install kubectl - ``` - -1. 설치한 버전이 최신 버전인지 확인한다. - - ```bash - kubectl version --client - ``` - -## 윈도우에 kubectl 설치 - -### 윈도우에서 curl을 사용하여 kubectl 바이너리 설치 - -1. [최신 릴리스 {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)를 다운로드한다. - - 또는 `curl` 을 설치한 경우, 다음 명령을 사용한다. - - ```powershell - curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe - ``` - - {{< note >}} - 최신의 안정 버전(예: 스크립팅을 위한)을 찾으려면, [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt)를 참고한다. - {{< /note >}} - -1. 바이너리를 검증한다. (선택 사항) - - kubectl 체크섬 파일을 다운로드한다. - - ```powershell - curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256 - ``` - - kubectl 바이너리를 체크섬 파일을 통해 검증한다. - - - 수동으로 `CertUtil` 의 출력과 다운로드한 체크섬 파일을 비교하기 위해서 커맨드 프롬프트를 사용한다. - - ```cmd - CertUtil -hashfile kubectl.exe SHA256 - type kubectl.exe.sha256 - ``` - - - `-eq` 연산자를 통해 `True` 또는 `False` 결과를 얻는 자동 검증을 위해서 PowerShell을 사용한다. - - ```powershell - $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256) - ``` - -1. 바이너리를 `PATH` 가 설정된 디렉터리에 추가한다. - -1. `kubectl` 의 버전이 다운로드한 버전과 같은지 확인한다. - - ```cmd - kubectl version --client - ``` - -{{< note >}} -[윈도우용 도커 데스크톱](https://docs.docker.com/docker-for-windows/#kubernetes)은 자체 버전의 `kubectl` 을 `PATH` 에 추가한다. -도커 데스크톱을 이전에 설치한 경우, 도커 데스크톱 설치 프로그램에서 추가한 `PATH` 항목 앞에 `PATH` 항목을 배치하거나 도커 데스크톱의 `kubectl` 을 제거해야 할 수도 있다. -{{< /note >}} - -### PSGallery에서 PowerShell로 설치 - -윈도우에서 [Powershell Gallery](https://www.powershellgallery.com/) 패키지 관리자를 사용하는 경우, Powershell로 kubectl을 설치하고 업데이트할 수 있다. - -1. 설치 명령을 실행한다(`DownloadLocation` 을 지정해야 한다). - - ```powershell - Install-Script -Name install-kubectl -Scope CurrentUser -Force - install-kubectl.ps1 [-DownloadLocation ] - ``` - - {{< note >}} - `DownloadLocation` 을 지정하지 않으면, `kubectl` 은 사용자의 `temp` 디렉터리에 설치된다. - {{< /note >}} - - 설치 프로그램은 `$HOME/.kube` 를 생성하고 구성 파일을 작성하도록 지시한다. - -1. 설치한 버전이 최신 버전인지 확인한다. - - ```powershell - kubectl version --client - ``` - -{{< note >}} -설치 업데이트는 1 단계에서 나열한 두 명령을 다시 실행하여 수행한다. -{{< /note >}} - -### Chocolatey 또는 Scoop을 사용하여 윈도우에 설치 - -1. 윈도우에 kubectl을 설치하기 위해서 [Chocolatey](https://chocolatey.org) 패키지 관리자나 [Scoop](https://scoop.sh) 커맨드 라인 설치 프로그램을 사용할 수 있다. - - {{< tabs name="kubectl_win_install" >}} - {{% tab name="choco" %}} - ```powershell - choco install kubernetes-cli - ``` - {{% /tab %}} - {{% tab name="scoop" %}} - ```powershell - scoop install kubectl - ``` - {{% /tab %}} - {{< /tabs >}} - - -1. 설치한 버전이 최신 버전인지 확인한다. - - ```powershell - kubectl version --client - ``` - -1. 홈 디렉터리로 이동한다. - - ```powershell - # cmd.exe를 사용한다면, 다음을 실행한다. cd %USERPROFILE% - cd ~ - ``` - -1. `.kube` 디렉터리를 생성한다. - - ```powershell - mkdir .kube - ``` - -1. 금방 생성한 `.kube` 디렉터리로 이동한다. - - ```powershell - cd .kube - ``` - -1. 원격 쿠버네티스 클러스터를 사용하도록 kubectl을 구성한다. - - ```powershell - New-Item config -type file - ``` - -{{< note >}} -메모장과 같은 텍스트 편집기를 선택하여 구성 파일을 편집한다. -{{< /note >}} - -## Google Cloud SDK의 일부로 다운로드 - -kubectl을 Google Cloud SDK의 일부로 설치할 수 있다. - -1. [Google Cloud SDK](https://cloud.google.com/sdk/)를 설치한다. 
- -1. `kubectl` 설치 명령을 실행한다. - - ```shell - gcloud components install kubectl - ``` - -1. 설치한 버전이 최신 버전인지 확인한다. - - ```shell - kubectl version --client - ``` - -## kubectl 구성 확인 - -kubectl이 쿠버네티스 클러스터를 찾아 접근하려면, -[kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh)를 -사용하여 클러스터를 생성하거나 Minikube 클러스터를 성공적으로 배포할 때 자동으로 생성되는 -[kubeconfig 파일](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/)이 -필요하다. -기본적으로, kubectl 구성은 `~/.kube/config` 에 있다. - -클러스터 상태를 가져와서 kubectl이 올바르게 구성되어 있는지 확인한다. - -```shell -kubectl cluster-info -``` - -URL 응답이 표시되면, kubectl이 클러스터에 접근하도록 올바르게 구성된 것이다. - -다음과 비슷한 메시지가 표시되면, kubectl이 올바르게 구성되지 않았거나 쿠버네티스 클러스터에 연결할 수 없다. - -``` -The connection to the server was refused - did you specify the right host or port? -``` - -예를 들어, 랩톱에서 로컬로 쿠버네티스 클러스터를 실행하려면, Minikube와 같은 도구를 먼저 설치한 다음 위에서 언급한 명령을 다시 실행해야 한다. - -kubectl cluster-info가 URL 응답을 반환하지만 클러스터에 접근할 수 없는 경우, 올바르게 구성되었는지 확인하려면 다음을 사용한다. - -```shell -kubectl cluster-info dump -``` - -## 선택적 kubectl 구성 - -### 셸 자동 완성 활성화 - -kubectl은 Bash 및 Zsh에 대한 자동 완성 지원을 제공하므로 입력을 위한 타이핑을 많이 절약할 수 있다. - -다음은 Bash(리눅스와 macOS의 다른 점 포함) 및 Zsh에 대한 자동 완성을 설정하는 절차이다. - -{{< tabs name="kubectl_autocompletion" >}} - -{{% tab name="리눅스에서의 Bash" %}} - -### 소개 - -Bash의 kubectl 완성 스크립트는 `kubectl completion bash` 명령으로 생성할 수 있다. 셸에서 완성 스크립트를 소싱(sourcing)하면 kubectl 자동 완성 기능이 활성화된다. - -그러나, 완성 스크립트는 [**bash-completion**](https://github.com/scop/bash-completion)에 의존하고 있으며, 이 소프트웨어를 먼저 설치해야 한다(`type _init_completion` 을 실행하여 bash-completion이 이미 설치되어 있는지 확인할 수 있음). - -### bash-completion 설치 - -bash-completion은 많은 패키지 관리자에 의해 제공된다([여기](https://github.com/scop/bash-completion#installation) 참고). `apt-get install bash-completion` 또는 `yum install bash-completion` 등으로 설치할 수 있다. - -위의 명령은 bash-completion의 기본 스크립트인 `/usr/share/bash-completion/bash_completion` 을 생성한다. 패키지 관리자에 따라, `~/.bashrc` 파일에서 이 파일을 수동으로 소스(source)해야 한다. - -확인하려면, 셸을 다시 로드하고 `type _init_completion` 을 실행한다. 명령이 성공하면, 이미 설정된 상태이고, 그렇지 않으면 `~/.bashrc` 파일에 다음을 추가한다. - -```bash -source /usr/share/bash-completion/bash_completion -``` - -셸을 다시 로드하고 `type _init_completion` 을 입력하여 bash-completion이 올바르게 설치되었는지 확인한다. - -### kubectl 자동 완성 활성화 - -이제 kubectl 완성 스크립트가 모든 셸 세션에서 제공되도록 해야 한다. 이를 수행할 수 있는 두 가지 방법이 있다. - -- `~/.bashrc` 파일에서 완성 스크립트를 소싱한다. - - ```bash - echo 'source <(kubectl completion bash)' >>~/.bashrc - ``` - -- 완성 스크립트를 `/etc/bash_completion.d` 디렉터리에 추가한다. - - ```bash - kubectl completion bash >/etc/bash_completion.d/kubectl - ``` - -kubectl에 대한 앨리어스(alias)가 있는 경우, 해당 앨리어스로 작업하도록 셸 완성을 확장할 수 있다. - -```bash -echo 'alias k=kubectl' >>~/.bashrc -echo 'complete -F __start_kubectl k' >>~/.bashrc -``` - -{{< note >}} -bash-completion은 `/etc/bash_completion.d` 에 있는 모든 완성 스크립트를 소싱한다. -{{< /note >}} - -두 방법 모두 동일하다. 셸을 다시 로드한 후, kubectl 자동 완성 기능이 작동해야 한다. - -{{% /tab %}} - - -{{% tab name="macOS에서의 Bash" %}} - - -### 소개 - -Bash의 kubectl 완성 스크립트는 `kubectl completion bash` 로 생성할 수 있다. 이 스크립트를 셸에 소싱하면 kubectl 완성이 가능하다. - -그러나 kubectl 완성 스크립트는 미리 [**bash-completion**](https://github.com/scop/bash-completion)을 설치해야 동작한다. - -{{< warning>}} -bash-completion에는 v1과 v2 두 가지 버전이 있다. v1은 Bash 3.2(macOS의 기본 설치 버전) 버전용이고, v2는 Bash 4.1 이상 버전용이다. kubectl 완성 스크립트는 bash-completion v1과 Bash 3.2 버전에서는 **작동하지 않는다**. **bash-completion v2** 와 **Bash 4.1 이상 버전** 이 필요하다. 따라서, macOS에서 kubectl 완성 기능을 올바르게 사용하려면, Bash 4.1 이상을 설치하고 사용해야한다([*지침*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). 다음의 내용에서는 Bash 4.1 이상(즉, 모든 Bash 버전 4.1 이상)을 사용한다고 가정한다. 
-{{< /warning >}} - -### Bash 업그레이드 - -여기의 지침에서는 Bash 4.1 이상을 사용한다고 가정한다. 다음을 실행하여 Bash 버전을 확인할 수 있다. - -```bash -echo $BASH_VERSION -``` - -너무 오래된 버전인 경우, Homebrew를 사용하여 설치/업그레이드할 수 있다. - -```bash -brew install bash -``` - -셸을 다시 로드하고 원하는 버전을 사용 중인지 확인한다. - -```bash -echo $BASH_VERSION $SHELL -``` - -Homebrew는 보통 `/usr/local/bin/bash` 에 설치한다. - -### bash-completion 설치 - -{{< note >}} -언급한 바와 같이, 이 지침에서는 Bash 4.1 이상을 사용한다고 가정한다. 이는 bash-completion v2를 설치한다는 것을 의미한다(Bash 3.2 및 bash-completion v1의 경우, kubectl 완성이 작동하지 않음). -{{< /note >}} - -bash-completion v2가 이미 설치되어 있는지 `type_init_completion` 으로 확인할 수 있다. 그렇지 않은 경우, Homebrew로 설치할 수 있다. - -```bash -brew install bash-completion@2 -``` - -이 명령의 출력에 명시된 바와 같이, `~/.bash_profile` 파일에 다음을 추가한다. - -```bash -export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d" -[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh" -``` - -셸을 다시 로드하고 bash-completion v2가 올바르게 설치되었는지 `type _init_completion` 으로 확인한다. - -### kubectl 자동 완성 활성화 - -이제 kubectl 완성 스크립트가 모든 셸 세션에서 제공되도록 해야 한다. 이를 수행하는 방법에는 여러 가지가 있다. - -- 완성 스크립트를 `~/.bash_profile` 파일에서 소싱한다. - - ```bash - echo 'source <(kubectl completion bash)' >>~/.bash_profile - ``` - -- 완성 스크립트를 `/usr/local/etc/bash_completion.d` 디렉터리에 추가한다. - - ```bash - kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl - ``` - -- kubectl에 대한 앨리어스가 있는 경우, 해당 앨리어스로 작업하기 위해 셸 완성을 확장할 수 있다. - - ```bash - echo 'alias k=kubectl' >>~/.bash_profile - echo 'complete -F __start_kubectl k' >>~/.bash_profile - ``` - -- Homebrew로 kubectl을 설치한 경우([위](#macos에서-homebrew를-사용하여-설치)의 설명을 참고), kubectl 완성 스크립트는 이미 `/usr/local/etc/bash_completion.d/kubectl` 에 있어야 한다. 이 경우, 아무 것도 할 필요가 없다. - - {{< note >}} - bash-completion v2의 Homebrew 설치는 `BASH_COMPLETION_COMPAT_DIR` 디렉터리의 모든 파일을 소싱하므로, 후자의 두 가지 방법이 적용된다. - {{< /note >}} - -어쨌든, 셸을 다시 로드 한 후에, kubectl 완성이 작동해야 한다. -{{% /tab %}} - -{{% tab name="Zsh" %}} - -Zsh용 kubectl 완성 스크립트는 `kubectl completion zsh` 명령으로 생성할 수 있다. 셸에서 완성 스크립트를 소싱하면 kubectl 자동 완성 기능이 활성화된다. - -모든 셸 세션에서 사용하려면, `~/.zshrc` 파일에 다음을 추가한다. - -```zsh -source <(kubectl completion zsh) -``` - -kubectl에 대한 앨리어스가 있는 경우, 해당 앨리어스로 작업하도록 셸 완성을 확장할 수 있다. - -```zsh -echo 'alias k=kubectl' >>~/.zshrc -echo 'complete -F __start_kubectl k' >>~/.zshrc -``` - -셸을 다시 로드 한 후, kubectl 자동 완성 기능이 작동해야 한다. - -`complete:13: command not found: compdef` 와 같은 오류가 발생하면, `~/.zshrc` 파일의 시작 부분에 다음을 추가한다. - -```zsh -autoload -Uz compinit -compinit -``` -{{% /tab %}} -{{< /tabs >}} - -## {{% heading "whatsnext" %}} - -* [Minikube 설치](https://minikube.sigs.k8s.io/docs/start/) -* 클러스터 생성에 대한 자세한 내용은 [시작하기](/ko/docs/setup/)를 참고한다. -* [애플리케이션을 시작하고 노출하는 방법에 대해 배운다.](/ko/docs/tasks/access-application-cluster/service-access-application-cluster/) -* 직접 생성하지 않은 클러스터에 접근해야하는 경우, - [클러스터 접근 공유 문서](/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)를 참고한다. -* [kubectl 레퍼런스 문서](/ko/docs/reference/kubectl/kubectl/) 읽기 diff --git a/content/ko/docs/tutorials/hello-minikube.md b/content/ko/docs/tutorials/hello-minikube.md index 8f3f515f31..39e57ff501 100644 --- a/content/ko/docs/tutorials/hello-minikube.md +++ b/content/ko/docs/tutorials/hello-minikube.md @@ -61,6 +61,22 @@ Katacode는 무료로 브라우저에서 쿠버네티스 환경을 제공한다. 4. Katacoda 환경에서는: 30000 을 입력하고 **Display Port** 를 클릭. +{{< note >}} +`minikube dashboard` 명령을 내리면 대시보드 애드온과 프록시가 활성화되고 해당 프록시로 접속하는 기본 웹 브라우저 창이 열린다. 대시보드에서 디플로이먼트나 서비스와 같은 쿠버네티스 자원을 생성할 수 있다. 
+ +root 환경에서 명령어를 실행하고 있다면, [URL을 이용하여 대시보드 접속하기](#open-dashboard-with-url)를 참고한다. + +`Ctrl+C` 를 눌러 프록시를 종료할 수 있다. 대시보드는 종료되지 않고 실행 상태로 남아 있다. +{{< /note >}} + +## URL을 이용하여 대시보드 접속하기 {#open-dashboard-with-url} + +자동으로 웹 브라우저가 열리는 것을 원치 않는다면, 다음과 같은 명령어를 실행하여 대시보드 접속 URL을 출력할 수 있다: + +```shell +minikube dashboard --url +``` + ## 디플로이먼트 만들기 쿠버네티스 [*파드*](/ko/docs/concepts/workloads/pods/)는 관리와 diff --git a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html index f51d68e866..da8cce3e17 100644 --- a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html +++ b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -33,7 +33,7 @@ weight: 10

쿠버네티스 클러스터는 두 가지 형태의 자원으로 구성된다.

    -
  • 마스터는 클러스터를 조율한다.
  • +
  • 컨트롤 플레인은 클러스터를 조율한다.
  • 노드는 애플리케이션을 구동하는 작업자(worker)이다.

@@ -71,20 +71,20 @@ weight: 10
-

마스터는 클러스터 관리를 담당한다. 마스터는 애플리케이션을 스케줄링하거나, 애플리케이션의 항상성을 유지하거나, 애플리케이션을 스케일링하고, 새로운 변경사항을 순서대로 반영(rolling out)하는 일과 같은 클러스터 내 모든 활동을 조율한다.

-

노드는 쿠버네티스 클러스터 내 워커 머신으로 동작하는 VM 또는 물리적인 컴퓨터다. 각 노드는 노드를 관리하고 쿠버네티스 마스터와 통신하는 Kubelet이라는 에이전트를 갖는다. 노드는 컨테이너 운영을 담당하는 containerd 또는 도커와 같은 툴도 갖는다. 운영 트래픽을 처리하는 쿠버네티스 클러스터는 최소 세 대의 노드를 가져야 한다.

+

컨트롤 플레인은 클러스터 관리를 담당한다. 컨트롤 플레인은 애플리케이션을 스케줄링하거나, 애플리케이션의 항상성을 유지하거나, 애플리케이션을 스케일링하고, 새로운 변경사항을 순서대로 반영(rolling out)하는 일과 같은 클러스터 내 모든 활동을 조율한다.

+

노드는 쿠버네티스 클러스터 내 워커 머신으로 동작하는 VM 또는 물리적인 컴퓨터다. 각 노드는 노드를 관리하고 쿠버네티스 컨트롤 플레인과 통신하는 Kubelet이라는 에이전트를 갖는다. 노드는 컨테이너 운영을 담당하는 containerd 또는 도커와 같은 툴도 갖는다. 운영 트래픽을 처리하는 쿠버네티스 클러스터는 최소 세 대의 노드를 가져야 한다.

-

마스터는 실행 중인 애플리케이션을 호스팅하기 위해 사용되는 노드와 클러스터를 관리한다.

+

컨트롤 플레인은 실행 중인 애플리케이션을 호스팅하기 위해 사용되는 노드와 클러스터를 관리한다.

-

애플리케이션을 쿠버네티스에 배포하기 위해서는, 마스터에 애플리케이션 컨테이너의 구동을 지시하면 된다. 그러면 마스터는 컨테이너를 클러스터의 어느 노드에 구동시킬지 스케줄한다. 노드는 마스터가 제공하는 쿠버네티스 API를 통해서 마스터와 통신한다. 최종 사용자도 쿠버네티스 API를 사용해서 클러스터와 직접 상호작용(interact)할 수 있다.

+

애플리케이션을 쿠버네티스에 배포하기 위해서는, 컨트롤 플레인에 애플리케이션 컨테이너의 구동을 지시하면 된다. 그러면 컨트롤 플레인은 컨테이너를 클러스터의 어느 노드에 구동시킬지 스케줄한다. 노드는 컨트롤 플레인이 제공하는 쿠버네티스 API를 통해서 컨트롤 플레인과 통신한다. 최종 사용자도 쿠버네티스 API를 사용해서 클러스터와 직접 상호작용(interact)할 수 있다.

쿠버네티스 클러스터는 물리 및 가상 머신 모두에 설치될 수 있다. 쿠버네티스 개발을 시작하려면 Minikube를 사용할 수 있다. Minikube는 가벼운 쿠버네티스 구현체이며, 로컬 머신에 VM을 만들고 하나의 노드로 구성된 간단한 클러스터를 생성한다. Minikube는 리눅스, 맥, 그리고 윈도우 시스템에서 구동이 가능하다. Minikube CLI는 클러스터에 대해 시작, 중지, 상태 조회 및 삭제 등의 기본적인 부트스트래핑(bootstrapping) 기능을 제공한다. 하지만, 본 튜토리얼에서는 Minikube가 미리 설치된 채로 제공되는 온라인 터미널을 사용할 것이다.
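
다음은 Minikube가 이미 설치되어 있다고 가정하고, 로컬 클러스터를 시작한 뒤 노드를 확인하는 간단한 예시이다(출력 내용은 환경에 따라 다를 수 있다).

```shell
# Minikube가 설치되어 있다고 가정한다
minikube start      # 하나의 노드로 구성된 로컬 클러스터 생성
kubectl get nodes   # 클러스터에 등록된 노드 조회
```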

diff --git a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 5b41fe207a..4c250c1272 100644 --- a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -31,7 +31,7 @@ weight: 10 일단 쿠버네티스 클러스터를 구동시키면, 그 위에 컨테이너화된 애플리케이션을 배포할 수 있다. 그러기 위해서, 쿠버네티스 디플로이먼트 설정을 만들어야 한다. 디플로이먼트는 쿠버네티스가 애플리케이션의 인스턴스를 어떻게 생성하고 업데이트해야 하는지를 지시한다. 디플로이먼트가 만들어지면, - 쿠버네티스 마스터가 해당 디플로이먼트에 포함된 애플리케이션 인스턴스가 클러스터의 개별 노드에서 실행되도록 스케줄한다. + 쿠버네티스 컨트롤 플레인이 해당 디플로이먼트에 포함된 애플리케이션 인스턴스가 클러스터의 개별 노드에서 실행되도록 스케줄한다.

애플리케이션 인스턴스가 생성되면, 쿠버네티스 디플로이먼트 컨트롤러는 지속적으로 이들 인스턴스를 diff --git a/content/ko/docs/tutorials/services/source-ip.md b/content/ko/docs/tutorials/services/source-ip.md index 9d47599589..dec30dc54b 100644 --- a/content/ko/docs/tutorials/services/source-ip.md +++ b/content/ko/docs/tutorials/services/source-ip.md @@ -412,7 +412,7 @@ client_address=198.51.100.79 HTTP [Forwarded](https://tools.ietf.org/html/rfc7239#section-5.2) 또는 [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For) 헤더 또는 -[프록시 프로토콜](https://www.haproxy.org/download/1.5/doc/proxy-protocol.txt)과 +[프록시 프로토콜](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)과 같은 로드밸런서와 백엔드 간에 합의된 프로토콜을 사용해야 한다. 두 번째 범주의 로드밸런서는 서비스의 `service.spec.healthCheckNodePort` 필드의 저장된 포트를 가르키는 HTTP 헬스 체크를 생성하여 diff --git a/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md b/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md index a2df8806a8..a17ae9f320 100644 --- a/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md @@ -921,7 +921,7 @@ web-2 0/1 Terminating 0 3m `web` 스테이트풀셋이 다시 생성될 때 먼저 `web-0` 시작한다. `web-1`은 이미 Running과 Ready 상태이므로 `web-0`이 Running과 Ready 상태로 -전환될 때는 이 파드에 적용됬다. 스테이트풀셋에`replicas`를 2로 하고 +전환될 때는 이 파드에 적용됐다. 스테이트풀셋에 `replicas`를 2로 하고 `web-0`을 재생성했다면 `web-1`이 이미 Running과 Ready 상태이고, `web-2`은 종료되었을 것이다. diff --git a/content/ko/docs/tutorials/stateful-application/cassandra.md b/content/ko/docs/tutorials/stateful-application/cassandra.md index 8273f3bcd9..0a420100ce 100644 --- a/content/ko/docs/tutorials/stateful-application/cassandra.md +++ b/content/ko/docs/tutorials/stateful-application/cassandra.md @@ -114,7 +114,7 @@ cassandra ClusterIP None 9042/TCP 45s kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml ``` -클러스터에 맞게 `cassandra-statefulset.yaml` 를 수정해야 하는 경우 다음을 다운로드 한 다음 +클러스터에 맞게 `cassandra-statefulset.yaml` 를 수정해야 하는 경우 다음을 다운로드한 다음 수정된 버전을 저장한 폴더에서 해당 매니페스트를 적용한다. https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml ```shell diff --git a/content/ko/docs/tutorials/stateless-application/guestbook.md b/content/ko/docs/tutorials/stateless-application/guestbook.md index c91ecd2e1b..24e05ab77f 100644 --- a/content/ko/docs/tutorials/stateless-application/guestbook.md +++ b/content/ko/docs/tutorials/stateless-application/guestbook.md @@ -133,7 +133,7 @@ kubectl apply -f ./content/en/examples/application/guestbook/frontend-deployment 1. 파드의 목록을 질의하여 세 개의 프론트엔드 복제본이 실행되고 있는지 확인한다. ```shell - kubectl get pods -l app=guestbook -l tier=frontend + kubectl get pods -l app.kubernetes.io/name=guestbook -l app.kubernetes.io/component=frontend ``` 결과는 아래와 같은 형태로 나타난다. diff --git a/content/pt/docs/concepts/containers/container-lifecycle-hooks.md b/content/pt/docs/concepts/containers/container-lifecycle-hooks.md new file mode 100644 index 0000000000..984f248256 --- /dev/null +++ b/content/pt/docs/concepts/containers/container-lifecycle-hooks.md @@ -0,0 +1,114 @@ +--- +title: Hooks de Ciclo de Vida do Contêiner +content_type: concept +weight: 30 +--- + + + +Essa página descreve como os contêineres gerenciados pelo _kubelet_ podem usar a estrutura de _hook_ de ciclo de vida do contêiner para executar código acionado por eventos durante seu ciclo de vida de gerenciamento. 
+ + + + +## Visão Geral + +Análogo a muitas estruturas de linguagem de programação que tem _hooks_ de ciclo de vida de componentes, como angular, +o Kubernetes fornece aos contêineres _hooks_ de ciclo de vida. +Os _hooks_ permitem que os contêineres estejam cientes dos eventos em seu ciclo de vida de gerenciamento +e executem código implementado em um manipulador quando o _hook_ de ciclo de vida correspondente é executado. + +## Hooks do contêiner + +Existem dois _hooks_ que são expostos para os contêiners: + +`PostStart` + +Este _hook_ é executado imediatamente após um contêiner ser criado. +Entretanto, não há garantia que o _hook_ será executado antes do ENTRYPOINT do contêiner. +Nenhum parâmetro é passado para o manipulador. + +`PreStop` + +Esse _hook_ é chamado imediatamente antes de um contêiner ser encerrado devido a uma solicitação de API ou um gerenciamento de evento como liveness/startup probe failure, preemption, resource contention e outros. +Uma chamada ao _hook_ `PreStop` falha se o contêiner já está em um estado finalizado ou concluído e o _hook_ deve ser concluído antes que o sinal TERM seja enviado para parar o contêiner. A contagem regressiva do período de tolerância de término do Pod começa antes que o _hook_ `PreStop` seja executado, portanto, independentemente do resultado do manipulador, o contêiner será encerrado dentro do período de tolerância de encerramento do Pod. Nenhum parâmetro é passado para o manipulador. + +Uma descrição mais detalhada do comportamento de término pode ser encontrada em [Término de Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination). + +### Implementações de manipulador de hook + +Os contêineres podem acessar um _hook_ implementando e registrando um manipulador para esse _hook_. +Existem dois tipos de manipuladores de _hooks_ que podem ser implementados para contêineres: + +* Exec - Executa um comando específico, como `pre-stop.sh`, dentro dos cgroups e Namespaces do contêiner. +* HTTP - Executa uma requisição HTTP em um endpoint específico do contêiner. + +### Execução do manipulador de hook + + +Quando um _hook_ de gerenciamento de ciclo de vida do contêiner é chamado, o sistema de gerenciamento do Kubernetes executa o manipulador de acordo com a ação do _hook_, `httpGet` e `tcpSocket` são executados pelo processo kubelet e `exec` é executado pelo contêiner. + +As chamadas do manipulador do _hook_ são síncronas no contexto do Pod que contém o contêiner. +Isso significa que para um _hook_ `PostStart`, o ENTRYPOINT do contêiner e o _hook_ disparam de forma assíncrona. +No entanto, se o _hook_ demorar muito para ser executado ou travar, o contêiner não consegue atingir o estado `running`. + + +Os _hooks_ `PreStop` não são executados de forma assíncrona a partir do sinal para parar o contêiner, o _hook_ precisa finalizar a sua execução antes que o sinal TERM possa ser enviado. +Se um _hook_ `PreStop` travar durante a execução, a fase do Pod será `Terminating` e permanecerá até que o Pod seja morto após seu `terminationGracePeriodSeconds` expirar. Esse período de tolerância se aplica ao tempo total necessário +para o _hook_ `PreStop`executar e para o contêiner parar normalmente. 
+Se por exemplo, o `terminationGracePeriodSeconds` é 60, e o _hook_ leva 55 segundos para ser concluído, e o contêiner leva 10 segundos para parar normalmente após receber o sinal, então o contêiner será morto antes que possa parar +normalmente, uma vez que o `terminationGracePeriodSeconds` é menor que o tempo total (55 + 10) que é necessário para que essas duas coisas aconteçam. + +Se um _hook_ `PostStart` ou `PreStop` falhar, ele mata o contêiner. + +Os usuários devem tornar seus _hooks_ o mais leve possíveis. +Há casos, no entanto, em que comandos de longa duração fazem sentido, como ao salvar o estado +antes de parar um contêiner. + +### Garantias de entrega de _hooks_ + +A entrega do _hook_ é destinada a acontecer *pelo menos uma vez*, +o que quer dizer que um _hook_ pode ser chamado várias vezes para qualquer evento, +como para `PostStart` ou `PreStop`. +Depende da implementação do _hook_ lidar com isso corretamente. + +Geralmente, apenas entregas únicas são feitas. +Se, por exemplo, um receptor de _hook_ HTTP estiver inativo e não puder receber tráfego, +não há tentativa de reenviar. +Em alguns casos raros, no entanto, pode ocorrer uma entrega dupla. +Por exemplo, se um kubelet reiniciar no meio do envio de um _hook_, o _hook_ pode ser +reenviado depois que o kubelet voltar a funcionar. + +### Depurando manipuladores de _hooks_ + +Os logs para um manipulador de _hook_ não são expostos em eventos de Pod. +Se um manipulador falhar por algum motivo, ele transmitirá um evento. +Para `PostStart` é o evento `FailedPostStartHook` e para `PreStop` é o evento +`FailedPreStopHook`. +Você pode ver esses eventos executando `kubectl describe pod `. +Aqui está um exemplo de saída de eventos da execução deste comando: + +``` +Events: + FirstSeen LastSeen Count From SubObjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0" + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined] + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0" + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567 + 38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1 + 37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1 + 38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1" + 1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook +``` + + + +## {{% heading "whatsnext" %}} + + +* Saiba mais sobre o [Ambiente de contêiner](/docs/concepts/containers/container-environment/). 
+* Obtenha experiência prática + [anexando manipuladores a eventos de ciclo de vida do contêiner](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/). + diff --git a/content/pt/docs/reference/access-authn-authz/bootstrap-tokens.md b/content/pt/docs/reference/access-authn-authz/bootstrap-tokens.md new file mode 100644 index 0000000000..67f23e2bb6 --- /dev/null +++ b/content/pt/docs/reference/access-authn-authz/bootstrap-tokens.md @@ -0,0 +1,169 @@ +--- +title: Autenticando com Tokens de Inicialização +content_type: concept +weight: 20 +--- + + + +{{< feature-state for_k8s_version="v1.18" state="stable" >}} + +Os tokens de inicialização são um _bearer token_ simples que devem ser utilizados +ao criar novos clusters ou para quando novos nós são registrados a clusters existentes. Eles foram construídos +para suportar a ferramenta [kubeadm](/docs/reference/setup-tools/kubeadm/), mas podem ser utilizados em outros contextos para usuários que desejam inicializar clusters sem utilizar o `kubeadm`. +Foram também construídos para funcionar, via políticas RBAC, com o sistema de [Inicialização do Kubelet via TLS](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/). + + +## Visão geral dos tokens de inicialização + +Os tokens de inicialização são definidos com um tipo especifico de _secrets_ (`bootstrap.kubernetes.io/token`) que existem no namespace `kube-system`. Estes _secrets_ são então lidos pelo autenticador de inicialização do servidor de API. +Tokens expirados são removidos pelo controlador _TokenCleaner_ no gerenciador de controle - kube-controller-manager. +Os tokens também são utilizados para criar uma assinatura para um ConfigMap específico usado no processo de descoberta através de um controlador denominado `BootstrapSigner`. + +## Formato do Token + +Tokens de inicialização tem o formato `abcdef.0123456789abcdef`. Mais formalmente, eles devem corresponder a expressão regular `[a-z0-9]{6}\.[a-z0-9]{16}`. + +A primeira parte do token é um identificador ("Token ID") e é considerado informação pública. +Ele é utilizado para se referir a um token sem vazar a parte secreta usada para autenticação. +A segunda parte é o _secret_ do token e somente deve ser compartilhado com partes confiáveis. + +## Habilitando autenticação com tokens de inicialização + +O autenticador de tokens de inicialização pode ser habilitado utilizando a seguinte opção no servidor de API: + +``` +--enable-bootstrap-token-auth +``` + +Quando habilitado, tokens de inicialização podem ser utilizado como credenciais _bearer token_ +para autenticar requisições no servidor de API. + +```http +Authorization: Bearer 07401b.f395accd246ae52d +``` + +Tokens são autenticados como o usuário `system:bootstrap:` e são membros +do grupo `system:bootstrappers`. Grupos adicionais podem ser +especificados dentro do _secret_ do token. + +Tokens expirados podem ser removidos automaticamente ao habilitar o controlador `tokencleaner` +do gerenciador de controle - kube-controller-manager. + +``` +--controllers=*,tokencleaner +``` + +## Formato do _secret_ dos tokens de inicialização + +Cada token válido possui um _secret_ no namespace `kube-system`. Você pode +encontrar a documentação completa [aqui](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md). 
+ +Um _secret_ de token se parece com o exemplo abaixo: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + # Nome DEVE seguir o formato "bootstrap-token-" + name: bootstrap-token-07401b + namespace: kube-system + +# Tipo DEVE ser 'bootstrap.kubernetes.io/token' +type: bootstrap.kubernetes.io/token +stringData: + # Descrição legível. Opcional. + description: "The default bootstrap token generated by 'kubeadm init'." + + # identificador do token e _secret_. Obrigatório. + token-id: 07401b + token-secret: f395accd246ae52d + + # Validade. Opcional. + expiration: 2017-03-10T03:22:11Z + + # Usos permitidos. + usage-bootstrap-authentication: "true" + usage-bootstrap-signing: "true" + + # Grupos adicionais para autenticar o token. Devem começar com "system:bootstrappers:" + auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress +``` + +O tipo do _secret_ deve ser `bootstrap.kubernetes.io/token` e o nome deve seguir o formato `bootstrap-token-`. Ele também tem que existir no namespace `kube-system`. + +Os membros listados em `usage-bootstrap-*` indicam qual a intenção de uso deste _secret_. O valor `true` deve ser definido para que seja ativado. + +* `usage-bootstrap-authentication` indica que o token pode ser utilizado para autenticar no servidor de API como um _bearer token_. +* `usage-bootstrap-signing` indica que o token pode ser utilizado para assinar o ConfigMap `cluster-info` como descrito abaixo. + +O campo `expiration` controla a expiração do token. Tokens expirados são +rejeitados quando usados para autenticação e ignorados durante assinatura de ConfigMaps. +O valor de expiração é codificado como um tempo absoluto UTC utilizando a RFC3339. Para automaticamente +remover tokens expirados basta habilitar o controlador `tokencleaner`. + +## Gerenciamento de tokens com kubeadm + +Você pode usar a ferramenta `kubeadm` para gerenciar tokens em um cluster. Veja [documentação de tokens kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm-token/) para mais detalhes. + +## Assinatura de ConfigMap + +Além de autenticação, os tokens podem ser utilizados para assinar um ConfigMap. Isto pode +ser utilizado em estágio inicial do processo de inicialização de um cluster, antes que o cliente confie +no servidor de API. O Configmap assinado pode ser autenticado por um token compartilhado. + +Habilite a assinatura de ConfigMap ao habilitar o controlador `bootstrapsigner` no gerenciador de controle - kube-controller-manager. + +``` +--controllers=*,bootstrapsigner +``` +O ConfigMap assinado é o `cluster-info` no namespace `kube-public`. +No fluxo típico, um cliente lê o ConfigMap enquanto ainda não autenticado +e ignora os erros da camada de transporte seguro (TLS). +Ele então valida o conteúdo do ConfigMap ao verificar a assinatura contida no ConfigMap. + +O ConfigMap pode se parecer com o exemplo abaixo: + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: cluster-info + namespace: kube-public +data: + jws-kubeconfig-07401b: eyJhbGciOiJIUzI1NiIsImtpZCI6IjA3NDAxYiJ9..tYEfbo6zDNo40MQE07aZcQX2m3EB2rO3NuXtxVMYm9U + kubeconfig: | + apiVersion: v1 + clusters: + - cluster: + certificate-authority-data: + server: https://10.138.0.2:6443 + name: "" + contexts: [] + current-context: "" + kind: Config + preferences: {} + users: [] +``` + +O membro `kubeconfig` do ConfigMap é um arquivo de configuração contendo somente +as informações do cluster preenchidas. A informação chave sendo comunicada aqui +está em `certificate-authority-data`. Isto poderá ser expandido no futuro. 
+ +A assinatura é feita utilizando-se assinatura JWS em modo "separado". Para validar +a assinatura, o usuário deve codificar o conteúdo do `kubeconfig` de acordo com as regras do JWS +(codificando em base64 e descartando qualquer `=` ao final). O conteúdo codificado +e então usado para formar um JWS inteiro, inserindo-o entre os 2 pontos. Você pode +verificar o JWS utilizando o esquema `HS256` (HMAC-SHA256) com o token completo +(por exemplo: `07401b.f395accd246ae52d`) como o _secret_ compartilhado. Usuários _devem_ +verificar que o algoritmo HS256 (que é um método de assinatura simétrica) está sendo utilizado. + + +{{< warning >}} +Qualquer parte em posse de um token de inicialização pode criar uma assinatura válida +daquele token. Não é recomendável, quando utilizando assinatura de ConfigMap, que se compartilhe +o mesmo token com muitos clientes, uma vez que um cliente comprometido pode abrir brecha para potenciais +"homem no meio" entre outro cliente que confia na assinatura para estabelecer inicialização via camada de transporte seguro (TLS). +{{< /warning >}} + +Consulte a seção de [detalhes de implementação do kubeadm](/docs/reference/setup-tools/kubeadm/implementation-details/) para mais informações. \ No newline at end of file diff --git a/content/pt/docs/tutorials/kubernetes-basics/_index.html b/content/pt/docs/tutorials/kubernetes-basics/_index.html index aabad1f782..b397afba37 100644 --- a/content/pt/docs/tutorials/kubernetes-basics/_index.html +++ b/content/pt/docs/tutorials/kubernetes-basics/_index.html @@ -90,9 +90,9 @@ card:

diff --git a/content/pt/docs/tutorials/kubernetes-basics/explore/explore-interactive.html b/content/pt/docs/tutorials/kubernetes-basics/explore/explore-interactive.html index 08dd5736b2..d4d93e7f7d 100644 --- a/content/pt/docs/tutorials/kubernetes-basics/explore/explore-interactive.html +++ b/content/pt/docs/tutorials/kubernetes-basics/explore/explore-interactive.html @@ -29,7 +29,7 @@ weight: 20
diff --git a/content/pt/docs/tutorials/kubernetes-basics/expose/_index.md b/content/pt/docs/tutorials/kubernetes-basics/expose/_index.md index 28a8de2f24..c8f0d50a0e 100644 --- a/content/pt/docs/tutorials/kubernetes-basics/expose/_index.md +++ b/content/pt/docs/tutorials/kubernetes-basics/expose/_index.md @@ -1,4 +1,4 @@ --- -title: Exponha publicamente seu App +title: Exponha publicamente seu aplicativo weight: 40 --- diff --git a/content/pt/docs/tutorials/kubernetes-basics/expose/expose-interactive.html b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-interactive.html index a816afda10..cf24ae985e 100644 --- a/content/pt/docs/tutorials/kubernetes-basics/expose/expose-interactive.html +++ b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-interactive.html @@ -1,5 +1,5 @@ --- -title: Tutorial Interativo - Expondo seu App +title: Tutorial Interativo - Expondo seu aplicativo weight: 20 --- @@ -26,7 +26,7 @@ weight: 20
diff --git a/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html index 548892d678..4e66601116 100644 --- a/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html +++ b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -1,5 +1,5 @@ --- -title: Utilizando um serviço para expor seu App +title: Utilizando um serviço para expor seu aplicativo weight: 10 --- diff --git a/content/pt/docs/tutorials/kubernetes-basics/scale/_index.md b/content/pt/docs/tutorials/kubernetes-basics/scale/_index.md new file mode 100644 index 0000000000..9e6d5b418e --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/scale/_index.md @@ -0,0 +1,4 @@ +--- +title: Escale seu aplicativo +weight: 50 +--- diff --git a/content/pt/docs/tutorials/kubernetes-basics/scale/scale-interactive.html b/content/pt/docs/tutorials/kubernetes-basics/scale/scale-interactive.html new file mode 100644 index 0000000000..a4ce38ded1 --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/scale/scale-interactive.html @@ -0,0 +1,40 @@ +--- +title: Tutorial Interativo - Escalando seu aplicativo +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+ Para interagir com o terminal, favor utilizar a versão desktop/tablet +
+
+
+
+ + +
+ + + +
+ + + diff --git a/content/pt/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/pt/docs/tutorials/kubernetes-basics/scale/scale-intro.html new file mode 100644 index 0000000000..351f4e01fe --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/scale/scale-intro.html @@ -0,0 +1,121 @@ +--- +title: Executando múltiplas instâncias de seu aplicativo +weight: 10 +--- + + + + + + + + + +
+ +
+ +
+ +
+

Objetivos

+
    +
  • Escalar uma aplicação usando kubectl.
  • +
+
+ +
+

Escalando uma aplicação

+ +

Nos módulos anteriores nós criamos um Deployment, e então o expusemos publicamente através de um serviço (Service). O Deployment criou apenas um único Pod para executar nossa aplicação. Quando o tráfego aumentar, nós precisaremos escalar a aplicação para suportar a demanda de usuários.

+ +

O escalonamento é obtido pela mudança do número de réplicas em um Deployment

+ +
+
+
+

Resumo:

+
    +
  • Escalando um Deployment
  • +
+
+
+

Você pode criar, desde o início, um Deployment com múltiplas instâncias usando o parâmetro --replicas do comando kubectl create deployment, como no esboço a seguir
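
Um esboço mínimo, assumindo um nome de Deployment e uma imagem hipotéticos:

```shell
# Cria um Deployment já com 4 réplicas
# (o nome "kubernetes-bootcamp" e a imagem são apenas exemplos hipotéticos)
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --replicas=4
```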

+
+
+
+
+ +
+
+

Visão geral sobre escalonamento

+
+
+ +
+
+
+ +
+
+ +
+ +
+
+ +

Escalar um Deployment garantirá que novos Pods serão criados e agendados para nós de processamento com recursos disponíveis. O escalonamento aumentará o número de Pods para o novo estado desejado. O Kubernetes também suporta o auto-escalonamento (autoscaling) de Pods, mas isso está fora do escopo deste tutorial. Escalar para zero também é possível, e isso terminará todos os Pods do Deployment especificado.
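
Por exemplo, um esboço de escalonamento com o kubectl, assumindo um Deployment hipotético chamado kubernetes-bootcamp:

```shell
# Escala o Deployment para 4 réplicas (nome apenas ilustrativo)
kubectl scale deployments/kubernetes-bootcamp --replicas=4

# Confirma o número de réplicas desejado e disponível
kubectl get deployments
```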

+ +

Executar múltiplas instâncias de uma aplicação irá requerer uma forma de distribuir o tráfego entre todas elas. Serviços possuem um balanceador de carga integrado que distribuirá o tráfego de rede entre todos os Pods de um Deployment exposto. Serviços irão monitorar continuamente os Pods em execução usando endpoints para garantir que o tráfego seja enviado apenas para Pods disponíveis.
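
Um esboço de como observar o Serviço e os endpoints que recebem o tráfego, assumindo um Serviço hipotético chamado kubernetes-bootcamp:

```shell
# Mostra o Serviço e os endpoints associados (nome apenas ilustrativo)
kubectl describe services/kubernetes-bootcamp

# Lista diretamente os endpoints
kubectl get endpoints kubernetes-bootcamp
```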

+ +
+
+
+

O escalonamento é obtido pela mudança do número de réplicas em um Deployment.

+
+
+
+ +
+ +
+
+

No momento em que você tiver múltiplas instâncias de uma aplicação em execução, será capaz de fazer atualizações graduais sem indisponibilidade. Nós cobriremos isso no próximo módulo. Agora, vamos ao terminal online escalar a nossa aplicação.

+
+
+
+ + + +
+ +
+ + + diff --git a/content/zh/docs/concepts/architecture/control-plane-node-communication.md b/content/zh/docs/concepts/architecture/control-plane-node-communication.md index dc31269f47..0501507898 100644 --- a/content/zh/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/zh/docs/concepts/architecture/control-plane-node-communication.md @@ -47,13 +47,13 @@ Nodes should be provisioned with the public root certificate for the cluster suc 想要连接到 apiserver 的 Pod 可以使用服务账号安全地进行连接。 当 Pod 被实例化时,Kubernetes 自动把公共根证书和一个有效的持有者令牌注入到 Pod 里。 -`kubernetes` 服务(位于所有名字空间中)配置了一个虚拟 IP 地址,用于(通过 kube-proxy)转发 +`kubernetes` 服务(位于 `default` 名字空间中)配置了一个虚拟 IP 地址,用于(通过 kube-proxy)转发 请求到 apiserver 的 HTTPS 末端。 控制面组件也通过安全端口与集群的 apiserver 通信。 diff --git a/content/zh/docs/concepts/configuration/configmap.md b/content/zh/docs/concepts/configuration/configmap.md index 1fe9f0c2bf..1f2f34b1e3 100644 --- a/content/zh/docs/concepts/configuration/configmap.md +++ b/content/zh/docs/concepts/configuration/configmap.md @@ -66,7 +66,7 @@ Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData` fields. These fields accept key-value pairs as their values. Both the `data` field and the `binaryData` are optional. The `data` field is designed to contain UTF-8 byte sequences while the `binaryData` field is designed to -contain binary data. +contain binary data as base64-encoded strings. The name of a ConfigMap must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). @@ -78,7 +78,7 @@ ConfigMap 是一个 API [对象](/zh/docs/concepts/overview/working-with-objects 和其他 Kubernetes 对象都有一个 `spec` 不同的是,ConfigMap 使用 `data` 和 `binaryData` 字段。这些字段能够接收键-值对作为其取值。`data` 和 `binaryData` 字段都是可选的。`data` 字段设计用来保存 UTF-8 字节序列,而 `binaryData` 则 -被设计用来保存二进制数据。 +被设计用来保存二进制数据作为 base64 编码的字串。 ConfigMap 的名字必须是一个合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 @@ -361,7 +361,7 @@ kubelet 组件会在每次周期性同步时检查所挂载的 ConfigMap 是否 的 `ConfigMapAndSecretChangeDetectionStrategy` 字段来配置。 ConfigMap 既可以通过 watch 操作实现内容传播(默认形式),也可实现基于 TTL -的缓存,还可以直接将所有请求重定向到 API 服务器。 +的缓存,还可以直接经过所有请求重定向到 API 服务器。 因此,从 ConfigMap 被更新的那一刻算起,到新的主键被投射到 Pod 中去,这一 时间跨度可能与 kubelet 的同步周期加上高速缓存的传播延迟相等。 这里的传播延迟取决于所选的高速缓存类型 diff --git a/content/zh/docs/concepts/extend-kubernetes/operator.md b/content/zh/docs/concepts/extend-kubernetes/operator.md index fb9323d0dc..ca3d0f790f 100644 --- a/content/zh/docs/concepts/extend-kubernetes/operator.md +++ b/content/zh/docs/concepts/extend-kubernetes/operator.md @@ -185,57 +185,63 @@ kubectl edit SampleDB/example-database # 手动修改某些配置 可以了!Operator 会负责应用所作的更改并保持现有服务处于良好的状态。 + + ## 编写你自己的 Operator {#writing-operator} -如果生态系统中没可以实现你目标的 Operator,你可以自己编写代码。在 -[接下来](#what-s-next)一节中,你会找到编写自己的云原生 Operator -需要的库和工具的链接。 +如果生态系统中没可以实现你目标的 Operator,你可以自己编写代码。 你还可以使用任何支持 [Kubernetes API 客户端](/zh/docs/reference/using-api/client-libraries/) 的语言或运行时来实现 Operator(即控制器)。 + +以下是一些库和工具,你可用于编写自己的云原生 Operator。 + +{{% thirdparty-content %}} + +* [kubebuilder](https://book.kubebuilder.io/) +* [KUDO](https://kudo.dev/) (Kubernetes 通用声明式 Operator) +* [Metacontroller](https://metacontroller.app/),可与 Webhooks 结合使用,以实现自己的功能。 +* [Operator Framework](https://operatorframework.io) + ## {{% heading "whatsnext" %}} -* 详细了解[定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +* 详细了解 [定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) * 在 [OperatorHub.io](https://operatorhub.io/) 上找到现成的、适合你的 Operator -* 借助已有的工具来编写你自己的 Operator,例如: - * 
[KUDO](https://kudo.dev/) (Kubernetes 通用声明式 Operator) - * [kubebuilder](https://book.kubebuilder.io/) - * [Metacontroller](https://metacontroller.app/),可与 Webhook 结合使用,以实现自己的功能。 - * [Operator Framework](https://operatorframework.io) * [发布](https://operatorhub.io/)你的 Operator,让别人也可以使用 -* 阅读 [CoreOS 原文](https://coreos.com/blog/introducing-operators.html),其介绍了 Operator 介绍 +* 阅读 [CoreOS 原始文章](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html),它介绍了 Operator 模式(这是一个存档版本的原始文章)。 * 阅读这篇来自谷歌云的关于构建 Operator 最佳实践的 [文章](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) - diff --git a/content/zh/docs/concepts/overview/working-with-objects/names.md b/content/zh/docs/concepts/overview/working-with-objects/names.md index 678adea96d..6075e0273c 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/names.md +++ b/content/zh/docs/concepts/overview/working-with-objects/names.md @@ -61,7 +61,7 @@ DNS 子域名的定义可参见 [RFC 1123](https://tools.ietf.org/html/rfc1123) 这一要求意味着名称必须满足如下规则: - 不能超过253个字符 -- 只能包含字母数字,以及'-' 和 '.' +- 只能包含小写字母、数字,以及'-' 和 '.' - 须以字母数字开头 - 须以字母数字结尾 @@ -83,7 +83,7 @@ This means the name must: 所定义的 DNS 标签标准。也就是命名必须满足如下规则: - 最多63个字符 -- 只能包含字母数字,以及'-' +- 只能包含小写字母、数字,以及'-' - 须以字母数字开头 - 须以字母数字结尾 diff --git a/content/zh/docs/concepts/services-networking/connect-applications-service.md b/content/zh/docs/concepts/services-networking/connect-applications-service.md index 6f04ea1194..2476d56f73 100644 --- a/content/zh/docs/concepts/services-networking/connect-applications-service.md +++ b/content/zh/docs/concepts/services-networking/connect-applications-service.md @@ -438,7 +438,7 @@ kind: "Secret" metadata: name: "nginxsecret" namespace: "default" - type: kubernetes.io/tls +type: kubernetes.io/tls data: tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURIekNDQWdlZ0F3SUJBZ0lKQUp5M3lQK0pzMlpJTUEwR0NTcUdTSWIzRFFFQkJRVUFNQ1l4RVRBUEJnTlYKQkFNVENHNW5hVzU0YzNaak1SRXdEd1lEVlFRS0V3aHVaMmx1ZUhOMll6QWVGdzB4TnpFd01qWXdOekEzTVRKYQpGdzB4T0RFd01qWXdOekEzTVRKYU1DWXhFVEFQQmdOVkJBTVRDRzVuYVc1NGMzWmpNUkV3RHdZRFZRUUtFd2h1CloybHVlSE4yWXpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSjFxSU1SOVdWM0IKMlZIQlRMRmtobDRONXljMEJxYUhIQktMSnJMcy8vdzZhU3hRS29GbHlJSU94NGUrMlN5ajBFcndCLzlYTnBwbQppeW1CL3JkRldkOXg5UWhBQUxCZkVaTmNiV3NsTVFVcnhBZW50VWt1dk1vLzgvMHRpbGhjc3paenJEYVJ4NEo5Ci82UVRtVVI3a0ZTWUpOWTVQZkR3cGc3dlVvaDZmZ1Voam92VG42eHNVR0M2QURVODBpNXFlZWhNeVI1N2lmU2YKNHZpaXdIY3hnL3lZR1JBRS9mRTRqakxCdmdONjc2SU90S01rZXV3R0ljNDFhd05tNnNTSzRqYUNGeGpYSnZaZQp2by9kTlEybHhHWCtKT2l3SEhXbXNhdGp4WTRaNVk3R1ZoK0QrWnYvcW1mMFgvbVY0Rmo1NzV3ajFMWVBocWtsCmdhSXZYRyt4U1FVQ0F3RUFBYU5RTUU0d0hRWURWUjBPQkJZRUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjcKTUI4R0ExVWRJd1FZTUJhQUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjdNQXdHQTFVZEV3UUZNQU1CQWY4dwpEUVlKS29aSWh2Y05BUUVGQlFBRGdnRUJBRVhTMW9FU0lFaXdyMDhWcVA0K2NwTHI3TW5FMTducDBvMm14alFvCjRGb0RvRjdRZnZqeE04Tzd2TjB0clcxb2pGSW0vWDE4ZnZaL3k4ZzVaWG40Vm8zc3hKVmRBcStNZC9jTStzUGEKNmJjTkNUekZqeFpUV0UrKzE5NS9zb2dmOUZ3VDVDK3U2Q3B5N0M3MTZvUXRUakViV05VdEt4cXI0Nk1OZWNCMApwRFhWZmdWQTRadkR4NFo3S2RiZDY5eXM3OVFHYmg5ZW1PZ05NZFlsSUswSGt0ejF5WU4vbVpmK3FqTkJqbWZjCkNnMnlwbGQ0Wi8rUUNQZjl3SkoybFIrY2FnT0R4elBWcGxNSEcybzgvTHFDdnh6elZPUDUxeXdLZEtxaUMwSVEKQ0I5T2wwWW5scE9UNEh1b2hSUzBPOStlMm9KdFZsNUIyczRpbDlhZ3RTVXFxUlU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" tls.key: 
"LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2RhaURFZlZsZHdkbFIKd1V5eFpJWmVEZWNuTkFhbWh4d1NpeWF5N1AvOE9ta3NVQ3FCWmNpQ0RzZUh2dGtzbzlCSzhBZi9WemFhWm9zcApnZjYzUlZuZmNmVUlRQUN3WHhHVFhHMXJKVEVGSzhRSHA3VkpMcnpLUC9QOUxZcFlYTE0yYzZ3MmtjZUNmZitrCkU1bEVlNUJVbUNUV09UM3c4S1lPNzFLSWVuNEZJWTZMMDUrc2JGQmd1Z0ExUE5JdWFubm9UTWtlZTRuMG4rTDQKb3NCM01ZUDhtQmtRQlAzeE9JNHl3YjREZXUraURyU2pKSHJzQmlIT05Xc0RadXJFaXVJMmdoY1kxeWIyWHI2UAozVFVOcGNSbC9pVG9zQngxcHJHclk4V09HZVdPeGxZZmcvbWIvNnBuOUYvNWxlQlkrZStjSTlTMkQ0YXBKWUdpCkwxeHZzVWtGQWdNQkFBRUNnZ0VBZFhCK0xkbk8ySElOTGo5bWRsb25IUGlHWWVzZ294RGQwci9hQ1Zkank4dlEKTjIwL3FQWkUxek1yall6Ry9kVGhTMmMwc0QxaTBXSjdwR1lGb0xtdXlWTjltY0FXUTM5SjM0VHZaU2FFSWZWNgo5TE1jUHhNTmFsNjRLMFRVbUFQZytGam9QSFlhUUxLOERLOUtnNXNrSE5pOWNzMlY5ckd6VWlVZWtBL0RBUlBTClI3L2ZjUFBacDRuRWVBZmI3WTk1R1llb1p5V21SU3VKdlNyblBESGtUdW1vVlVWdkxMRHRzaG9reUxiTWVtN3oKMmJzVmpwSW1GTHJqbGtmQXlpNHg0WjJrV3YyMFRrdWtsZU1jaVlMbjk4QWxiRi9DSmRLM3QraTRoMTVlR2ZQegpoTnh3bk9QdlVTaDR2Q0o3c2Q5TmtEUGJvS2JneVVHOXBYamZhRGR2UVFLQmdRRFFLM01nUkhkQ1pKNVFqZWFKClFGdXF4cHdnNzhZTjQyL1NwenlUYmtGcVFoQWtyczJxWGx1MDZBRzhrZzIzQkswaHkzaE9zSGgxcXRVK3NHZVAKOWRERHBsUWV0ODZsY2FlR3hoc0V0L1R6cEdtNGFKSm5oNzVVaTVGZk9QTDhPTm1FZ3MxMVRhUldhNzZxelRyMgphRlpjQ2pWV1g0YnRSTHVwSkgrMjZnY0FhUUtCZ1FEQmxVSUUzTnNVOFBBZEYvL25sQVB5VWs1T3lDdWc3dmVyClUycXlrdXFzYnBkSi9hODViT1JhM05IVmpVM25uRGpHVHBWaE9JeXg5TEFrc2RwZEFjVmxvcG9HODhXYk9lMTAKMUdqbnkySmdDK3JVWUZiRGtpUGx1K09IYnRnOXFYcGJMSHBzUVpsMGhucDBYSFNYVm9CMUliQndnMGEyOFVadApCbFBtWmc2d1BRS0JnRHVIUVV2SDZHYTNDVUsxNFdmOFhIcFFnMU16M2VvWTBPQm5iSDRvZUZKZmcraEppSXlnCm9RN3hqWldVR3BIc3AyblRtcHErQWlSNzdyRVhsdlhtOElVU2FsbkNiRGlKY01Pc29RdFBZNS9NczJMRm5LQTQKaENmL0pWb2FtZm1nZEN0ZGtFMXNINE9MR2lJVHdEbTRpb0dWZGIwMllnbzFyb2htNUpLMUI3MkpBb0dBUW01UQpHNDhXOTVhL0w1eSt5dCsyZ3YvUHM2VnBvMjZlTzRNQ3lJazJVem9ZWE9IYnNkODJkaC8xT2sybGdHZlI2K3VuCnc1YytZUXRSTHlhQmd3MUtpbGhFZDBKTWU3cGpUSVpnQWJ0LzVPbnlDak9OVXN2aDJjS2lrQ1Z2dTZsZlBjNkQKckliT2ZIaHhxV0RZK2Q1TGN1YSt2NzJ0RkxhenJsSlBsRzlOZHhrQ2dZRUF5elIzT3UyMDNRVVV6bUlCRkwzZAp4Wm5XZ0JLSEo3TnNxcGFWb2RjL0d5aGVycjFDZzE2MmJaSjJDV2RsZkI0VEdtUjZZdmxTZEFOOFRwUWhFbUtKCnFBLzVzdHdxNWd0WGVLOVJmMWxXK29xNThRNTBxMmk1NVdUTThoSDZhTjlaMTltZ0FGdE5VdGNqQUx2dFYxdEYKWSs4WFJkSHJaRnBIWll2NWkwVW1VbGc9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K" diff --git a/content/zh/docs/concepts/storage/volumes.md b/content/zh/docs/concepts/storage/volumes.md index 5711035c23..cd2c09bcb2 100644 --- a/content/zh/docs/concepts/storage/volumes.md +++ b/content/zh/docs/concepts/storage/volumes.md @@ -1485,13 +1485,13 @@ Quobyte 的 GitHub 项目包含以 CSI 形式部署 Quobyte 的 -`rbd` 卷允许将 [Rados 块设备](https://ceph.com/docs/master/rbd/rbd/) 卷挂载到你的 Pod 中. +`rbd` 卷允许将 [Rados 块设备](https://docs.ceph.com/en/latest/rbd/) 卷挂载到你的 Pod 中. 不像 `emptyDir` 那样会在删除 Pod 的同时也会被删除,`rbd` 卷的内容在删除 Pod 时 会被保存,卷只是被卸载。 这意味着 `rbd` 卷可以被预先填充数据,并且这些数据可以在 Pod 之间共享。 diff --git a/content/zh/docs/setup/production-environment/container-runtimes.md b/content/zh/docs/setup/production-environment/container-runtimes.md index 08f3ea9fc7..99739e8b10 100644 --- a/content/zh/docs/setup/production-environment/container-runtimes.md +++ b/content/zh/docs/setup/production-environment/container-runtimes.md @@ -9,7 +9,7 @@ reviewers: - bart0sh title: Container runtimes content_type: concept -weight: 10 +weight: 20 --> @@ -108,7 +108,7 @@ configuration, or reinstall it using automation. 
### containerd -本节包含使用 `containerd` 作为 CRI 运行时的必要步骤。 +本节包含使用 containerd 作为 CRI 运行时的必要步骤。 使用以下命令在系统上安装容器: @@ -156,7 +156,7 @@ net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-ip6tables = 1 EOF -# Apply sysctl params without reboot +# 应用 sysctl 参数而无需重新启动 sudo sysctl --system ``` @@ -166,310 +166,85 @@ Install containerd: 安装 containerd: {{< tabs name="tab-cri-containerd-installation" >}} -{{% tab name="Ubuntu 16.04" %}} +{{% tab name="Linux" %}} -```shell -# (安装 containerd) -## (设置仓库) -### (安装软件包以允许 apt 通过 HTTPS 使用存储库) -sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common -``` - -```shell -## 安装 Docker 的官方 GPG 密钥 -curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add - -``` - -```shell -## 新增 Docker apt 仓库。 -sudo add-apt-repository \ - "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ - $(lsb_release -cs) \ - stable" -``` - -```shell -## 安装 containerd -sudo apt-get update && sudo apt-get install -y containerd.io -``` - -```shell -# 配置 containerd -sudo mkdir -p /etc/containerd -sudo containerd config default | sudo tee /etc/containerd/config.toml -``` - -```shell -# 重启 containerd -sudo systemctl restart containerd -``` -{{< /tab >}} -{{% tab name="Ubuntu 18.04/20.04" %}} +1. 从官方Docker仓库安装 `containerd.io` 软件包。可以在 [安装 Docker 引擎](https://docs.docker.com/engine/install/#server) 中找到有关为各自的 Linux 发行版设置 Docker 存储库和安装 `containerd.io` 软件包的说明。 -```shell -# 安装 containerd -sudo apt-get update && sudo apt-get install -y containerd -``` +2. 配置 containerd: -```shell -# 配置 containerd -sudo mkdir -p /etc/containerd -sudo containerd config default | sudo tee /etc/containerd/config.toml -``` - -```shell -# 重启 containerd -sudo systemctl restart containerd -``` -{{% /tab %}} -{{% tab name="Debian 9+" %}} + ```shell + sudo mkdir -p /etc/containerd + containerd config default | sudo tee /etc/containerd/config.toml + ``` -```shell -# 安装 containerd -## 配置仓库 -### 安装软件包以使 apt 能够使用 HTTPS 访问仓库 -sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common -``` +3. 
重新启动 containerd: -```shell -## 添加 Docker 的官方 GPG 密钥 -curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add - -``` + ```shell + sudo systemctl restart containerd + ``` -```shell -## 添加 Docker apt 仓库 -sudo add-apt-repository \ - "deb [arch=amd64] https://download.docker.com/linux/debian \ - $(lsb_release -cs) \ - stable" -``` - - -```shell -## 安装 containerd -sudo apt-get update && sudo apt-get install -y containerd.io -``` - -```shell -# 设置 containerd 的默认配置 -sudo mkdir -p /etc/containerd -containerd config default | sudo tee /etc/containerd/config.toml -``` - -```shell -# 重启 containerd -sudo systemctl restart containerd -``` -{{% /tab %}} -{{% tab name="CentOS/RHEL 7.4+" %}} - - -```shell -# 安装 containerd -## 设置仓库 -### 安装所需包 -sudo yum install -y yum-utils device-mapper-persistent-data lvm2 -``` - -```shell -### 添加 Docker 仓库 -sudo yum-config-manager \ - --add-repo \ - https://download.docker.com/linux/centos/docker-ce.repo -``` - -```shell -## 安装 containerd -sudo yum update -y && sudo yum install -y containerd.io -``` - -```shell -# 配置 containerd -sudo mkdir -p /etc/containerd -containerd config default | sudo tee /etc/containerd/config.toml -``` - -```shell -# 重启 containerd -sudo systemctl restart containerd -``` {{% /tab %}} {{% tab name="Windows (PowerShell)" %}} - +启动 Powershell 会话,将 `$Version` 设置为所需的版本(例如:`$ Version=1.4.3`),然后运行以下命令: -```powershell -# extract and configure -Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force -cd $Env:ProgramFiles\containerd\ -.\containerd.exe config default | Out-File config.toml -Encoding ascii + +1. 下载 containerd: -# review the configuration. depending on setup you may want to adjust: -# - the sandbox_image (kubernetes pause image) -# - cni bin_dir and conf_dir locations -Get-Content config.toml -``` + ```powershell + curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz + tar.exe xvf .\containerd-windows-amd64.tar.gz + ``` + +2. 提取并配置: -```powershell -# start containerd -.\containerd.exe --register-service -Start-Service containerd -``` - --> -```powershell -# 安装 containerd -# 下载 containerd -cmd /c curl -OL https://github.com/containerd/containerd/releases/download/v1.4.1/containerd-1.4.1-windows-amd64.tar.gz -cmd /c tar xvf .\containerd-1.4.1-windows-amd64.tar.gz -``` + ```powershell + Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force + cd $Env:ProgramFiles\containerd\ + .\containerd.exe config default | Out-File config.toml -Encoding ascii -```powershell -# 解压并配置 -Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force -cd $Env:ProgramFiles\containerd\ -.\containerd.exe config default | Out-File config.toml -Encoding ascii + # Review the configuration. Depending on setup you may want to adjust: + # - the sandbox_image (Kubernetes pause image) + # - cni bin_dir and conf_dir locations + Get-Content config.toml -# 检查配置文件,基于你可能想要调整的设置: -# - sandbox_image (kubernetes pause 镜像) -# - CNI 的 bin_dir 和 conf_dir 路径 -Get-Content config.toml -``` + # (Optional - but highly recommended) Exclude containerd from Windows Defender Scans + Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe" + ``` + + +3. 
启动 containerd: + + ```powershell + .\containerd.exe --register-service + Start-Service containerd + ``` -```powershell -# 启动 containerd -.\containerd.exe --register-service -Start-Service containerd -``` {{% /tab %}} {{< /tabs >}} -#### systemd {#containerd-systemd} + + +#### 使用 `systemd` cgroup 驱动程序 {#containerd-systemd} + 结合 `runc` 使用 `systemd` cgroup 驱动,在 `/etc/containerd/config.toml` 中设置 ``` @@ -493,6 +266,19 @@ When using kubeadm, manually configure the SystemdCgroup = true ``` + +如果您应用此更改,请确保再次重新启动 containerd: + +```shell +sudo systemctl restart containerd +``` + + 当使用 kubeadm 时,请手动配置 [kubelet 的 cgroup 驱动](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node). @@ -505,7 +291,7 @@ Use the following commands to install CRI-O on your system: {{< note >}} The CRI-O major and minor versions must match the Kubernetes major and minor versions. -For more information, see the [CRI-O compatibility matrix](https://github.com/cri-o/cri-o). +For more information, see the [CRI-O compatibility matrix](https://github.com/cri-o/cri-o#compatibility-matrix-cri-o--kubernetes). {{< /note >}} Install and configure prerequisites: @@ -536,7 +322,7 @@ sudo sysctl --system 使用以下命令在系统中安装 CRI-O: 提示:CRI-O 的主要以及次要版本必须与 Kubernetes 的主要和次要版本相匹配。 -更多信息请查阅 [CRI-O 兼容性列表](https://github.com/cri-o/cri-o). +更多信息请查阅 [CRI-O 兼容性列表](https://github.com/cri-o/cri-o#compatibility-matrix-cri-o--kubernetes)。 安装以及配置的先决条件: @@ -569,31 +355,31 @@ To install CRI-O on the following operating systems, set the environment variabl to the appropriate value from the following table: | Operating system | `$OS` | -|------------------|-------------------| +| ---------------- | ----------------- | | Debian Unstable | `Debian_Unstable` | | Debian Testing | `Debian_Testing` |
Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version. -For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`. +For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`. You can pin your installation to a specific release. -To install version 1.18.3, set `VERSION=1.18:1.18.3`. +To install version 1.20.0, set `VERSION=1.20:1.20.0`.
Then run --> 在下列操作系统上安装 CRI-O, 使用下表中合适的值设置环境变量 `OS`: -| 操作系统 | `$OS` | -|-----------------|-------------------| -| Debian Unstable | `Debian_Unstable` | -| Debian Testing | `Debian_Testing` | +| 操作系统 | `$OS` | +| ---------------- | ----------------- | +| Debian Unstable | `Debian_Unstable` | +| Debian Testing | `Debian_Testing` |
然后,将 `$VERSION` 设置为与你的 Kubernetes 相匹配的 CRI-O 版本。
-例如,如果你要安装 CRI-O 1.18, 请设置 `VERSION=1.18`.
+例如,如果你要安装 CRI-O 1.20, 请设置 `VERSION=1.20`.
你也可以安装一个特定的发行版本。
-例如要安装 1.18.3 版本,设置 `VERSION=1.18:1.18.3`.
+例如要安装 1.20.0 版本,设置 `VERSION=1.20:1.20.0`.
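<!--
As a quick illustration (the values below are examples only; pick `$OS` from the table above and `$VERSION` to match your cluster), the two variables could be exported before running the commands that follow:
-->
作为一个简单示意(下面的取值仅为示例;`$OS` 请从上表中选择,`$VERSION` 请与你的集群版本保持一致),可以在执行后续命令之前先导出这两个变量:

```shell
# 示例:在 Debian Testing 上安装 CRI-O 1.20
export OS=Debian_Testing
export VERSION=1.20
```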
然后执行 @@ -605,8 +391,8 @@ cat < Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version. -For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`. +For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`. You can pin your installation to a specific release. -To install version 1.18.3, set `VERSION=1.18:1.18.3`. +To install version 1.20.0, set `VERSION=1.20:1.20.0`.
Then run --> 在下列操作系统上安装 CRI-O, 使用下表中合适的值设置环境变量 `OS`: -| 操作系统 | `$OS` | -|--------------|-----------------| -| Ubuntu 20.04 | `xUbuntu_20.04` | -| Ubuntu 19.10 | `xUbuntu_19.10` | -| Ubuntu 19.04 | `xUbuntu_19.04` | -| Ubuntu 18.04 | `xUbuntu_18.04` | +| 操作系统 | `$OS` | +| ---------------- | ----------------- | +| Ubuntu 20.04 | `xUbuntu_20.04` | +| Ubuntu 19.10 | `xUbuntu_19.10` | +| Ubuntu 19.04 | `xUbuntu_19.04` | +| Ubuntu 18.04 | `xUbuntu_18.04` |
然后,将 `$VERSION` 设置为与你的 Kubernetes 相匹配的 CRI-O 版本。 -例如,如果你要安装 CRI-O 1.18, 请设置 `VERSION=1.18`. +例如,如果你要安装 CRI-O 1.20, 请设置 `VERSION=1.20`. 你也可以安装一个特定的发行版本。 -例如要安装 1.18.3 版本,设置 `VERSION=1.18:1.18.3`. +例如要安装 1.20.0 版本,设置 `VERSION=1.20:1.20.0`.
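<!--
For instance, to pin an Ubuntu 20.04 host to the 1.20.0 release, the variables could be set as follows (illustrative values; adjust them to your distribution and Kubernetes version):
-->
例如,要在 Ubuntu 20.04 上固定安装 1.20.0 发行版本,可以像下面这样设置这两个变量(取值仅供参考,请按你的发行版和 Kubernetes 版本调整):

```shell
# 示例:在 Ubuntu 20.04 上固定到 1.20.0 发行版本
export OS=xUbuntu_20.04
export VERSION=1.20:1.20.0
```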
然后执行 @@ -661,8 +447,8 @@ cat < Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version. -For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`. +For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`. You can pin your installation to a specific release. -To install version 1.18.3, set `VERSION=1.18:1.18.3`. +To install version 1.20.0, set `VERSION=1.20:1.20.0`.
Then run --> 在下列操作系统上安装 CRI-O, 使用下表中合适的值设置环境变量 `OS`: -| 操作系统 | `$OS` | -|-----------------|-------------------| -| Centos 8 | `CentOS_8` | -| Centos 8 Stream | `CentOS_8_Stream` | -| Centos 7 | `CentOS_7` | +| 操作系统 | `$OS` | +| ---------------- | ----------------- | +| Centos 8 | `CentOS_8` | +| Centos 8 Stream | `CentOS_8_Stream` | +| Centos 7 | `CentOS_7` |
然后,将 `$VERSION` 设置为与你的 Kubernetes 相匹配的 CRI-O 版本。 -例如,如果你要安装 CRI-O 1.18, 请设置 `VERSION=1.18`. +例如,如果你要安装 CRI-O 1.20, 请设置 `VERSION=1.20`. 你也可以安装一个特定的发行版本。 -例如要安装 1.18.3 版本,设置 `VERSION=1.18:1.18.3`. +例如要安装 1.20.0 版本,设置 `VERSION=1.20:1.20.0`.
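<!--
Likewise, on a CentOS 8 host tracking CRI-O 1.20, a minimal sketch of the variable setup could look like this:
-->
同样地,在跟踪 CRI-O 1.20 的 CentOS 8 主机上,这两个变量的一个最简示意如下:

```shell
# 示例:在 CentOS 8 上安装 CRI-O 1.20
export OS=CentOS_8
export VERSION=1.20
```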
然后执行 @@ -725,7 +511,7 @@ sudo zypper install cri-o 将 `$VERSION` 设置为与你的 Kubernetes 相匹配的 CRI-O 版本。 -例如,如果要安装 CRI-O 1.18,请设置 `VERSION=1.18`。 +例如,如果要安装 CRI-O 1.20,请设置 `VERSION=1.20`。 你可以用下列命令查找可用的版本: ```shell @@ -751,7 +537,7 @@ CRI-O 不支持在 Fedora 上固定到特定的版本。 然后执行 ```shell sudo dnf module enable cri-o:$VERSION -sudo dnf install cri-o +sudo dnf install cri-o --now ``` {{% /tab %}} @@ -762,272 +548,90 @@ Start CRI-O: ```shell sudo systemctl daemon-reload -sudo systemctl start crio +sudo systemctl enable crio --no ``` -Refer to the [CRI-O installation guide](https://github.com/kubernetes-sigs/cri-o#getting-started) +Refer to the [CRI-O installation guide](https://github.com/cri-o/cri-o/blob/master/install.md) for more information. --> -启动 CRI-O: +#### cgroup driver + +默认情况下,CRI-O 使用 systemd cgroup 驱动程序。切换到` +`cgroupfs` +cgroup 驱动程序,或者编辑 `/ etc / crio / crio.conf` 或放置一个插件 +在 `/etc/crio/crio.conf.d/02-cgroup-manager.conf` 中的配置,例如: -```shell -sudo systemctl daemon-reload -sudo systemctl start crio +```toml +[crio.runtime] +conmon_cgroup = "pod" +cgroup_manager = "cgroupfs" ``` - -更多信息请参阅 [CRI-O 安装指南](https://github.com/kubernetes-sigs/cri-o#getting-started)。 + +另请注意更改后的 `conmon_cgroup` ,必须将其设置为 +`pod`将 CRI-O 与 `cgroupfs` 一起使用时。通常有必要保持 +kubelet 的 cgroup 驱动程序配置(通常透过 kubeadm 完成)和CRI-O 同步中。 ### Docker - - -在你的所有节点上安装 Docker CE. - -Kubernetes 发布说明中列出了 Docker 的哪些版本与该版本的 Kubernetes 相兼容。 - -在你的操作系统上使用如下命令安装 Docker: - -{{< tabs name="tab-cri-docker-installation" >}} -{{% tab name="Ubuntu 16.04+" %}} +1. 在每个节点上,根据[安装 Docker 引擎](https://docs.docker.com/engine/install/#server) 为你的 Linux 发行版安装 Docker。 + 你可以在此文件中找到最新的经过验证的 Docker 版本[依赖关系](https://git.k8s.io/kubernetes/build/dependencies.yaml)。 +2. 配置 Docker 守护程序,尤其是使用 systemd 来管理容器的cgroup。 -```shell -# Add Docker's official GPG key: -curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add --keyring /etc/apt/trusted.gpg.d/docker.gpg - -``` ---> - -```shell -# (安装 Docker CE) -## 设置仓库: -### 安装软件包以允许 apt 通过 HTTPS 使用存储库 -sudo apt-get update && sudo apt-get install -y \ - apt-transport-https ca-certificates curl software-properties-common gnupg2 -``` - -```shell -### 新增 Docker 的 官方 GPG 秘钥: -curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add --keyring /etc/apt/trusted.gpg.d/docker.gpg - -``` + ```shell + sudo mkdir /etc/docker + cat <}} + + + 对于运行 Linux 内核版本 4.0 或更高版本,或使用 3.10.0-51 及更高版本的 RHEL 或 CentOS 的系统,`overlay2`是首选的存储驱动程序。 + {{< /note >}} -```shell -### 添加 Docker apt 仓库: -sudo add-apt-repository \ - "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ - $(lsb_release -cs) \ - stable" -``` +3. 重新启动 Docker 并在启动时启用: + ```shell + sudo systemctl enable docker + sudo systemctl daemon-reload + sudo systemctl restart docker + ``` -```shell -## 安装 Docker CE -sudo apt-get update && sudo apt-get install -y \ - containerd.io=1.2.13-2 \ - docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \ - docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) -``` - -```shell -# 设置 Docker daemon -cat <}} -```shell -# 重启 docker. 
-sudo systemctl daemon-reload -sudo systemctl restart docker -``` -{{% /tab %}} -{{% tab name="CentOS/RHEL 7.4+" %}} +{{< /note >}} - -```shell -# (安装 Docker CE) -## 设置仓库 -### 安装所需包 -sudo yum install -y yum-utils device-mapper-persistent-data lvm2 -``` - -```shell -### 新增 Docker 仓库 -sudo yum-config-manager --add-repo \ - https://download.docker.com/linux/centos/docker-ce.repo -``` - -```shell -## 安装 Docker CE -sudo yum update -y && sudo yum install -y \ - containerd.io-1.2.13 \ - docker-ce-19.03.11 \ - docker-ce-cli-19.03.11 -``` - -```shell -## 创建 /etc/docker 目录 -sudo mkdir /etc/docker -``` - -```shell -# 设置 Docker daemon -cat < -```shell -# 重启 Docker -sudo systemctl daemon-reload -sudo systemctl restart docker -``` -{{% /tab %}} -{{% /tabs %}} - - -如果你想开机即启动 `docker` 服务,执行以下命令: - -```shell -sudo systemctl enable docker -``` - - -请参阅[官方 Docker 安装指南](https://docs.docker.com/engine/installation/) -获取更多的信息。 +有关更多信息,请参阅 + - [配置 Docker 守护程序](https://docs.docker.com/config/daemon/) + - [使用 systemd 控制 Docker](https://docs.docker.com/config/daemon/systemd/) diff --git a/content/zh/docs/tasks/administer-cluster/change-default-storage-class.md b/content/zh/docs/tasks/administer-cluster/change-default-storage-class.md index e2d6bd6ad9..ca4efbb7e2 100644 --- a/content/zh/docs/tasks/administer-cluster/change-default-storage-class.md +++ b/content/zh/docs/tasks/administer-cluster/change-default-storage-class.md @@ -43,11 +43,11 @@ dynamic provisioning of storage. 如果是这样的话,你可以改变默认 StorageClass,或者完全禁用它以防止动态配置存储。 -简单的删除默认 StorageClass 可能行不通,因为它可能会被你集群中的扩展管理器自动重建。 +删除默认 StorageClass 可能行不通,因为它可能会被你集群中的扩展管理器自动重建。 请查阅你的安装文档中关于扩展管理器的细节,以及如何禁用单个扩展。 @@ -107,7 +107,7 @@ for details about addon manager and how to disable individual addons. 3. 标记一个 StorageClass 为默认的: 和前面的步骤类似,你需要添加/设置注解 `storageclass.kubernetes.io/is-default-class=true`。 diff --git a/content/zh/docs/tasks/administer-cluster/dns-custom-nameservers.md b/content/zh/docs/tasks/administer-cluster/dns-custom-nameservers.md index 330416050c..34e519e79d 100644 --- a/content/zh/docs/tasks/administer-cluster/dns-custom-nameservers.md +++ b/content/zh/docs/tasks/administer-cluster/dns-custom-nameservers.md @@ -33,8 +33,6 @@ explains how to use `kubeadm` to migrate from `kube-dns`. 
文档[迁移到 CoreDNS](/zh/docs/tasks/administer-cluster/coredns/#migrating-to-coredns) 解释了如何使用 `kubeadm` 从 `kube-dns` 迁移到 CoreDNS。 -{{% version-check %}} - ### 创建证书签名请求 (CSR) -你可以用 `kubeadm alpha certs renew --use-api` 为 Kubernetes 证书 API 创建一个证书签名请求。 - -如果你设置例如 [cert-manager](https://github.com/jetstack/cert-manager) -等外部签名者,证书签名请求(CSRs)会被自动批准。 -否则,你必须使用 [`kubectl certificate`](/zh/docs/setup/best-practices/certificates/) -命令手动批准证书。 -以下 kubeadm 命令输出要批准的证书名称,然后阻塞等待批准发生: - -```shell -sudo kubeadm alpha certs renew apiserver --use-api & -``` - - -输出类似于以下内容: -``` -[1] 2890 -[certs] certificate request "kubeadm-cert-kube-apiserver-ld526" created -``` - - - -### 批准证书签名请求 (CSR) - -如果你设置了一个外部签名者, 证书签名请求 (CSRs) 会自动被批准。 - -否则,你必须用 [`kubectl certificate`](/zh/docs/setup/best-practices/certificates/) -命令手动批准证书,例如: - -```shell -kubectl certificate approve kubeadm-cert-kube-apiserver-ld526 -``` - - -输出类似于以下内容: - -``` -certificatesigningrequest.certificates.k8s.io/kubeadm-cert-kube-apiserver-ld526 approved -``` - - -你可以使用 `kubectl get csr` 查看待处理证书列表。 +有关使用 Kubernetes API 创建 CSR 的信息, +请参见[创建 CertificateSigningRequest](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest)。 ### 附加信息 -- 在对 kubelet 作次版本升级时需要[腾空节点](/zh/docs/tasks/administer-cluster/safely-drain-node/)。 +- 在对 kubelet 作次版本升版时需要[腾空节点](/zh/docs/tasks/administer-cluster/safely-drain-node/)。 对于控制面节点,其上可能运行着 CoreDNS Pods 或者其它非常重要的负载。 - 升级后,因为容器规约的哈希值已更改,所有容器都会被重新启动。 diff --git a/content/zh/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/zh/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 2039a68e19..7aaf473337 100644 --- a/content/zh/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/zh/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -173,11 +173,13 @@ kubectl get secret db-user-pass -o jsonpath='{.data}' 输出类似于: ```json -{"password.txt":"MWYyZDFlMmU2N2Rm","username.txt":"YWRtaW4="} +{"password":"MWYyZDFlMmU2N2Rm","username":"YWRtaW4="} ``` - -现在你可以解码 `password.txt` 的数据: + +现在你可以解码 `password` 的数据: ```shell echo 'MWYyZDFlMmU2N2Rm' | base64 --decode diff --git a/content/zh/docs/tasks/run-application/run-replicated-stateful-application.md b/content/zh/docs/tasks/run-application/run-replicated-stateful-application.md index 05503836ab..e70c1075ac 100644 --- a/content/zh/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/content/zh/docs/tasks/run-application/run-replicated-stateful-application.md @@ -55,6 +55,7 @@ on general patterns for running stateful applications in Kubernetes. [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/). * Some familiarity with MySQL helps, but this tutorial aims to present general patterns that should be useful for other systems. +* You are using the default namespace or another namespace that does not contain any conflicting objects. --> * 本教程假定你熟悉 [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/) @@ -63,6 +64,7 @@ on general patterns for running stateful applications in Kubernetes. [服务](/zh/docs/concepts/services-networking/service/) 与 [ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/). * 熟悉 MySQL 会有所帮助,但是本教程旨在介绍对其他系统应该有用的常规模式。 +* 您正在使用默认命名空间或不包含任何冲突对象的另一个命名空间。 ## {{% heading "objectives" %}} @@ -280,21 +282,20 @@ properties. The script in the `init-mysql` container also applies either `primary.cnf` or `replica.cnf` from the ConfigMap by copying the contents into `conf.d`. 
Because the example topology consists of a single primary MySQL server and any number of -replicas, the script simply assigns ordinal `0` to be the primary server, and everyone +replicas, the script assigns ordinal `0` to be the primary server, and everyone else to be replicas. Combined with the StatefulSet controller's -[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees/), +[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees), this ensures the primary MySQL server is Ready before creating replicas, so they can begin replicating. --> -通过将内容复制到 conf.d 中,`init-mysql` 容器中的脚本也可以应用 ConfigMap 中的 -`primary.cnf` 或 `replica.cnf`。 -由于示例部署结构由单个 MySQL 主节点和任意数量的副本节点组成,因此脚本仅将序数 -`0` 指定为主节点,而将其他所有节点指定为副本节点。 +通过将内容复制到 conf.d 中,`init-mysql` 容器中的脚本也可以应用 ConfigMap 中的 `primary.cnf` 或 `replica.cnf`。 +由于示例部署结构由单个 MySQL 主节点和任意数量的副本节点组成, +因此脚本仅将序数 `0` 指定为主节点,而将其他所有节点指定为副本节点。 与 StatefulSet 控制器的 -[部署顺序保证](/zh/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees/) +[部署顺序保证](/zh/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) 相结合, 可以确保 MySQL 主服务器在创建副本服务器之前已准备就绪,以便它们可以开始复制。 diff --git a/content/zh/docs/tasks/tools/_index.md b/content/zh/docs/tasks/tools/_index.md index 244191cc89..028f22fb38 100644 --- a/content/zh/docs/tasks/tools/_index.md +++ b/content/zh/docs/tasks/tools/_index.md @@ -15,34 +15,34 @@ no_list: true ## kubectl -Kubernetes 命令行工具,`kubectl`,使得你可以对 Kubernetes 集群运行命令。 -你可以使用 `kubectl` 来部署应用、监测和管理集群资源以及查看日志。 +Kubernetes 命令行工具,[kubectl](/docs/reference/kubectl/kubectl/),使得你可以对 Kubernetes 集群运行命令。 +你可以使用 kubectl 来部署应用、监测和管理集群资源以及查看日志。 -关于如何下载和安装 `kubectl` 并配置其访问你的集群,可参阅 -[安装和配置 `kubectl`](/zh/docs/tasks/tools/install-kubectl/)。 +有关更多信息,包括 kubectl 操作的完整列表,请参见[`kubectl` +参考文件](/zh/docs/reference/kubectl/)。 - -查看 kubectl 安装和配置指南 - +kubectl 可安装在各种 Linux 平台、 macOS 和 Windows 上。 +在下面找到你喜欢的操作系统。 -你也可以阅读 [`kubectl` 参考文档](/zh/docs/reference/kubectl/). +- [在 Linux 上安装 kubectl](/zh/docs/tasks/tools/install-kubectl-linux) +- [在 macOS 上安装 kubectl](/zh/docs/tasks/tools/install-kubectl-macos) +- [在 Windows 上安装 kubectl](/zh/docs/tasks/tools/install-kubectl-windows) + {{< note >}} - 如果你在本地安装了 Minikube,运行 `minikube start`。 + 如果你在本地安装了 Minikube,运行 `minikube start`。 + 在运行 `minikube dashboard` 之前,你应该打开一个新终端, + 在此启动 `minikube dashboard` ,然后切换回主终端。 {{< /note >}} 3. 仅限 Katacoda 环境:在终端窗口的顶部,单击加号,然后单击 **选择要在主机 1 上查看的端口**。 + 4. 仅限 Katacoda 环境:输入“30000”,然后单击 **显示端口**。 + +{{< note >}} +`dashboard` 命令启用仪表板插件,并在默认的 Web 浏览器中打开代理。你可以在仪表板上创建 Kubernetes 资源,例如 Deployment 和 Service。 + +如果你以 root 用户身份在环境中运行, +请参见[使用 URL 打开仪表板](/zh/docs/tutorials/hello-minikube#open-dashboard-with-url)。 + +要停止代理,请运行 `Ctrl+C` 退出该进程。仪表板仍在运行中。 +{{< /note >}} + + +## 使用 URL 打开仪表板 + + +如果你不想打开 Web 浏览器,请使用 url 标志运行显示板命令以得到 URL: + +```shell +minikube dashboard --url +``` +
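<!--
This command keeps the proxy running in the foreground and prints a URL similar to the one below; the port is assigned dynamically, so the value shown here is only an example. You can then open that address in a browser, or check it from another terminal:
-->
该命令会让代理在前台保持运行,并打印出类似下面的 URL;端口是动态分配的,这里的取值仅作示例。随后你可以在浏览器中打开该地址,或在另一个终端中进行检查:

```shell
# 示例输出(实际端口因环境而异):
# http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

# 在另一个终端中确认仪表板可以访问:
curl -I http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
```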