Merge release-1.15 into master (#16535)

* initial commit

* promote AWS-NLB Support from alpha to beta (#14451) (#16459) (#16484)

* 1. Sync release-1.15 into master
2. Sync with en version

* 1. Add the missing yaml file.

* Update the cluster administration folder of concepts
1. Sync with 1.14 branch
2. Sync with en version

* Add the yaml files that are used
成臣 Chengchen 2019-09-24 21:29:27 +09:00 committed by Kubernetes Prow Robot
parent 444ef3486e
commit 4aed0f11ae
14 changed files with 1948 additions and 73 deletions


@ -0,0 +1,4 @@
---
title: "计算、存储和网络扩展"
weight: 30
---


@ -1,45 +1,113 @@
---
title: 安装扩展Addons
content_template: templates/concept
---
{{% capture overview %}}
## 概览
<!--
Add-ons extend the functionality of Kubernetes.
This page lists some of the available add-ons and links to their respective installation instructions.
Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status.
-->
Add-ons 扩展了 Kubernetes 的功能。
本文列举了一些可用的 add-ons 以及到它们各自安装说明的链接。
每节中的 add-ons 按字母顺序排序,先后次序不代表任何优先地位。
{{% /capture %}}
{{% capture body %}}
<!--
## Networking and Network Policy
* [ACI](https://www.github.com/noironetworks/aci-containers) provides integrated container networking and network security with Cisco ACI.
* [Calico](https://docs.projectcalico.org/latest/getting-started/kubernetes/) is a secure L3 networking and network policy provider.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [Contiv](http://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution supporting multiple networking in Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
-->
## 网络和网络策略
* [ACI](https://www.github.com/noironetworks/aci-containers) 通过 Cisco ACI 提供集成的容器网络和网络安全。
* [Calico](https://docs.projectcalico.org/latest/getting-started/kubernetes/) 是一个安全的 L3 网络和网络策略提供者。
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) 结合 Flannel 和 Calico提供网络和网络策略。
* [Cilium](https://github.com/cilium/cilium) 是一个 L3 网络和网络策略插件,能够透明地实施 HTTP/API/L7 策略。同时支持路由routing和叠加/封装overlay/encapsulation模式。
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) 使 Kubernetes 能够无缝连接到一种 CNI 插件,例如 Calico、Canal、Flannel、Romana 或者 Weave。
* [Contiv](http://contiv.github.io) 为多种用例提供可配置网络(使用 BGP 的原生 L3、使用 vxlan 的 overlay、经典 L2 以及 Cisco-SDN/ACI和丰富的策略框架。Contiv 项目完全[开源](http://github.com/contiv)。[安装工具](http://github.com/contiv/install)同时提供基于和不基于 kubeadm 的安装选项。
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/) 基于 [Tungsten Fabric](https://tungsten.io),是一个开源的多云网络虚拟化和策略管理平台。Contrail 和 Tungsten Fabric 与 Kubernetes、OpenShift、OpenStack 和 Mesos 等编排系统集成,并为虚拟机、容器/Pod 和裸机工作负载提供隔离模式。
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) 是一个可以用于 Kubernetes 的 overlay 网络提供者。
* [Knitter](https://github.com/ZTE/Knitter/) 是为 Kubernetes 提供多网络支持的网络解决方案。
* [Multus](https://github.com/Intel-Corp/multus-cni) 是一个多插件,可在 Kubernetes 中提供多网络支持,以支持所有 CNI 插件(例如 Calico、Cilium、Contiv、Flannel此外还支持 Kubernetes 中基于 SRIOV、DPDK、OVS-DPDK 和 VPP 的工作负载。
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) 容器插件NCP提供了 VMware NSX-T 与容器编排器(例如 Kubernetes之间的集成以及 NSX-T 与基于容器的 CaaS/PaaS 平台(例如 Pivotal Container ServicePKS和 OpenShift之间的集成。
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) 是一个 SDN 平台,可在 Kubernetes Pod 和非 Kubernetes 环境之间提供基于策略的联网,并具有可视化和安全监控能力。
* [Romana](http://romana.io) 是一个面向 pod 网络的第 3 层网络解决方案,并且支持 [NetworkPolicy API](/docs/concepts/services-networking/network-policies/)。Kubeadm add-on 安装细节可以在[这里](https://github.com/romana/romana/tree/master/containerize)找到。
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) 提供在网络分区两侧都能继续工作的网络和网络策略,并且不需要额外的数据库。
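这些网络插件通常以清单manifest文件的方式安装。下面是一个示意性的例子以 Weave Net 为例,清单 URL 以其官方 kube-addon 文档为准),展示安装及确认插件 Pod 就绪的一般流程;其他插件请参考各自链接中的安装说明。

```shell
# 示意:按插件提供方文档给出的清单地址安装(此处以 Weave Net 文档中的地址为例)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# 确认插件的 Pod 已在 kube-system 命名空间中正常运行
kubectl get pods -n kube-system
```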
<!--
## Service Discovery
* [CoreDNS](https://coredns.io) is a flexible, extensible DNS server which can be [installed](https://github.com/coredns/deployment/tree/master/kubernetes) as the in-cluster DNS for pods.
-->
## 服务发现
* [CoreDNS](https://coredns.io) 是一种灵活的,可扩展的 DNS 服务器,可以 [安装](https://github.com/coredns/deployment/tree/master/kubernetes) 为集群内的 Pod 提供 DNS 服务。
<!--
## Visualization & Control
* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a dashboard web interface for Kubernetes.
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services etc. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself.
-->
## 可视化管理
* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) 是一个 Kubernetes 的 web 控制台界面。
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) 是一个图形化工具,用于查看你的 containers、 pods、services 等。 请和一个 [Weave Cloud account](https://cloud.weave.works/) 一起使用,或者自己运行 UI。
<!--
## Infrastructure
* [KubeVirt](https://kubevirt.io/user-guide/docs/latest/administration/intro.html#cluster-side-add-on-deployment) is an add-on to run virtual machines on Kubernetes. Usually run on bare-metal clusters.
-->
## 基础设施
* [KubeVirt](https://kubevirt.io/user-guide/docs/latest/administration/intro.html#cluster-side-add-on-deployment) 是可以让 Kubernetes 运行虚拟机的 add-on通常运行在裸机集群上。
<!--
## Legacy Add-ons
There are several other add-ons documented in the deprecated [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) directory.
Well-maintained ones should be linked to here. PRs welcome!
-->
## 遗留 Add-ons
还有一些其它 add-ons 归档在已废弃的 [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) 路径中。
维护完善的 add-ons 应该被链接到这里。欢迎提交 PR。
{{% /capture %}}


@ -2,53 +2,71 @@
cn-approvers:
- lichuqiang
title: 证书
content_template: templates/concept
weight: 20
---
{{% capture overview %}}
<!--
When using client certificate authentication, you can generate certificates
manually through `easyrsa`, `openssl` or `cfssl`.
-->
当使用客户端证书进行认证时,用户可以使用现有部署脚本,或者通过 `easyrsa`、`openssl` 或
`cfssl` 手动生成证书。
{{% /capture %}}

{{% capture body %}}

### 使用现有部署脚本
**现有部署脚本** 位于
`cluster/saltbase/salt/generate-cert/make-ca-cert.sh`
执行该脚本时需传入两个参数。第一个参数为 API 服务器的 IP 地址,第二个参数为对象的候补名称列表,
形如 `IP:<ip地址>` 或 `DNS:<dns名称>`。
脚本生成三个文件:`ca.crt`、`server.crt` 和 `server.key`。
最后,将以下参数加入到 API 服务器的启动参数中:
```
--client-ca-file=/srv/kubernetes/ca.crt
--tls-cert-file=/srv/kubernetes/server.crt
--tls-private-key-file=/srv/kubernetes/server.key
```
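下面是一个调用该脚本的示意(假设 `MASTER_IP` 已设置为 API 服务器的 IP 地址SAN 列表此处假设以逗号分隔,实际格式以脚本实现为准):

```shell
# 示意:第一个参数为 API 服务器 IP第二个参数为候补名称SAN列表
bash cluster/saltbase/salt/generate-cert/make-ca-cert.sh "${MASTER_IP}" "IP:${MASTER_IP},DNS:kubernetes,DNS:kubernetes.default"
```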
### easyrsa
<!--
**easyrsa** can manually generate certificates for your cluster.
-->
使用 **easyrsa** 能够手动地为集群生成证书。
<!--
1. Download, unpack, and initialize the patched version of easyrsa3.
-->
1. 下载、解压并初始化 easyrsa3 的补丁版本。
curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
tar xzf easy-rsa.tar.gz
cd easy-rsa-master/easyrsa3
./easyrsa init-pki
<!--
1. Generate a CA. (`--batch` set automatic mode. `--req-cn` default CN to use.)
-->
1. 生成 CA通过 `--batch` 参数设置自动模式,通过 `--req-cn` 设置默认使用的 CN
./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass
<!--
1. Generate server certificate and key.
The argument `--subject-alt-name` sets the possible IPs and DNS names the API server will
be accessed with. The `MASTER_CLUSTER_IP` is usually the first IP from the service CIDR
that is specified as the `--service-cluster-ip-range` argument for both the API server and
the controller manager component. The argument `--days` is used to set the number of days
after which the certificate expires.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
-->
1. 生成服务器证书和密钥。
参数 `--subject-alt-name` 设置了访问 API 服务器时可能使用的 IP 和 DNS 名称。 `MASTER_CLUSTER_IP`
通常为 `--service-cluster-ip-range` 参数中指定的服务 CIDR 的 首个 IP 地址,`--service-cluster-ip-range` 同时用于
API 服务器和控制器管理器组件。 `--days` 参数用于设置证书的有效期限。
下面的示例还假设用户使用 `cluster.local` 作为默认的 DNS 域名。
./easyrsa --subject-alt-name="IP:${MASTER_IP},"\
"IP:${MASTER_CLUSTER_IP},"\
"DNS:kubernetes,"\
"DNS:kubernetes.default,"\
@ -57,6 +75,12 @@ title: 证书
"DNS:kubernetes.default.svc.cluster.local" \
--days=10000 \
build-server-full server nopass
<!--
1. Copy `pki/ca.crt`, `pki/issued/server.crt`, and `pki/private/server.key` to your directory.
1. Fill in and add the following parameters into the API server start parameters:
-->
1. 拷贝 `pki/ca.crt`、`pki/issued/server.crt` 和 `pki/private/server.key` 至您的目录。
1. 填充并在 API 服务器的启动参数中添加以下参数:
@ -66,19 +90,46 @@ title: 证书
### openssl
<!--
**openssl** can manually generate certificates for your cluster.
1. Generate a ca.key with 2048bit:
-->
使用 **openssl** 能够手动地为集群生成证书。
1. 生成密钥位数为 2048 的 ca.key
openssl genrsa -out ca.key 2048
<!--
1. According to the ca.key generate a ca.crt (use -days to set the certificate effective time):
-->
1. 依据 ca.key 生成 ca.crt (使用 -days 参数来设置证书有效时间):
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
<!--
1. Generate a server.key with 2048bit:
-->
1. 生成密钥位数为 2048 的 server.key
openssl genrsa -out server.key 2048
<!--
1. Create a config file for generating a Certificate Signing Request (CSR).
Be sure to substitute the values marked with angle brackets (e.g. `<MASTER_IP>`)
with real values before saving this to a file (e.g. `csr.conf`).
Note that the value for `MASTER_CLUSTER_IP` is the service cluster IP for the
API server as described in previous subsection.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
-->
1. 创建用于生成证书签名请求CSR的配置文件。
确保在将其保存至文件(如 `csr.conf`)之前将尖括号标记的值(如 `<MASTER_IP>`
替换为你想使用的真实值。 注意:`MASTER_CLUSTER_IP` 是前面小节中描述的 API 服务器的服务集群 IP
(service cluster IP)。 下面的示例也假设用户使用 `cluster.local` 作为默认的 DNS 域名。
@ -88,7 +139,7 @@ title: 证书
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
C = <country>
ST = <state>
@ -96,10 +147,10 @@ title: 证书
O = <organization>
OU = <organization unit>
CN = <MASTER_IP>
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
@ -108,46 +159,85 @@ title: 证书
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = <MASTER_IP>
IP.2 = <MASTER_CLUSTER_IP>
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
<!--
1. Generate the certificate signing request based on the config file:
-->
1. 基于配置文件生成证书签名请求:
openssl req -new -key server.key -out server.csr -config csr.conf
<!--
1. Generate the server certificate using the ca.key, ca.crt and server.csr:
-->
1. 使用 ca.key、ca.crt 和 server.csr 生成服务器证书:
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out server.crt -days 10000 \
-extensions v3_ext -extfile csr.conf
<!--
1. View the certificate:
-->
1. 查看证书:
openssl x509 -noout -text -in ./server.crt
<!--
Finally, add the same parameters into the API server start parameters.
-->
最后,添加同样的参数到 API 服务器的启动参数中。
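例如(仅为示意,假设将生成的文件复制到了 `/srv/kubernetes/` 目录;实际路径取决于你的部署方式):

```shell
# 示意:将 openssl 生成的文件复制到 API 服务器所在节点(路径仅为示例)
sudo cp ca.crt server.crt server.key /srv/kubernetes/

# 然后在 kube-apiserver 的启动参数中加入(与上文所列参数相同):
#   --client-ca-file=/srv/kubernetes/ca.crt
#   --tls-cert-file=/srv/kubernetes/server.crt
#   --tls-private-key-file=/srv/kubernetes/server.key
```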
### cfssl
<!--
**cfssl** is another tool for certificate generation.
-->
**cfssl** 是另一种用来生成证书的工具。
<!--
1. Download, unpack and prepare the command line tools as shown below.
Note that you may need to adapt the sample commands based on the hardware
architecture and cfssl version you are using.
-->
1. 按如下所示的方式下载、解压并准备命令行工具。
注意:你可能需要基于硬件架构和你所使用的 cfssl 版本对示例命令进行修改。
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o cfssl
chmod +x cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o cfssljson
chmod +x cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o cfssl-certinfo
chmod +x cfssl-certinfo
<!--
1. Create a directory to hold the artifacts and initialize cfssl:
-->
1. 创建目录来存放物料,并初始化 cfssl
mkdir cert
cd cert
../cfssl print-defaults config > config.json
../cfssl print-defaults csr > csr.json
<!--
1. Create a JSON config file for generating the CA file, for example, `ca-config.json`:
-->
1. 创建用来生成 CA 文件的 JSON 配置文件,例如 `ca-config.json`
{
@ -161,13 +251,20 @@ title: 证书
"signing",
"key encipherment",
"server auth",
"client auth",
"client auth"
],
"expiry": "8760h"
}
}
}
}
<!--
1. Create a JSON config file for CA certificate signing request (CSR), for example,
`ca-csr.json`. Be sure to replace the values marked with angle brackets with
real values you want to use.
-->
1. 创建用来生成 CA 证书签名请求CSR的 JSON 配置文件,例如 `ca-csr.json`
确保将尖括号标记的值替换为你想使用的真实值。
@ -182,12 +279,27 @@ title: 证书
"ST": "<state>",
"L": "<city>",
"O": "<organization>",
"OU": "<organization unit>",
"OU": "<organization unit>"
}]
}
<!--
1. Generate CA key (`ca-key.pem`) and certificate (`ca.pem`):
-->
1. 生成 CA 密钥(`ca-key.pem`)和证书(`ca.pem`
../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca
<!--
1. Create a JSON config file for generating keys and certificates for the API
server, for example, `server-csr.json`. Be sure to replace the values in angle brackets with
real values you want to use. The `MASTER_CLUSTER_IP` is the service cluster
IP for the API server as described in previous subsection.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
-->
1. 按如下所示的方式创建用来为 API 服务器生成密钥和证书的 JSON 配置文件。
确保将尖括号标记的值替换为你想使用的真实值。 `MASTER_CLUSTER_IP` 是前面小节中描述的
API 服务器的服务集群 IP。 下面的示例也假设用户使用 `cluster.local` 作为默认的 DNS 域名。
@ -215,7 +327,13 @@ title: 证书
"O": "<organization>",
"OU": "<organization unit>"
}]
}
}
<!--
1. Generate the key and certificate for the API server, which are by default
saved into file `server-key.pem` and `server.pem` respectively:
-->
1. 为 API 服务器生成密钥和证书,生成的密钥和证书分别默认保存在文件 `server-key.pem`
`server.pem` 中:
@ -224,6 +342,17 @@ title: 证书
server-csr.json | ../cfssljson -bare server
<!--
## Distributing Self-Signed CA Certificate
A client node may refuse to recognize a self-signed CA certificate as valid.
For a non-production deployment, or for a deployment that runs behind a company
firewall, you can distribute a self-signed CA certificate to all clients and
refresh the local list for valid certificates.
On each client, perform the following operations:
-->
## 分发自签名 CA 证书
客户端节点可能拒绝承认自签名 CA 证书有效。
@ -233,15 +362,28 @@ title: 证书
在每个客户端上执行以下操作:
```bash
sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
sudo update-ca-certificates
```
```
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....
done.
```
<!--
## Certificates API
You can use the `certificates.k8s.io` API to provision
x509 certificates to use for authentication as documented
[here](/docs/tasks/tls/managing-tls-in-a-cluster).
-->
## 证书 API
您可以按照[这里](/docs/tasks/tls/managing-tls-in-a-cluster)记录的方式,
使用 `certificates.k8s.io` API 来准备 x509 证书,用于认证。
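下面是一个通过该 API 审批证书签名请求CSR的简单示意其中的 CSR 名称 `my-svc.my-namespace` 仅为假设;创建 CSR 的完整流程请参考上面链接的文档):

```shell
# 列出集群中的证书签名请求CSR
kubectl get csr

# 审批指定的 CSRCSR 名称仅为示例)
kubectl certificate approve my-svc.my-namespace

# 审批后可从 CSR 对象的 status.certificate 字段取回签发的证书
kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' | base64 --decode > server.crt
```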
{{% /capture %}}


@ -1,86 +1,146 @@
---
approvers:
- davidopp
- lavalamp
title: 集群管理概述
content_template: templates/concept
weight: 10
---
{{% capture overview %}}
<!--
The cluster administration overview is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/).
-->
集群管理概述面向任何创建和管理 Kubernetes 集群的读者人群。
我们假设你对 Kubernetes 的核心[概念](/docs/concepts/)有一定了解。
{{% /capture %}}
{{% capture body %}}
<!--
## Planning a cluster
See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure Kubernetes clusters. The solutions listed in this article are called *distros*.
Before choosing a guide, here are some considerations:
-->
## 规划集群
查阅[安装](/docs/setup/)中的指导,获取如何规划、建立以及配置 Kubernetes 集群的示例。本文所列的解决方案称为*发行版*。
在选择一个指南前,有一些因素需要考虑:
<!--
- Do you just want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
- **If you are designing for high-availability**, learn about configuring [clusters in multiple zones](/docs/concepts/cluster-administration/federation/).
- Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters.
- **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best.
- Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**?
- Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
latter, choose an actively-developed distro. Some distros only use binary releases, but
offer a greater variety of choices.
- Familiarize yourself with the [components](/docs/admin/cluster-components/) needed to run a cluster.
-->
- 你是打算在你的电脑上尝试 Kubernetes还是要构建一个高可用的多节点集群请选择最适合你需求的发行版。
- **如果你正在设计一个高可用集群**,请了解[在多个 zones 中配置集群](/docs/concepts/cluster-administration/federation/)。
- 您正在使用 类似 [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) 这样的**被托管的Kubernetes集群**, 还是**管理您自己的集群**?
- 你的集群是在**本地**还是**云IaaS**上? Kubernetes 不能直接支持混合集群。作为代替,你可以建立多个集群。
- **如果你在本地配置 Kubernetes**,需要考虑哪种[网络模型](/docs/concepts/cluster-administration/networking/)最适合。
- 你的 Kubernetes 集群是在**裸机硬件**上运行,还是在**虚拟机VM**上运行?
- 你**只想运行一个集群**,还是打算**积极参与 Kubernetes 项目代码的开发**?如果是后者,请选择一个处于积极开发状态的发行版。某些发行版只使用二进制发布版,但提供的选择更多。
- 让自己熟悉运行一个集群所需的[组件](/docs/admin/cluster-components/)。
<!--
Note: Not all distros are actively maintained. Choose distros which have been tested with a recent version of Kubernetes.
-->
请注意:不是所有的发行版都被积极维护着。请选择测试过最近版本的 Kubernetes 的发行版。
<!--
## Managing a cluster
* [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your clusters master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.
* Learn how to [manage nodes](/docs/concepts/nodes/node/).
* Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.
-->
## 管理集群
* [管理集群](/docs/tasks/administer-cluster/cluster-management/)叙述了和集群生命周期相关的几个主题:创建一个新集群、升级集群的 master 和 worker 节点、执行节点维护(例如内核升级)以及升级活动集群的 Kubernetes API 版本。
* 学习如何 [管理节点](/docs/concepts/nodes/node/).
* 学习如何设定和管理集群共享的 [资源配额](/docs/concepts/policy/resource-quotas/) 。
<!--
## Securing a cluster
* [Certificates](/docs/concepts/cluster-administration/certificates/) describes the steps to generate certificates using different tool chains.
* [Kubernetes Container Environment](/docs/concepts/containers/container-environment-variables/) describes the environment for Kubelet managed containers on a Kubernetes node.
* [Controlling Access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/) describes how to set up permissions for users and service accounts.
* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in Kubernetes, including the various authentication options.
* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from authentication, and controls how HTTP calls are handled.
* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) explains plug-ins which intercepts requests to the Kubernetes API server after authentication and authorization.
* [Using Sysctls in a Kubernetes Cluster](/docs/concepts/cluster-administration/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters .
* [Auditing](/docs/tasks/debug-application-cluster/audit/) describes how to interact with Kubernetes' audit logs.
-->
## 集群安全
* [证书](/docs/concepts/cluster-administration/certificates/) 描述了使用不同的工具链生成证书的步骤。
* [Kubernetes 容器环境](/docs/concepts/containers/container-environment-variables/) 描述了 Kubernetes 节点上由 Kubelet 管理的容器的环境。
* [控制到 Kubernetes API 的访问](/docs/reference/access-authn-authz/controlling-access/) 描述了如何为用户和 service accounts 建立权限许可。
* [用户认证](/docs/reference/access-authn-authz/authentication/) 阐述了 Kubernetes 中的认证功能,包括多种认证选项。
* [授权](/docs/reference/access-authn-authz/authorization/) 与认证不同,用于控制如何处理 HTTP 请求。
* [使用 Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) 阐述了在认证和授权之后拦截发往 Kubernetes API 服务器的请求的插件。
* [在 Kubernetes 集群中使用 Sysctls](/docs/concepts/cluster-administration/sysctl-cluster/) 描述了管理员如何使用 `sysctl` 命令行工具来设置内核参数。
* [审计](/docs/tasks/debug-application-cluster/audit/) 描述了如何与 Kubernetes 的审计日志交互。
<!--
### Securing the kubelet
* [Master-Node communication](/docs/concepts/architecture/master-node-communication/)
* [TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
* [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)
-->
### 保护 kubelet
* [Master 节点通信](/docs/concepts/architecture/master-node-communication/)
* [TLS 引导](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
* [Kubelet 认证/授权](/docs/admin/kubelet-authentication-authorization/)
<!--
## Optional Cluster Services
* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve a DNS name directly to a Kubernetes service.
* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) explains how logging in Kubernetes works and how to implement it.
-->
## 可选集群服务
* [DNS 集成](/docs/concepts/services-networking/dns-pod-service/)描述了如何将一个 DNS 名称直接解析到一个 Kubernetes service。
* [记录和监控集群活动](/docs/concepts/cluster-administration/logging/)阐述了 Kubernetes 的日志如何工作以及怎样实现。
{{% /capture %}}


@ -0,0 +1,86 @@
---
title: 控制器管理器指标
content_template: templates/concept
weight: 100
---
<!--
---
title: Controller manager metrics
content_template: templates/concept
weight: 100
---
-->
{{% capture overview %}}
<!--
Controller manager metrics provide important insight into the performance and health of
the controller manager.
-->
控制器管理器指标为控制器管理器的性能和健康提供了重要的观测手段。
{{% /capture %}}
{{% capture body %}}
<!--
## What are controller manager metrics
Controller manager metrics provide important insight into the performance and health of the controller manager.
These metrics include common Go language runtime metrics such as go_routine count and controller specific metrics such as
etcd request latencies or Cloudprovider (AWS, GCE, OpenStack) API latencies that can be used
to gauge the health of a cluster.
Starting from Kubernetes 1.7, detailed Cloudprovider metrics are available for storage operations for GCE, AWS, Vsphere and OpenStack.
These metrics can be used to monitor health of persistent volume operations.
For example, for GCE these metrics are called:
-->
## 什么是控制器管理器指标
控制器管理器指标为控制器管理器的性能和健康提供了重要的观测手段。
这些指标包括常见的 Go 语言运行时指标,比如 go_routine 计数,以及控制器特定的指标,比如 etcd 请求延迟或云提供商AWS、GCE、OpenStack的 API 延迟,这些指标可以用来衡量集群的健康状况。
从 Kubernetes 1.7 版本开始,详细的云提供商指标可用于 GCE、AWS、Vsphere 和 OpenStack 的存储操作。
这些指标可用于监控持久卷操作的健康状况。
例如,在 GCE 中这些指标叫做:
```
cloudprovider_gce_api_request_duration_seconds { request = "instance_list"}
cloudprovider_gce_api_request_duration_seconds { request = "disk_insert"}
cloudprovider_gce_api_request_duration_seconds { request = "disk_delete"}
cloudprovider_gce_api_request_duration_seconds { request = "attach_disk"}
cloudprovider_gce_api_request_duration_seconds { request = "detach_disk"}
cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
```
<!--
## Configuration
In a cluster, controller-manager metrics are available from `http://localhost:10252/metrics`
from the host where the controller-manager is running.
The metrics are emitted in [prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/) and are human readable.
In a production environment you may want to configure prometheus or some other metrics scraper
to periodically gather these metrics and make them available in some kind of time series database.
-->
## 配置
在集群中,控制器管理器指标可从它所在的主机上的 `http://localhost:10252/metrics` 中获得。
这些指标是以 [prometheus 格式](https://prometheus.io/docs/instrumenting/exposition_formats/) 发出的,是人类可读的。
在生产环境中,您可能想配置 prometheus 或其他一些指标收集工具,以定期收集这些指标数据,并将它们存储到某种时间序列数据库中。
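例如,可以在控制器管理器所在的主机上直接抓取并查看这些指标(示意10252 为上文所述的默认端口):

```shell
# 在 controller-manager 所在主机上抓取指标,并筛选云提供商相关的条目
curl -s http://localhost:10252/metrics | grep cloudprovider
```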
{{% /capture %}}


@ -0,0 +1,199 @@
---
title: 配置 kubelet 垃圾回收策略
content_template: templates/concept
weight: 70
---
<!--
title: Configuring kubelet Garbage Collection
content_template: templates/concept
weight: 70
-->
{{% capture overview %}}
<!--
Garbage collection is a helpful function of kubelet that will clean up unused images and unused containers. Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.
-->
垃圾回收是 kubelet 的一个有用功能它将清理未使用的镜像和容器。Kubelet 将每分钟对容器执行一次垃圾回收,每五分钟对镜像执行一次垃圾回收。
不建议使用外部垃圾收集工具,因为这些工具可能会删除原本期望存在的容器进而破坏 kubelet 的行为。
<!--
External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist.
-->
{{% /capture %}}
{{% capture body %}}
## 镜像回收
<!--
## Image Collection
-->
Kubernetes 借助于 cadvisor 通过 imageManager 来管理所有镜像的生命周期。
<!--
Kubernetes manages lifecycle of all images through imageManager, with the cooperation
of cadvisor.
-->
镜像垃圾回收策略只考虑两个因素:`HighThresholdPercent` 和 `LowThresholdPercent`
磁盘使用率超过上限阈值HighThresholdPercent将触发垃圾回收。
垃圾回收将删除最近最少使用的镜像直到磁盘使用率满足下限阈值LowThresholdPercent
<!--
The policy for garbage collecting images takes two factors into consideration:
`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the high threshold
will trigger garbage collection. The garbage collection will delete least recently used images until the low
threshold has been met.
-->
## 容器回收
<!--
## Container Collection
-->
容器垃圾回收策略考虑三个用户定义变量。`MinAge` 是容器可以被执行垃圾回收的最小生命周期。`MaxPerPodContainer` 是每个 pod 内允许存在的死亡容器的最大数量。
`MaxContainers` 是全部死亡容器的最大数量。可以分别独立地通过将 `MinAge` 设置为 0以及将 `MaxPerPodContainer``MaxContainers` 设置为小于 0 来禁用这些变量。
<!--
The policy for garbage collecting containers considers three user-defined variables. `MinAge` is the minimum age at which a container can be garbage collected. `MaxPerPodContainer` is the maximum number of dead containers every single
pod (UID, container name) pair is allowed to have. `MaxContainers` is the maximum number of total dead containers. These variables can be individually disabled by setting `MinAge` to zero and setting `MaxPerPodContainer` and `MaxContainers` respectively to less than zero.
-->
Kubelet 将处理无法辨识的、已删除的以及超出前面提到的参数所设置范围的容器。最老的容器通常会先被移除。
`MaxPerPodContainer``MaxContainer` 在某些场景下可能会存在冲突,例如在保证每个 pod 内死亡容器的最大数量(`MaxPerPodContainer`)的条件下可能会超过允许存在的全部死亡容器的最大数量(`MaxContainer`)。
`MaxPerPodContainer` 在这种情况下会被进行调整:最坏的情况是将 `MaxPerPodContainer` 降级为 1并驱逐最老的容器。
此外pod 内已经被删除的容器一旦年龄超过 `MinAge` 就会被清理。
<!--
Kubelet will act on containers that are unidentified, deleted, or outside of the boundaries set by the previously mentioned flags. The oldest containers will generally be removed first. `MaxPerPodContainer` and `MaxContainer` may potentially conflict with each other in situations where retaining the maximum number of containers per pod (`MaxPerPodContainer`) would go outside the allowable range of global dead containers (`MaxContainers`). `MaxPerPodContainer` would be adjusted in this situation: A worst case scenario would be to downgrade `MaxPerPodContainer` to 1 and evict the oldest containers. Additionally, containers owned by pods that have been deleted are removed once they are older than `MinAge`.
-->
不被 kubelet 管理的容器不受容器垃圾回收的约束。
<!--
Containers that are not managed by kubelet are not subject to container garbage collection.
-->
## 用户配置
<!--
## User Configuration
-->
用户可以使用以下 kubelet 参数调整相关阈值来优化镜像垃圾回收:
<!--
Users can adjust the following thresholds to tune image garbage collection with the following kubelet flags :
-->
<!--
1. `image-gc-high-threshold`, the percent of disk usage which triggers image garbage collection.
Default is 85%.
2. `image-gc-low-threshold`, the percent of disk usage to which image garbage collection attempts
to free. Default is 80%.
-->
1. `image-gc-high-threshold`,触发镜像垃圾回收的磁盘使用率百分比。默认值为 85%。
2. `image-gc-low-threshold`,镜像垃圾回收试图释放资源后达到的磁盘使用率百分比。默认值为 80%。
我们还允许用户通过以下 kubelet 参数自定义垃圾收集策略:
<!--
We also allow users to customize garbage collection policy through the following kubelet flags:
-->
<!--
1. `minimum-container-ttl-duration`, minimum age for a finished container before it is
garbage collected. Default is 0 minute, which means every finished container will be garbage collected.
2. `maximum-dead-containers-per-container`, maximum number of old instances to be retained
per container. Default is 1.
3. `maximum-dead-containers`, maximum number of old instances of containers to retain globally.
Default is -1, which means there is no global limit.
-->
1. `minimum-container-ttl-duration`,完成的容器在被垃圾回收之前的最小年龄,默认是 0 分钟,这意味着每个完成的容器都会被执行垃圾回收。
2. `maximum-dead-containers-per-container`,每个容器要保留的旧实例的最大数量。默认值为 1。
3. `maximum-dead-containers`,要全局保留的旧容器实例的最大数量。默认值是 -1这意味着没有全局限制。
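下面是把上述参数组合在一起的一个 kubelet 启动参数示意(取值仅为示例,请根据集群实际情况调整):

```shell
# 示意:同时设置镜像和容器垃圾回收相关参数(数值仅为示例)
kubelet \
  --image-gc-high-threshold=85 \
  --image-gc-low-threshold=80 \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=2 \
  --maximum-dead-containers=100
```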
<!--
Containers can potentially be garbage collected before their usefulness has expired. These containers
can contain logs and other data that can be useful for troubleshooting. A sufficiently large value for
`maximum-dead-containers-per-container` is highly recommended to allow at least 1 dead container to be
retained per expected container. A larger value for `maximum-dead-containers` is also recommended for a
similar reason.
-->
容器可能会在其效用过期之前被垃圾回收。这些容器可能包含日志和其他对故障诊断有用的数据。
强烈建议为 `maximum-dead-containers-per-container` 设置一个足够大的值,以便每个预期容器至少保留一个死亡容器。
由于同样的原因,`maximum-dead-containers` 也建议使用一个足够大的值。
查阅 [这个问题](https://github.com/kubernetes/kubernetes/issues/13287) 获取更多细节。
<!--
See [this issue](https://github.com/kubernetes/kubernetes/issues/13287) for more details.
-->
## 弃用
<!--
## Deprecation
-->
这篇文档中的一些 kubelet 垃圾收集Garbage Collection功能将在未来被 kubelet 驱逐回收eviction所替代。
<!--
Some kubelet Garbage Collection features in this doc will be replaced by kubelet eviction in the future.
-->
包括:
| 现存参数 | 新参数 | 解释 |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard``--eviction-soft` | 现存的驱逐回收信号可以触发镜像垃圾回收 |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | 驱逐回收实现相同行为 |
| `--maximum-dead-containers` | | 一旦旧日志存储在容器上下文之外,就会被弃用 |
| `--maximum-dead-containers-per-container` | | 一旦旧日志存储在容器上下文之外,就会被弃用 |
| `--minimum-container-ttl-duration` | | 一旦旧日志存储在容器上下文之外,就会被弃用 |
| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | 驱逐回收将磁盘阈值泛化到其他资源 |
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | 驱逐回收将磁盘压力转换到其他资源 |
<!--
Including:
| Existing Flag | New Flag | Rationale |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
| `--maximum-dead-containers` | | deprecated once old logs are stored outside of container's context |
| `--maximum-dead-containers-per-container` | | deprecated once old logs are stored outside of container's context |
| `--minimum-container-ttl-duration` | | deprecated once old logs are stored outside of container's context |
| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources |
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources |
-->
{{% /capture %}}
{{% capture whatsnext %}}
查阅 [配置驱逐回收资源的策略](/docs/tasks/administer-cluster/out-of-resource/) 获取更多细节。
<!--
See [Configuring Out Of Resource Handling](/docs/tasks/administer-cluster/out-of-resource/) for more details.
-->
{{% /capture %}}


@ -0,0 +1,478 @@
---
title: 日志架构
content_template: templates/concept
weight: 60
---
{{% capture overview %}}
<!--
Application and systems logs can help you understand what is happening inside your cluster. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
-->
应用和系统日志可以让您了解集群内部的运行状况。日志对调试问题和监控集群活动非常有用。大部分现代化应用都有某种日志记录机制;同样地,大多数容器引擎也被设计成支持某种日志记录机制。针对容器化应用,最简单且受欢迎的日志记录方式就是写入标准输出和标准错误流。
<!--
However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution. For example, if a container crashes, a pod is evicted, or a node dies, you'll usually still want to access your application's logs. As such, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level-logging_. Cluster-level logging requires a separate backend to store, analyze, and query logs. Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster.
-->
但是,由容器引擎或 runtime 提供的原生功能通常不足以满足完整的日志记录方案。例如如果发生容器崩溃、pod 被逐出或节点宕机等情况您仍然想访问到应用日志。因此日志应该具有独立的存储和生命周期与节点、pod 或容器的生命周期相独立。这个概念叫 _集群级的日志_ 。集群级日志方案需要一个独立的后台来存储、分析和查询日志。Kubernetes 没有为日志数据提供原生存储方案,但是您可以集成许多现有的日志解决方案到 Kubernetes 集群中。
{{% /capture %}}
{{% capture body %}}
<!--
Cluster-level logging architectures are described in assumption that
a logging backend is present inside or outside of your cluster. If you're
not interested in having cluster-level logging, you might still find
the description of how logs are stored and handled on the node to be useful.
-->
集群级日志架构假定在集群内部或者外部有一个日志后台。如果您对集群级日志不感兴趣,您仍会发现关于如何在节点上存储和处理日志的描述对您是有用的。
<!--
## Basic logging in Kubernetes
In this section, you can see an example of basic logging in Kubernetes that
outputs data to the standard output stream. This demonstration uses
a [pod specification](/examples/debug/counter-pod.yaml) with
a container that writes some text to standard output once per second.
-->
## Kubernetes 中的基本日志记录
本节,您会看到一个 Kubernetes 中基本日志记录的例子,该例子中数据被写入到标准输出。
这里的演示使用一个 [pod 规约](/examples/debug/counter-pod.yaml),其中的容器每秒向标准输出写入一些文本。
{{< codenew file="debug/counter-pod.yaml" >}}
<!--
To run this pod, use the following command:
-->
用下面的命令运行 pod
```shell
kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml
```
<!--
The output is:
-->
输出结果为:
```
pod/counter created
```
<!--
To fetch the logs, use the `kubectl logs` command, as follows:
-->
使用 `kubectl logs` 命令获取日志:
```shell
kubectl logs counter
```
<!--
The output is:
-->
输出结果为:
```
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
...
```
<!--
You can use `kubectl logs` to retrieve logs from a previous instantiation of a container with `--previous` flag, in case the container has crashed. If your pod has multiple containers, you should specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
-->
一旦发生容器崩溃,您可以使用命令 `kubectl logs` 和参数 `--previous` 检索之前的容器日志。
如果 pod 中有多个容器,您应该向该命令附加一个容器名以访问对应容器的日志。
详见 [`kubectl logs` 文档](/docs/reference/generated/kubectl/kubectl-commands#logs)。
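例如(示意;`counter` 为上文示例中的 pod 名称,容器名 `count` 为假设值,请以实际的 pod 规约为准):

```shell
# 查看容器上一次崩溃前实例的日志
kubectl logs counter --previous

# 当 pod 中有多个容器时,指定要查看日志的容器名称(此处容器名仅为示例)
kubectl logs counter -c count
```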
<!--
## Logging at the node level
![Node level logging](/images/docs/user-guide/logging/logging-node-level.png)
-->
## 节点级日志记录
![节点级别的日志记录](/images/docs/user-guide/logging/logging-node-level.png)
<!--
Everything a containerized application writes to `stdout` and `stderr` is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to [a logging driver](https://docs.docker.com/engine/admin/logging/overview), which is configured in Kubernetes to write to a file in json format.
-->
容器化应用写入 `stdout``stderr` 的任何数据,都会被容器引擎捕获并被重定向到某个位置。
例如Docker 容器引擎将这两个输出流重定向到某个 [日志驱动](https://docs.docker.com/engine/admin/logging/overview)
该日志驱动在 Kubernetes 中配置为以 json 格式写入文件。
<!--
{{< note >}}
The Docker json logging driver treats each line as a separate message. When using the Docker logging driver, there is no direct support for multi-line messages. You need to handle multi-line messages at the logging agent level or higher.
{{< /note >}}
-->
{{< note >}}
Docker json 日志驱动将日志的每一行当作一条独立的消息。该日志驱动不直接支持多行消息。您需要在日志代理级别或更高级别处理多行消息。
{{< /note >}}
<!--
By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
-->
默认情况下如果容器重启kubelet 会保留被终止的容器日志。
如果 pod 在工作节点被驱逐,该 pod 中所有的容器也会被驱逐,包括容器日志。
<!--
An important consideration in node-level logging is implementing log rotation,
so that logs don't consume all available storage on the node. Kubernetes
currently is not responsible for rotating logs, but rather a deployment tool
should set up a solution to address that.
For example, in Kubernetes clusters, deployed by the `kube-up.sh` script,
there is a [`logrotate`](https://linux.die.net/man/8/logrotate)
tool configured to run each hour. You can also set up a container runtime to
rotate application's logs automatically, e.g. by using Docker's `log-opt`.
In the `kube-up.sh` script, the latter approach is used for COS image on GCP,
and the former approach is used in any other environment. In both cases, by
default rotation is configured to take place when log file exceeds 10MB.
-->
节点级日志记录中,需要重点考虑实现日志的轮转,以此来保证日志不会消耗节点上所有的可用空间。
Kubernetes 当前并不负责轮转日志,而是通过部署工具建立一个解决问题的方案。
例如,在由 `kube-up.sh` 脚本部署的 Kubernetes 集群中,有一个配置为每小时运行一次的 [`logrotate`](https://linux.die.net/man/8/logrotate) 工具。
您也可以设置容器 runtime 来自动地轮转应用日志,比如使用 Docker 的 `log-opt` 选项。
`kube-up.sh` 脚本中,使用后一种方式来处理 GCP 上的 COS 镜像,而使用前一种方式来处理其他环境。
这两种方式,默认日志超过 10MB 大小时都会触发日志轮转。
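例如,使用 Docker 时可以通过 json-file 日志驱动的选项限制日志文件大小并自动轮转(仅为示意,具体选项请参考 Docker 文档):

```shell
# 示意:以 json-file 日志驱动启动 dockerd单个日志文件最大 10MB最多保留 5 个轮转文件
dockerd --log-driver=json-file --log-opt max-size=10m --log-opt max-file=5
```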
<!--
As an example, you can find detailed information about how `kube-up.sh` sets
up logging for COS image on GCP in the corresponding [script][cosConfigureHelper].
-->
例如,您可以找到关于 `kube-up.sh` 为 GCP 环境的 COS 镜像设置日志的详细信息,
相应的脚本在 [这里][cosConfigureHelper]。
<!--
When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
the basic logging example, the kubelet on the node handles the request and
reads directly from the log file, returning the contents in the response.
-->
当运行 [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) 时,
节点上的 kubelet 处理该请求并直接读取日志文件,同时在响应中返回日志文件内容。
{{< note >}}
<!--
Currently, if some external system has performed the rotation,
only the contents of the latest log file will be available through
`kubectl logs`. E.g. if there's a 10MB file, `logrotate` performs
the rotation and there are two files, one 10MB in size and one empty,
`kubectl logs` will return an empty response.
-->
当前,如果有其他系统机制执行日志轮转,那么 `kubectl logs` 仅可查询到最新的日志内容。
比如,一个 10MB 大小的文件,通过`logrotate` 执行轮转后生成两个文件,一个 10MB 大小,一个为空,所以 `kubectl logs` 将返回空。
{{< /note >}}
[cosConfigureHelper]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh
<!--
### System component logs
There are two types of system components: those that run in a container and those
that do not run in a container. For example:
-->
### 系统组件日志
系统组件有两种类型:在容器中运行的和不在容器中运行的。例如:
<!--
* The Kubernetes scheduler and kube-proxy run in a container.
* The kubelet and container runtime, for example Docker, do not run in containers.
-->
* 在容器中运行的 kube-scheduler 和 kube-proxy。
* 不在容器中运行的 kubelet 和容器运行时(例如 Docker。
<!--
On machines with systemd, the kubelet and container runtime write to journald. If
systemd is not present, they write to `.log` files in the `/var/log` directory.
System components inside containers always write to the `/var/log` directory,
bypassing the default logging mechanism. They use the [klog][klog]
logging library. You can find the conventions for logging severity for those
components in the [development docs on logging](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md).
-->
在使用 systemd 机制的服务器上kubelet 和容器 runtime 写入日志到 journald。
如果没有 systemd他们写入日志到 `/var/log` 目录的 `.log` 文件。
容器中的系统组件通常将日志写到 `/var/log` 目录,绕过了默认的日志机制。他们使用 [klog][klog] 日志库。
您可以在[日志开发文档](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)找到这些组件的日志告警级别协议。
<!--
Similarly to the container logs, system component logs in the `/var/log`
directory should be rotated. In Kubernetes clusters brought up by
the `kube-up.sh` script, those logs are configured to be rotated by
the `logrotate` tool daily or once the size exceeds 100MB.
-->
和容器日志类似,`/var/log` 目录中的系统组件日志也应该被轮转。
通过脚本 `kube-up.sh` 启动的 Kubernetes 集群中,日志被工具 `logrotate` 执行每日轮转,或者日志大小超过 100MB 时触发轮转。
[klog]: https://github.com/kubernetes/klog
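在使用 systemd 的节点上,可以用类似下面的命令查看这些组件的日志(示意):

```shell
# 查看 kubelet 写入 journald 的日志
sudo journalctl -u kubelet

# 查看以容器方式运行的系统组件写入 /var/log 的日志文件
sudo ls /var/log/
```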
<!--
## Cluster-level logging architectures
-->
## 集群级日志架构
<!--
While Kubernetes does not provide a native solution for cluster-level logging, there are several common approaches you can consider. Here are some options:
* Use a node-level logging agent that runs on every node.
* Include a dedicated sidecar container for logging in an application pod.
* Push logs directly to a backend from within an application.
-->
虽然 Kubernetes 没有为集群级日志记录提供原生的解决方案,但您可以考虑几种常见的方法。以下是一些选项:
* 使用在每个节点上运行的节点级日志记录代理。
* 在应用程序的 pod 中,包含专门记录日志的 sidecar 容器。
* 将日志直接从应用程序中推送到日志记录后端。
<!--
### Using a node logging agent
![Using a node level logging agent](/images/docs/user-guide/logging/logging-with-node-agent.png)
-->
### 使用节点级日志代理
![使用节点日志记录代理](/images/docs/user-guide/logging/logging-with-node-agent.png)
<!--
You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
-->
您可以通过在每个节点上使用 _节点级的日志记录代理_ 来实现群集级日志记录。日志记录代理是一种用于暴露日志或将日志推送到后端的专用工具。通常,日志记录代理程序是一个容器,它可以访问包含该节点上所有应用程序容器的日志文件的目录。
<!--
Because the logging agent must run on every node, it's common to implement it as either a DaemonSet replica, a manifest pod, or a dedicated native process on the node. However the latter two approaches are deprecated and highly discouraged.
-->
由于日志记录代理必须在每个节点上运行,通常可以将其实现为 DaemonSet 副本、manifest Pod 或节点上专用的本机进程。然而,后两种方式已被弃用,非常不推荐使用。
<!--
Using a node-level logging agent is the most common and encouraged approach for a Kubernetes cluster, because it creates only one agent per node, and it doesn't require any changes to the applications running on the node. However, node-level logging _only works for applications' standard output and standard error_.
-->
对于 Kubernetes 集群来说,使用节点级的日志代理是最常用和被推荐的方式,因为在每个节点上仅创建一个代理,并且不需要对节点上的应用做修改。
但是,节点级的日志 _仅适用于应用程序的标准输出和标准错误输出_
<!--
Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/user-guide/logging/stackdriver) for use with Google Cloud Platform, and [Elasticsearch](/docs/user-guide/logging/elasticsearch). You can find more information and instructions in the dedicated documents. Both use [fluentd](http://www.fluentd.org/) with custom configuration as an agent on the node.
-->
Kubernetes 并不指定日志代理,但是有两个可选的日志代理与 Kubernetes 发行版一起打包发布:
与 Google Cloud Platform 一起使用的 [Stackdriver 日志](/docs/user-guide/logging/stackdriver),以及 [Elasticsearch](/docs/user-guide/logging/elasticsearch)。
您可以在专门的文档中找到更多的信息和说明。两者都在节点上使用自定义配置的 [fluentd](http://www.fluentd.org/) 作为代理。
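如果集群中部署了这类节点级代理(通常以 DaemonSet 方式运行),可以用类似下面的命令确认它在每个节点上都有 Pod 在运行(示意;命名空间与名称取决于所用代理,此处的 `fluentd` 仅为假设):

```shell
# 查看 kube-system 命名空间中以 DaemonSet 方式运行的组件
kubectl get daemonset -n kube-system

# 确认日志代理的 Pod 分布在各个节点上(名称仅为示例)
kubectl get pods -n kube-system -o wide | grep fluentd
```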
<!--
### Using a sidecar container with the logging agent
-->
### 使用 sidecar 容器和日志代理
<!--
You can use a sidecar container in one of the following ways:
-->
您可以通过以下方式之一使用 sidecar 容器:
<!--
* The sidecar container streams application logs to its own `stdout`.
* The sidecar container runs a logging agent, which is configured to pick up logs from an application container.
-->
* sidecar 容器将应用程序日志传送到自己的标准输出。
* sidecar 容器运行一个日志代理,配置该日志代理以便从应用容器收集日志。
<!--
#### Streaming sidecar container
-->
#### 传输数据流的 sidecar 容器
<!--
![Sidecar container with a streaming container](/images/docs/user-guide/logging/logging-with-streaming-sidecar.png)
By having your sidecar containers stream to their own `stdout` and `stderr`
streams, you can take advantage of the kubelet and the logging agent that
already run on each node. The sidecar containers read logs from a file, a socket,
or the journald. Each individual sidecar container prints log to its own `stdout`
or `stderr` stream.
-->
利用 sidecar 容器向自己的 `stdout` 和 `stderr` 传输流的方式,您就可以利用每个节点上已经运行的 kubelet 和日志代理来处理日志。
sidecar 容器从文件、socket 或 journald 读取日志,再将日志打印到自己的 `stdout` 或 `stderr` 流。
<!--
This approach allows you to separate several log streams from different
parts of your application, some of which can lack support
for writing to `stdout` or `stderr`. The logic behind redirecting logs
is minimal, so it's hardly a significant overhead. Additionally, because
`stdout` and `stderr` are handled by the kubelet, you can use built-in tools
like `kubectl logs`.
-->
这种方法允许您将日志流从应用程序的不同部分分离开,其中一些可能缺乏对写入 `stdout``stderr` 的支持。重定向日志背后的逻辑是最小的,因此它的开销几乎可以忽略不计。
另外,因为 `stdout`、`stderr` 由 kubelet 处理,你可以使用内置的工具 `kubectl logs`
<!--
Consider the following example. A pod runs a single container, and the container
writes to two different log files, using two different formats. Here's a
configuration file for the Pod:
-->
考虑下面的例子。一个 pod 中运行单个容器,该容器使用两种不同的格式向两个不同的日志文件写入数据。下面是这个 pod 的配置文件:
{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
<!--
It would be a mess to have log entries of different formats in the same log
stream, even if you managed to redirect both components to the `stdout` stream of
the container. Instead, you could introduce two sidecar containers. Each sidecar
container could tail a particular log file from a shared volume and then redirect
the logs to its own `stdout` stream.
-->
在同一个日志流中有两种不同格式的日志条目,这有点混乱,即使您试图重定向它们到容器的 `stdout` 流。
取而代之的是,您可以引入两个 sidecar 容器。
每一个 sidecar 容器可以从共享卷跟踪特定的日志文件,并重定向文件内容到各自的 `stdout` 流。
<!--
Here's a configuration file for a pod that has two sidecar containers:
-->
这是运行两个 sidecar 容器的 pod 文件。
{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}}
<!--
Now when you run this pod, you can access each log stream separately by
running the following commands:
-->
现在当您运行这个 pod 时,您可以分别地访问每一个日志流,运行如下命令:
```shell
kubectl logs counter count-log-1
```
```
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
...
```
```shell
kubectl logs counter count-log-2
```
```
Mon Jan 1 00:00:00 UTC 2001 INFO 0
Mon Jan 1 00:00:01 UTC 2001 INFO 1
Mon Jan 1 00:00:02 UTC 2001 INFO 2
...
```
<!--
The node-level agent installed in your cluster picks up those log streams
automatically without any further configuration. If you like, you can configure
the agent to parse log lines depending on the source container.
-->
集群中安装的节点级代理会自动获取这些日志流,而无需进一步配置。如果您愿意,您可以配置代理程序来解析源容器的日志行。
<!--
Note, that despite low CPU and memory usage (order of couple of millicores
for cpu and order of several megabytes for memory), writing logs to a file and
then streaming them to `stdout` can double disk usage. If you have
an application that writes to a single file, it's generally better to set
`/dev/stdout` as destination rather than implementing the streaming sidecar
container approach.
-->
注意,尽管 CPU 和内存用量都很低CPU 为几个 millicore 的量级,内存为几 MB 的量级),
向文件写日志然后再输出到 `stdout` 流仍然会使磁盘用量翻倍。
如果您的应用向单一文件写日志,通常最好设置 `/dev/stdout` 作为目标路径,而不是使用流式的 sidecar 容器方式。
<!--
Sidecar containers can also be used to rotate log files that cannot be
rotated by the application itself. An example
of this approach is a small container running logrotate periodically.
However, it's recommended to use `stdout` and `stderr` directly and leave rotation
and retention policies to the kubelet.
-->
应用本身如果不具备轮转日志文件的功能,可以通过 sidecar 容器实现。
该方式的 [例子](https://github.com/samsung-cnct/logrotate) 是运行一个定期轮转日志的容器。
然而,还是推荐直接使用 `stdout``stderr`,将日志的轮转和保留策略交给 kubelet。
<!--
#### Sidecar container with a logging agent
![Sidecar container with a logging agent](/images/docs/user-guide/logging/logging-with-sidecar-agent.png)
-->
#### 具有日志代理功能的 sidecar 容器
![日志记录代理功能的 sidecar 容器](/images/docs/user-guide/logging/logging-with-sidecar-agent.png)
<!--
If the node-level logging agent is not flexible enough for your situation, you
can create a sidecar container with a separate logging agent that you have
configured specifically to run with your application.
-->
如果节点级日志记录代理程序对于你的场景来说不够灵活,您可以创建一个带有单独日志记录代理程序的 sidecar 容器,将代理程序专门配置为与您的应用程序一起运行。
<!--
{{< note >}}
Using a logging agent in a sidecar container can lead
to significant resource consumption. Moreover, you won't be able to access
those logs using `kubectl logs` command, because they are not controlled
by the kubelet.
{{< /note >}}
-->
{{< note >}}
在 sidecar 容器中使用日志代理会导致严重的资源损耗。此外,您不能使用 `kubectl logs` 命令访问日志,因为日志并没有被 kubelet 管理。
{{< /note >}}
<!--
As an example, you could use [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/),
which uses fluentd as a logging agent. Here are two configuration files that
you can use to implement this approach. The first file contains
a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd.
-->
例如,您可以使用 [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/),它使用 fluentd 作为日志记录代理。
以下是两个可用于实现此方法的配置文件。
第一个文件包含用来配置 fluentd 的 [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/)。
{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
<!--
{{< note >}}
The configuration of fluentd is beyond the scope of this article. For
information about configuring fluentd, see the
[official fluentd documentation](http://docs.fluentd.org/).
{{< /note >}}
-->
{{< note >}}
配置 fluentd 超出了本文的范围。要进一步了解如何配置 fluentd请参考 [fluentd 官方文档](http://docs.fluentd.org/)。
{{< /note >}}
<!--
The second file describes a pod that has a sidecar container running fluentd.
The pod mounts a volume where fluentd can pick up its configuration data.
-->
第二个文件描述了运行 fluentd sidecar 容器的 pod。fluentd 通过 pod 的挂载卷获取它的配置数据。
{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
<!--
After some time you can find log messages in the Stackdriver interface.
-->
一段时间后,您可以在 Stackdriver 界面看到日志消息。
<!--
Remember, that this is just an example and you can actually replace fluentd
with any logging agent, reading from any source inside an application
container.
-->
记住,这只是一个例子,事实上您可以用任何日志代理替换 fluentd并从应用容器内的任何来源读取日志。
<!--
### Exposing logs directly from the application
![Exposing logs directly from the application](/images/docs/user-guide/logging/logging-from-application.png)
-->
### 从应用中直接暴露日志
![直接从应用程序暴露日志](/images/docs/user-guide/logging/logging-from-application.png)
<!--
You can implement cluster-level logging by exposing or pushing logs directly from
every application; however, the implementation for such a logging mechanism
is outside the scope of Kubernetes.
-->
通过暴露或推送每个应用的日志,您可以实现集群级日志记录;然而,这种日志记录机制的实现已超出 Kubernetes 的范围。
{{% /capture %}}


@ -0,0 +1,666 @@
---
title: 管理资源
content_template: templates/concept
weight: 40
---
{{% capture overview %}}
<!--
You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features that we will discuss in more depth are [configuration files](/docs/concepts/configuration/overview/) and [labels](/docs/concepts/overview/working-with-objects/labels/).
-->
您已经部署了应用并通过服务暴露它。然后呢Kubernetes 提供了一些工具来帮助您管理应用的部署,包括扩缩容和更新。我们将更深入讨论的特性包括[配置文件](/docs/concepts/configuration/overview/)和[标签](/docs/concepts/overview/working-with-objects/labels/)。
{{% /capture %}}
{{% capture body %}}
<!--
## Organizing resource configurations
Many applications require multiple resources to be created, such as a Deployment and a Service. Management of multiple resources can be simplified by grouping them together in the same file (separated by `---` in YAML). For example:
-->
## 组织资源配置
许多应用需要创建多个资源,例如 Deployment 和 Service。可以通过将多个资源组合在同一个文件中在 YAML 中以 `---` 分隔)来简化对它们的管理。例如:
{{< codenew file="application/nginx-app.yaml" >}}
<!--
Multiple resources can be created the same way as a single resource:
-->
可以用与创建单个资源相同的方式来创建多个资源:
```shell
kubectl apply -f https://k8s.io/examples/application/nginx-app.yaml
```
```shell
service/my-nginx-svc created
deployment.apps/my-nginx created
```
<!--
The resources will be created in the order they appear in the file. Therefore, it's best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the controller(s), such as Deployment.
-->
资源将按照它们在文件中的顺序创建。因此,最好先指定服务,这样在控制器(例如 Deployment创建 Pod 时能够确保调度器可以将与服务关联的多个 Pod 分散到不同节点。
<!--
`kubectl apply` also accepts multiple `-f` arguments:
-->
`kubectl apply` 也接受多个 `-f` 参数:
```shell
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
```
<!--
And a directory can be specified rather than or in addition to individual files:
-->
还可以指定目录路径,用来替代或补充多个单独的文件:
```shell
kubectl apply -f https://k8s.io/examples/application/nginx/
```
<!--
`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, then you can then simply deploy all of the components of your stack en masse.
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github:
-->
`kubectl` 将读取任何后缀为 `.yaml`、`.yml` 或者 `.json` 的文件。
建议的做法是,将同一个微服务或同一应用层相关的资源放到同一个文件中,将同一个应用相关的所有文件按组存放到同一个目录中。如果应用的各层使用 DNS 相互绑定,那么您可以简单地将堆栈的所有组件一起部署。
还可以使用 URL 作为配置源,便于直接使用已经提交到 GitHub 上的配置文件进行部署:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml
```
```shell
deployment.apps/my-nginx created
```
<!--
## Bulk operations in kubectl
Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created:
-->
## kubectl 中的批量操作
资源创建并不是 `kubectl` 可以批量执行的唯一操作。`kubectl` 还可以从配置文件中提取资源名,以便执行其他操作,特别是删除您之前创建的资源:
```shell
kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml
```
```shell
deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
<!--
In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax:
-->
在仅有两种资源的情况下,可以使用"资源类型/资源名"的语法在命令行中同时指定这两个资源:
```shell
kubectl delete deployments/my-nginx services/my-nginx-svc
```
<!--
For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using `-l` or `--selector`, to filter resources by their labels:
-->
对于资源数目较大的情况,您会发现使用 `-l``--selector` 指定的筛选器(标签查询)能很容易地根据标签筛选资源:
```shell
kubectl delete deployment,services -l app=nginx
```
```shell
deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
<!--
Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:
-->
由于 `kubectl` 输出资源名称的语法与其所接受的资源名称语法相同,所以很容易使用 `$()``xargs` 进行链式操作:
```shell
kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service)
```
```shell
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx-svc LoadBalancer 10.0.0.208 <pending> 80/TCP 0s
```
<!--
With the above commands, we first create resources under `examples/application/nginx/` and print the resources created with `-o name` output format
(print each resource as resource/name). Then we `grep` only the "service", and then print it with `kubectl get`.
-->
上面的命令中,我们首先使用 `examples/application/nginx/` 下的配置文件创建资源,并使用 `-o name` 的输出格式(以"资源/名称"的形式打印每个资源)打印所创建的资源。然后,我们通过 `grep` 仅筛选出 "service",最后再用 `kubectl get` 打印这些资源。
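作为对照,下面给出一个使用 `xargs` 实现类似效果的示意(假设上述资源已经创建,仅供参考):

```shell
# 列出目录中定义的资源名称,筛选出 service再逐个交给 kubectl get
kubectl get -f docs/concepts/cluster-administration/nginx/ -o name | grep service | xargs -n 1 kubectl get
```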
<!--
If you happen to organize your resources across several subdirectories within a particular directory, you can recursively perform the operations on the subdirectories also, by specifying `--recursive` or `-R` alongside the `--filename,-f` flag.
-->
如果您碰巧在某个目录下的多个子目录中组织资源,那么也可以递归地在所有子目录上执行操作,方法是在 `--filename,-f` 后面指定 `--recursive` 或者 `-R`
<!--
For instance, assume there is a directory `project/k8s/development` that holds all of the manifests needed for the development environment, organized by resource type:
-->
例如,假设有一个目录路径为 `project/k8s/development`,它保存开发环境所需的所有清单,并按资源类型组织:
```
project/k8s/development
├── configmap
│   └── my-configmap.yaml
├── deployment
│   └── my-deployment.yaml
└── pvc
    └── my-pvc.yaml
```
<!--
By default, performing a bulk operation on `project/k8s/development` will stop at the first level of the directory, not processing any subdirectories. If we had tried to create the resources in this directory using the following command, we would have encountered an error:
-->
默认情况下,对 `project/k8s/development` 执行的批量操作将停止在目录的第一级,而不是处理所有子目录。
如果我们试图使用以下命令在此目录中创建资源,则会遇到一个错误:
```shell
kubectl apply -f project/k8s/development
```
```shell
error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)
```
<!--
Instead, specify the `--recursive` or `-R` flag with the `--filename,-f` flag as such:
-->
正确的做法是,在 `--filename,-f` 参数后面加上 `--recursive` 或者 `-R`
```shell
kubectl apply -f project/k8s/development --recursive
```
```shell
configmap/my-config created
deployment.apps/my-deployment created
persistentvolumeclaim/my-pvc created
```
<!--
The `--recursive` flag works with any operation that accepts the `--filename,-f` flag such as: `kubectl {create,get,delete,describe,rollout} etc.`
The `--recursive` flag also works when multiple `-f` arguments are provided:
-->
`--recursive` 可以用于接受 `--filename,-f` 参数的任何操作,例如:`kubectl {create,get,delete,describe,rollout}` 等。
有多个 `-f` 参数出现的时候,`--recursive` 参数也能正常工作:
```shell
kubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive
```
```shell
namespace/development created
namespace/staging created
configmap/my-config created
deployment.apps/my-deployment created
persistentvolumeclaim/my-pvc created
```
<!--
If you're interested in learning more about `kubectl`, go ahead and read [kubectl Overview](/docs/reference/kubectl/overview/).
-->
如果您有兴趣学习更多关于 `kubectl` 的内容,请阅读 [kubectl 概述](/docs/reference/kubectl/overview/)。
<!--
## Using labels effectively
The examples we've used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another.
-->
## 有效地使用标签
到目前为止我们使用的示例中的资源最多使用了一个标签。在许多情况下,应使用多个标签来区分集合。
<!--
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
-->
例如,不同的应用可能会为 `app` 标签设置不同的值。
但是,类似 [guestbook 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) 这样的多层应用,还需要区分每一层。前端可以带以下标签:
```yaml
labels:
  app: guestbook
  tier: frontend
```
<!--
while the Redis master and slave would have different `tier` labels, and perhaps even an additional `role` label:
-->
Redis 的主节点和从节点会有不同的 `tier` 标签,甚至还有一个额外的 `role` 标签:
```yaml
labels:
  app: guestbook
  tier: backend
  role: master
```
<!--
and
-->
以及
```yaml
labels:
  app: guestbook
  tier: backend
  role: slave
```
<!--
The labels allow us to slice and dice our resources along any dimension specified by a label:
-->
这些标签允许我们按照标签所指定的任何维度对资源进行切分:
```shell
kubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml
kubectl get pods -Lapp -Ltier -Lrole
```
```shell
NAME READY STATUS RESTARTS AGE APP TIER ROLE
guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>
guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>
guestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>
guestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master
guestbook-redis-slave-2q2yf 1/1 Running 0 1m guestbook backend slave
guestbook-redis-slave-qgazl 1/1 Running 0 1m guestbook backend slave
my-nginx-divi2 1/1 Running 0 29m nginx <none> <none>
my-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>
```
```shell
kubectl get pods -lapp=guestbook,role=slave
```
```shell
NAME READY STATUS RESTARTS AGE
guestbook-redis-slave-2q2yf 1/1 Running 0 3m
guestbook-redis-slave-qgazl 1/1 Running 0 3m
```
<!--
## Canary deployments
-->
## 金丝雀部署
<!--
Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. It is common practice to deploy a *canary* of a new application release (specified via image tag in the pod template) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out.
-->
另一个需要多标签的场景是用来区分同一组件的不同版本或者不同配置的多个部署。常见的做法是将新版本应用的*金丝雀*发布(在 pod 模板中通过镜像标签指定)与上一个版本并排部署,这样,新版本在完全发布之前就可以接收实时的生产流量。
<!--
For instance, you can use a `track` label to differentiate different releases.
The primary, stable release would have a `track` label with value as `stable`:
-->
例如,您可以使用 `track` 标签来区分不同的版本。
主要稳定的发行版将有一个 `track` 标签,其值为 `stable`
```yaml
name: frontend
replicas: 3
...
labels:
  app: guestbook
  tier: frontend
  track: stable
...
image: gb-frontend:v3
```
<!--
and then you can create a new release of the guestbook frontend that carries the `track` label with different value (i.e. `canary`), so that two sets of pods would not overlap:
-->
然后,您可以创建 guestbook 前端的新版本,让这些版本的 `track` 标签带有不同的值(即 `canary`),以便两组 pod 不会重叠:
```yaml
name: frontend-canary
replicas: 1
...
labels:
  app: guestbook
  tier: frontend
  track: canary
...
image: gb-frontend:v4
```
<!--
The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the `track` label), so that the traffic will be redirected to both applications:
-->
前端服务通过选择标签的公共子集(即忽略 `track` 标签)来覆盖两组副本,以便流量可以转发到两个应用:
```yaml
selector:
  app: guestbook
  tier: frontend
```
<!--
You can tweak the number of replicas of the stable and canary releases to determine the ratio of each release that will receive live production traffic (in this case, 3:1).
Once you're confident, you can update the stable track to the new application release and remove the canary one.
-->
您可以调整 `stable``canary` 版本的副本数量,以确定各个版本接收实时生产流量的比例(在本例中为 3:1。一旦有信心您就可以把 stable 轨道更新为新的应用版本,并删除金丝雀版本。
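下面是一个推广金丝雀版本的操作示意(假设上述两个片段分别对应名为 `frontend``frontend-canary` 的 Deployment仅供参考

```shell
# 将 stable 轨道的 Deployment 更新为新版本镜像(通配符 * 匹配所有容器)
kubectl set image deployment/frontend "*=gb-frontend:v4"
# 确认新版本运行正常后,删除金丝雀 Deployment
kubectl delete deployment/frontend-canary
```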
<!--
For a more concrete example, check the [tutorial of deploying Ghost](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary).
-->
想要了解更具体的示例,请查看 [Ghost 部署教程](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary)。
<!--
## Updating labels
Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`.
For example, if you want to label all your nginx pods as frontend tier, simply run:
-->
## 更新标签
有时,现有的 pod 和其它资源需要在创建新资源之前重新标记。这可以用 `kubectl label` 完成。
例如,如果想要将所有 nginx pod 标记为前端层,只需运行:
```shell
kubectl label pods -l app=nginx tier=fe
```
```shell
pod/my-nginx-2035384211-j5fhi labeled
pod/my-nginx-2035384211-u2c7e labeled
pod/my-nginx-2035384211-u3t6x labeled
```
<!--
This first filters all pods with the label "app=nginx", and then labels them with the "tier=fe".
To see the pods you just labeled, run:
-->
首先用标签 "app=nginx" 过滤所有的 pod然后用 "tier=fe" 标记它们。想要查看您刚才标记的 pod请运行
```shell
kubectl get pods -l app=nginx -L tier
```
```shell
NAME READY STATUS RESTARTS AGE TIER
my-nginx-2035384211-j5fhi 1/1 Running 0 23m fe
my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe
my-nginx-2035384211-u3t6x 1/1 Running 0 23m fe
```
<!--
This outputs all "app=nginx" pods, with an additional label column of pods' tier (specified with `-L` or `--label-columns`).
For more information, please see [labels](/docs/concepts/overview/working-with-objects/labels/) and [kubectl label](/docs/reference/generated/kubectl/kubectl-commands/#label).
-->
这将输出所有带有 "app=nginx" 标签的 pod并额外显示一个 pod 的 tier 标签列(通过参数 `-L` 或者 `--label-columns` 指定)。
想要了解更多信息,请参考 [标签](/docs/concepts/overview/working-with-objects/labels/) 和 [kubectl label](/docs/reference/generated/kubectl/kubectl-commands/#label)。
<!--
## Updating annotations
Sometimes you would want to attach annotations to resources. Annotations are arbitrary non-identifying metadata for retrieval by API clients such as tools, libraries, etc. This can be done with `kubectl annotate`. For example:
-->
## 更新注解
有时,您可能希望将注解附加到资源中。注解是 API 客户端(如工具、库等)用于检索的任意非标识元数据。这可以通过 `kubectl annotate` 来完成。例如:
```shell
kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
kubectl get pods my-nginx-v4-9gw19 -o yaml
```
```shell
apiVersion: v1
kind: Pod
metadata:
  annotations:
    description: my frontend running nginx
...
```
<!--
For more information, please see [annotations](/docs/concepts/overview/working-with-objects/annotations/) and [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands/#annotate) document.
-->
想要了解更多信息,请参考 [注解](/docs/concepts/overview/working-with-objects/annotations/) 和 [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands/#annotate) 文档。
<!--
## Scaling your application
When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to decrease the number of nginx replicas from 3 to 1, do:
-->
## 扩缩您的应用
当应用上的负载增长或收缩时,使用 `kubectl` 能够轻松实现扩缩容。例如,要将 nginx 副本的数量从 3 减少到 1请执行以下操作
```shell
kubectl scale deployment/my-nginx --replicas=1
```
```shell
deployment.extensions/my-nginx scaled
```
<!--
Now you only have one pod managed by the deployment.
-->
现在,您的 deployment 管理的 pod 只有一个了。
```shell
kubectl get pods -l app=nginx
```
```shell
NAME READY STATUS RESTARTS AGE
my-nginx-2035384211-j5fhi 1/1 Running 0 30m
```
<!--
To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:
-->
想要让系统根据需要自动选择 nginx 副本的数量(范围在 1 到 3 之间),请执行以下操作:
```shell
kubectl autoscale deployment/my-nginx --min=1 --max=3
```
```shell
horizontalpodautoscaler.autoscaling/my-nginx autoscaled
```
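如果想查看刚刚创建的自动伸缩器对象(`kubectl autoscale` 默认使用与 Deployment 相同的名称),可以运行下面的命令(示意):

```shell
kubectl get hpa my-nginx
```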
<!--
Now your nginx replicas will be scaled up and down as needed, automatically.
For more information, please see [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale), [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) and [horizontal pod autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) document.
-->
现在,您的 nginx 副本将根据需要自动地增加或者减少。
想要了解更多信息,请参考 [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale), [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) 和 [pod 水平自动伸缩](/docs/tasks/run-application/horizontal-pod-autoscale/) 文档。
<!--
## In-place updates of resources
Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created.
-->
## 就地更新资源
有时,有必要对您所创建的资源进行小范围、非破坏性的更新。
### kubectl apply
<!--
It is suggested to maintain a set of configuration files in source control (see [configuration as code](http://martinfowler.com/bliki/InfrastructureAsCode.html)),
so that they can be maintained and versioned along with the code for the resources they configure.
Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) to push your configuration changes to the cluster.
-->
建议在源代码管理中维护一组配置文件(参见[配置即代码](http://martinfowler.com/bliki/InfrastructureAsCode.html)),这样,它们就可以和应用代码一样进行维护和版本管理。然后,您可以用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) 将配置变更应用到集群中。
<!--
This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified.
-->
这个命令会把推送的配置版本与以前的版本进行比较,并应用您所做的更改,而不会覆盖对您未指定的属性所做的任何自动变更。
```shell
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
deployment.apps/my-nginx configured
```
<!--
Note that `kubectl apply` attaches an annotation to the resource in order to determine the changes to the configuration since the previous invocation. When it's invoked, `kubectl apply` does a three-way diff between the previous configuration, the provided input and the current configuration of the resource, in order to determine how to modify the resource.
-->
注意,`kubectl apply` 将为资源增加一个额外的注解,以确定自上次调用以来对配置的更改。当调用它时,`kubectl apply` 会在以前的配置、提供的输入和资源的当前配置之间找出三方差异,以确定如何修改资源。
<!--
Currently, resources are created without this annotation, so the first invocation of `kubectl apply` will fall back to a two-way diff between the provided input and the current configuration of the resource. During this first invocation, it cannot detect the deletion of properties set when the resource was created. For this reason, it will not remove them.
-->
目前,资源在创建时并不会带有这个注解,所以,第一次调用 `kubectl apply` 时将回退为在提供的输入和资源的当前配置之间做双向比较。在这次首次调用期间,它无法检测资源创建时所设置的属性是否已被删除,因此不会删除这些属性。
<!--
All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as `kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to `kubectl apply` to detect and perform deletions using a three-way diff.
-->
所有后续调用 `kubectl apply` 以及其它修改配置的命令,如 `kubectl replace``kubectl edit`,都将更新注解,并允许随后调用的 `kubectl apply` 使用三方差异进行检查和执行删除。
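`kubectl apply` 用来记录上一次所应用配置的注解是 `kubectl.kubernetes.io/last-applied-configuration`。如果想查看其内容,可以运行下面的命令(示意):

```shell
kubectl apply view-last-applied deployment/my-nginx
```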
<!--
{{< note >}}
To use apply, always create resource initially with either `kubectl apply` or `kubectl create --save-config`.
{{< /note >}}
-->
{{< note >}}
想要使用 apply请始终使用 `kubectl apply``kubectl create --save-config` 创建资源。
{{< /note >}}
### kubectl edit
<!--
Alternatively, you may also update resources with `kubectl edit`:
-->
或者,您也可以使用 `kubectl edit` 更新资源:
```shell
kubectl edit deployment/my-nginx
```
<!--
This is equivalent to first `get` the resource, edit it in text editor, and then `apply` the resource with the updated version:
-->
这相当于首先 `get` 资源,在文本编辑器中编辑它,然后用更新的版本 `apply` 资源:
```shell
kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml
vi /tmp/nginx.yaml
# do some edit, and then save the file
kubectl apply -f /tmp/nginx.yaml
deployment.apps/my-nginx configured
rm /tmp/nginx.yaml
```
<!--
This allows you to do more significant changes more easily. Note that you can specify the editor with your `EDITOR` or `KUBE_EDITOR` environment variables.
For more information, please see [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands/#edit) document.
-->
这使您可以更容易地进行更大幅度的更改。请注意,您可以使用 `EDITOR``KUBE_EDITOR` 环境变量来指定编辑器。
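例如,下面的示意临时将编辑器指定为 nano假设系统中已安装 nano

```shell
KUBE_EDITOR="nano" kubectl edit deployment/my-nginx
```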
想要了解更多信息,请参考 [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands/#edit) 文档。
### kubectl patch
<!--
You can use `kubectl patch` to update API objects in place. This command supports JSON patch,
JSON merge patch, and strategic merge patch. See
[Update API Objects in Place Using kubectl patch](/docs/tasks/run-application/update-api-object-kubectl-patch/)
and
[kubectl patch](/docs/reference/generated/kubectl/kubectl-commands/#patch).
-->
您可以使用 `kubectl patch` 来就地更新 API 对象。此命令支持 JSON patch、JSON merge patch 以及 strategic merge patch。请参考
[使用 kubectl patch 更新 API 对象](/docs/tasks/run-application/update-api-object-kubectl-patch/)
和 [kubectl patch](/docs/reference/generated/kubectl/kubectl-commands/#patch)。
<!--
## Disruptive updates
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
-->
## 破坏性的更新
在某些情况下,您可能需要更新某些初始化后无法更新的资源字段,或者您可能只想立即进行递归更改,例如修复 Deployment 创建的不正常的 Pod。若要更改这些字段请使用 `replace --force`,它将删除并重新创建资源。在这种情况下,您可以简单地修改原始配置文件:
```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
```
```shell
deployment.apps/my-nginx deleted
deployment.apps/my-nginx replaced
```
<!--
## Updating your application without a service outage
-->
## 在不中断服务的情况下更新应用
<!--
At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.
-->
在某些时候,您最终需要更新已部署的应用,通常都是通过指定新的镜像或镜像标签,如上面的金丝雀发布的场景中所示。`kubectl` 支持几种更新操作,每种更新操作都适用于不同的场景。
<!--
We'll guide you through how to create and update applications with Deployments.
-->
我们将指导您如何通过 Deployment 创建和更新应用。
<!--
Let's say you were running version 1.7.9 of nginx:
-->
假设您正运行的是 1.7.9 版本的 nginx
```shell
kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
```
```shell
deployment.apps/my-nginx created
```
<!--
To update to version 1.9.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`, with the kubectl commands we learned above.
-->
要更新到 1.9.1 版本,只需使用我们前面学到的 kubectl 命令将 `.spec.template.spec.containers[0].image``nginx:1.7.9` 修改为 `nginx:1.9.1`
```shell
kubectl edit deployment/my-nginx
```
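如果不想打开编辑器,也可以使用 `kubectl set image` 完成同样的修改(示意,这里用通配符 `*` 匹配所有容器):

```shell
kubectl set image deployment/my-nginx "*=nginx:1.9.1"
```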
<!--
That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more details about it, visit [Deployment page](/docs/concepts/workloads/controllers/deployment/).
-->
没错就是这样Deployment 将在后台逐步更新已经部署的 nginx 应用。它确保在更新过程中,只有一定数量的旧副本被开闭,并且只有一定基于所需 pod 数量的新副本被创建。想要了解更多细节,请参考 [Deployment](/docs/concepts/workloads/controllers/deployment/)。
{{% /capture %}}
{{% capture whatsnext %}}
<!--
- [Learn about how to use `kubectl` for application introspection and debugging.](/docs/tasks/debug-application-cluster/debug-application-introspection/)
- [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/)
-->
- [学习怎么样使用 `kubectl` 观察和调试应用](/docs/tasks/debug-application-cluster/debug-application-introspection/)
- [配置最佳实践和技巧](/docs/concepts/configuration/overview/)
{{% /capture %}}

View File

@ -0,0 +1,25 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluentd.conf: |
    <source>
      type tail
      format none
      path /var/log/1.log
      pos_file /var/log/1.log.pos
      tag count.format1
    </source>

    <source>
      type tail
      format none
      path /var/log/2.log
      pos_file /var/log/2.log.pos
      tag count.format2
    </source>

    <match **>
      type google_cloud
    </match>

View File

@ -0,0 +1,39 @@
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-agent
    image: k8s.gcr.io/fluentd-gcp:1.30
    env:
    - name: FLUENTD_ARGS
      value: -c /etc/fluentd-config/fluentd.conf
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: config-volume
      mountPath: /etc/fluentd-config
  volumes:
  - name: varlog
    emptyDir: {}
  - name: config-volume
    configMap:
      name: fluentd-config

View File

@ -0,0 +1,38 @@
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-2
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}

View File

@ -0,0 +1,26 @@
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}

View File

@ -0,0 +1,34 @@
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

View File

@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']