---
title: Managing Compute Resources for Containers
cn-approvers:
- rootsongjc
cn-reviewers:
- shirdrn
---
{% capture overview %}
When you specify a [Pod](/docs/user-guide/pods), you can optionally specify how much CPU and memory (RAM) each Container needs. When Containers have resource requests specified, the scheduler can make better decisions about which nodes to place Pods on. And when Containers have their limits specified, contention for resources on a node can be handled in a specified manner. For more details about the difference between requests and limits, see [Resource QoS](https://git.k8s.io/community/contributors/design-proposals/resource-qos.md).
{% endcapture %}
{% capture body %}
## Resource types

*CPU* and *memory* are each a *resource type*. A resource type has a base unit. CPU is specified in units of cores, and memory is specified in units of bytes.

CPU and memory are collectively referred to as *compute resources*, or just *resources*. Compute resources are measurable quantities that can be requested, allocated, and consumed. They are distinct from [API resources](/docs/api/). API resources, such as Pods and [Services](/docs/user-guide/services), are objects that can be read and modified through the Kubernetes API server.
## Resource requests and limits of Pod and Container

Each Container of a Pod can specify one or more of the following:

* `spec.containers[].resources.limits.cpu`
* `spec.containers[].resources.limits.memory`
* `spec.containers[].resources.requests.cpu`
* `spec.containers[].resources.requests.memory`

Although requests and limits can only be specified on individual Containers, it is convenient to talk about Pod resource requests and limits. A *Pod resource request/limit* for a particular resource type is the sum of the resource requests/limits of that type for each Container in the Pod.
## Meaning of CPU

Limits and requests for CPU resources are measured in *cpu* units. One cpu, in Kubernetes, is equivalent to:

- 1 AWS vCPU
- 1 GCP Core
- 1 Azure vCore
- 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading

Fractional requests are allowed. A Container with `spec.containers[].resources.requests.cpu` of `0.5` is guaranteed half as much CPU as one that asks for 1 CPU. The expression `0.1` is equivalent to the expression `100m`, which can be read as "one hundred millicpu". Some people say "one hundred millicores", and this is understood to mean the same thing. A request with a decimal point, like `0.1`, is converted to `100m` by the API, and precision finer than `1m` is not allowed. For this reason, the form `100m` might be preferred.

CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.
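As a minimal illustrative fragment (not a complete manifest), both notations below request the same amount of CPU; the API normalizes the decimal form to `100m`:

```yaml
resources:
  requests:
    cpu: 100m      # preferred form
    # cpu: "0.1"   # equivalent; converted to 100m by the API
```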
## Meaning of memory

Limits and requests for `memory` are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value:
```shell
128974848, 129e6, 129M, 123Mi
```
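As a quick check of the suffix arithmetic in the list above (`Mi` is a power-of-two suffix, `M` a decimal one):

```shell
echo $((123 * 1024 * 1024))   # 123Mi = 128974848 bytes, exactly the plain integer above
echo $((129 * 1000 * 1000))   # 129M  = 129000000 bytes, roughly the same value
```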
Here's an example. The following Pod has two Containers. Each Container has a request of 0.25 cpu and 64MiB (2<sup>26</sup> bytes) of memory. Each Container has a limit of 0.5 cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128 MiB of memory, and a limit of 1 cpu and 256MiB of memory.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: frontend
spec:
containers:
- name: db
image: mysql
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
- name: wp
image: wordpress
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
```
## How Pods with resource requests are scheduled

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node. Note that even if actual memory or CPU resource usage on a node is very low, the scheduler still refuses to place a Pod on the node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.
## How Pods with resource limits are run

When the kubelet starts a Container of a Pod, it passes the CPU and memory limits to the container runtime.

When using Docker:

- The `spec.containers[].resources.requests.cpu` is converted to its core value, which is potentially fractional, and multiplied by 1024. The greater of this number or 2 is used as the value of the [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#/cpu-share-constraint) flag in the `docker run` command.
- The `spec.containers[].resources.limits.cpu` is converted to its millicore value, multiplied by 100000, and then divided by 1000. This number is used as the value of the [`--cpu-quota`](https://docs.docker.com/engine/reference/run/#/cpu-quota-constraint) flag in the `docker run` command. The `--cpu-period` flag is set to 100000, which represents the default 100ms period for measuring quota usage. The kubelet enforces cpu limits if it is started with the `--cpu-cfs-quota` flag set to true. As of Kubernetes version 1.2, this flag defaults to true.
- The `spec.containers[].resources.limits.memory` is converted to an integer, and used as the value of the [`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints) flag in the `docker run` command.

If a Container exceeds its memory limit, it might be terminated. If it is restartable, the kubelet will restart it, as with any other type of runtime failure.

If a Container exceeds its memory request, it is likely that its Pod will be evicted whenever the node runs out of memory.

A Container might or might not be allowed to exceed its CPU limit for extended periods of time. However, it will not be killed for excessive CPU usage.

To determine whether a Container cannot be scheduled or is being killed due to resource limits, see the [Troubleshooting](#troubleshooting) section.
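To make those conversion rules concrete, here is a sketch (not the output of an actual kubelet) of the flags that would result for the example `db` container above, with `requests.cpu: 250m`, `limits.cpu: 500m`, and `limits.memory: 128Mi`:

```shell
# --cpu-shares = max(0.25 * 1024, 2)   = 256
# --cpu-quota  = 500 * 100000 / 1000   = 50000   (out of each 100000µs period)
# --memory     = 128 * 2^20            = 134217728 bytes
docker run --cpu-shares=256 --cpu-quota=50000 --cpu-period=100000 --memory=134217728 mysql
```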
## Monitoring compute resource usage

The resource usage of a Pod is reported as part of the Pod status.

If [optional monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md) is configured for your cluster, then Pod resource usage can be retrieved from the monitoring system.
## Troubleshooting

### My Pods are pending with event message failedScheduling

If the scheduler cannot find any node where a Pod can fit, the Pod remains unscheduled until a place can be found. An event is produced each time the scheduler fails to find a place for the Pod, like this:
```shell
$ kubectl describe pod frontend | grep -A 3 Events
Events:
FirstSeen LastSeen Count From Subobject PathReason Message
36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
```
In the preceding example, the Pod named "frontend" fails to be scheduled due to insufficient CPU resource on the node. Similar error messages can also suggest failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod is pending with a message of this type, there are several things to try:

- Add more nodes to the cluster.
- Terminate unneeded Pods to make room for pending Pods.
- Check that the Pod is not larger than all the nodes. For example, if all the nodes have a capacity of `cpu: 1`, then a Pod with a request of `cpu: 1.1` will never be scheduled.

You can check node capacities and amounts allocated with the `kubectl describe nodes` command. For example:
```shell
$ kubectl describe nodes e2e-test-minion-group-4lw4
Name: e2e-test-minion-group-4lw4
[ ... lines removed for clarity ...]
Capacity:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 7679792Ki
pods: 110
Allocatable:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 1800m
memory: 7474992Ki
pods: 110
[ ... lines removed for clarity ...]
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)
kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)
kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)
kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)
kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%)
```
In the preceding output, you can see that if a Pod requests more than 1120m CPUs or 6.23Gi of memory, it will not fit on the node.

By looking at the `Pods` section, you can see which Pods are taking up space on the node.

The amount of resources available to Pods is less than the node capacity, because system daemons use a portion of the available resources. The `allocatable` field of [NodeStatus](/docs/resources-reference/{{page.version}}/#nodestatus-v1-core) gives the amount of resources that are available to Pods. For more information, see [Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node-allocatable.md).

The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured to limit the total amount of resources that can be consumed. If used in conjunction with namespaces, it can prevent one team from hogging all the resources.
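As a minimal sketch of that quota feature (the object name, namespace, and quantities below are hypothetical):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota   # hypothetical name
  namespace: team-a     # hypothetical namespace
spec:
  hard:
    requests.cpu: "10"      # total CPU all Pods in the namespace may request
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
```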
### My Container is terminated

Your Container might get terminated because it is resource-starved. To check whether a Container is being killed because it is hitting a resource limit, call `kubectl describe pod` on the Pod of interest:
```shell
[12:54:41] $ kubectl describe pod simmemleak-hra99
Name: simmemleak-hra99
Namespace: default
Image(s): saadali/simmemleak
Node: kubernetes-node-tf0f/10.240.216.66
Labels: name=simmemleak
Status: Running
Reason:
Message:
IP: 10.244.2.75
Replication Controllers: simmemleak (1/1 replicas created)
Containers:
simmemleak:
Image: saadali/simmemleak
Limits:
cpu: 100m
memory: 50Mi
State: Running
Started: Tue, 07 Jul 2015 12:54:41 -0700
Last Termination State: Terminated
Exit Code: 1
Started: Fri, 07 Jul 2015 12:54:30 -0700
Finished: Fri, 07 Jul 2015 12:54:33 -0700
Ready: False
Restart Count: 5
Conditions:
Type Status
Ready False
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
```
In the preceding example, the `Restart Count: 5` indicates that the `simmemleak` Container in the Pod was terminated and restarted five times.

You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status of previously terminated Containers:
```shell{% raw %}
[13:59:01] $ kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]{% endraw %}
```
You can see that the Container was terminated because of `reason:OOM Killed`, where `OOM` stands for Out Of Memory.
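The same field can also be read with a `jsonpath` query instead of a go-template, assuming the same pod name as above:

```shell
$ kubectl get pod simmemleak-60xbc -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
OOM Killed
```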
## Opaque integer resources (Alpha feature)

Kubernetes version 1.5 introduces Opaque integer resources. Opaque integer resources allow cluster operators to advertise new node-level resources that would be otherwise unknown to the system.

Users can consume these resources in Pod specs just like CPU and memory. The scheduler takes care of the resource accounting so that no more than the available amount is simultaneously allocated to Pods.

**Note:** Opaque integer resources are Alpha in Kubernetes version 1.5. Only resource accounting is implemented; node-level isolation is still under active development.

Opaque integer resources are resources that begin with the prefix `pod.alpha.kubernetes.io/opaque-int-resource-`. The API server restricts quantities of these resources to whole numbers. Examples of *valid* quantities are `3`, `3000m` and `3Ki`. Examples of *invalid* quantities are `0.5` and `1500m`.

There are two steps required to use opaque integer resources. First, the cluster operator must advertise a per-node opaque resource on one or more nodes. Second, users must request the opaque resource in Pods.

To advertise a new opaque integer resource, the cluster operator should submit a `PATCH` HTTP request to the API server to specify the available quantity in the `status.capacity` for a node in the cluster. After this operation, the node's `status.capacity` will include a new resource. The `status.allocatable` field is updated automatically with the new resource asynchronously by the kubelet. Note that because the scheduler uses the node `status.allocatable` value when evaluating Pod fitness, there may be a short delay between patching the node capacity with a new resource and the first Pod that requests the resource being able to be scheduled on that node.

**Example:**

Here is an HTTP request that advertises five "foo" resources on node `k8s-node-1` whose master is `k8s-master`.
```http
PATCH /api/v1/nodes/k8s-node-1/status HTTP/1.1
Accept: application/json
Content-Type: application/json-patch+json
Host: k8s-master:8080
[
{
"op": "add",
"path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo",
"value": "5"
}
]
```
```shell
curl --header "Content-Type: application/json-patch+json" \
--request PATCH \
--data '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo", "value": "5"}]' \
http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
```
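The request above targets `k8s-master:8080` directly. On a cluster that requires authentication, one alternative sketch is to let `kubectl proxy` handle the credentials and send the same patch through the local proxy (port 8001 is the kubectl proxy default):

```shell
# Start a local authenticated proxy to the apiserver, then patch through it
kubectl proxy &
curl --header "Content-Type: application/json-patch+json" \
--request PATCH \
--data '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo", "value": "5"}]' \
http://localhost:8001/api/v1/nodes/k8s-node-1/status
```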
**Note**: In the preceding request, `~1` is the encoding for the character `/` in the patch path. The operation path value in JSON-Patch is interpreted as a JSON-Pointer. For more details, see [IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).

To consume an opaque resource in a Pod, include the name of the opaque resource as a key in the `spec.containers[].resources.requests` map. The Pod is scheduled only if all of the resource requests are satisfied, including cpu, memory and any opaque resources. The Pod will remain in the `PENDING` state as long as the resource request cannot be met by any node.

**Example:**

The Pod below requests 2 cpus and 1 "foo" (an opaque resource).
```yaml
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-container
image: myimage
resources:
requests:
cpu: 2
pod.alpha.kubernetes.io/opaque-int-resource-foo: 1
```
## Planned Improvements

Kubernetes version 1.5 only allows resource quantities to be specified on a Container. It is planned to improve accounting for resources that are shared by all Containers in a Pod, such as [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir).

Kubernetes version 1.5 only supports Container requests and limits for CPU and memory. It is planned to add new resource types, including a node disk space resource, and a framework for adding custom [resource types](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/design-proposals/resources.md).

Kubernetes supports overcommitment of resources by supporting multiple levels of [Quality of Service](http://issue.k8s.io/168).

In Kubernetes version 1.5, one unit of CPU means different things on different cloud providers, and on different machine types within the same cloud providers. For example, on AWS, the capacity of a node is reported in [ECUs](http://aws.amazon.com/ec2/faqs/), while in GCE it is reported in logical cores. We plan to revise the definition of the cpu resource to allow for more consistency across providers and platforms.
{% endcapture %}
{% capture whatsnext %}
* Get hands-on experience [assigning CPU and RAM resources to a container](/docs/tasks/configure-pod-container/assign-cpu-ram-container/).
* [Container](/docs/api-reference/{{page.version}}/#container-v1-core)
* [ResourceRequirements](/docs/resources-reference/{{page.version}}/#resourcerequirements-v1-core)
{% endcapture %}
{% include templates/concept.md %}

---
approvers:
- mikedanese
cn-approvers:
- rootsongjc
cn-reviewers:
- markthink
- xuyuan02965
title: Secret
---
Objects of type `secret` are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a `secret` is safer and more flexible than putting it verbatim in a `pod` definition or in a docker image. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/secrets.md) for more information.
* TOC
{:toc}
## Overview of Secrets

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.

Users can create secrets, and the system also creates some secrets.

To use a secret, a pod needs to reference the secret. A secret can be used with a pod in two ways: as files in a [volume](/docs/concepts/storage/volumes/) mounted on one or more of its containers, or used by kubelet when pulling images for the pod.
### Built-in Secrets

#### Service Accounts Automatically Create and Attach Secrets with API Credentials

Kubernetes automatically creates secrets which contain credentials for accessing the API, and it automatically modifies your pods to use this type of secret.

The automatic creation and use of API credentials can be disabled or overridden if desired. However, if all you need to do is securely access the apiserver, this is the recommended workflow.

See the [Service Account](/docs/user-guide/service-accounts) documentation for more information on how Service Accounts work.
### Creating your own Secrets

#### Creating a Secret Using kubectl create secret

Say that some pods need to access a database. The username and password that the pods should use are in the files `./username.txt` and `./password.txt` on your local machine.
```shell
# Create files needed for rest of example.
$ echo -n "admin" > ./username.txt
$ echo -n "1f2d1e2e67df" > ./password.txt
```
The `kubectl create secret` command packages these files into a Secret and creates the object on the Apiserver.
```shell
$ kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
secret "db-user-pass" created
```
You can check that the secret was created like this:
```shell
$ kubectl get secrets
NAME TYPE DATA AGE
db-user-pass Opaque 2 51s
$ kubectl describe secrets/db-user-pass
Name: db-user-pass
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password.txt: 12 bytes
username.txt: 5 bytes
```
Note that neither `get` nor `describe` shows the contents of the file by default. This is to protect the secret from being exposed accidentally to someone looking over your shoulder, or from being stored in a terminal log.

See [decoding a secret](#decoding-a-secret) for how to see the contents.

#### Creating a Secret Manually

You can also create a secret object in a file first, in json or yaml format, and then create that object.

Each item must be base64 encoded:
```shell
$ echo -n "admin" | base64
YWRtaW4=
$ echo -n "1f2d1e2e67df" | base64
MWYyZDFlMmU2N2Rm
```
Now write a secret object that looks like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
```
The data field is a map. Its keys must match [`DNS_SUBDOMAIN`](https://git.k8s.io/community/contributors/design-proposals/identifiers.md), except that leading dots are also allowed. The values are arbitrary data, encoded using base64.

Create the secret using [`kubectl create`](/docs/user-guide/kubectl/v1.7/#create):
```shell
$ kubectl create -f ./secret.yaml
secret "mysecret" created
```
**Encoding Note:** The serialized JSON and YAML values of secret data are encoded as base64 strings. Newlines are not valid within these strings and must be omitted. When using the `base64` utility on Darwin/OS X, users should avoid using the `-b` option to split long lines. Conversely, Linux users *should* add the option `-w 0` to `base64` commands, or use the pipeline `base64 | tr -d '\n'` if the `-w` option is not available.
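For example, on Linux the encoding from the earlier step can be produced safely like this:

```shell
# -w 0 keeps the encoded value on a single line
$ echo -n "1f2d1e2e67df" | base64 -w 0
MWYyZDFlMmU2N2Rm
# fallback if -w is not supported
$ echo -n "1f2d1e2e67df" | base64 | tr -d '\n'
```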
#### Decoding a Secret

Secrets can be retrieved via the `kubectl get secret` command. For example, to retrieve the secret created in the previous section:
```shell
$ kubectl get secret mysecret -o yaml
apiVersion: v1
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
kind: Secret
metadata:
creationTimestamp: 2016-01-22T18:41:56Z
name: mysecret
namespace: default
resourceVersion: "164619"
selfLink: /api/v1/namespaces/default/secrets/mysecret
uid: cfee02d6-c137-11e5-8d73-42010af00002
type: Opaque
```
Decode the password field:
```shell
$ echo "MWYyZDFlMmU2N2Rm" | base64 --decode
1f2d1e2e67df
```
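The retrieval and decoding can also be combined into one step with a `jsonpath` query:

```shell
$ kubectl get secret mysecret -o jsonpath='{.data.password}' | base64 --decode
1f2d1e2e67df
```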
### Using Secrets

Secrets can be mounted as data volumes or be exposed as environment variables to be used by a container in a pod. They can also be used by other parts of the system, without being directly exposed to the pod. For example, they can hold credentials that other parts of the system should use to interact with external systems on your behalf.

#### Using Secrets as Files from a Pod

To consume a Secret in a volume in a Pod:

1. Create a secret or use an existing one. Multiple pods can reference the same secret.
2. Modify your Pod definition to add a volume under `spec.volumes[]`. Name the volume anything, and have a `spec.volumes[].secret.secretName` field equal to the name of the secret object.
3. Add a `spec.containers[].volumeMounts[]` to each container that needs the secret. Specify `spec.containers[].volumeMounts[].readOnly = true` and set `spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the secrets to appear.
4. Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret `data` map becomes the filename under `mountPath`.

This is an example of a pod that mounts a secret in a volume:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
secret:
secretName: mysecret
```
Each secret you want to use needs to be referred to in `spec.volumes`.

If there are multiple containers in the pod, then each container needs its own `volumeMounts` block, but only one `spec.volumes` is needed per secret.

You can package many files into one secret, or use many secrets, whichever is convenient.

**Projection of secret keys to specific paths**

We can also control the paths within the volume where Secret keys are projected. You can use the `spec.volumes[].secret.items` field to change the target path of each key:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
secret:
secretName: mysecret
items:
- key: username
path: my-group/my-username
```
What will happen:

* `username` secret is stored under the `/etc/foo/my-group/my-username` file instead of `/etc/foo/username`.
* `password` secret is not projected.

If `spec.volumes[].secret.items` is used, only keys specified in `items` are projected. To consume all keys from the secret, all of them must be listed in the `items` field. All listed keys must exist in the corresponding secret. Otherwise, the volume is not created.

**Secret files permissions**

You can also specify the permission mode bits that files belonging to a secret will have. If you don't specify any, `0644` is used by default. You can specify a default mode for the whole secret volume and override it per key if needed.

For example, you can specify a default mode like this:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
volumes:
- name: foo
secret:
secretName: mysecret
defaultMode: 256
```
Then, the secret will be mounted on `/etc/foo`, and all the files created by the secret volume mount will have permission `0400`.

Note that the JSON spec doesn't support octal notation, so use the value 256 for 0400 permissions. If you use yaml instead of json for the pod, you can use octal notation to specify permissions in a more natural way.

You can also use mapping, as in the previous example, and specify different permissions for different files like this:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
volumes:
- name: foo
secret:
secretName: mysecret
items:
- key: username
path: my-group/my-username
mode: 511
```
In this case, the file resulting in `/etc/foo/my-group/my-username` will have a permission value of `0777`. Owing to JSON limitations, you must specify the mode in decimal notation.

Note that this permission value might be displayed in decimal notation if you read it later.
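The decimal values used in these JSON-constrained specs are just the octal modes rewritten in base 10, which a shell can confirm:

```shell
$ printf '%o\n' 256   # decimal 256 is octal 0400
400
$ printf '%o\n' 511   # decimal 511 is octal 0777
777
```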
**Consuming Secret Values from Volumes**

Inside the container that mounts a secret volume, the secret keys appear as files and the secret values are base-64 decoded and stored inside these files. This is the result of commands executed inside the container from the example above:
```shell
$ ls /etc/foo/
username
password
$ cat /etc/foo/username
admin
$ cat /etc/foo/password
1f2d1e2e67df
```
The program in a container is responsible for reading the secrets from the files.

**Mounted Secrets are updated automatically**

When a secret that is already consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted secret is fresh on every periodic sync. However, it uses its local ttl-based cache for getting the current value of the secret. As a result, the total delay from the moment when the secret is updated to the moment when new keys are projected to the pod can be as long as the kubelet sync period plus the ttl of the secrets cache in the kubelet.

#### Using Secrets as Environment Variables

To use a secret in an environment variable in a pod:

1. Create a secret or use an existing one. Multiple pods can reference the same secret.
2. Modify your Pod definition in each container that you wish to consume the value of a secret key, to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[x].valueFrom.secretKeyRef`.
3. Modify your image and/or command line so that the program looks for values in the specified environment variables.

This is an example of a pod that uses secrets from environment variables:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
spec:
containers:
- name: mycontainer
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
restartPolicy: Never
```
**Consuming Secret Values from Environment Variables**

Inside a container that consumes a secret via environment variables, the secret keys appear as normal environment variables containing the base-64 decoded values of the secret data. This is the result of commands executed inside the container from the example above:
```shell
$ echo $SECRET_USERNAME
admin
$ echo $SECRET_PASSWORD
1f2d1e2e67df
```
#### Using imagePullSecrets

An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry password to the Kubelet so it can pull a private image on behalf of your Pod.

**Manually specifying an imagePullSecret**

Use of imagePullSecrets is described in the [images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
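As a minimal sketch, assuming a registry credential secret named `regcred` has already been created (the pod name, image, and registry host below are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg-pod            # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:1.0   # hypothetical private image
  imagePullSecrets:
  - name: regcred                  # hypothetical pre-existing registry secret
```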
### Arranging for imagePullSecrets to be Automatically Attached

You can manually create an imagePullSecret, and reference it from a serviceAccount. Any pods created with that serviceAccount, or that default to use that serviceAccount, will get their imagePullSecrets field set to that of the service account. See [Adding ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#adding-imagepullsecrets-to-a-service-account) for a detailed explanation of that process.

#### Automatic Mounting of Manually Created Secrets

Manually created secrets (e.g. one containing a token for accessing a github account) can be automatically attached to pods based on their service account. See [Injecting Information into Pods Using a PodPreset](/docs/tasks/run-application/podpreset/) for a detailed explanation of that process.
## Details

### Restrictions

Secret volume sources are validated to ensure that the specified object reference actually points to an object of type `Secret`. Therefore, a secret needs to be created before any pods that depend on it.

Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace.
Individual secrets are limited to 1MB in size. This is to discourage creation of very large secrets which would exhaust apiserver and kubelet memory. However, creation of many smaller secrets could also exhaust memory. More comprehensive limits on memory usage due to secrets are a planned feature.

Kubelet only supports use of secrets for Pods it gets from the API server. This includes any pods created using kubectl, or indirectly via a replication controller. It does not include pods created via the kubelet's `--manifest-url` flag, its `--config` flag, or its REST API (these are not common ways to create pods).

Secrets must be created before they are consumed in pods as environment variables, unless they are marked as optional. References to secrets that do not exist will prevent the pod from starting.

References via `secretKeyRef` to keys that do not exist in a named Secret will prevent the pod from starting.

Secrets used to populate environment variables via `envFrom` that have keys that are considered invalid environment variable names will have those keys skipped. The pod will be allowed to start. There will be an event whose reason is `InvalidVariableNames`, and whose message contains the list of invalid keys that were skipped. The following example shows a pod that refers to the default/mysecret secret, which contains 2 invalid keys, 1badkey and 2alsobad.
```shell
$ kubectl get events
LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON
0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names.
```
### Secret and Pod Lifetime interaction

When a pod is created via the API, there is no check whether a referenced secret exists. Once a pod is scheduled, the kubelet will try to fetch the secret value. If the secret cannot be fetched because it does not exist or because of a temporary lack of connection to the API server, the kubelet will periodically retry. It will report an event about the pod explaining the reason it is not started yet. Once the secret is fetched, the kubelet will create and mount a volume containing it. None of the pod's containers will start until all the pod's volumes are mounted.

## Use cases

### Use-Case: Pod with ssh keys

Create a secret containing some ssh keys:
```shell
$ kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
```
**Security Note:** think carefully before sending your own ssh keys: other users of the cluster may have access to the secret. Use a service account which you want to be accessible to all the users with whom you share the Kubernetes cluster, and which you can revoke if it is compromised.

Now we can create a pod which references the secret with the ssh key and consumes it in a volume:
```yaml
kind: Pod
apiVersion: v1
metadata:
name: secret-test-pod
labels:
name: secret-test
spec:
volumes:
- name: secret-volume
secret:
secretName: ssh-key-secret
containers:
- name: ssh-test-container
image: mySshImage
volumeMounts:
- name: secret-volume
readOnly: true
mountPath: "/etc/secret-volume"
```
When the container's command runs, the pieces of the key will be available in:
```shell
/etc/secret-volume/ssh-publickey
/etc/secret-volume/ssh-privatekey
```
The container is then free to use the secret data to establish an ssh connection.
### Use-Case: Pods with prod / test credentials

This example illustrates a pod which consumes a secret containing prod credentials and another pod which consumes a secret with test environment credentials.

Make the secrets:
```shell
$ kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
secret "prod-db-secret" created
$ kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
secret "test-db-secret" created
```
Now make the pods:
```yaml
apiVersion: v1
kind: List
items:
- kind: Pod
apiVersion: v1
metadata:
name: prod-db-client-pod
labels:
name: prod-db-client
spec:
volumes:
- name: secret-volume
secret:
secretName: prod-db-secret
containers:
- name: db-client-container
image: myClientImage
volumeMounts:
- name: secret-volume
readOnly: true
mountPath: "/etc/secret-volume"
- kind: Pod
apiVersion: v1
metadata:
name: test-db-client-pod
labels:
name: test-db-client
spec:
volumes:
- name: secret-volume
secret:
secretName: test-db-secret
containers:
- name: db-client-container
image: myClientImage
volumeMounts:
- name: secret-volume
readOnly: true
mountPath: "/etc/secret-volume"
```
Both containers will have the following files present on their filesystems, with the values for each container's environment:
```shell
/etc/secret-volume/username
/etc/secret-volume/password
```
Note how the specs for the two pods differ only in one field; this facilitates creating pods with different capabilities from a common pod config template. You could further simplify the base pod specification by using two Service Accounts: one called, say, `prod-user` with the `prod-db-secret`, and one called, say, `test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example:
```yaml
kind: Pod
apiVersion: v1
metadata:
name: prod-db-client-pod
labels:
name: prod-db-client
spec:
serviceAccount: prod-db-client
containers:
- name: db-client-container
image: myClientImage
```
### Use-case: Dotfiles in secret volume

In order to make a piece of data 'hidden' (that is, in a file whose name begins with a dot character), simply make that key begin with a dot. For example, when the following secret is mounted into a volume:
```yaml
kind: Secret
apiVersion: v1
metadata:
name: dotfile-secret
data:
.secret-file: dmFsdWUtMg0KDQo=
---
kind: Pod
apiVersion: v1
metadata:
name: secret-dotfiles-pod
spec:
volumes:
- name: secret-volume
secret:
secretName: dotfile-secret
containers:
- name: dotfile-test-container
image: gcr.io/google_containers/busybox
command:
- ls
- "-l"
- "/etc/secret-volume"
volumeMounts:
- name: secret-volume
readOnly: true
mountPath: "/etc/secret-volume"
```
The `secret-volume` will contain a single file, called `.secret-file`, and the `dotfile-test-container` will have this file present at the path `/etc/secret-volume/.secret-file`.

**NOTE**

Files beginning with dot characters are hidden from the output of `ls -l`; you must use `ls -la` to see them when listing directory contents.
### Use-case: Secret visible to one container in a pod

Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed remote file reading exploit in the server, which could expose the private key to an attacker.

This could be divided into two processes in two containers: a frontend container which handles user interaction and business logic, but which cannot see the private key; and a signer container that can see the private key, and responds to simple signing requests from the frontend (e.g. over localhost networking).

With this partitioned approach, an attacker now has to trick the application server into doing something rather arbitrary, which may be harder than getting it to read a file.
<!-- TODO: explain how to do this while still using automation. -->
## Best practices

### Clients that use the secrets API

When deploying applications that interact with the secrets API, access should be limited using [authorization policies](https://kubernetes.io/docs/admin/authorization/) such as [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/).

Secrets often hold values that span a spectrum of importance, many of which can cause escalations within Kubernetes (e.g. service account tokens) and to external systems. Even if an individual app can reason about the power of the secrets it expects to interact with, other apps within the same namespace can render those assumptions invalid.

For these reasons, `watch` and `list` requests for secrets within a namespace are extremely powerful capabilities and should be avoided, since listing secrets allows the clients to inspect the values of all secrets in that namespace. The ability to `watch` and `list` all secrets in a cluster should be reserved for only the most privileged, system-level components.

Applications that need to access the secrets API should perform `get` requests on the secrets they need. This lets administrators restrict access to all secrets while [white-listing access to individual instances](https://kubernetes.io/docs/admin/authorization/rbac/#referring-to-resources) that the app needs.

For improved performance over a looping `get`, clients can design resources that reference a secret and then `watch` that resource, re-requesting the secret when the reference changes. Additionally, a ["bulk watch" API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bulk_watch.md) to let clients `watch` individual resources has also been proposed, and will likely be available in future releases of Kubernetes.
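One way that white-listing pattern can look in RBAC: a sketch that grants `get` on a single named secret, reusing the `db-user-pass` secret from earlier (the role name is hypothetical):

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: secret-reader          # hypothetical role name
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["db-user-pass"]   # restrict access to this one instance
  verbs: ["get"]                    # no list or watch
```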
## Security Properties

### Protections

Because `secret` objects can be created independently of the `pods` that use them, there is less risk of the secret being exposed during the workflow of creating, viewing, and editing pods. The system can also take additional precautions with `secret` objects, such as avoiding writing them to disk where possible.

A secret is only sent to a node if a pod on that node requires it. It is not written to disk. It is stored in a tmpfs. It is deleted once the pod that depends on it is deleted.

On most Kubernetes-project-maintained distributions, communication between users and the apiserver, and from the apiserver to the kubelets, is protected by SSL/TLS. Secrets are protected when transmitted over these channels.

Secret data on nodes is stored in tmpfs volumes and thus does not come to rest on the node.

There may be secrets for several pods on the same node. However, only the secrets that a pod requests are potentially visible within its containers. Therefore, one Pod does not have access to the secrets of another pod.

There may be several containers in a pod. However, each container in a pod has to request the secret volume in its `volumeMounts` for it to be visible within the container. This can be used to construct useful [security partitions at the Pod level](#use-case-secret-visible-to-one-container-in-a-pod).
<!--
### Risks
- In the API server secret data is stored as plaintext in etcd; therefore:
- Administrators should limit access to etcd to admin users
- Secret data in the API server is at rest on the disk that etcd uses; admins may want to wipe/shred disks used by etcd when no longer in use
- If you configure the secret through a manifest (JSON or YAML) file which has the secret data encoded as base64, sharing this file or checking it in to a source repository means the secret is compromised. Base64 encoding is not an encryption method and is considered the same as plain text.
- Applications still need to protect the value of secret after reading it from the volume, such as not accidentally logging it or transmitting it to an untrusted party.
- A user who can create a pod that uses a secret can also see the value of that secret. Even if apiserver policy does not allow that user to read the secret object, the user could run a pod which exposes the secret.
- If multiple replicas of etcd are run, then the secrets will be shared between them. By default, etcd does not secure peer-to-peer communication with SSL/TLS, though this can be configured.
- Currently, anyone with root on any node can read any secret from the apiserver, by impersonating the kubelet. It is a planned feature to only send secrets to nodes that actually require them, to restrict the impact of a root exploit on a single node.
-->
### 风险
- API server 的 secret 数据以纯文本的方式存储在 etcd 中;因此:
  - 管理员应该将 etcd 的访问权限限定为只有管理员用户可以访问
  - API server 中的 secret 数据存放在 etcd 使用的磁盘上;管理员可能希望在磁盘不再使用时对其进行擦除/粉碎
- 如果您通过清单(JSON 或 YAML)文件配置 secret,其中的 secret 数据以 base64 编码,那么共享该文件或将其检入源代码库就意味着该 secret 已经泄露。Base64 编码不是加密方式,应视同纯文本。
- 应用程序从卷中读取 secret 后,仍然需要保护 secret 的值,例如不要意外地将其记入日志或传给不受信任的一方。
- 能够创建使用某个 secret 的 pod 的用户也可以看到该 secret 的值。即使 API server 的策略不允许该用户读取 secret 对象,该用户也可以运行一个暴露该 secret 的 pod。
- 如果运行了 etcd 的多个副本,secret 将在这些副本之间共享。默认情况下,etcd 并不使用 SSL/TLS 保护副本间的对等通信,不过这可以配置。
- 目前,任何节点上拥有 root 权限的人都可以通过冒充 kubelet 从 API server 读取任何 secret。只向实际需要 secret 的节点发送 secret、从而将 root 漏洞的影响限制在单个节点上,这一功能还在计划中。
@ -0,0 +1,445 @@
---
title: Pod 生命周期
redirect_from:
- "/docs/user-guide/pod-states/"
- "/docs/user-guide/pod-states.html"
cn-approvers:
- rootsongjc
cn-reviewers:
- zjj2wry
---
{% capture overview %}
{% comment %}Updated: 4/14/2015{% endcomment %}
{% comment %}Edited and moved to Concepts section: 2/2/17{% endcomment %}
<!--
This page describes the lifecycle of a Pod.
-->
该页面将描述 Pod 的生命周期。
{% endcapture %}
{% capture body %}
<!--
## Pod phase
A Pod's `status` field is a
[PodStatus](/docs/resources-reference/v1.6/#podstatus-v1-core)
object, which has a `phase` field.
The phase of a Pod is a simple, high-level summary of where the Pod is in its
lifecycle. The phase is not intended to be a comprehensive rollup of observations
of Container or Pod state, nor is it intended to be a comprehensive state machine.
The number and meanings of Pod phase values are tightly guarded.
Other than what is documented here, nothing should be assumed about Pods that
have a given `phase` value.
Here are the possible values for `phase`:
* Pending: The Pod has been accepted by the Kubernetes system, but one or more of
the Container images has not been created. This includes time before being
scheduled as well as time spent downloading images over the network,
which could take a while.
* Running: The Pod has been bound to a node, and all of the Containers have been
created. At least one Container is still running, or is in the process of
starting or restarting.
* Succeeded: All Containers in the Pod have terminated in success, and will not
be restarted.
* Failed: All Containers in the Pod have terminated, and at least one Container
has terminated in failure. That is, the Container either exited with non-zero
status or was terminated by the system.
* Unknown: For some reason the state of the Pod could not be obtained, typically
due to an error in communicating with the host of the Pod.
-->
## Pod phase
Pod 的 `status` 定义在 [PodStatus](/docs/resources-reference/v1.7/#podstatus-v1-core) 对象中,其中有一个 `phase` 字段。
Pod 的相位(phase)是 Pod 在其生命周期中所处位置的简单宏观概述。该相位并不是对容器或 Pod 状态观测结果的综合汇总,也不打算成为完整的状态机。
Pod 相位的数量和含义是受到严格控制的。除了本文档中列举的内容外,不应该再假定 Pod 有其他的 `phase` 值。
下面是 `phase` 可能的值:
- 挂起PendingPod 已被 Kubernetes 系统接受,但有一个或者多个容器镜像尚未创建。等待时间包括调度 Pod 的时间和通过网络下载镜像的时间,这可能需要花点时间。
- 运行中Running该 Pod 已经绑定到了一个节点上Pod 中所有的容器都已被创建。至少有一个容器正在运行,或者正处于启动或重启状态。
- 成功SucceededPod 中的所有容器都被成功终止,并且不会再重启。
- 失败FailedPod 中的所有容器都已终止了并且至少有一个容器是因为失败终止。也就是说容器以非0状态退出或者被系统终止。
- 未知Unknown因为某些原因无法取得 Pod 的状态,通常是因为与 Pod 所在主机通信失败。
<!--
## Pod conditions
A Pod has a PodStatus, which has an array of
[PodConditions](/docs/resources-reference/v1.6/#podcondition-v1-core). Each element
of the PodCondition array has a `type` field and a `status` field. The `type`
field is a string, with possible values PodScheduled, Ready, Initialized, and
Unschedulable. The `status` field is a string, with possible values True, False,
and Unknown.
-->
## Pod 状态
Pod 有一个 PodStatus 对象,其中包含一个 [PodCondition](/docs/resources-reference/v1.7/#podcondition-v1-core) 数组。 PodCondition 数组的每个元素都有一个 `type` 字段和一个 `status` 字段。`type` 字段是字符串,可能的值有 PodScheduled、Ready、Initialized 和 Unschedulable。`status` 字段是一个字符串,可能的值有 True、False 和 Unknown。
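下面是一个示意性的 PodStatus 片段(字段值仅为示例),展示了 `phase` 字段与 `conditions` 数组的形式:

```yaml
status:
  phase: Running
  conditions:
  - type: PodScheduled
    status: "True"
  - type: Initialized
    status: "True"
  - type: Ready
    status: "True"
```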
<!--
## Container probes
A [Probe](/docs/resources-reference/v1.6/#probe-v1-core) is a diagnostic
performed periodically by the [kubelet](/docs/admin/kubelet/)
on a Container. To perform a diagnostic,
the kubelet calls a
[Handler](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Handler) implemented by
the Container. There are three types of handlers:
* [ExecAction](/docs/resources-reference/v1.6/#execaction-v1-core):
Executes a specified command inside the Container. The diagnostic
is considered successful if the command exits with a status code of 0.
* [TCPSocketAction](/docs/resources-reference/v1.6/#tcpsocketaction-v1-core):
Performs a TCP check against the Container's IP address on
a specified port. The diagnostic is considered successful if the port is open.
* [HTTPGetAction](/docs/resources-reference/v1.6/#httpgetaction-v1-core):
Performs an HTTP Get request against the Container's IP
address on a specified port and path. The diagnostic is considered successful
if the response has a status code greater than or equal to 200 and less than 400.
Each probe has one of three results:
* Success: The Container passed the diagnostic.
* Failure: The Container failed the diagnostic.
* Unknown: The diagnostic failed, so no action should be taken.
The kubelet can optionally perform and react to two kinds of probes on running
Containers:
* `livenessProbe`: Indicates whether the Container is running. If
the liveness probe fails, the kubelet kills the Container, and the Container
is subjected to its [restart policy](#restart-policy). If a Container does not
provide a liveness probe, the default state is `Success`.
* `readinessProbe`: Indicates whether the Container is ready to service requests.
If the readiness probe fails, the endpoints controller removes the Pod's IP
address from the endpoints of all Services that match the Pod. The default
state of readiness before the initial delay is `Failure`. If a Container does
not provide a readiness probe, the default state is `Success`.
-->
## 容器探针
[探针](/docs/resources-reference/v1.7/#probe-v1-core) 是由 [kubelet](/docs/admin/kubelet/) 对容器执行的定期诊断。要执行诊断kubelet 调用由容器实现的 [Handler](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Handler)。有三种类型的处理程序:
- [ExecAction](/docs/resources-reference/v1.7/#execaction-v1-core):在容器内执行指定命令。如果命令退出时返回码为 0 则认为诊断成功。
- [TCPSocketAction](/docs/resources-reference/v1.7/#tcpsocketaction-v1-core):对指定端口上的容器的 IP 地址进行 TCP 检查。如果端口打开,则诊断被认为是成功的。
- [HTTPGetAction](/docs/resources-reference/v1.7/#httpgetaction-v1-core):对容器的 IP 地址上指定的端口和路径执行 HTTP Get 请求。如果响应的状态码大于等于 200 且小于 400,则诊断被认为是成功的。
每次探测都将获得以下三种结果之一:
- 成功:容器通过了诊断。
- 失败:容器未通过诊断。
- 未知:诊断失败,因此不会采取任何行动。
kubelet 可以有选择地对运行中的容器执行以下两种探针,并对探测结果做出反应:
- `livenessProbe`:指示容器是否正在运行。如果存活探测失败,则 kubelet 会杀死容器,并且容器将受到其 [重启策略](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) 的影响。如果容器不提供存活探针,则默认状态为 `Success`
- `readinessProbe`:指示容器是否准备好服务请求。如果就绪探测失败,端点控制器将从与 Pod 匹配的所有 Service 的端点中删除该 Pod 的 IP 地址。初始延迟之前的就绪状态默认为 `Failure`。如果容器不提供就绪探针,则默认状态为 `Success`
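下面是一个最小示例(Pod 名称以及 /healthz、/ready 两个端点均为假设),在同一个容器上同时配置了存活探针和就绪探针:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80
    # 存活探测失败时,kubelet 会杀死容器并按 restartPolicy 处理
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
    # 就绪探测失败时,该 Pod 的 IP 会从匹配的 Service 端点中移除
    readinessProbe:
      httpGet:
        path: /ready
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
```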
<!--
### When should you use liveness or readiness probes?
If the process in your Container is able to crash on its own whenever it
encounters an issue or becomes unhealthy, you do not necessarily need a liveness
probe; the kubelet will automatically perform the correct action in accordance
with the Pod's `restartPolicy`.
If you'd like your Container to be killed and restarted if a probe fails, then
specify a liveness probe, and specify a `restartPolicy` of Always or OnFailure.
If you'd like to start sending traffic to a Pod only when a probe succeeds,
specify a readiness probe. In this case, the readiness probe might be the same
as the liveness probe, but the existence of the readiness probe in the spec means
that the Pod will start without receiving any traffic and only start receiving
traffic after the probe starts succeeding.
If you want your Container to be able to take itself down for maintenance, you
can specify a readiness probe that checks an endpoint specific to readiness that
is different from the liveness probe.
Note that if you just want to be able to drain requests when the Pod is deleted,
you do not necessarily need a readiness probe; on deletion, the Pod automatically
puts itself into an unready state regardless of whether the readiness probe exists.
The Pod remains in the unready state while it waits for the Containers in the Pod
to stop.
-->
### 什么时候该使用存活(liveness)和就绪(readiness)探针?
如果容器中的进程在遇到问题或出现不健康状况时能够自行崩溃,则不一定需要存活探针;kubelet 将根据 Pod 的 `restartPolicy` 自动执行正确的操作。
如果您希望容器在探测失败时被杀死并重新启动,那么请指定一个存活探针,并将 `restartPolicy` 指定为 Always 或 OnFailure。
如果要仅在探测成功时才开始向 Pod 发送流量,请指定就绪探针。在这种情况下,就绪探针可能与存活探针相同,但是 spec 中就绪探针的存在意味着 Pod 将在不接收任何流量的情况下启动,并且只有在探测开始成功后才接收流量。
如果您希望容器能够自行进入维护状态,您可以指定一个就绪探针,该探针检查一个特定于就绪的、与存活探针不同的端点。
请注意,如果您只想在 Pod 被删除时能够排空请求,则不一定需要使用就绪探针;在删除 Pod 时,无论就绪探针是否存在,Pod 都会自动将自身置于未就绪状态,并在等待 Pod 中的容器停止期间保持未就绪状态。
<!--
## Pod and Container status
For detailed information about Pod Container status, see
[PodStatus](/docs/resources-reference/v1.6/#podstatus-v1-core)
and
[ContainerStatus](/docs/resources-reference/v1.6/#containerstatus-v1-core).
Note that the information reported as Pod status depends on the current
[ContainerState](/docs/resources-reference/v1.6/#containerstatus-v1-core).
-->
## Pod 和容器状态
有关 Pod 容器状态的详细信息,请参阅 [PodStatus](/docs/resources-reference/v1.7/#podstatus-v1-core) 和 [ContainerStatus](/docs/resources-reference/v1.7/#containerstatus-v1-core)。请注意,报告的 Pod 状态信息取决于当前的 [ContainerState](/docs/resources-reference/v1.7/#containerstatus-v1-core)。
<!--
## Restart policy
A PodSpec has a `restartPolicy` field with possible values Always, OnFailure,
and Never. The default value is Always.
`restartPolicy` applies to all Containers in the Pod. `restartPolicy` only
refers to restarts of the Containers by the kubelet on the same node. Failed
Containers that are restarted by the kubelet are restarted with an exponential
back-off delay (10s, 20s, 40s ...) capped at five minutes, and is reset after ten
minutes of successful execution. As discussed in the
[Pods document](/docs/user-guide/pods/#durability-of-pods-or-lack-thereof),
once bound to a node, a Pod will never be rebound to another node.
-->
## 重启策略
PodSpec 中有一个 `restartPolicy` 字段,可能的值为 Always、OnFailure 和 Never,默认为 Always。`restartPolicy` 适用于 Pod 中的所有容器,且仅指由同一节点上的 kubelet 重新启动容器。失败的容器由 kubelet 以指数退避延迟(10 秒、20 秒、40 秒……)重新启动,上限为五分钟,并在成功执行十分钟后重置。如 [Pod 文档](/docs/user-guide/pods/#durability-of-pods-or-lack-thereof) 中所述,一旦绑定到一个节点,Pod 将永远不会重新绑定到另一个节点。
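下面是一个最小的示意片段(Pod 名称为假设),将 `restartPolicy` 设置为 OnFailure,容器只有在以失败状态退出时才会被 kubelet 重启:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: app
    image: busybox
    # 以非 0 状态退出,触发按指数退避的重启
    command: ["sh", "-c", "exit 1"]
```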
<!--
## Pod lifetime
In general, Pods do not disappear until someone destroys them. This might be a
human or a controller. The only exception to
this rule is that Pods with a `phase` of Succeeded or Failed for more than some
duration (determined by the master) will expire and be automatically destroyed.
Three types of controllers are available:
- Use a [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/) for Pods that are expected to terminate,
for example, batch computations. Jobs are appropriate only for Pods with
`restartPolicy` equal to OnFailure or Never.
- Use a [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/),
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/), or
[Deployment](/docs/concepts/workloads/controllers/deployment/)
for Pods that are not expected to terminate, for example, web servers.
ReplicationControllers are appropriate only for Pods with a `restartPolicy` of
Always.
- Use a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) for Pods that need to run one per
machine, because they provide a machine-specific system service.
All three types of controllers contain a PodTemplate. It
is recommended to create the appropriate controller and let
it create Pods, rather than directly create Pods yourself. That is because Pods
alone are not resilient to machine failures, but controllers are.
If a node dies or is disconnected from the rest of the cluster, Kubernetes
applies a policy for setting the `phase` of all Pods on the lost node to Failed.
-->
## Pod 的生命期
一般来说,Pod 不会消失,直到有人(可能是一个人,也可能是一个控制器)将其销毁。这条规则的唯一例外是:`phase` 为 Succeeded 或 Failed 的 Pod 在超过一段时长(由 master 确定)后将会过期并被自动销毁。
有三种可用的控制器:
- 使用 [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/) 运行预期会终止的 Pod例如批量计算。Job 仅适用于重启策略为 `OnFailure``Never` 的 Pod。
- 对预期不会终止的 Pod 使用 [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/)、[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) 和 [Deployment](/docs/concepts/workloads/controllers/deployment/) ,例如 Web 服务器。 ReplicationController 仅适用于具有 `restartPolicy` 为 Always 的 Pod。
- 对需要在每台机器上各运行一个、用于提供机器特定系统服务的 Pod,使用 [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)。
所有这三种类型的控制器都包含一个 PodTemplate。建议创建适当的控制器让它们来创建 Pod而不是直接自己创建 Pod。这是因为单独的 Pod 在机器故障的情况下没有办法自动复原,而控制器却可以。
如果节点死亡或与集群的其余部分断开连接,则 Kubernetes 将应用一个策略将丢失节点上的所有 Pod 的 `phase` 设置为 Failed。
<!--
## Examples
### Advanced liveness probe example
Liveness probes are executed by the kubelet, so all requests are made in the
kubelet network namespace.
-->
## 示例
### 高级 liveness 探针示例
存活探针由 kubelet 来执行,因此所有的请求都在 kubelet 的网络命名空间中进行。
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - args:
    - /server
    image: gcr.io/google_containers/liveness
    livenessProbe:
      httpGet:
        # when "host" is not defined, "PodIP" will be used
        # host: my-host
        # when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed
        # scheme: HTTPS
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15
      timeoutSeconds: 1
    name: liveness
```
<!--
### Example states
* Pod is running and has one Container. Container exits with success.
* Log completion event.
* If `restartPolicy` is:
* Always: Restart Container; Pod `phase` stays Running.
* OnFailure: Pod `phase` becomes Succeeded.
* Never: Pod `phase` becomes Succeeded.
* Pod is running and has one Container. Container exits with failure.
* Log failure event.
* If `restartPolicy` is:
* Always: Restart Container; Pod `phase` stays Running.
* OnFailure: Restart Container; Pod `phase` stays Running.
* Never: Pod `phase` becomes Failed.
* Pod is running and has two Containers. Container 1 exits with failure.
* Log failure event.
* If `restartPolicy` is:
* Always: Restart Container; Pod `phase` stays Running.
* OnFailure: Restart Container; Pod `phase` stays Running.
* Never: Do not restart Container; Pod `phase` stays Running.
* If Container 1 is not running, and Container 2 exits:
* Log failure event.
* If `restartPolicy` is:
* Always: Restart Container; Pod `phase` stays Running.
* OnFailure: Restart Container; Pod `phase` stays Running.
* Never: Pod `phase` becomes Failed.
* Pod is running and has one Container. Container runs out of memory.
* Container terminates in failure.
* Log OOM event.
* If `restartPolicy` is:
* Always: Restart Container; Pod `phase` stays Running.
* OnFailure: Restart Container; Pod `phase` stays Running.
* Never: Log failure event; Pod `phase` becomes Failed.
* Pod is running, and a disk dies.
* Kill all Containers.
* Log appropriate event.
* Pod `phase` becomes Failed.
* If running under a controller, Pod is recreated elsewhere.
* Pod is running, and its node is segmented out.
* Node controller waits for timeout.
* Node controller sets Pod `phase` to Failed.
* If running under a controller, Pod is recreated elsewhere.
{% endcapture %}
{% capture whatsnext %}
* Get hands-on experience
[attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
* Get hands-on experience
[configuring liveness and readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/).
* Learn more about [Container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/).
-->
### 状态示例
- Pod 中只有一个容器并且正在运行。容器成功退出。
  - 记录完成事件。
  - 如果 `restartPolicy` 为:
    - Always:重启容器;Pod `phase` 仍为 Running。
    - OnFailure:Pod `phase` 变成 Succeeded。
    - Never:Pod `phase` 变成 Succeeded。
- Pod 中只有一个容器并且正在运行。容器退出失败。
  - 记录失败事件。
  - 如果 `restartPolicy` 为:
    - Always:重启容器;Pod `phase` 仍为 Running。
    - OnFailure:重启容器;Pod `phase` 仍为 Running。
    - Never:Pod `phase` 变成 Failed。
- Pod 中有两个容器并且正在运行。容器 1 退出失败。
  - 记录失败事件。
  - 如果 `restartPolicy` 为:
    - Always:重启容器;Pod `phase` 仍为 Running。
    - OnFailure:重启容器;Pod `phase` 仍为 Running。
    - Never:不重启容器;Pod `phase` 仍为 Running。
  - 如果容器 1 没有处于运行状态,而容器 2 退出:
    - 记录失败事件。
    - 如果 `restartPolicy` 为:
      - Always:重启容器;Pod `phase` 仍为 Running。
      - OnFailure:重启容器;Pod `phase` 仍为 Running。
      - Never:Pod `phase` 变成 Failed。
- Pod 中只有一个容器并处于运行状态。容器运行时内存超出限制:
  - 容器以失败状态终止。
  - 记录 OOM 事件。
  - 如果 `restartPolicy` 为:
    - Always:重启容器;Pod `phase` 仍为 Running。
    - OnFailure:重启容器;Pod `phase` 仍为 Running。
    - Never:记录失败事件;Pod `phase` 变成 Failed。
- Pod 正在运行,磁盘故障:
  - 杀掉所有容器。
  - 记录适当事件。
  - Pod `phase` 变成 Failed。
  - 如果使用控制器来运行,Pod 将在别处重建。
- Pod 正在运行,但其所在节点与集群其余部分被网络分隔:
  - 节点控制器等待直到超时。
  - 节点控制器将 Pod `phase` 设置为 Failed。
  - 如果使用控制器来运行,Pod 将在别处重建。
{% endcapture %}
{% capture whatsnext %}
* 动手实践 [为容器生命周期事件添加处理程序](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/)。
* 动手实践 [配置存活和就绪探针](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)。
* 进一步了解 [容器生命周期钩子](/docs/concepts/containers/container-lifecycle-hooks/)。
{% endcapture %}
{% include templates/concept.md %}
@ -0,0 +1,16 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.91
        ports:
        - containerPort: 80
@ -0,0 +1,25 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: curl-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: curlpod
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      containers:
      - name: curlpod
        command:
        - sh
        - -c
        - while true; do sleep 1; done
        image: radial/busyboxplus:curl
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
@ -0,0 +1,21 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
@ -0,0 +1,468 @@
---
approvers:
- bgrant0607
- brendandburns
- thockin
cn-approvers:
- rootsongjc
cn-reviewers:
- Jimexist
title: Docker 用户使用 kubectl 命令指南
---
<!--
In this doc, we introduce the Kubernetes command line for interacting with the api to docker-cli users. The tool, kubectl, is designed to be familiar to docker-cli users but there are a few necessary differences. Each section of this doc highlights a docker subcommand explains the kubectl equivalent.
-->
在本文中,我们将向 docker-cli 用户介绍与 API 进行交互的 Kubernetes 命令行工具。该工具 kubectl 被设计成 docker-cli 用户所熟悉的样子,但它们之间也存在一些必要的差异。本文档的每一节都会重点介绍一个 docker 子命令,并解释与之等效的 kubectl 命令。
* TOC
{:toc}
#### docker run
<!--
How do I run an nginx Deployment and expose it to the world? Checkout [kubectl run](/docs/user-guide/kubectl/{{page.version}}/#run).
With docker:
-->
如何运行一个 nginx Deployment 并将其暴露给外界?查看 [kubectl run](/docs/user-guide/kubectl/{{page.version}}/#run)。
使用 docker 命令:
```shell
$ docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx
a9ec34d9878748d2f33dc20cb25c714ff21da8d40558b45bfaec9955859075d0
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 2 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp, 443/tcp nginx-app
```
<!--
With kubectl:
-->
使用 kubectl 命令:
```shell
# start the pod running nginx
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
deployment "nginx-app" created
```
<!--
`kubectl run` creates a Deployment named "nginx-app" on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/{{page.version}}/#run) for more details.
Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands. Now, we can expose a new Service with the deployment created above:
-->
在 1.2 及以上版本的 Kubernetes 集群中,使用 `kubectl run` 命令将创建一个名为 "nginx-app" 的 Deployment。如果您运行的是老版本,将会创建一个 replication controller。
如果您想沿用旧的行为,使用 `--generator=run/v1` 参数,这样就会创建 replication controller。查看 [`kubectl run`](/docs/user-guide/kubectl/{{page.version}}/#run) 获取更多详细信息。注意,`kubectl` 命令会打印出所创建或修改的资源的类型和名称,以便在后续命令中使用。现在,我们可以基于上面创建的 Deployment 暴露一个新的 Service:
```shell
# expose a port through with a service
$ kubectl expose deployment nginx-app --port=80 --name=nginx-http
service "nginx-http" exposed
```
<!--
With kubectl, we create a [Deployment](/docs/concepts/workloads/controllers/deployment/) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](/docs/user-guide/services) with a selector that matches the Deployment's selector. See the [Quick start](/docs/user-guide/quick-start) for more information.
By default images are run in the background, similar to `docker run -d ...`, if you want to run things in the foreground, use:
-->
在 kubectl 命令中,我们创建了一个 [Deployment](/docs/concepts/workloads/controllers/deployment/),这将保证有 N 个运行 nginx 的 podN 代表 spec 中声明的 replica 数,默认为 1。我们还创建了一个 [service](/docs/user-guide/services),使用 selector 匹配具有相应的 selector 的 Deployment。查看 [快速开始](/docs/user-guide/quick-start) 获取更多信息。
默认情况下镜像会在后台运行,与 `docker run -d ...` 类似;如果您想在前台运行,使用:
```shell
kubectl run [-i] [--tty] --attach <name> --image=<image>
```
<!--
Unlike `docker run ...`, if `--attach` is specified, we attach to `stdin`, `stdout` and `stderr`, there is no ability to control which streams are attached (`docker -a ...`).
Because we start a Deployment for your container, it will be restarted if you terminate the attached process (e.g. `ctrl-c`), this is different from `docker run -it`.
To destroy the Deployment (and its pods) you need to run `kubectl delete deployment <name>`.
-->
`docker run ...` 不同的是,如果指定了 `--attach` ,我们将连接到 `stdin``stdout` 和 `stderr`,而不能控制具体连接到哪个输出流(`docker -a ...`)。
因为我们使用 Deployment 启动了容器,如果您终止了连接到的进程(例如 `ctrl-c`),容器将会重启,这跟 `docker run -it` 不同。
如果想销毁该 Deployment(和它的 pod),您需要运行 `kubectl delete deployment <name>`,见下面的示意用法。
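一个示意用法(Deployment 名称 busybox-test 为假设),在前台运行并在结束后清理:

```shell
# 在前台运行一个容器,并连接到它的标准输入输出
$ kubectl run -i --tty --attach busybox-test --image=busybox

# 退出后删除对应的 Deployment(及其 Pod)
$ kubectl delete deployment busybox-test
```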
#### docker ps
<!--
How do I list what is currently running? Checkout [kubectl get](/docs/user-guide/kubectl/{{page.version}}/#get).
With docker:
-->
如何列出哪些正在运行?查看 [kubectl get](/docs/user-guide/kubectl/{{page.version}}/#get)。
使用 docker 命令:
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 443/tcp nginx-app
```
<!--
With kubectl:
-->
使用 kubectl 命令:
```shell
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 1h
```
#### docker attach
<!--
How do I attach to a process that is already running in a container? Checkout [kubectl attach](/docs/user-guide/kubectl/{{page.version}}/#attach).
With docker:
-->
如何连接到已经运行在容器中的进程?查看 [kubectl attach](/docs/user-guide/kubectl/{{page.version}}/#attach)。
使用 docker 命令:
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 443/tcp nginx-app
$ docker attach a9ec34d98787
...
```
<!--
With kubectl:
-->
使用 kubectl 命令:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
$ kubectl attach -it nginx-app-5jyvm
...
```
#### docker exec
<!--
How do I execute a command in a container? Checkout [kubectl exec](/docs/user-guide/kubectl/{{page.version}}/#exec).
With docker:
-->
如何在容器中执行命令?查看 [kubectl exec](/docs/user-guide/kubectl/{{page.version}}/#exec)。
使用 docker 命令:
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 443/tcp nginx-app
$ docker exec a9ec34d98787 cat /etc/hostname
a9ec34d98787
```
<!--
With kubectl:
-->
使用 kubectl 命令:
```shell
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
$ kubectl exec nginx-app-5jyvm -- cat /etc/hostname
nginx-app-5jyvm
```
<!--
What about interactive commands?
With docker:
-->
执行交互式命令怎么办?
使用 docker 命令:
```shell
$ docker exec -ti a9ec34d98787 /bin/sh
# exit
```
<!--
With kubectl:
-->
使用 kubectl 命令:
```shell
$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
# exit
```
<!--
For more information see [Getting a Shell to a Running Container](/docs/tasks/kubectl/get-shell-running-container/).
-->
更多信息请查看 [获取运行中容器的 Shell 环境](/docs/tasks/kubectl/get-shell-running-container/)。
#### docker logs
<!--
How do I follow stdout/stderr of a running process? Checkout [kubectl logs](/docs/user-guide/kubectl/{{page.version}}/#logs).
With docker:
-->
如何查看运行中进程的 stdout/stderr查看 [kubectl logs](/docs/user-guide/kubectl/{{page.version}}/#logs)。
使用 docker 命令:
```shell
$ docker logs -f a9e
192.168.9.1 - - [14/Jul/2015:01:04:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
```
<!--
With kubectl:
-->
使用 kubectl 命令:
```shell
$ kubectl logs -f nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
<!--
Now's a good time to mention slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated, but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:
-->
现在是时候提一下 pod 和容器之间的一个细微差别了;默认情况下,如果 pod 中的进程退出,pod 并不会终止,而是会重启该进程。这与 docker run 的 `--restart=always` 选项类似,但有一个主要区别:在 docker 中,进程每次调用的输出是拼接在一起的,而在 Kubernetes 中,每次调用是分开的。要查看先前一次运行的输出,请执行以下操作:
```shell
$ kubectl logs --previous nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
<!--
See [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) for more information.
-->
查看 [记录和监控集群活动](/docs/concepts/cluster-administration/logging/) 获取更多信息。
<!--
#### docker stop and docker rm
How do I stop and delete a running process? Checkout [kubectl delete](/docs/user-guide/kubectl/{{page.version}}/#delete).
With docker:
-->
#### docker stop 和 docker rm
如何停止和删除运行中的进程?查看 [kubectl delete](/docs/user-guide/kubectl/{{page.version}}/#delete)。
使用 docker 命令:
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 22 hours ago Up 22 hours 0.0.0.0:80->80/tcp, 443/tcp nginx-app
$ docker stop a9ec34d98787
a9ec34d98787
$ docker rm a9ec34d98787
a9ec34d98787
```
<!--
With kubectl:
-->
使用 kubectl 命令:
```shell
$ kubectl get deployment nginx-app
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-app 1 1 1 1 2m
$ kubectl get po -l run=nginx-app
NAME READY STATUS RESTARTS AGE
nginx-app-2883164633-aklf7 1/1 Running 0 2m
$ kubectl delete deployment nginx-app
deployment "nginx-app" deleted
$ kubectl get po -l run=nginx-app
# Return nothing
```
<!--
Notice that we don't delete the pod directly. With kubectl we want to delete the Deployment that owns the pod. If we delete the pod directly, the Deployment will recreate the pod.
-->
请注意,我们不直接删除 pod。使用 kubectl 命令,我们要删除拥有该 pod 的 Deployment。如果我们直接删除 pod,Deployment 将会重新创建该 pod。
#### docker login
<!--
There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](/docs/concepts/containers/images/#using-a-private-registry).
-->
在 kubectl 中没有与 `docker login` 直接对应的命令。如果您有兴趣将 Kubernetes 与私有镜像仓库一起使用,请参阅 [使用私有镜像仓库](/docs/concepts/containers/images/#using-a-private-registry)。
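一个最小的示意片段(secret 名称 myregistrykey 和镜像地址均为假设),展示 Pod 如何通过 `imagePullSecrets` 引用事先创建好的镜像仓库凭据:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    # 假设的私有仓库镜像
    image: registry.example.com/app:v1
  # 引用事先创建的 docker-registry 类型的 secret
  imagePullSecrets:
  - name: myregistrykey
```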
#### docker version
<!--
How do I get the version of my client and server? Checkout [kubectl version](/docs/user-guide/kubectl/{{page.version}}/#version).
With docker:
-->
如何查看客户端和服务端的版本?查看 [kubectl version](/docs/user-guide/kubectl/{{page.version}}/#version)。
使用 docker 命令:
```shell
$ docker version
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): linux/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64
```
<!--
With kubectl:
-->
使用 kubectl 命令:
```shell
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9+a3d1dfa6f4335", GitCommit:"9b77fed11a9843ce3780f70dd251e92901c43072", GitTreeState:"dirty", BuildDate:"2017-08-29T20:32:58Z", OpenPaasKubernetesVersion:"v1.03.02", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9+a3d1dfa6f4335", GitCommit:"9b77fed11a9843ce3780f70dd251e92901c43072", GitTreeState:"dirty", BuildDate:"2017-08-29T20:32:58Z", OpenPaasKubernetesVersion:"v1.03.02", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
```
#### docker info
<!--
How do I get miscellaneous info about my environment and configuration? Checkout [kubectl cluster-info](/docs/user-guide/kubectl/{{page.version}}/#cluster-info).
With docker:
-->
如何获取有关环境和配置的各种信息?查看 [kubectl cluster-info](/docs/user-guide/kubectl/{{page.version}}/#cluster-info)。
使用 docker 命令:
```shell
$ docker info
Containers: 40
Images: 168
Storage Driver: aufs
Root Dir: /usr/local/google/docker/aufs
Backing Filesystem: extfs
Dirs: 248
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-53-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 12
Total Memory: 31.32 GiB
Name: k8s-is-fun.mtv.corp.google.com
ID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO
WARNING: No swap limit support
```
<!--
With kubectl:
-->
使用 kubectl 命令:
```shell
$ kubectl cluster-info
Kubernetes master is running at https://108.59.85.141
KubeDNS is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/kube-dns/proxy
KubeUI is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/kube-ui/proxy
Grafana is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
Heapster is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy
InfluxDB is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
```
@ -0,0 +1,9 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
@ -0,0 +1,15 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
@ -0,0 +1,100 @@
---
title: JSONPath 支持
cn-approvers:
- rootsongjc
cn-reviewers:
- rootsongjc
---
<!--
JSONPath template is composed of JSONPath expressions enclosed by {}.
And we add three functions in addition to the original JSONPath syntax:
-->
JSONPath 模板由 {} 包起来的 JSONPath 表达式组成。
除了原始的 JSONPath 语法之外,我们还添加了三个函数:
<!--
1. The `$` operator is optional since the expression always starts from the root object by default.
2. We can use `""` to quote text inside JSONPath expressions.
3. We can use `range` operator to iterate lists.
The result object is printed as its String() function.
Given the input:
-->
1. `$` 运算符是可选的,因为表达式默认情况下始终从根对象开始。
2. 我们可以使用 `""` 来引用 JSONPath 表达式中的文本。
3. 我们可以使用 `range` 运算符来迭代列表。
结果对象使用 String() 函数打印。
给定输入:
```json
{
  "kind": "List",
  "items":[
    {
      "kind":"None",
      "metadata":{"name":"127.0.0.1"},
      "status":{
        "capacity":{"cpu":"4"},
        "addresses":[{"type": "LegacyHostIP", "address":"127.0.0.1"}]
      }
    },
    {
      "kind":"None",
      "metadata":{"name":"127.0.0.2"},
      "status":{
        "capacity":{"cpu":"8"},
        "addresses":[
          {"type": "LegacyHostIP", "address":"127.0.0.2"},
          {"type": "another", "address":"127.0.0.3"}
        ]
      }
    }
  ],
  "users":[
    {
      "name": "myself",
      "user": {}
    },
    {
      "name": "e2e",
      "user": {"username": "admin", "password": "secret"}
    }
  ]
}
```
<!--
| Function | Description | Example | Result |
| ----------------- | ------------------------- | ---------------------------------------- | ---------------------------------------- |
| text | the plain text | kind is {.kind} | kind is List |
| @ | the current object | {@} | the same as input |
| . or [] | child operator | {.kind} or {['kind']} | List |
| .. | recursive descent | {..name} | 127.0.0.1 127.0.0.2 myself e2e |
| * | wildcard. Get all objects | {.items[*].metadata.name} | [127.0.0.1 127.0.0.2] |
| [start:end :step] | subscript operator | {.users[0].name} | myself |
| [,] | union operator | {.items[*]['metadata.name', 'status.capacity']} | 127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8] |
| ?() | filter | {.users[?(@.name=="e2e")].user.password} | secret |
| range, end | iterate list | {range .items[*]}[{.metadata.name}, {.status.capacity}] {end} | [127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]] |
| "" | quote interpreted string | {range .items[*]}{.metadata.name}{"\t"}{end} | 127.0.0.1 127.0.0.2 |
-->
| 函数 | 描述 | 示例 | 结果 |
| ----------------- | ---------- | ---------------------------------------- | ---------------------------------------- |
| text | 纯文本 | kind is {.kind} | kind is List |
| @ | 当前对象 | {@} | 与输入相同 |
| . 或 [] | 子运算符 | {.kind} 或者 {['kind']} | List |
| .. | 递归下降 | {..name} | 127.0.0.1 127.0.0.2 myself e2e |
| * | 通配符,获取所有对象 | {.items[*].metadata.name} | [127.0.0.1 127.0.0.2] |
| [start:end :step] | 下标运算符 | {.users[0].name} | myself |
| [,] | 并集运算符 | {.items[*]['metadata.name', 'status.capacity']} | 127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8] |
| ?() | 过滤 | {.users[?(@.name=="e2e")].user.password} | secret |
| range, end | 迭代列表 | {range .items[*]}[{.metadata.name}, {.status.capacity}] {end} | [127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]] |
| "" | 引用解释执行字符串 | {range .items[*]}{.metadata.name}{"\t"}{end} | 127.0.0.1 127.0.0.2 |
@ -0,0 +1,49 @@
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: redis
    redis-sentinel: "true"
    role: master
  name: redis-master
spec:
  containers:
  - name: master
    image: kubernetes/redis:v1
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: "0.5"
    volumeMounts:
    - mountPath: /redis-master-data
      name: data
  - name: sentinel
    image: kubernetes/redis:v1
    env:
    - name: SENTINEL
      value: "true"
    ports:
    - containerPort: 26379
  volumes:
  - name: data
    emptyDir: {}
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: redis-proxy
    role: proxy
  name: redis-proxy
spec:
  containers:
  - name: proxy
    image: kubernetes/redis-proxy:v1
    ports:
    - containerPort: 6379
      name: api
@ -0,0 +1,16 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80
@ -0,0 +1,29 @@
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
@ -0,0 +1,16 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
@ -0,0 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io/index.html
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
@ -0,0 +1,20 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          preStop:
            exec:
              # SIGTERM triggers a quick exit; gracefully terminate instead
              command: ["/usr/sbin/nginx","-s","quit"]
@ -0,0 +1,22 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            # Path to probe; should be cheap, but representative of typical behavior
            path: /index.html
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1
@ -0,0 +1,43 @@
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      containers:
      - name: nginxhttps
        image: bprashanth/nginxhttps:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: pod-w-message
spec:
  containers:
  - name: messager
    image: "ubuntu:14.04"
    command: ["/bin/sh","-c"]
    args: ["sleep 60 && /bin/echo Sleep expired > /dev/termination-log"]
@ -0,0 +1,12 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
@ -0,0 +1,24 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  template:
    metadata:
      labels:
        app: redis
        tier: backend
    spec:
      # Provision a fresh volume for the pod
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: redis
        image: kubernetes/redis:v1
        ports:
        - containerPort: 6379
        # Mount the volume into the pod
        volumeMounts:
        - mountPath: /redis-master-data
          name: data # must match the name of the volume, above
@ -0,0 +1,27 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  template:
    metadata:
      labels:
        app: redis
        tier: backend
    spec:
      containers:
      - name: redis
        image: kubernetes/redis:v1
        ports:
        - containerPort: 80
        resources:
          limits:
            # cpu units are cores
            cpu: 500m
            # memory units are bytes
            memory: 64Mi
          requests:
            # cpu units are cores
            cpu: 500m
            # memory units are bytes
            memory: 64Mi
@ -0,0 +1,27 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  template:
    metadata:
      labels:
        app: redis
        tier: backend
    spec:
      volumes:
      - name: data
        emptyDir: {}
      - name: supersecret # The "mysecret" secret populates this "supersecret" volume.
        secret:
          secretName: mysecret
      containers:
      - name: redis
        image: kubernetes/redis:v1
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /var/run/secrets/super # Mount the "supersecret" volume into the pod.
          name: supersecret
@ -0,0 +1,17 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80