Merge pull request #1 from kubernetes/master

update from upstream
pull/25093/head
DangHT 2020-11-18 10:47:46 +08:00 committed by GitHub
commit 414266968a
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
45 changed files with 2059 additions and 1296 deletions

.github/OWNERS

@ -0,0 +1,7 @@
# See the OWNERS docs at https://go.k8s.io/owners
reviewers:
- sig-docs-en-reviews # Defined in OWNERS_ALIASES
approvers:
- sig-docs-en-owners # Defined in OWNERS_ALIASES

.github/workflows/OWNERS

@ -0,0 +1,11 @@
# See the OWNERS docs at https://go.k8s.io/owners
# When modifying this file, consider the security implications of
# allowing listed reviewers / approvals to modify or remove any
# configured GitHub Actions.
reviewers:
- sig-docs-leads
approvers:
- sig-docs-leads


@ -157,7 +157,7 @@ github_repo = "https://github.com/kubernetes/website"
# param for displaying an announcement block on every page.
# See /i18n/en.toml for message text and title.
announcement = true
announcement_bg = "#3f0374" # choose a dark color text is white
announcement_bg = "#3d4cb7" # choose a dark color text is white
#Searching
k8s_search = true


@ -8,7 +8,7 @@ sitemap:
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
[Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) is an open-source system for automating deployment, scaling, and management of containerized applications.
[Kubernetes]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}), also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon [15 years of experience of running production workloads at Google](http://queue.acm.org/detail.cfm?id=2898444), combined with best-of-breed ideas and practices from the community.
{{% /blocks/feature %}}
@ -28,7 +28,7 @@ Whether testing locally or running a global enterprise, Kubernetes flexibility g
{{% /blocks/feature %}}
{{% blocks/feature image="suitcase" %}}
#### Run Anywhere
#### Run K8s Anywhere
Kubernetes is open source giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you.


@ -142,7 +142,7 @@ The `restartPolicy` applies to all containers in the Pod. `restartPolicy` only
refers to restarts of the containers by the kubelet on the same node. After containers
in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s,
40s, …), that is capped at five minutes. Once a container has executed for 10 minutes
without any problems, the kubelet resets the restart backoff timer forthat container.
without any problems, the kubelet resets the restart backoff timer for that container.
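The back-off schedule described above can be sketched numerically; this is an illustrative shell loop (not kubelet code) that reproduces the doubling-with-cap behaviour:

```sh
# Illustrative only: the kubelet's restart back-off schedule,
# doubling from 10s and capping at five minutes (300s).
delay=10
for attempt in 1 2 3 4 5 6 7; do
  echo "restart ${attempt}: wait ${delay}s"
  delay=$(( delay * 2 ))
  if [ "$delay" -gt 300 ]; then
    delay=300
  fi
done
# waits printed: 10, 20, 40, 80, 160, 300, 300
```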
## Pod conditions


@ -1,6 +1,6 @@
---
content_type: concept
title: Contribute to Kubernetes docs
title: Contribute to K8s docs
linktitle: Contribute
main_menu: true
no_list: true
@ -8,7 +8,7 @@ weight: 80
card:
name: contribute
weight: 10
title: Start contributing
title: Start contributing to K8s
---
<!-- overview -->


@ -32,7 +32,7 @@ cards:
button: "View Tutorials"
button_path: "/docs/tutorials"
- name: setup
title: "Set up a cluster"
title: "Set up a K8s cluster"
description: "Get Kubernetes running based on your resources and needs."
button: "Set up Kubernetes"
button_path: "/docs/setup"
@ -57,7 +57,7 @@ cards:
button: Contribute to the docs
button_path: /docs/contribute
- name: release-notes
title: Release Notes
title: K8s Release Notes
description: If you are installing Kubernetes or upgrading to the newest version, refer to the current release notes.
button: "Download Kubernetes"
button_path: "/docs/setup/release/notes"


@ -138,12 +138,11 @@ different Kubernetes components.
| `RuntimeClass` | `true` | Beta | 1.14 | |
| `SCTPSupport` | `false` | Alpha | 1.12 | 1.18 |
| `SCTPSupport` | `true` | Beta | 1.19 | |
| `ServiceAppProtocol` | `false` | Alpha | 1.18 | 1.18 |
| `ServiceAppProtocol` | `true` | Beta | 1.19 | |
| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
| `ServerSideApply` | `true` | Beta | 1.16 | |
| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | |
| `ServiceAppProtocol` | `false` | Alpha | 1.18 | |
| `ServiceAppProtocol` | `false` | Alpha | 1.18 | 1.18 |
| `ServiceAppProtocol` | `true` | Beta | 1.19 | |
| `ServiceNodeExclusion` | `false` | Alpha | 1.8 | 1.18 |
| `ServiceNodeExclusion` | `true` | Beta | 1.19 | |
| `ServiceTopology` | `false` | Alpha | 1.17 | |
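For context, each gate in this table is toggled on a component through its `--feature-gates` flag, which takes a comma-separated list of `Name=bool` pairs (the flag is real; the snippet below only illustrates the value format and does not start any component):

```sh
# Illustrative: the value format accepted by --feature-gates on
# components such as kube-apiserver or the kubelet.
gates="SCTPSupport=true,ServiceAppProtocol=true,ServiceNodeExclusion=false"
# Show one gate per line, as Name=bool pairs:
echo "$gates" | tr ',' '\n'
```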


@ -25,10 +25,11 @@ daemons installed:
## Running Node Conformance Test
To run the node conformance test, perform the following steps:
1. Point your Kubelet to localhost `--api-servers="http://localhost:8080"`,
because the test framework starts a local master to test Kubelet. There are some
other Kubelet flags you may care:
1. Work out the value of the `--kubeconfig` option for the kubelet; for example:
`--kubeconfig=/var/lib/kubelet/config.yaml`.
Because the test framework starts a local control plane to test the kubelet,
use `http://localhost:8080` as the URL of the API server.
There are some other kubelet command line parameters you may want to use:
* `--pod-cidr`: If you are using `kubenet`, you should specify an arbitrary CIDR
to Kubelet, for example `--pod-cidr=10.180.0.0/24`.
* `--cloud-provider`: If you are using `--cloud-provider=gce`, you should


@ -50,7 +50,7 @@ this example.
1. Configure the kubelet to be a service manager for etcd.
{{< note >}}You must do this on every host where etcd should be running.{{< /note >}}
Since etcd was created first, you must override the service priority by creating a new unit file
that has higher precedence than the kubeadm-provided kubelet unit file.
@ -68,6 +68,12 @@ this example.
systemctl restart kubelet
```
Check the kubelet status to ensure it is running.
```sh
systemctl status kubelet
```
1. Create configuration files for kubeadm.
Generate one kubeadm configuration file for each host that will have an etcd


@ -140,7 +140,7 @@ curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/dow
### Joining a Windows worker node
{{< note >}}
You must install the `Containers` feature and install Docker. Instructions
to do so are available at [Install Docker Engine - Enterprise on Windows Servers](https://docs.mirantis.com/docker-enterprise/v3.1/dockeree-products/docker-engine-enterprise/dee-windows.html).
to do so are available at [Install Docker Engine - Enterprise on Windows Servers](https://hub.docker.com/editions/enterprise/docker-ee-server-windows).
{{< /note >}}
{{< note >}}


@ -19,7 +19,7 @@ Cloud infrastructure technologies let you run Kubernetes on public, private, and
Kubernetes believes in automated, API-driven infrastructure without tight coupling between
components.
-->
Cloud infrastructure technologies let you run Kubernetes on public, private, or hybrid clouds.
Kubernetes believes in automated, API-driven infrastructure without tight coupling between components.
{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="The cloud-controller-manager component is">}}


@ -102,7 +102,7 @@ Before choosing a guide, here are some considerations:
* The [Certificates](/zh/docs/concepts/cluster-administration/certificates/) section describes the steps to generate certificates using different tool chains.
* [Kubernetes Container Environment](/zh/docs/concepts/containers/container-environment/) describes the environment for kubelet-managed containers on a Kubernetes node.
* [Controlling Access to the Kubernetes API](/zh/docs/reference/access-authn-authz/controlling-access/) describes how to set up permissions for users and service accounts.
* [Controlling Access to the Kubernetes API](/zh/docs/concepts/security/controlling-access/) describes how to set up permissions for users and service accounts.
* [Authentication](/docs/reference/access-authn-authz/authentication/) explains authentication in Kubernetes, including the many authentication options.
* [Authorization](/zh/docs/reference/access-authn-authz/authorization/) is separate from authentication and controls how HTTP requests are handled.
* [Using Admission Controllers](/zh/docs/reference/access-authn-authz/admission-controllers) explains plug-ins that intercept requests to the Kubernetes API server after authentication and authorization.


@ -5,6 +5,8 @@ content_type: concept
<!-- overview -->
{{% thirdparty-content %}}
<!--
Add-ons extend the functionality of Kubernetes.
@ -34,6 +36,8 @@ Add-ons extend the functionality of Kubernetes.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution supporting multiple networking in Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) is a networking provider for Kubernetes based on [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), a virtual networking implementation that came out of the Open vSwitch (OVS) project. OVN-Kubernetes provides an overlay based networking implementation for Kubernetes, including an OVS based implementation of load balancing and network policy.
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
@ -63,6 +67,15 @@ Add-ons extend the functionality of Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) is a multi-plugin that provides multiple network support in Kubernetes,
supporting all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel),
as well as SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) is a networking provider for Kubernetes based on
[OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), a virtual networking implementation that came out of
the Open vSwitch (OVS) project.
OVN-Kubernetes provides an overlay-based networking implementation for Kubernetes, including an OVS-based
implementation of load balancing and network policy.
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is an OVN-based CNI
controller plugin that provides cloud-native Service Function Chaining (SFC), multiple OVN overlay
networks, dynamic subnet creation, dynamic creation of virtual networks, VLAN provider networks, and direct
provider networks, and is pluggable with other multi-network plugins; it is ideal for edge-based, multi-cluster cloud-native workloads.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP)
provides integration between VMware NSX-T and container orchestrators (such as Kubernetes), as well as integration between NSX-T and
container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.


@ -853,7 +853,7 @@ You can fetch like this:
<!--
In addition to the queued requests,
the output includeas one phantom line for each priority level that is exempt from limitation.
the output includes one phantom line for each priority level that is exempt from limitation.
-->
For each priority level, the output also includes one phantom line corresponding to exemption from limitation.
@ -881,4 +881,4 @@ You can make suggestions and feature requests via
-->
For background information on the design details of API Priority and Fairness,
see the [enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190228-priority-and-fairness.md).
You can make suggestions and feature requests via [SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).


@ -14,9 +14,9 @@ weight: 60
<!-- overview -->
<!--
Application and systems logs can help you understand what is happening inside your cluster. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
-->
Application and system logs can help you understand what is happening inside your cluster. The logs are particularly useful for debugging problems and monitoring cluster activity.
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity.
Most modern applications have some kind of logging mechanism; likewise, most container engines are designed to support some kind of logging.
For containerized applications, the easiest and most popular logging method is writing to the standard output and standard error streams.
@ -45,14 +45,13 @@ the description of how logs are stored and handled on the node to be useful.
In this section, you can see an example of basic logging in Kubernetes that
outputs data to the standard output stream. This demonstration uses
a [pod specification](/examples/debug/counter-pod.yaml) with
a container that writes some text to standard output once per second.
a pod specification with a container that writes some text to standard output
once per second.
-->
## Basic logging in Kubernetes
In this section, you can see an example of basic logging in Kubernetes, where the data is written to the standard output.
This demonstration uses a specific [pod specification](/examples/debug/counter-pod.yaml) to create a container
that writes some text to standard output once per second.
The example here is a pod specification with a container that writes some text to standard output once per second.
{{< codenew file="debug/counter-pod.yaml" >}}


@ -140,6 +140,8 @@ imply any preferential status.
The networking technologies below are listed alphabetically; the order itself does not imply any preferential status.
{{% thirdparty-content %}}
<!--
### ACI
@ -267,6 +269,19 @@ Gartner considers BCF to be highly visionary.
A BCF on-premises deployment of Kubernetes (which includes Kubernetes, DC/OS, and VMware running on multiple
DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
<!--
### Calico
[Calico](https://docs.projectcalico.org/) is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports multiple data planes including: a pure Linux eBPF dataplane, a standard Linux networking dataplane, and a Windows HNS dataplane. Calico provides a full networking stack but can also be used in conjunction with [cloud provider CNIs](https://docs.projectcalico.org/networking/determine-best-networking#calico-compatible-cni-plugins-and-cloud-provider-integrations) to provide network policy enforcement.
-->
### Calico
[Calico](https://docs.projectcalico.org/) is an open source networking and network security solution
for containers, virtual machines, and native host-based workloads.
Calico supports multiple data planes, including: a pure Linux eBPF dataplane, a standard Linux networking dataplane,
and a Windows HNS dataplane. Calico provides a full networking stack, but can also be used in conjunction with
[cloud provider CNIs](https://docs.projectcalico.org/networking/determine-best-networking#calico-compatible-cni-plugins-and-cloud-provider-integrations) to provide network policy enforcement.
<!--
### Cilium
@ -637,27 +652,6 @@ OVN is an open source network virtualization solution developed by the Open vSwitch community.
It lets you create logical switches, logical routers, stateful ACLs, load balancers, and so on to build different virtual networking topologies.
The project has a Kubernetes-specific plugin and documentation: [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes).
<!--
### Project Calico
[Project Calico](https://docs.projectcalico.org/) is an open source container networking provider and network policy engine.
Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet, for both Linux (open source) and Windows (proprietary - available from [Tigera](https://www.tigera.io/essentials/)). Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent based network security policy for Kubernetes pods via its distributed firewall.
Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka [canal](https://github.com/tigera/canal), or native GCE, AWS or Azure networking.
-->
### Project Calico {#project-calico}
[Project Calico](https://docs.projectcalico.org/) is an open source container networking provider and network policy engine.
Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet,
for both Linux (open source) and Windows (proprietary, available from [Tigera](https://www.tigera.io/essentials/)).
Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
Calico also provides fine-grained, intent-based network security policy for Kubernetes pods via its distributed firewall.
Calico can also be run in policy enforcement mode in conjunction with other networking solutions
(such as Flannel, aka [canal](https://github.com/tigera/canal), or native GCE, AWS, or Azure networking).
<!--
### Romana


@ -174,7 +174,7 @@ The kubelet collects accelerator metrics through cAdvisor. To collect these metr
The responsibility for collecting accelerator metrics now belongs to the vendor rather than the kubelet. Vendors must provide a container that collects metrics and exposes them to the metrics service (for example, Prometheus).
The [`DisableAcceleratorUsageMetrics` feature gate](/docs/references/command-line-tools-reference/feature-gate.md#feature-gates-for-alpha-or-beta-features:~:text= DisableAcceleratorUsageMetrics,-false) disables metrics collected by the kubelet, with a [timeline for enabling this feature by default](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria).
The [`DisableAcceleratorUsageMetrics` feature gate](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features:~:text= DisableAcceleratorUsageMetrics,-false) disables metrics collected by the kubelet, with a [timeline for enabling this feature by default](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria).
-->
## Disabling accelerator metrics
@ -185,7 +185,9 @@ that the kubelet keeps open on the driver. This means that in order to perform infrastructure
The responsibility for collecting accelerator metrics now belongs to the vendor rather than the kubelet. Vendors must provide a container that collects metrics
and exposes them to the metrics service (for example, Prometheus).
The [`DisableAcceleratorUsageMetrics` feature gate](/zh/docs/references/command-line-tools-reference/feature-gate.md#feature-gates-for-alpha-or-beta-features:~:text= DisableAcceleratorUsageMetrics,-false) disables metrics collected by the kubelet, with a [timeline for enabling this feature by default](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria).
The [`DisableAcceleratorUsageMetrics` feature gate](/zh/docs/references/command-line-tools-reference/feature-gate.md#feature-gates-for-alpha-or-beta-features:~:text= DisableAcceleratorUsageMetrics,-false)
disables metrics collected by the kubelet.
There is also a [plan for when this feature will be enabled by default](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria).
<!--
## Component metrics
@ -233,4 +235,4 @@ cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
-->
* Read about the [Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) for metrics
* See the list of [stable Kubernetes metrics](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml)
* Read about the [Kubernetes deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)


@ -2,4 +2,6 @@
title: Overview
weight: 20
description: Get a high-level outline of Kubernetes and the components it is built from.
sitemap:
priority: 0.9
---


@ -3,7 +3,9 @@ title: Kubernetes API
content_type: concept
weight: 30
description: >
The Kubernetes API lets you query and manipulate the state of objects in Kubernetes.
The core of the Kubernetes control plane is the API server and the HTTP API that it exposes.
Users, the different parts of your cluster, and external components all communicate with one another through the API server.
card:
name: concepts
weight: 30
@ -14,13 +16,17 @@ card:
<!--
The core of Kubernetes' {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
is the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}. The API server
exposes an HTTP API that lets end users, different parts of your cluster, and external components
communicate with one another.
exposes an HTTP API that lets end users, different parts of your cluster, and
external components communicate with one another.
The Kubernetes API lets you query and manipulate the state of objects in the Kubernetes API
(for example: Pods, Namespaces, ConfigMaps, and Events).
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
Most operations can be performed through the
[kubectl](/docs/reference/kubectl/overview/) command-line interface or other
command-line tools, such as
[kubeadm](/docs/reference/setup-tools/kubeadm/), which in turn use the
API. However, you can also access the API directly using REST calls.
-->
The core of the Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
is the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}.
@ -29,38 +35,18 @@ The API server exposes an HTTP API for users, different parts of the cluster, and
The Kubernetes API lets you query and manipulate the state of objects in the Kubernetes API
(for example: Pods, Namespaces, ConfigMaps, and Events).
API endpoints, resource types, and samples are described in the [API Reference](/zh/docs/reference/kubernetes-api/).
<!-- body -->
Most operations can be performed through the [kubectl](/zh/docs/reference/kubectl/overview/) command-line interface
or other command-line tools such as [kubeadm](/zh/docs/reference/setup-tools/kubeadm/),
which in turn use the API. However, you can also access the API directly using REST calls.
<!--
## API changes
Any system that is successful needs to grow and change as new use cases emerge or existing ones change.
Therefore, Kubernetes has design features to allow the Kubernetes API to continuously change and grow.
The Kubernetes project aims to _not_ break compatibility with existing clients, and to maintain that
compatibility for a length of time so that other projects have an opportunity to adapt.
In general, new API resources and new resource fields can be added often and frequently.
Elimination of resources or fields requires following the
[API deprecation policy](/docs/reference/using-api/deprecation-policy/).
What constitutes a compatible change, and how to change the API, are detailed in
[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
Consider using one of the [client libraries](/docs/reference/using-api/client-libraries/)
if you are writing an application using the Kubernetes API.
-->
## API changes {#api-changes}
If you are writing an application that accesses the Kubernetes API, consider using one of the
[client libraries](/zh/docs/reference/using-api/client-libraries/).
Any system that is successful needs to grow and change as new use cases emerge or existing ones change.
Therefore, Kubernetes is designed to let the Kubernetes API continuously change and grow.
The Kubernetes project aims _not_ to break compatibility with existing clients, and to maintain that
compatibility for a length of time so that other projects have an opportunity to adapt.
In general, new API resources and new resource fields can be added often and frequently.
Elimination of resources or fields requires following the
[API deprecation policy](/docs/reference/using-api/deprecation-policy/).
What constitutes a compatible change, and how to change the API, are detailed in
[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
<!-- body -->
<!--
## OpenAPI specification {#api-specification}
@ -70,7 +56,6 @@ Complete API details are documented using [OpenAPI](https://www.openapis.org/).
The Kubernetes API server serves an OpenAPI spec via the `/openapi/v2` endpoint.
You can request the response format using request headers as follows:
-->
## OpenAPI specification {#api-specification}
Complete API details are documented using [OpenAPI](https://www.openapis.org/).
@ -142,204 +127,137 @@ The Kubernetes API server serves an OpenAPI spec via the `/openapi/v2` endpoint.
</table>
<!--
Kubernetes implements an alternative Protobuf based serialization format for the API that is primarily intended for intra-cluster communication, documented in the [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) and the IDL files for each schema are located in the Go packages that define the API objects.
Kubernetes implements an alternative Protobuf based serialization format that
is primarily intended for intra-cluster communication. For more information
about this format, see the [Kubernetes Protobuf serialization](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) design proposal and the
Interface Definition Language (IDL) files for each schema located in the Go
packages that define the API objects.
-->
Kubernetes implements an alternative Protobuf-based serialization format for the API, primarily intended for intra-cluster communication.
The relevant documentation is in the [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md).
The IDL files for each schema are located in the Go packages that define the API objects.
Kubernetes implements an alternative Protobuf-based serialization format, primarily intended for intra-cluster communication.
For more information about this format, see the
[Kubernetes Protobuf serialization](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md)
design proposal. The Interface Definition Language (IDL) files for each schema are located in the Go packages that define the API objects.
<!--
## API versioning
## API changes
To make it easier to eliminate fields or restructure resource representations, Kubernetes supports
multiple API versions, each at a different API path, such as `/api/v1` or
`/apis/rbac.authorization.k8s.io/v1alpha1`.
Any system that is successful needs to grow and change as new use cases emerge or existing ones change.
Therefore, Kubernetes has designed its features to allow the Kubernetes API to continuously change and grow.
The Kubernetes project aims to _not_ break compatibility with existing clients, and to maintain that
compatibility for a length of time so that other projects have an opportunity to adapt.
In general, new API resources and new resource fields can be added often and frequently.
Elimination of resources or fields requires following the
[API deprecation policy](/docs/reference/using-api/deprecation-policy/).
What constitutes a compatible change, and how to change the API, are detailed in
[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
-->
## API versioning {#api-versioning}
## API changes {#api-changes}
Any system that is successful needs to grow and change as new use cases emerge or existing ones change.
Therefore, Kubernetes is designed to let the Kubernetes API continuously change and grow.
The Kubernetes project aims _not_ to break compatibility with existing clients, and to maintain that
compatibility for a length of time so that other projects have an opportunity to adapt.
In general, new API resources and new resource fields can be added often and frequently.
Elimination of resources or fields requires following the
[API deprecation policy](/zh/docs/reference/using-api/deprecation-policy/).
What constitutes a compatible change, and how to change the API, are detailed in
[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
<!--
## API groups and versioning
To make it easier to eliminate fields or restructure resource representations,
Kubernetes supports multiple API versions, each at a different API path, such
as `/api/v1` or `/apis/rbac.authorization.k8s.io/v1alpha1`.
-->
## API groups and versioning {#api-groups-and-versioning}
To make it easier to eliminate fields or restructure resource representations,
Kubernetes supports multiple API versions, each at a different API path, such
as `/api/v1` or `/apis/rbac.authorization.k8s.io/v1alpha1`.
<!--
Versioning is done at the API level rather than at the resource or field level to ensure that the
API presents a clear, consistent view of system resources and behavior, and to enable controlling
access to end-of-life and/or experimental APIs.
The JSON and Protobuf serialization schemas follow the same guidelines for schema changes - all descriptions below cover both formats.
Versioning is done at the API level rather than at the resource or field level
to ensure that the API presents a clear, consistent view of system resources
and behavior, and to enable controlling access to end-of-life and/or
experimental APIs.
-->
Versioning is done at the API level rather than at the resource or field level to ensure that the API
presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-life and/or experimental APIs.
The JSON and Protobuf serialization schemas follow the same guidelines for schema changes; all descriptions below cover both formats.
<!--
To make it easier to evolve and to extend its API, Kubernetes implements
[API groups](/docs/reference/using-api/#api-groups) that can be
[enabled or disabled](/docs/reference/using-api/#enabling-or-disabling).
-->
To make it easier to evolve and to extend its API, Kubernetes implements
[API groups](/docs/reference/using-api/#api-groups) that can be
[enabled or disabled](/zh/docs/reference/using-api/#enabling-or-disabling).
<!--
Note that API versioning and Software versioning are only indirectly related. The
[Kubernetes Release Versioning](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md)
proposal describes the relationship between API versioning and software versioning.
Different API versions imply different levels of stability and support. The criteria for each level are described
in more detail in the
[API Changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions)
documentation. They are summarized here:
API resources are distinguished by their API group, resource type, namespace
(for namespaced resources), and name. The API server may serve the same
underlying data through multiple API version and handle the conversion between
API versions transparently. All these different versions are actually
representations of the same resource. For example, suppose there are two
versions `v1` and `v1beta1` for the same resource. An object created by the
`v1beta1` version can then be read, updated, and deleted by either the
`v1beta1` or the `v1` versions.
-->
Note that API versioning and software versioning are only indirectly related.
The [Kubernetes Release Versioning](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md)
proposal describes the relationship between API versioning and software versioning.
Different API version names imply different levels of software stability and support.
The criteria for each level are described in more detail in the
[API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions).
They are summarized here:
API resources are distinguished by their API group, resource type, namespace
(for namespaced resources), and name. The API server may serve the same
underlying data through multiple API versions and handle the conversion between
API versions transparently. All these different versions are actually
representations of the same resource. For example, suppose there are two
versions, `v1` and `v1beta1`, for the same resource. An object created by the
`v1beta1` version can then be read, updated, and deleted using either the
`v1beta1` or the `v1` version.
<!--
- Alpha level:
- The version names contain `alpha` (e.g. `v1alpha1`).
- May be buggy. Enabling the feature may expose bugs. Disabled by default.
- Support for feature may be dropped at any time without notice.
- The API may change in incompatible ways in a later software release without notice.
- Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
Refer to [API versions reference](/docs/reference/using-api/#api-versioning)
for more details on the API version level definitions.
-->
- Alpha level:
  - The version names contain `alpha` (for example, `v1alpha1`).
  - The API may be buggy. Enabling the feature may expose bugs. Disabled by default.
  - Support for a feature may be dropped at any time without notice.
  - The API may change in incompatible ways in a later software release without notice.
  - Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
Refer to the [API versions reference](/zh/docs/reference/using-api/#api-versioning)
for more details on the API version level definitions.
<!--
- Beta level:
- The version names contain `beta` (e.g. `v2beta3`).
- Code is well tested. Enabling the feature is considered safe. Enabled by default.
- Support for the overall feature will not be dropped, though details may change.
- The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens,
we will provide instructions for migrating to the next version. This may require deleting, editing, and re-creating
API objects. The editing process may require some thought. This may require downtime for applications that rely on the feature.
- Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have
multiple clusters which can be upgraded independently, you may be able to relax this restriction.
- **Please do try our beta features and give feedback on them! Once they exit beta, it may not be practical for us to make more changes.**
## API Extension
The Kubernetes API can be extended in one of two ways:
-->
- Beta level:
  - The version names contain `beta` (for example, `v2beta3`).
  - The code is well tested. Enabling the feature is considered safe; the feature is enabled by default.
  - Support for the overall feature will not be dropped, though details may change.
  - The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release.
    When this happens, instructions for migrating to the next version will be provided.
    Migration may require deleting, editing, and re-creating API objects.
    The editing process may require some thought.
    Applications that rely on the feature may need downtime during the migration.
  - Recommended for only non-business-critical uses because of the potential for incompatible changes in subsequent releases.
    If you have multiple clusters that can be upgraded independently, you may be able to relax this restriction.
  - **Please do try the beta features and give feedback on them! Once they exit beta, it may not be practical to make more changes.**
<!--
- Stable level:
- The version name is `vX` where `X` is an integer.
- Stable versions of features will appear in released software for many subsequent versions.
-->
- 稳定级别:
- 版本名称是 `vX`,其中 `X` 是整数。
- 功能的稳定版本将出现在许多后续版本的发行软件中。
## API 扩展 {#api-extension}
有两种途径来扩展 Kubernetes API
<!--
## API groups
To make it easier to extend the Kubernetes API, Kubernetes implemented [*API groups*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md).
The API group is specified in a REST path and in the `apiVersion` field of a serialized object.
1. [Custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
let you declaratively define how the API server should provide your chosen resource API.
1. You can also extend the Kubernetes API by implementing an
[aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
-->
## API 组 {#api-groups}
为了更容易地扩展 Kubernetes APIKubernetes 实现了
[*`API 组`*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md)。
API 组在 REST 路径和序列化对象的 `apiVersion` 字段中指定。
<!--
There are several API groups in a cluster:
1. The *core* group, also referred to as the *legacy* group, is at the REST path `/api/v1` and uses `apiVersion: v1`.
1. *Named* groups are at REST path `/apis/$GROUP_NAME/$VERSION`, and use `apiVersion: $GROUP_NAME/$VERSION`
(e.g. `apiVersion: batch/v1`). The Kubernetes [API reference](/docs/reference/kubernetes-api/) has a
full list of available API groups.
-->
集群中存在若干 API 组:
1. *核心Core* 组,也被称为 *遗留Legacy* 组,位于 REST 路径 `/api/v1`
   使用 `apiVersion: v1`
1. *命名Named* 组位于 REST 路径 `/apis/$GROUP_NAME/$VERSION`,使用
   `apiVersion: $GROUP_NAME/$VERSION`(例如 `apiVersion: batch/v1`)。
[Kubernetes API 参考](/zh/docs/reference/kubernetes-api/)中枚举了可用的 API 组的完整列表。
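下面用两个清单片段来对比核心组与命名组在 `apiVersion` 字段上的差异(示例中的对象名称仅用于演示):

```yaml
# 核心Core组资源REST 路径为 /api/v1apiVersion 中不包含组名
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod        # 示例名称
---
# 命名Named组资源REST 路径为 /apis/batch/v1
# apiVersion 形如 $GROUP_NAME/$VERSION
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-job        # 示例名称
```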
<!--
There are two paths to extending the API with [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/):
1. [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
lets you declaratively define how the API server should provide your chosen resource API.
1. You can also [implement your own extension API server](/docs/tasks/extend-kubernetes/setup-extension-api-server/)
and use the [aggregator](/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
to make it seamless for clients.
-->
有两种途径来扩展 Kubernetes API 以支持
[自定义资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
1. 使用 [CustomResourceDefinition](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
   你可以用声明式方式来定义 API 服务器如何提供你所选择的资源 API。
1. 你也可以选择[实现自己的扩展 API 服务器](/zh/docs/tasks/extend-kubernetes/setup-extension-api-server/)
   并使用[聚合器](/zh/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
   为客户端提供无缝的服务。
<!--
## Enabling or disabling API groups
Certain resources and API groups are enabled by default. They can be enabled or disabled by setting `--runtime-config`
on apiserver. `--runtime-config` accepts comma separated values. For example: to disable batch/v1, set
`--runtime-config=batch/v1=false`, to enable batch/v2alpha1, set `--runtime-config=batch/v2alpha1`.
The flag accepts comma separated set of key=value pairs describing runtime configuration of the apiserver.
Enabling or disabling groups or resources requires restarting apiserver and controller-manager
to pick up the `--runtime-config` changes.
-->
## 启用或禁用 API 组 {#enabling-or-disabling-api-groups}
某些资源和 API 组默认情况下处于启用状态。可以通过为 `kube-apiserver`
设置 `--runtime-config` 命令行选项来启用或禁用它们。
`--runtime-config` 接受逗号分隔的值。
例如:要禁用 `batch/v1`,设置 `--runtime-config=batch/v1=false`
要启用 `batch/v2alpha1`,设置 `--runtime-config=batch/v2alpha1`。
该标志接受逗号分隔的一组 `key=value` 键值对,用以描述 API 服务器的运行时配置。
{{< note >}}
启用或禁用组或资源需要重新启动 `kube-apiserver``kube-controller-manager`
来使得 `--runtime-config` 更改生效。
{{< /note >}}
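作为示意,下面给出一个假设的 `kube-apiserver` 启动参数片段,演示如何用 `--runtime-config` 同时禁用和启用 API 版本(其余启动参数从略):

```shell
# 禁用 batch/v1同时启用 batch/v2alpha1逗号分隔的 key=value 对)
kube-apiserver \
  --runtime-config=batch/v1=false,batch/v2alpha1=true \
  ...   # 其余启动参数从略
```

修改该标志后,需要重启 `kube-apiserver` 和 `kube-controller-manager` 才能使更改生效。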
<!--
## Persistence
Kubernetes stores its serialized state in terms of the API resources by writing them into
{{< glossary_tooltip term_id="etcd" >}}.
-->
## 持久性 {#persistence}
Kubernetes 通过将 API 资源的序列化状态写入 {{< glossary_tooltip term_id="etcd" >}} 来实现持久化存储。
1. 你可以使用[自定义资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
来以声明式方式定义 API 服务器如何提供你所选择的资源 API。
1. 你也可以选择实现自己的
[聚合层](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
来扩展 Kubernetes API。
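作为第一种途径的示意,下面是一个最小的 CustomResourceDefinition 清单草图(组名与资源名均为假设值):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # 名称必须为 <plural>.<group> 的形式
  name: crontabs.stable.example.com      # 假设的组名与资源名
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```

应用该清单之后API 服务器会在 `/apis/stable.example.com/v1` 路径下提供新的资源 API。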
## {{% heading "whatsnext" %}}
<!--
[Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) describes
how the cluster manages authentication and authorization for API access.
Overall API conventions are described in the
[API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#api-conventions)
document.
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
- Learn how to extend the Kubernetes API by adding your own
[CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).
- [Controlling Access To The Kubernetes API](/docs/concepts/security/controlling-access/) describes
how the cluster manages authentication and authorization for API access.
- Learn about API endpoints, resource types and samples by reading
[API Reference](/docs/reference/kubernetes-api/).
-->
* [控制 API 访问](/zh/docs/reference/access-authn-authz/controlling-access/)
描述了集群如何为 API 访问管理身份认证和权限判定;
* 总体的 API 约定描述位于 [API 约定](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md)中;
* API 末端、资源类型和示例等均在 [API 参考文档](/zh/docs/reference/kubernetes-api/)中描述
- 了解如何通过添加你自己的
[CustomResourceDefinition](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
来扩展 Kubernetes API。
- [控制 Kubernetes API 访问](/zh/docs/concepts/security/controlling-access/)
页面描述了集群如何针对 API 访问管理身份认证和鉴权。
- 通过阅读 [API 参考](/zh/docs/reference/kubernetes-api/)
了解 API 端点、资源类型以及示例。
@ -9,7 +9,6 @@ card:
weight: 10
---
<!--
---
reviewers:
- bgrant0607
- mikedanese
@ -19,7 +18,6 @@ weight: 10
card:
name: concepts
weight: 10
---
-->
<!-- overview -->
@ -33,18 +31,22 @@ This page is an overview of Kubernetes.
<!--
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
-->
Kubernetes 是一个可移植的、可扩展的开源平台用于管理容器化的工作负载和服务可促进声明式配置和自动化。Kubernetes 拥有一个庞大且快速增长的生态系统。Kubernetes 的服务、支持和工具广泛可用。
Kubernetes 是一个可移植的、可扩展的开源平台,用于管理容器化的工作负载和服务,可促进声明式配置和自动化。
Kubernetes 拥有一个庞大且快速增长的生态系统。Kubernetes 的服务、支持和工具广泛可用。
<!--
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a [decade and a half of experience that Google has with running production workloads at scale](https://research.google/pubs/pub43438), combined with best-of-breed ideas and practices from the community.
-->
名称 **Kubernetes** 源于希腊语,意为 "舵手" 或 "飞行员"。Google 在 2014 年开源了 Kubernetes 项目。Kubernetes 建立在 [Google 在大规模运行生产工作负载方面拥有十几年的经验](https://research.google/pubs/pub43438)的基础上,结合了社区中最好的想法和实践。
名称 **Kubernetes** 源于希腊语意为“舵手”或“飞行员”。Google 在 2014 年开源了 Kubernetes 项目。
Kubernetes 建立在 [Google 在大规模运行生产工作负载方面拥有十几年的经验](https://research.google/pubs/pub43438)
的基础上,结合了社区中最好的想法和实践。
<!--
## Going back in time
Let's take a look at why Kubernetes is so useful by going back in time.
-->
## 言归正传
## 时光回溯
让我们回顾一下为什么 Kubernetes 如此有用。
<!--
@ -54,24 +56,34 @@ Let's take a look at why Kubernetes is so useful by going back in time.
<!--
**Traditional deployment era:**
Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.
-->
**传统部署时代:**
早期,组织在物理服务器上运行应用程序。无法为物理服务器中的应用程序定义资源边界,这会导致资源分配问题。例如,如果在物理服务器上运行多个应用程序,则可能会出现一个应用程序占用大部分资源的情况,结果可能导致其他应用程序的性能下降。一种解决方案是在不同的物理服务器上运行每个应用程序,但是由于资源利用不足而无法扩展,并且组织维护许多物理服务器的成本很高。
早期,组织在物理服务器上运行应用程序。无法为物理服务器中的应用程序定义资源边界,这会导致资源分配问题。
例如,如果在物理服务器上运行多个应用程序,则可能会出现一个应用程序占用大部分资源的情况,
结果可能导致其他应用程序的性能下降。
一种解决方案是在不同的物理服务器上运行每个应用程序,但是由于资源利用不足而无法扩展,
并且组织维护许多物理服务器的成本很高。
<!--
**Virtualized deployment era:**
As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application.
-->
**虚拟化部署时代:**
作为解决方案,引入了虚拟化功能,它允许您在单个物理服务器的 CPU 上运行多个虚拟机VM。虚拟化功能允许应用程序在 VM 之间隔离,并提供安全级别,因为一个应用程序的信息不能被另一应用程序自由地访问。
作为解决方案,引入了虚拟化。虚拟化技术允许你在单个物理服务器的 CPU 上运行多个虚拟机VM
虚拟化允许应用程序在 VM 之间隔离,并提供一定程度的安全,因为一个应用程序的信息
不能被另一应用程序随意访问。
<!--
Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be added or updated easily, reduces hardware costs, and much more.
Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.
-->
因为虚拟化可以轻松地添加或更新应用程序、降低硬件成本等等,所以虚拟化可以更好地利用物理服务器中的资源,并可以实现更好的可伸缩性。
虚拟化技术能够更好地利用物理服务器上的资源,并且因为可轻松地添加或更新应用程序
而可以实现更好的可伸缩性,降低硬件成本等等。
每个 VM 是一台完整的计算机,在虚拟化硬件之上运行所有组件,包括其自己的操作系统。
@ -80,12 +92,15 @@ Each VM is a full machine running all the components, including its own operatin
Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
-->
**容器部署时代:**
容器类似于 VM但是它们具有轻量级的隔离属性可以在应用程序之间共享操作系统OS。因此容器被认为是轻量级的。容器与 VM 类似具有自己的文件系统、CPU、内存、进程空间等。由于它们与基础架构分离因此可以跨云和 OS 分发进行移植。
容器类似于 VM但是它们具有被放宽的隔离属性可以在应用程序之间共享操作系统OS
因此,容器被认为是轻量级的。容器与 VM 类似具有自己的文件系统、CPU、内存、进程空间等。
由于它们与基础架构分离,因此可以跨云和 OS 发行版本进行移植。
<!--
Containers are becoming popular because they have many benefits. Some of the container benefits are listed below:
-->
容器因具有许多优势而变得流行起来。下面列出容器的一些好处:
容器因具有许多优势而变得流行起来。下面列出的是容器的一些好处:
<!--
* Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
@ -100,13 +115,18 @@ Containers are becoming popular because they have many benefits. Some of the con
* Resource utilization: high efficiency and density.
-->
* 敏捷应用程序的创建和部署:与使用 VM 镜像相比,提高了容器镜像创建的简便性和效率。
* 持续开发、集成和部署:通过快速简单的回滚(由于镜像不可变性),提供可靠且频繁的容器镜像构建和部署。
* 关注开发与运维的分离:在构建/发布时而不是在部署时创建应用程序容器镜像,从而将应用程序与基础架构分离。
* 持续开发、集成和部署:通过快速简单的回滚(由于镜像不可变性),支持可靠且频繁的
容器镜像构建和部署。
* 关注开发与运维的分离:在构建/发布时而不是在部署时创建应用程序容器镜像,
从而将应用程序与基础架构分离。
* 可观察性不仅可以显示操作系统级别的信息和指标,还可以显示应用程序的运行状况和其他指标信号。
* 跨开发、测试和生产的环境一致性:在便携式计算机上与在云中相同地运行。
* 云和操作系统分发的可移植性:可在 Ubuntu、RHEL、CoreOS、本地、Google Kubernetes Engine 和其他任何地方运行。
* 以应用程序为中心的管理:提高抽象级别,从在虚拟硬件上运行 OS 到使用逻辑资源在 OS 上运行应用程序。
* 松散耦合、分布式、弹性、解放的微服务:应用程序被分解成较小的独立部分,并且可以动态部署和管理 - 而不是在一台大型单机上整体运行。
* 跨云和操作系统发行版本的可移植性:可在 Ubuntu、RHEL、CoreOS、本地、
Google Kubernetes Engine 和其他任何地方运行。
* 以应用程序为中心的管理:提高抽象级别,从在虚拟硬件上运行 OS 到使用逻辑资源在
OS 上运行应用程序。
* 松散耦合、分布式、弹性、解放的微服务:应用程序被分解成较小的独立部分,
并且可以动态部署和管理 - 而不是在一台大型单机上整体运行。
* 资源隔离:可预测的应用程序性能。
* 资源利用:高效率和高密度。
@ -118,59 +138,75 @@ Containers are becoming popular because they have many benefits. Some of the con
<!--
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system?
-->
容器是打包和运行应用程序的好方式。在生产环境中,您需要管理运行应用程序的容器,并确保不会停机。例如,如果一个容器发生故障,则需要启动另一个容器。如果系统处理此行为,会不会更容易?
容器是打包和运行应用程序的好方式。在生产环境中,你需要管理运行应用程序的容器,并确保不会停机。
例如,如果一个容器发生故障,则需要启动另一个容器。如果系统处理此行为,会不会更容易?
<!--
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of your scaling requirements, failover, deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
-->
这就是 Kubernetes 的救援方法Kubernetes 为您提供了一个可弹性运行分布式系统的框架。Kubernetes 会满足您的扩展要求、故障转移、部署模式等。例如Kubernetes 可以轻松管理系统的 Canary 部署。
这就是 Kubernetes 来解决这些问题的方法!
Kubernetes 为你提供了一个可弹性运行分布式系统的框架。
Kubernetes 会满足你的扩展要求、故障转移、部署模式等。
例如Kubernetes 可以轻松管理系统的 Canary 部署。
<!--
Kubernetes provides you with:
-->
Kubernetes 为提供:
Kubernetes 为提供:
<!--
* **Service discovery and load balancing**
Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
-->
* **服务发现和负载均衡**
Kubernetes 可以使用 DNS 名称或自己的 IP 地址公开容器如果到容器的流量很大Kubernetes 可以负载均衡并分配网络流量,从而使部署稳定。
* **服务发现和负载均衡**
Kubernetes 可以使用 DNS 名称或自己的 IP 地址公开容器,如果进入容器的流量很大,
Kubernetes 可以负载均衡并分配网络流量,从而使部署稳定。
<!--
* **Storage orchestration**
Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
-->
* **存储编排**
Kubernetes 允许您自动挂载您选择的存储系统,例如本地存储、公共云提供商等。
* **存储编排**
Kubernetes 允许你自动挂载你选择的存储系统,例如本地存储、公共云提供商等。
<!--
* **Automated rollouts and rollbacks**
You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
-->
* **自动部署和回滚**
您可以使用 Kubernetes 描述已部署容器的所需状态,它可以以受控的速率将实际状态更改为所需状态。例如,您可以自动化 Kubernetes 来为您的部署创建新容器,删除现有容器并将它们的所有资源用于新容器。
* **自动部署和回滚**
你可以使用 Kubernetes 描述已部署容器的所需状态,它可以以受控的速率将实际状态
更改为期望状态。例如,你可以自动化 Kubernetes 来为你的部署创建新容器,
删除现有容器并将它们的所有资源用于新容器。
<!--
* **Automatic bin packing**
Kubernetes allows you to specify how much CPU and memory (RAM) each container needs. When containers have resource requests specified, Kubernetes can make better decisions to manage the resources for containers.
-->
* **自动二进制打包**
Kubernetes 允许您指定每个容器所需 CPU 和内存RAM。当容器指定了资源请求时Kubernetes 可以做出更好的决策来管理容器的资源。
* **自动完成装箱计算**
Kubernetes 允许你指定每个容器所需 CPU 和内存RAM
当容器指定了资源请求时Kubernetes 可以做出更好的决策来管理容器的资源。
<!--
* **Self-healing**
Kubernetes restarts containers that fail, replaces containers, kills containers that dont respond to your user-defined health check, and doesnt advertise them to clients until they are ready to serve.
-->
* **自我修复**
Kubernetes 重新启动失败的容器、替换容器、杀死不响应用户定义的运行状况检查的容器,并且在准备好服务之前不将其通告给客户端。
* **自我修复**
Kubernetes 重新启动失败的容器、替换容器、杀死不响应用户定义的
运行状况检查的容器,并且在准备好服务之前不将其通告给客户端。
<!--
* **Secret and configuration management**
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
-->
* **密钥与配置管理**
Kubernetes 允许您存储和管理敏感信息例如密码、OAuth 令牌和 ssh 密钥。您可以在不重建容器镜像的情况下部署和更新密钥和应用程序配置,也无需在堆栈配置中暴露密钥。
* **密钥与配置管理**
Kubernetes 允许你存储和管理敏感信息例如密码、OAuth 令牌和 ssh 密钥。
你可以在不重建容器镜像的情况下部署和更新密钥和应用程序配置,也无需在堆栈配置中暴露密钥。
<!--
## What Kubernetes is not
@ -180,7 +216,11 @@ Kubernetes 允许您存储和管理敏感信息例如密码、OAuth 令牌和
<!--
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important.
-->
Kubernetes 不是传统的、包罗万象的 PaaS平台即服务系统。由于 Kubernetes 在容器级别而不是在硬件级别运行,因此它提供了 PaaS 产品共有的一些普遍适用的功能例如部署、扩展、负载均衡、日志记录和监视。但是Kubernetes 不是单一的默认解决方案是可选和可插拔的。Kubernetes 提供了构建开发人员平台的基础,但是在重要的地方保留了用户的选择和灵活性。
Kubernetes 不是传统的、包罗万象的 PaaS平台即服务系统。
由于 Kubernetes 在容器级别而不是在硬件级别运行,它提供了 PaaS 产品共有的一些普遍适用的功能,
例如部署、扩展、负载均衡、日志记录和监视。
但是Kubernetes 不是单体系统,默认解决方案都是可选和可插拔的。
Kubernetes 提供了构建开发人员平台的基础,但是在重要的地方保留了用户的选择和灵活性。
<!--
Kubernetes:
@ -192,22 +232,33 @@ Kubernetes
* Does not deploy source code and does not build your application. Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as technical requirements.
* Does not provide application-level services, such as middleware (for example, message buses), data-processing frameworks (for example, Spark), databases (for example, mysql), caches, nor cluster storage systems (for example, Ceph) as built-in services. Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through portable mechanisms, such as the Open Service Broker.
-->
* Kubernetes 不限制支持的应用程序类型。Kubernetes 旨在支持极其多种多样的工作负载,包括无状态、有状态和数据处理工作负载。如果应用程序可以在容器中运行,那么它应该可以在 Kubernetes 上很好地运行。
* Kubernetes 不部署源代码,也不构建您的应用程序。持续集成(CI)、交付和部署CI/CD工作流取决于组织的文化和偏好以及技术要求。
* Kubernetes 不提供应用程序级别的服务作为内置服务例如中间件例如消息中间件、数据处理框架例如Spark、数据库例如mysql、缓存、集群存储系统例如Ceph。这样的组件可以在 Kubernetes 上运行,并且/或者可以由运行在 Kubernetes 上的应用程序通过可移植机制(例如,[开放服务代理](https://openservicebrokerapi.org/))来访问。
* 不限制支持的应用程序类型。
Kubernetes 旨在支持极其多种多样的工作负载,包括无状态、有状态和数据处理工作负载。
如果应用程序可以在容器中运行,那么它应该可以在 Kubernetes 上很好地运行。
* 不部署源代码,也不构建你的应用程序。
持续集成(CI)、交付和部署CI/CD工作流取决于组织的文化和偏好以及技术要求。
* 不提供应用程序级别的服务作为内置服务,例如中间件(例如,消息中间件)、
数据处理框架例如Spark、数据库例如mysql、缓存、集群存储系统
例如Ceph。这样的组件可以在 Kubernetes 上运行,并且/或者可以由运行在
Kubernetes 上的应用程序通过可移植机制(例如,
[开放服务代理](https://openservicebrokerapi.org/))来访问。
<!--
* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics.
* Does not provide nor mandate a configuration language/system (for example, jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
* Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldnt matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.
-->
* Kubernetes 不指定日志记录、监视或警报解决方案。它提供了一些集成作为概念证明,并提供了收集和导出指标的机制。
* Kubernetes 不提供或不要求配置语言/系统(例如 jsonnet它提供了声明性 API该声明性 API 可以由任意形式的声明性规范所构成。
* Kubernetes 不提供也不采用任何全面的机器配置、维护、管理或自我修复系统。
* 此外Kubernetes 不仅仅是一个编排系统,实际上它消除了编排的需要。编排的技术定义是执行已定义的工作流程:首先执行 A然后执行 B再执行 C。相比之下Kubernetes 包含一组独立的、可组合的控制过程,这些过程连续地将当前状态驱动到所提供的所需状态。从 A 到 C 的方式无关紧要,也不需要集中控制,这使得系统更易于使用且功能更强大、健壮、弹性和可扩展性。
* 不要求日志记录、监视或警报解决方案。
它提供了一些集成作为概念证明,并提供了收集和导出指标的机制。
* 不提供或不要求配置语言/系统(例如 jsonnet它提供了声明性 API
该声明性 API 可以由任意形式的声明性规范所构成。
* 不提供也不采用任何全面的机器配置、维护、管理或自我修复系统。
* 此外Kubernetes 不仅仅是一个编排系统,实际上它消除了编排的需要。
编排的技术定义是执行已定义的工作流程:首先执行 A然后执行 B再执行 C。
相比之下Kubernetes 包含一组独立的、可组合的控制过程,
这些过程连续地将当前状态驱动到所提供的所需状态。
如何从 A 到 C 的方式无关紧要,也不需要集中控制,这使得系统更易于使用
且功能更强大、系统更健壮、更为弹性和可扩展。
## {{% heading "whatsnext" %}}
@ -215,5 +266,5 @@ Kubernetes
* Take a look at the [Kubernetes Components](/docs/concepts/overview/components/)
* Ready to [Get Started](/docs/setup/)?
-->
* 查阅 [Kubernetes 组件](/zh/docs/concepts/overview/components/)
* 开始 [Kubernetes 入门](/zh/docs/setup/)?
* 查阅 [Kubernetes 组件](/zh/docs/concepts/overview/components/)
* 开始 [Kubernetes 入门](/zh/docs/setup/)?
@ -121,18 +121,18 @@ _注解Annotations_ 存储的形式是键/值对。有效的注解键分
并允许使用破折号(`-`)、下划线(`_`)、点(`.`)和字母数字。
前缀是可选的。如果指定,则前缀必须是 DNS 子域:一系列由点(`.`)分隔的 DNS 标签,
总计不超过 253 个字符,后跟斜杠(`/`)。
如果省略前缀,则假定注释键对用户是私有的。 由系统组件添加的注释
如果省略前缀,则假定注解键对用户是私有的。 由系统组件添加的注解
(例如,`kube-scheduler``kube-controller-manager``kube-apiserver``kubectl`
或其他第三方组件),必须为终端用户添加注前缀。
或其他第三方组件),必须为终端用户添加注前缀。
<!--
The `kubernetes.io/` and `k8s.io/` prefixes are reserved for Kubernetes core components.
For example, heres the configuration file for a Pod that has the annotation `imageregistry: https://hub.docker.com/` :
For example, here's the configuration file for a Pod that has the annotation `imageregistry: https://hub.docker.com/` :
-->
`kubernetes.io/` 和 `k8s.io/` 前缀是为 Kubernetes 核心组件保留的。
例如,这是Pod的配置文件其注释为 `imageregistry: https://hub.docker.com/`
例如,下面是一个 Pod 的配置文件,其注解中包含 `imageregistry: https://hub.docker.com/`
```yaml
apiVersion: v1
```
@ -191,11 +191,11 @@ and the `spec` format for a `Deployment` can be found
## {{% heading "whatsnext" %}}
<!--
* [Kubernetes API overview](/docs/reference/using-api/api-overview/) explains some more API concepts
* Learn about the most important basic Kubernetes objects, such as [Pod](/docs/concepts/workloads/pods/pod-overview/).
* Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes
* Learn about the most important basic Kubernetes objects, such as [Pod](/docs/concepts/workloads/pods/).
* Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes.
* [Using the Kubernetes API](/docs/reference/using-api/) explains some more API concepts.
-->
* [Kubernetes API 总览](/zh/docs/reference/using-api/api-overview/) 提供关于 API 概念的进一步阐述
* 了解最重要的 Kubernetes 基本对象,例如 [Pod](/zh/docs/concepts/workloads/pods/)
* 了解 Kubernetes 中的[控制器](/zh/docs/concepts/architecture/controller/)
* 了解最重要的 Kubernetes 基本对象,例如 [Pod](/zh/docs/concepts/workloads/pods/)。
* 了解 Kubernetes 中的[控制器](/zh/docs/concepts/architecture/controller/)。
* [使用 Kubernetes API](/zh/docs/reference/using-api/) 一节解释了一些 API 概念。
@ -307,7 +307,7 @@ and [`replicationcontrollers`](/docs/concepts/workloads/controllers/replicationc
also use label selectors to specify sets of other resources, such as
[pods](/docs/concepts/workloads/pods/).
-->
### 在 API 对象设置引用
### 在 API 对象设置引用
一些 Kubernetes 对象,例如 [`services`](/zh/docs/concepts/services-networking/service/)
和 [`replicationcontrollers`](/zh/docs/concepts/workloads/controllers/replicationcontroller/)
@ -323,9 +323,11 @@ Labels selectors for both objects are defined in `json` or `yaml` files using ma
-->
#### Service 和 ReplicationController
一个 `Service` 指向的一组 pods 是由标签选择算符定义的。同样,一个 `ReplicationController` 应该管理的 pods 的数量也是由标签选择算符定义的。
一个 `Service` 指向的一组 Pods 是由标签选择算符定义的。同样,一个 `ReplicationController`
应该管理的 pods 的数量也是由标签选择算符定义的。
两个对象的标签选择算符都是在 `json` 或者 `yaml` 文件中使用映射定义的,并且只支持 _基于等值_ 需求的选择算符:
两个对象的标签选择算符都是在 `json` 或者 `yaml` 文件中使用映射定义的,并且只支持
_基于等值_ 需求的选择算符:
```json
"selector": {
}
```
@ -140,8 +140,7 @@ The following resource types are supported:
| `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. |
| `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `requests.memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
| `hugepages-<size>` | Across all pods in a non-terminal state, the number of
huge page requests of the specified size cannot exceed this value. |
| `hugepages-<size>` | Across all pods in a non-terminal state, the number of huge page requests of the specified size cannot exceed this value. |
| `cpu` | Same as `requests.cpu` |
| `memory` | Same as `requests.memory` |
-->
@ -0,0 +1,333 @@
---
title: Kubernetes API 访问控制
content_type: concept
---
<!--
---
reviewers:
- erictune
- lavalamp
title: Controlling Access to the Kubernetes API
content_type: concept
---
-->
<!-- overview -->
<!--
This page provides an overview of controlling access to the Kubernetes API.
-->
本页面概述了对 Kubernetes API 的访问控制。
<!-- body -->
<!--
Users access the [Kubernetes API](/docs/concepts/overview/kubernetes-api/) using `kubectl`,
client libraries, or by making REST requests. Both human users and
[Kubernetes service accounts](/docs/tasks/configure-pod-container/configure-service-account/) can be
authorized for API access.
When a request reaches the API, it goes through several stages, illustrated in the
following diagram:
-->
用户使用 `kubectl`、客户端库或通过构造 REST 请求来访问 [Kubernetes API](/zh/docs/concepts/overview/kubernetes-api/)。
人类用户和 [Kubernetes 服务账户](/zh/docs/tasks/configure-pod-container/configure-service-account/)都可以被鉴权访问 API。
当请求到达 API 时,它会经历多个阶段,如下图所示:
![Kubernetes API 请求处理步骤示意图](/images/docs/admin/access-control-overview.svg)
<!-- ## Transport security -->
## 传输安全 {#transport-security}
<!--
In a typical Kubernetes cluster, the API serves on port 443, protected by TLS.
The API server presents a certificate. This certificate may be signed using
a private certificate authority (CA), or based on a public key infrastructure linked
to a generally recognized CA.
-->
在典型的 Kubernetes 集群中API 服务器在 443 端口上提供服务,受 TLS 保护。
API 服务器出示证书。
该证书可以使用私有证书颁发机构CA签名也可以基于链接到公认的 CA 的公钥基础架构签名。
<!--
If your cluster uses a private certificate authority, you need a copy of that CA
certificate configured into your `~/.kube/config` on the client, so that you can
trust the connection and be confident it was not intercepted.
Your client can present a TLS client certificate at this stage.
-->
如果你的集群使用私有证书颁发机构,你需要在客户端的 `~/.kube/config` 文件中提供该 CA 证书的副本,
以便你可以信任该连接并确认该连接没有被拦截。
你的客户端可以在此阶段出示 TLS 客户端证书。
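下面是一个最小的 kubeconfig 片段草图,演示如何为客户端配置私有 CA 证书(其中的文件路径、服务器地址和集群名称均为假设值):

```yaml
# ~/.kube/config 片段:路径与名称仅为示例
apiVersion: v1
kind: Config
clusters:
- name: example-cluster                        # 假设的集群名称
  cluster:
    server: https://203.0.113.10:443           # 假设的 API 服务器地址
    certificate-authority: /path/to/ca.crt     # 私有 CA 证书副本(假设路径)
```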
<!-- ## Authentication -->
## 认证 {#authentication}
<!--
Once TLS is established, the HTTP request moves to the Authentication step.
This is shown as step **1** in the diagram.
The cluster creation script or cluster admin configures the API server to run
one or more Authenticator modules.
Authenticators are described in more detail in
[Authentication](/docs/reference/access-authn-authz/authentication/).
-->
如上图步骤 **1** 所示,建立 TLS 后HTTP 请求将进入认证Authentication步骤。
集群创建脚本或者集群管理员配置 API 服务器,使之运行一个或多个身份认证组件。
身份认证组件在[认证](/zh/docs/reference/access-authn-authz/authentication/)节中有更详细的描述。
<!--
The input to the authentication step is the entire HTTP request; however, it typically
just examines the headers and/or client certificate.
Authentication modules include client certificates, password, and plain tokens,
bootstrap tokens, and JSON Web Tokens (used for service accounts).
Multiple authentication modules can be specified, in which case each one is tried in sequence,
until one of them succeeds.
-->
认证步骤的输入是整个 HTTP 请求;但是,通常只检查头部和/或客户端证书。
认证模块包含客户端证书、密码、普通令牌、引导令牌和 JSON Web 令牌JWT用于服务账户
可以指定多个认证模块,在这种情况下,服务器依次尝试每个验证模块,直到其中一个成功。
<!--
If the request cannot be authenticated, it is rejected with HTTP status code 401.
Otherwise, the user is authenticated as a specific `username`, and the user name
is available to subsequent steps to use in their decisions. Some authenticators
also provide the group memberships of the user, while other authenticators
do not.
While Kubernetes uses usernames for access control decisions and in request logging,
it does not have a `User` object nor does it store usernames or other information about
users in its API.
-->
如果请求认证不通过,服务器将以 HTTP 状态码 401 拒绝该请求。
反之,该用户被认证为特定的 `username`,后续步骤可以使用该用户名来进行决策。
部分认证组件还会提供用户的组成员身份,其他则不会。
虽然 Kubernetes 使用用户名来执行访问控制决策和记录请求日志,
但它并没有 `User` 对象,也不在其 API 中存储用户名或其他用户相关信息。
<!-- ## Authorization -->
## 鉴权 {#authorization}
<!--
After the request is authenticated as coming from a specific user, the request must be authorized. This is shown as step **2** in the diagram.
A request must include the username of the requester, the requested action, and the object affected by the action. The request is authorized if an existing policy declares that the user has permissions to complete the requested action.
For example, if Bob has the policy below, then he can read pods only in the namespace `projectCaribou`:
-->
如上图的步骤 **2** 所示,将请求验证为来自特定的用户后,请求必须被鉴权。
请求必须包含请求者的用户名、请求的行为以及受该操作影响的对象。
如果现有策略声明用户有权完成请求的操作,那么该请求被鉴权通过。
例如,如果 Bob 有以下策略,那么他只能在 `projectCaribou` 名字空间中读取 Pod
```json
{
"apiVersion": "abac.authorization.kubernetes.io/v1beta1",
"kind": "Policy",
"spec": {
"user": "bob",
"namespace": "projectCaribou",
"resource": "pods",
"readonly": true
}
}
```
<!--
If Bob makes the following request, the request is authorized because he is allowed to read objects in the `projectCaribou` namespace:
-->
如果 Bob 执行以下请求,那么请求会被鉴权通过,因为允许他读取 `projectCaribou` 名字空间中的对象:
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"spec": {
"resourceAttributes": {
"namespace": "projectCaribou",
"verb": "get",
"group": "unicorn.example.org",
"resource": "pods"
}
}
}
```
<!--
If Bob makes a request to write (`create` or `update`) to the objects in the `projectCaribou` namespace, his authorization is denied.
If Bob makes a request to read (`get`) objects in a different namespace such as `projectFish`, then his authorization is denied.
Kubernetes authorization requires that you use common REST attributes to interact with existing organization-wide or cloud-provider-wide access control systems.
It is important to use REST formatting because these control systems might interact with other APIs besides the Kubernetes API.
-->
如果 Bob 在 `projectCaribou` 名字空间中请求写(`create` 或 `update`)对象,其鉴权请求将被拒绝。
如果 Bob 在诸如 `projectFish` 这类其它名字空间中请求读取(`get`)对象,其鉴权也会被拒绝。
Kubernetes 鉴权要求使用公共 REST 属性与现有的组织范围或云提供商范围的访问控制系统进行交互。
使用 REST 格式很重要,因为这些控制系统可能会与 Kubernetes API 之外的 API 交互。
<!--
Kubernetes supports multiple authorization modules, such as ABAC mode, RBAC Mode, and Webhook mode.
When an administrator creates a cluster, they configure the authorization modules that should be used in the API server.
If more than one authorization modules are configured, Kubernetes checks each module,
and if any module authorizes the request, then the request can proceed.
If all of the modules deny the request, then the request is denied (HTTP status code 403).
To learn more about Kubernetes authorization, including details about creating policies using the supported authorization modules,
see [Authorization](/docs/reference/access-authn-authz/authorization/).
-->
Kubernetes 支持多种鉴权模块,例如 ABAC 模式、RBAC 模式和 Webhook 模式等。
管理员创建集群时,他们配置应在 API 服务器中使用的鉴权模块。
如果配置了多个鉴权模块,则 Kubernetes 会依次检查各个模块;只要其中任何一个模块批准该请求,请求即可继续执行;
如果所有模块拒绝了该请求请求将会被拒绝HTTP 状态码 403
要了解有关 Kubernetes 鉴权的更多信息,包括使用所支持的鉴权模块创建策略的详细信息,
请参阅[鉴权](/zh/docs/reference/access-authn-authz/authorization/)。
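作为示意,下面是一个假设的 API 服务器启动参数片段,演示如何同时启用多个鉴权模块(模块将按此顺序被依次检查):

```shell
# 依次尝试 Node、RBAC 与 Webhook 鉴权模块
kube-apiserver \
  --authorization-mode=Node,RBAC,Webhook \
  --authorization-webhook-config-file=/path/to/webhook-config.yaml \
  ...   # 其余启动参数从略webhook 配置文件路径为假设值)
```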
<!-- ## Admission control -->
## 准入控制 {#admission-control}
<!--
Admission Control modules are software modules that can modify or reject requests.
In addition to the attributes available to Authorization modules, Admission
Control modules can access the contents of the object that is being created or modified.
Admission controllers act on requests that create, modify, delete, or connect to (proxy) an object.
Admission controllers do not act on requests that merely read objects.
When multiple admission controllers are configured, they are called in order.
-->
准入控制模块是可以修改或拒绝请求的软件模块。
除鉴权模块可用的属性外,准入控制模块还可以访问正在创建或修改的对象的内容。
准入控制器对创建、修改、删除或(通过代理)连接对象的请求进行操作。
准入控制器不会对仅读取对象的请求起作用。
当配置了多个准入控制器时,服务器将按顺序依次调用它们。
<!--
This is shown as step **3** in the diagram.
Unlike Authentication and Authorization modules, if any admission controller module
rejects, then the request is immediately rejected.
In addition to rejecting objects, admission controllers can also set complex defaults for
fields.
The available Admission Control modules are described in [Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/).
Once a request passes all admission controllers, it is validated using the validation routines
for the corresponding API object, and then written to the object store (shown as step **4**).
-->
这一操作如上图的步骤 **3** 所示。
与身份认证和鉴权模块不同,如果任何准入控制器模块拒绝某请求,则该请求将立即被拒绝。
除了拒绝对象之外,准入控制器还可以为字段设置复杂的默认值。
可用的准入控制模块在[准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/)中进行了描述。
请求通过所有准入控制器后,将使用检验例程检查对应的 API 对象,然后将其写入对象存储(如步骤 **4** 所示)。
<!-- ## API server ports and IPs -->
## API 服务器端口和 IP {#api-server-ports-and-ips}
<!--
The previous discussion applies to requests sent to the secure port of the API server
(the typical case). The API server can actually serve on 2 ports:
By default the Kubernetes API server serves HTTP on 2 ports:
-->
前面的讨论适用于发送到 API 服务器的安全端口的请求(典型情况)。 API 服务器实际上可以在 2 个端口上提供服务:
默认情况下Kubernetes API 服务器在 2 个端口上提供 HTTP 服务:
<!--
1. `localhost` port:
- is intended for testing and bootstrap, and for other components of the master node
(scheduler, controller-manager) to talk to the API
- no TLS
- default is port 8080, change with `--insecure-port` flag.
- default IP is localhost, change with `--insecure-bind-address` flag.
- request **bypasses** authentication and authorization modules.
- request handled by admission control module(s).
- protected by need to have host access
2. “Secure port”:
- use whenever possible
- uses TLS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag.
- default is port 6443, change with `--secure-port` flag.
- default IP is first non-localhost network interface, change with `--bind-address` flag.
- request handled by authentication and authorization modules.
- request handled by admission control module(s).
- authentication and authorization modules run.
-->
1. `localhost` 端口:
- 用于测试和引导,以及主控节点上的其他组件(调度器,控制器管理器)与 API 通信
- 没有 TLS
- 默认为端口 8080使用 `--insecure-port` 进行更改
- 默认 IP 为 localhost使用 `--insecure-bind-address` 进行更改
- 请求 **绕过** 身份认证和鉴权模块
   - 请求由准入控制模块处理
   - 通过要求具备主机访问权限来提供保护
2. “安全端口”:
- 尽可能使用
- 使用 TLS。 用 `--tls-cert-file` 设置证书,用 `--tls-private-key-file` 设置密钥
- 默认端口 6443使用 `--secure-port` 更改
- 默认 IP 是第一个非本地网络接口,使用 `--bind-address` 更改
- 请求须经身份认证和鉴权组件处理
- 请求须经准入控制模块处理
- 身份认证和鉴权模块运行
## {{% heading "whatsnext" %}}
<!--
Read more documentation on authentication, authorization and API access control:
- [Authenticating](/docs/reference/access-authn-authz/authentication/)
- [Authenticating with Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/)
- [Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
- [Dynamic Admission Control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
- [Authorization](/docs/reference/access-authn-authz/authorization/)
- [Role Based Access Control](/docs/reference/access-authn-authz/rbac/)
- [Attribute Based Access Control](/docs/reference/access-authn-authz/abac/)
- [Node Authorization](/docs/reference/access-authn-authz/node/)
- [Webhook Authorization](/docs/reference/access-authn-authz/webhook/)
- [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/)
- including [CSR approval](/docs/reference/access-authn-authz/certificate-signing-requests/#approval-rejection)
and [certificate signing](/docs/reference/access-authn-authz/certificate-signing-requests/#signing)
- Service accounts
- [Developer guide](/docs/tasks/configure-pod-container/configure-service-account/)
- [Administration](/docs/reference/access-authn-authz/service-accounts-admin/)
You can learn about:
- how Pods can use
[Secrets](/docs/concepts/configuration/secret/#service-accounts-automatically-create-and-attach-secrets-with-api-credentials)
to obtain API credentials.
-->
阅读更多有关身份认证、鉴权和 API 访问控制的文档:
- [认证](/zh/docs/reference/access-authn-authz/authentication/)
- [使用 Bootstrap 令牌进行身份认证](/zh/docs/reference/access-authn-authz/bootstrap-tokens/)
- [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/)
- [动态准入控制](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/)
- [鉴权](/zh/docs/reference/access-authn-authz/authorization/)
- [基于角色的访问控制](/zh/docs/reference/access-authn-authz/rbac/)
- [基于属性的访问控制](/zh/docs/reference/access-authn-authz/abac/)
- [节点鉴权](/zh/docs/reference/access-authn-authz/node/)
- [Webhook 鉴权](/zh/docs/reference/access-authn-authz/webhook/)
- [证书签名请求](/zh/docs/reference/access-authn-authz/certificate-signing-requests/)
- 包括 [CSR 认证](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#approval-rejection)
和[证书签名](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#signing)
- 服务账户
- [开发者指导](/zh/docs/tasks/configure-pod-container/configure-service-account/)
- [管理](/zh/docs/reference/access-authn-authz/service-accounts-admin/)
你可以了解
- Pod 如何使用
[Secrets](/zh/docs/concepts/configuration/secret/#service-accounts-automatically-create-and-attach-secrets-with-api-credentials)
  获取 API 凭证。
@@ -10,19 +10,43 @@ content_type: concept
weight: 50
-->
{{< toc >}}
<!-- overview -->
<!--
A network policy is a specification of how groups of {{< glossary_tooltip text="pods" term_id="pod">}} are allowed to communicate with each other and other network endpoints.
NetworkPolicy resources use {{< glossary_tooltip text="labels" term_id="label">}} to select pods and define rules which specify what traffic is allowed to the selected pods.
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster. NetworkPolicies are an application-centric construct which allow you to specify how a {{< glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network.
-->
如果你希望在 IP 地址或端口层面OSI 第 3 层或第 4 层)控制网络流量,
则你可以考虑为集群中特定应用使用 Kubernetes 网络策略NetworkPolicy
NetworkPolicy 是一种以应用为中心的结构,允许你设置如何允许
{{< glossary_tooltip text="Pod" term_id="pod">}} 与网络上的各类网络“实体”
(我们这里使用实体以避免过度使用诸如“端点”和“服务”这类常用术语,
这些术语在 Kubernetes 中有特定含义)通信。
网络策略NetworkPolicy是一种关于 {{< glossary_tooltip text="Pod" term_id="pod">}} 间及与其他网络端点间所允许的通信规则的规范。
<!--
The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
NetworkPolicy 资源使用 {{< glossary_tooltip text="标签" term_id="label">}} 选择 Pod并定义选定 Pod 所允许的通信规则。
1. Other pods that are allowed (exception: a pod cannot block access to itself)
2. Namespaces that are allowed
3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
-->
Pod 可以与之通信的实体是通过如下三个标识符的组合来辨识的:
1. 其他被允许的 Pods例外Pod 无法阻塞对自身的访问)
2. 被允许的名字空间
3. IP 组块(例外:与 Pod 运行所在的节点的通信总是被允许的,
   无论 Pod 或节点的 IP 地址如何)
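下面是一个示意性的入站规则片段,将这三类标识符体现在 `from` 列表中(其中的标签与 CIDR 均为假设值):

```yaml
# 仅为示意:三种来源选择方式写成三个独立的 from 条目("或"的关系)
ingress:
- from:
  - podSelector:          # 1. 其他被允许的 Pod
      matchLabels:
        role: frontend
  - namespaceSelector:    # 2. 被允许的名字空间
      matchLabels:
        project: myproject
  - ipBlock:              # 3. IP 组块
      cidr: 172.17.0.0/16
```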
<!--
When defining a pod- or namespace- based NetworkPolicy, you use a {{< glossary_tooltip text="selector" term_id="selector">}} to specify what traffic is allowed to and from the Pod(s) that match the selector.
Meanwhile, when IP based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
-->
在定义基于 Pod 或名字空间的 NetworkPolicy 时,你会使用
{{< glossary_tooltip text="选择算符" term_id="selector">}} 来设定哪些流量
可以进入或离开与该算符匹配的 Pod。
同时,当基于 IP 的 NetworkPolicy 被创建时,我们基于 IP 组块CIDR 范围)
来定义策略。
<!-- body -->
@@ -31,12 +55,11 @@ NetworkPolicy 资源使用 {{< glossary_tooltip text="标签" term_id="label">}}
Network policies are implemented by the [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.
-->
## 前提
## 前置条件 {#prerequisites}
网络策略通过[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
来实现。要使用网络策略,用户必须使用支持 NetworkPolicy 的网络解决方案。
创建一个资源对象而没有控制器来使它生效的话,是没有任何作用的。
来实现。要使用网络策略,必须使用支持 NetworkPolicy 的网络解决方案。
创建一个 NetworkPolicy 资源对象而没有控制器来使它生效的话,是没有任何作用的。
<!--
## Isolated and Non-isolated Pods
@@ -47,17 +70,18 @@ Pods become isolated by having a NetworkPolicy that selects them. Once there is
Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result.
-->
## 隔离和非隔离的 Pod
## 隔离和非隔离的 Pod {#isolated-and-non-isolated-pods}
默认情况下Pod 是非隔离的,它们接受任何来源的流量。
Pod 可以通过相关的网络策略进行隔离。一旦命名空间中有网络策略选择了特定的 Pod
该 Pod 会拒绝网络策略所不允许的连接。
(命名空间下其他未被网络策略所选择的 Pod 会继续接收所有的流量)
Pod 在被某 NetworkPolicy 选中时进入被隔离状态。
一旦名字空间中有 NetworkPolicy 选择了特定的 Pod该 Pod 会拒绝该 NetworkPolicy
所不允许的连接。
(名字空间下其他未被 NetworkPolicy 所选择的 Pod 会继续接受所有的流量)
网络策略不会冲突,它们是累积的。
如果任何一个或多个策略选择了一个 Pod, 则该 Pod 受限于这些策略的
ingress/egress 规则的并集。因此评估的顺序并不会影响策略的结果。
入站Ingress/出站Egress规则的并集。因此评估的顺序并不会影响策略的结果。
<!--
## The NetworkPolicy resource {#networkpolicy-resource}
@@ -66,10 +90,10 @@ See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "vers
An example NetworkPolicy might look like this:
-->
## NetworkPolicy 资源 {#networkpolicy-resource}
查看 [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) 来了解完整的资源定义。
参阅 [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io)
来了解资源的完整定义。
下面是一个 NetworkPolicy 的示例:
@@ -127,27 +151,41 @@ and [Object Management](/docs/concepts/overview/working-with-objects/object-mana
__spec__: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) has all the information needed to define a particular network policy in the given namespace.
__podSelector__: Each NetworkPolicy includes a `podSelector` which selects the grouping of pods to which the policy applies. The example policy selects pods with the label "role=db". An empty `podSelector` selects all pods in the namespace.
-->
__必需字段__与所有其他的 Kubernetes 配置一样NetworkPolicy 需要 `apiVersion`
`kind``metadata` 字段。关于配置文件操作的一般信息,请参考
[使用 ConfigMap 配置容器](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/),
和[对象管理](/zh/docs/concepts/overview/working-with-objects/object-management)。
__spec__NetworkPolicy [规约](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
中包含了在一个名字空间中定义特定网络策略所需的所有信息。
__podSelector__每个 NetworkPolicy 都包括一个 `podSelector`,它对该策略所
适用的一组 Pod 进行选择。示例中的策略选择带有 "role=db" 标签的 Pod。
空的 `podSelector` 选择名字空间下的所有 Pod。
<!--
__policyTypes__: Each NetworkPolicy includes a `policyTypes` list which may include either `Ingress`, `Egress`, or both. The `policyTypes` field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no `policyTypes` are specified on a NetworkPolicy then by default `Ingress` will always be set and `Egress` will be set if the NetworkPolicy has any egress rules.
__ingress__: Each NetworkPolicy may include a list of whitelist `ingress` rules. Each rule allows traffic which matches both the `from` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an `ipBlock`, the second via a `namespaceSelector` and the third via a `podSelector`.
__ingress__: Each NetworkPolicy may include a list of allowed `ingress` rules. Each rule allows traffic which matches both the `from` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an `ipBlock`, the second via a `namespaceSelector` and the third via a `podSelector`.
__egress__: Each NetworkPolicy may include a list of whitelist `egress` rules. Each rule allows traffic which matches both the `to` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port to any destination in `10.0.0.0/24`.
__egress__: Each NetworkPolicy may include a list of allowed `egress` rules. Each rule allows traffic which matches both the `to` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port to any destination in `10.0.0.0/24`.
-->
__必填字段__: 与所有其他的 Kubernetes 配置一样NetworkPolicy 需要 `apiVersion`、`kind` 和 `metadata` 字段。
关于配置文件操作的一般信息,请参考 [使用 ConfigMap 配置容器](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/),
和[对象管理](/zh/docs/concepts/overview/working-with-objects/object-management)。
__policyTypes__: 每个 NetworkPolicy 都包含一个 `policyTypes` 列表,其中包含
`Ingress``Egress` 或两者兼具。`policyTypes` 字段表示给定的策略是应用于
进入所选 Pod 的入站流量还是来自所选 Pod 的出站流量,或两者兼有。
如果 NetworkPolicy 未指定 `policyTypes` 则默认情况下始终设置 `Ingress`
如果 NetworkPolicy 有任何出口规则的话则设置 `Egress`
__spec__: NetworkPolicy [规约](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) 中包含了在一个命名空间中定义特定网络策略所需的所有信息。
__ingress__: 每个 NetworkPolicy 可包含一个 `ingress` 规则的白名单列表。
每个规则都允许同时匹配 `from``ports` 部分的流量。示例策略中包含一条
简单的规则: 它匹配某个特定端口,来自三个来源中的一个,第一个通过 `ipBlock`
指定,第二个通过 `namespaceSelector` 指定,第三个通过 `podSelector` 指定。
__podSelector__: 每个 NetworkPolicy 都包括一个 `podSelector` ,它对该策略所应用的一组 Pod 进行选择。示例中的策略选择带有 "role=db" 标签的 Pod。空的 `podSelector` 选择命名空间下的所有 Pod。
__policyTypes__: 每个 NetworkPolicy 都包含一个 `policyTypes` 列表,其中包含 `Ingress``Egress` 或两者兼具。`policyTypes` 字段表示给定的策略是否应用于进入所选 Pod 的入口流量或者来自所选 Pod 的出口流量,或两者兼有。如果 NetworkPolicy 未指定 `policyTypes` 则默认情况下始终设置 `Ingress`,如果 NetworkPolicy 有任何出口规则的话则设置 `Egress`
__ingress__: 每个 NetworkPolicy 可包含一个 `ingress` 规则的白名单列表。每个规则都允许同时匹配 `from``ports` 部分的流量。示例策略中包含一条简单的规则: 它匹配一个单一的端口,来自三个来源中的一个, 第一个通过 `ipBlock` 指定,第二个通过 `namespaceSelector` 指定,第三个通过 `podSelector` 指定。
__egress__: 每个 NetworkPolicy 可包含一个 `egress` 规则的白名单列表。每个规则都允许匹配 `to``port` 部分的流量。该示例策略包含一条规则,该规则将单个端口上的流量匹配到 `10.0.0.0/24` 中的任何目的地。
__egress__: 每个 NetworkPolicy 可包含一个 `egress` 规则的白名单列表。
每个规则都允许匹配 `to``port` 部分的流量。该示例策略包含一条规则,
该规则将指定端口上的流量匹配到 `10.0.0.0/24` 中的任何目的地。
<!--
So, the example NetworkPolicy:
@@ -162,18 +200,22 @@ So, the example NetworkPolicy:
See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples.
-->
所以,该网络策略示例:
1. 隔离 "default" 命名空间下 "role=db" 的 Pod (如果它们不是已经被隔离的话)。
2. Ingress 规则)允许以下 Pod 连接到 "default" 命名空间下的带有 “role=db” 标签的所有 Pod 的 6379 TCP 端口:
1. 隔离 "default" 名字空间下 "role=db" 的 Pod (如果它们不是已经被隔离的话)。
2. Ingress 规则)允许以下 Pod 连接到 "default" 名字空间下的带有 "role=db"
标签的所有 Pod 的 6379 TCP 端口:
* "default" 名空间下任意带有 "role=frontend" 标签的 Pod
* 带有 "project=myproject" 标签的任意命名空间中的 Pod
   * IP 地址范围为 172.17.0.0-172.17.0.255 和 172.17.2.0-172.17.255.255(即,除了 172.17.1.0/24 之外的所有 172.17.0.0/16
3. Egress 规则)允许从带有 "role=db" 标签的命名空间下的任何 Pod 到 CIDR 10.0.0.0/24 下 5978 TCP 端口的连接。
* "default" 名空间下带有 "role=frontend" 标签的所有 Pod
* 带有 "project=myproject" 标签的所有名字空间中的 Pod
   * IP 地址范围为 172.17.0.0-172.17.0.255 和 172.17.2.0-172.17.255.255
(即,除了 172.17.1.0/24 之外的所有 172.17.0.0/16
查看[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/) 来进行更多的示例演练。
3. Egress 规则)允许从带有 "role=db" 标签的名字空间下的任何 Pod 到 CIDR
10.0.0.0/24 下 5978 TCP 端口的连接。
参阅[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)演练
了解更多示例。
<!--
## Behavior of `to` and `from` selectors
@@ -186,16 +228,19 @@ __namespaceSelector__: This selects particular namespaces for which all Pods sho
__namespaceSelector__ *and* __podSelector__: A single `to`/`from` entry that specifies both `namespaceSelector` and `podSelector` selects particular Pods within particular namespaces. Be careful to use correct YAML syntax; this policy:
-->
## 选择器 `to``from` 的行为 {#behavior-of-to-and-from-selectors}
## 选择器 `to``from` 的行为
可以在 `ingress``from` 部分或 `egress``to` 部分中指定四种选择器:
可以在 `ingress` `from` 部分或 `egress` `to` 部分中指定四种选择器:
__podSelector__: 此选择器将在与 NetworkPolicy 相同的名字空间中选择特定的
Pod应将其允许作为入站流量来源或出站流量目的地。
__podSelector__: 这将在与 NetworkPolicy 相同的命名空间中选择特定的 Pod应将其允许作为入口源或出口目的地。
__namespaceSelector__此选择器将选择特定的名字空间应将所有 Pod 用作其
入站流量来源或出站流量目的地。
__namespaceSelector__: 这将选择特定的命名空间,应将所有 Pod 用作其输入源或输出目的地。
__namespaceSelector__ *和* __podSelector__: 一个指定 `namespaceSelector``podSelector``to`/`from` 条目选择特定命名空间中的特定 Pod。注意使用正确的 YAML 语法;这项策略:
__namespaceSelector__ *和* __podSelector__ 一个指定 `namespaceSelector`
`podSelector``to`/`from` 条目选择特定名字空间中的特定 Pod。
注意使用正确的 YAML 语法;下面的策略:
```yaml
...
@@ -213,7 +258,8 @@ __namespaceSelector__ *和* __podSelector__: 一个指定 `namespaceSelector`
<!--
contains a single `from` element allowing connections from Pods with the label `role=client` in namespaces with the label `user=alice`. But *this* policy:
-->
`from` 数组中仅包含一个元素,只允许来自标有 `role=client` 的 Pod 且该 Pod 所在的命名空间中标有 `user=alice` 的连接。但是 *这项* 策略:
`from` 数组中仅包含一个元素,只允许来自标有 `role=client` 的 Pod 且
该 Pod 所在的名字空间中标有 `user=alice` 的连接。但是 *这项* 策略:
```yaml
...
@@ -230,7 +276,11 @@ contains a single `from` element allowing connections from Pods with the label `
<!--
contains two elements in the `from` array, and allows connections from Pods in the local Namespace with the label `role=client`, *or* from any Pod in any namespace with the label `user=alice`.
-->
`from` 数组中包含两个元素,允许来自本地名字空间中标有 `role=client`
Pod 的连接,*或* 来自任何名字空间中标有 `user=alice` 的任何 Pod 的连接。
<!--
When in doubt, use `kubectl describe` to see how Kubernetes has interpreted the policy.
__ipBlock__: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.
@@ -247,18 +297,21 @@ the NetworkPolicy acts on may be the IP of a `LoadBalancer` or of the Pod's node
For egress, this means that connections from pods to `Service` IPs that get rewritten to
cluster-external IPs may or may not be subject to `ipBlock`-based policies.
-->
`from` 数组中包含两个元素,允许来自本地命名空间中标有 `role=client` 的 Pod 的连接,*或* 来自任何命名空间中标有 `user = alice` 的任何 Pod 的连接。
如有疑问,请使用 `kubectl describe` 查看 Kubernetes 如何解释该策略。
__ipBlock__: 这将选择特定的 IP CIDR 范围以用作入口源或出口目的地。这些应该是群集外部 IP因为 Pod IP 存在时间短暂且随机产生。
__ipBlock__: 此选择器将选择特定的 IP CIDR 范围以用作入站流量来源或出站流量目的地。
这些应该是集群外部 IP因为 Pod IP 存在时间短暂且随机产生。
群集的入口和出口机制通常需要重写数据包的源 IP 或目标 IP。在发生这种情况的情况下不确定在 NetworkPolicy 处理之前还是之后发生,并且对于网络插件,云提供商,`Service` 实现等的不同组合,其行为可能会有所不同。
集群的入站和出站机制通常需要重写数据包的源 IP 或目标 IP。
在发生这种情况时,不确定在 NetworkPolicy 处理之前还是之后发生,
并且对于网络插件、云提供商、`Service` 实现等的不同组合,其行为可能会有所不同。
在进入的情况下,这意味着在某些情况下,您可以根据实际的原始源 IP 过滤传入的数据包而在其他情况下NetworkPolicy 所作用的 `源IP` 则可能是 `LoadBalancer` 或 Pod 的节点等。
对入站流量而言,这意味着在某些情况下,你可以根据实际的原始源 IP 过滤传入的数据包,
而在其他情况下NetworkPolicy 所作用的 `源IP` 则可能是 `LoadBalancer`
Pod 的节点等。
对于出口,这意味着从 Pod 到被重写为集群外部 IP 的 `Service` IP 的连接可能会或可能不会受到基于 `ipBlock` 的策略的约束。
对于出站流量而言,这意味着从 Pod 到被重写为集群外部 IP 的 `Service` IP
的连接可能会或可能不会受到基于 `ipBlock` 的策略的约束。
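下面是一个示意性的出站规则片段,展示 `ipBlock` 及其 `except` 例外列表的写法CIDR 与端口取值均为假设):

```yaml
# 仅为示意:允许到 10.0.0.0/24 的出站流量,
# 但排除其中的 10.0.0.0/28 子网CIDR 与端口为假设值
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/24
      except:
      - 10.0.0.0/28
  ports:
  - protocol: TCP
    port: 5978
```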
<!--
## Default policies
@@ -266,37 +319,38 @@ __ipBlock__: 这将选择特定的 IP CIDR 范围以用作入口源或出口目
By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. The following examples let you change the default behavior
in that namespace.
-->
## 默认策略 {#default-policies}
## 默认策略
默认情况下,如果命名空间中不存在任何策略,则所有进出该命名空间中的 Pod 的流量都被允许。以下示例使您可以更改该命名空间中的默认行为。
默认情况下,如果名字空间中不存在任何策略,则所有进出该名字空间中 Pod 的流量都被允许。
以下示例使你可以更改该名字空间中的默认行为。
<!--
### Default deny all ingress traffic
You can create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods.
-->
### 默认拒绝所有入站流量
### 默认拒绝所有入口流量
您可以通过创建选择所有容器但不允许任何进入这些容器的入口流量的 NetworkPolicy 来为命名空间创建 "default" 隔离策略。
你可以通过创建选择所有容器但不允许任何进入这些容器的入站流量的 NetworkPolicy
来为名字空间创建 "default" 隔离策略。
{{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}}
<!--
This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated. This policy does not change the default egress isolation behavior.
-->
这样可以确保即使容器没有选择其他任何 NetworkPolicy也仍然可以被隔离。此策略不会更改默认的出口隔离行为。
这样可以确保即使容器没有选择其他任何 NetworkPolicy也仍然可以被隔离。
此策略不会更改默认的出口隔离行为。
<!--
### Default allow all ingress traffic
If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all traffic in that namespace.
-->
### 默认允许所有入站流量
### 默认允许所有入口流量
如果要允许所有流量进入某个命名空间中的所有 Pod即使添加了导致某些 Pod 被视为“隔离”的策略),则可以创建一个策略来明确允许该命名空间中的所有流量。
如果要允许所有流量进入某个名字空间中的所有 Pod即使添加了导致某些 Pod 被视为
“隔离”的策略),则可以创建一个策略来明确允许该名字空间中的所有流量。
{{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}}
@@ -305,10 +359,10 @@ If you want to allow all traffic to all pods in a namespace (even if policies ar
You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any egress traffic from those pods.
-->
### 默认拒绝所有出站流量
### 默认拒绝所有出口流量
您可以通过创建选择所有容器但不允许来自这些容器的任何出口流量的 NetworkPolicy 来为命名空间创建 "default" egress 隔离策略。
你可以通过创建选择所有容器但不允许来自这些容器的任何出站流量的 NetworkPolicy
来为名字空间创建 "default" egress 隔离策略。
{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}}
@@ -316,18 +370,18 @@ You can create a "default" egress isolation policy for a namespace by creating a
This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not
change the default ingress isolation behavior.
-->
这样可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被允许流出流量。此策略不会更改默认的 ingress 隔离行为。
此策略可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被允许流出流量。
此策略不会更改默认的入站流量隔离行为。
<!--
### Default allow all egress traffic
If you want to allow all traffic from all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all egress traffic in that namespace.
-->
### 默认允许所有出站流量
### 默认允许所有出口流量
如果要允许来自命名空间中所有 Pod 的所有流量(即使添加了导致某些 Pod 被视为“隔离”的策略),则可以创建一个策略,该策略明确允许该命名空间中的所有出口流量。
如果要允许来自名字空间中所有 Pod 的所有流量(即使添加了导致某些 Pod 被视为“隔离”的策略),
则可以创建一个策略,该策略明确允许该名字空间中的所有出站流量。
{{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}}
@@ -336,41 +390,91 @@ If you want to allow all traffic from all pods in a namespace (even if policies
You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by creating the following NetworkPolicy in that namespace.
-->
### 默认拒绝所有入口和所有出站流量
### 默认拒绝所有入口和所有出口流量
您可以为命名空间创建 "default" 策略,以通过在该命名空间中创建以下 NetworkPolicy 来阻止所有入站和出站流量。
你可以为名字空间创建“默认”策略,以通过在该名字空间中创建以下 NetworkPolicy
来阻止所有入站和出站流量。
{{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}}
<!--
This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.
-->
这样可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被允许入或出流量。
此策略可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被
允许入或出流量。
<!--
## SCTP support
-->
## SCTP 支持
{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
<!--
To use this feature, you (or your cluster administrator) will need to enable the `SCTPSupport` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the API server with `--feature-gates=SCTPSupport=true,…`.
As a beta feature, this is enabled by default. To disable SCTP at a cluster level, you (or your cluster administrator) will need to disable the `SCTPSupport` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the API server with `--feature-gates=SCTPSupport=false,...`.
When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`.
-->
要启用此特性,你(或你的集群管理员)需要通过为 API server 指定 `--feature-gates=SCTPSupport=true,…`
来启用 `SCTPSupport` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
启用该特性开关后,用户可以将 NetworkPolicy 的 `protocol` 字段设置为 `SCTP`
作为一个 Beta 特性SCTP 支持默认是被启用的。
要在集群层面禁用 SCTP或你的集群管理员需要为 API 服务器指定
`--feature-gates=SCTPSupport=false,...`
来禁用 `SCTPSupport` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
启用该特性门控后,用户可以将 NetworkPolicy 的 `protocol` 字段设置为 `SCTP`
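例如,启用该特性门控后,可以在端口规则中这样使用 SCTP其中的标签与端口号仅为示意

```yaml
# 仅为示意:在 NetworkPolicy 的端口规则中使用 SCTP 协议
# (需要 CNI 插件支持;标签与端口号为假设值)
ingress:
- from:
  - podSelector:
      matchLabels:
        role: client
  ports:
  - protocol: SCTP
    port: 7777
```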
<!--
You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP protocol NetworkPolicies.
-->
{{< note >}}
必须使用支持 SCTP 协议网络策略的 {{< glossary_tooltip text="CNI" term_id="cni" >}} 插件。
必须使用支持 SCTP 协议网络策略的 {{< glossary_tooltip text="CNI" term_id="cni" >}} 插件。
{{< /note >}}
<!--
## What you can't do with network policies (at least, not yet)
As of Kubernetes 1.20, the following functionality does not exist in the NetworkPolicy API, but you might be able to implement workarounds using Operating System components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies (Ingress controllers, Service Mesh implementations) or admission controllers. In case you are new to network security in Kubernetes, its worth noting that the following User Stories cannot (yet) be implemented using the NetworkPolicy API. Some (but not all) of these user stories are actively being discussed for future releases of the NetworkPolicy API.
-->
## 你通过网络策略(至少目前还)无法完成的工作
到 Kubernetes v1.20 为止NetworkPolicy API 还不支持以下功能,不过
你可能可以使用操作系统组件(如 SELinux、OpenVSwitch、IPTables 等等)
或者第七层技术Ingress 控制器、服务网格实现)或准入控制器来实现一些
替代方案。
如果你刚接触 Kubernetes 中的网络安全,值得注意的是,下面的用户场景
目前还无法使用 NetworkPolicy API 来实现。
对这些用户场景中的一部分(而非全部)的讨论仍在进行,或许在将来 NetworkPolicy
API 中会给出一定支持。
<!--
- Forcing internal cluster traffic to go through a common gateway (this might be best served with a service mesh or other proxy).
- Anything TLS related (use a service mesh or ingress controller for this).
- Node specific policies (you can use CIDR notation for these, but you cannot target nodes by their Kubernetes identities specifically).
- Targeting of namespaces or services by name (you can, however, target pods or namespaces by their {{< glossary_tooltip text="labels" term_id="label" >}}, which is often a viable workaround).
- Creation or management of "Policy requests" that are fulfilled by a third party.
-->
- 强制集群内部流量经过某公用网关(这种场景最好通过服务网格或其他代理来实现);
- 与 TLS 相关的场景(考虑使用服务网格或者 Ingress 控制器);
- 特定于节点的策略(你可以使用 CIDR 来表达这一需求不过你无法使用节点在
  Kubernetes 中的其他标识信息来辨识目标节点);
- 基于名字来选择名字空间或者服务(不过,你可以使用 {{< glossary_tooltip text="标签" term_id="label" >}}
来选择目标 Pod 或名字空间,这也通常是一种可靠的替代方案);
- 创建或管理由第三方来实际完成的“策略请求”;
<!--
- Default policies which are applied to all namespaces or pods (there are some third party Kubernetes distributions and projects which can do this).
- Advanced policy querying and reachability tooling.
- The ability to target ranges of Ports in a single policy declaration.
- The ability to log network security events (for example connections that are blocked or accepted).
- The ability to explicitly deny policies (currently the model for NetworkPolicies are deny by default, with only the ability to add allow rules).
- The ability to prevent loopback or incoming host traffic (Pods cannot currently block localhost access, nor do they have the ability to block access from their resident node).
-->
- 实现适用于所有名字空间或 Pods 的默认策略(某些第三方 Kubernetes 发行版本
或项目可以做到这点);
- 高级的策略查询或者可达性相关工具;
- 在同一策略声明中选择目标端口范围的能力;
- 生成网络安全事件日志的能力(例如,被阻塞或接收的连接请求);
- 显式地拒绝策略的能力目前NetworkPolicy 的模型默认采用拒绝操作,
其唯一的能力是添加允许策略);
- 禁止本地回路或指向宿主的网络流量Pod 目前无法阻塞 localhost 访问,
它们也无法禁止来自所在节点的访问请求)。
## {{% heading "whatsnext" %}}
<!--
@@ -378,9 +482,8 @@ You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin tha
walkthrough for further examples.
- See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource.
-->
- 查看 [声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)
来进行更多的示例演练
- 有关 NetworkPolicy 资源启用的常见场景的更多信息,请参见
- 参阅[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)
演练了解更多示例;
- 有关 NetworkPolicy 资源所支持的常见场景的更多信息,请参见
[此指南](https://github.com/ahmetb/kubernetes-network-policy-recipes)。
@@ -864,7 +864,7 @@ Kubernetes `ServiceTypes` 允许指定一个需要的类型的 Service默认
* [`NodePort`](#nodeport):通过每个 Node 上的 IP 和静态端口(`NodePort`)暴露服务。
`NodePort` 服务会路由到 `ClusterIP` 服务,这个 `ClusterIP` 服务会自动创建。
通过请求 `<NodeIP>:<NodePort>`,可以从集群的外部访问一个 `NodePort` 服务。
* [`LoadBalancer`](#loadbalancer):使用云提供商的负载衡器,可以向外部暴露服务。
* [`LoadBalancer`](#loadbalancer):使用云提供商的负载衡器,可以向外部暴露服务。
外部的负载均衡器可以路由到 `NodePort` 服务和 `ClusterIP` 服务。
* [`ExternalName`](#externalname):通过返回 `CNAME` 和它的值,可以将服务映射到 `externalName`
字段的内容(例如, `foo.bar.example.com`)。
@@ -3,3 +3,116 @@ title: "工作负载"
weight: 50
description: 理解 PodsKubernetes 中可部署的最小计算对象,以及辅助运行它们的高层抽象对象。
---
<!--
title: "Workloads"
weight: 50
description: >
Understand Pods, the smallest deployable compute object in Kubernetes, and the higher-level abstractions that help you to run them.
no_list: true
-->
{{< glossary_definition term_id="workload" length="short" >}}
<!--
Whether your workload is a single component or several that work together, on Kubernetes you run
it inside a set of [Pods](/docs/concepts/workloads/pods).
In Kubernetes, a Pod represents a set of running {{< glossary_tooltip text="containers" term_id="container" >}}
on your cluster.
A Pod has a defined lifecycle. For example, once a Pod is running in your cluster then
a critical failure on the {{< glossary_tooltip text="node" term_id="node" >}} where that
Pod is running means that all the Pods on that node fail. Kubernetes treats that level
of failure as final: you would need to create a new Pod even if the node later recovers.
-->
无论你的负载是单一组件还是由多个一同工作的组件构成,在 Kubernetes 中你
可以在一组 [Pods](/zh/docs/concepts/workloads/pods) 中运行它。
在 Kubernetes 中Pod 代表的是集群上处于运行状态的一组
{{< glossary_tooltip text="容器" term_id="container" >}}。
Pod 有确定的生命周期。例如,一旦某 Pod 在你的集群中运行Pod 运行所在的
{{< glossary_tooltip text="节点" term_id="node" >}} 出现致命错误时,
所有该节点上的 Pods 都会失败。Kubernetes 将这类失败视为最终状态:
即使节点后来恢复正常运行,你也需要创建新的 Pod。
<!--
However, to make life considerably easier, you don't need to manage each Pod directly.
Instead, you can use _workload resources_ that manage a set of Pods on your behalf.
These resources configure {{< glossary_tooltip term_id="controller" text="controllers" >}}
that make sure the right number of the right kind of Pod are running, to match the state
you specified.
Those workload resources include:
-->
不过,为了让用户的日子略微好过一些,你并不需要直接管理每个 Pod。
相反,你可以使用 _工作负载资源_ 来替你管理一组 Pods。
这些资源配置 {{< glossary_tooltip term_id="controller" text="控制器" >}}
来确保合适类型的、处于运行状态的 Pod 个数是正确的,与你所指定的状态相一致。
这些工作负载资源包括:
<!--
* [Deployment](/docs/concepts/workloads/controllers/deployment/) and [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
(replacing the legacy resource {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}});
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/);
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) for running Pods that provide
node-local facilities, such as a storage driver or network plugin;
* [Job](/docs/concepts/workloads/controllers/job/) and
[CronJob](/docs/concepts/workloads/controllers/cron-jobs/)
for tasks that run to completion.
-->
* [Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 和
[ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/)
(替换原来的资源 {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}}
* [StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/);
* 用来运行提供节点本地支撑设施(如存储驱动或网络插件)的 Pods 的
[DaemonSet](/zh/docs/concepts/workloads/controllers/daemonset/)
* 用来执行运行到结束为止的
[Job](/zh/docs/concepts/workloads/controllers/job/) 和
[CronJob](/zh/docs/concepts/workloads/controllers/cron-jobs/)。
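
例如,一个最小的 Deployment 可以写成下面的样子(名称与镜像均为示意性假设),由其背后的 ReplicaSet 确保始终有三个 Pod 副本在运行:

```yaml
# 仅为示意:最小的 Deployment 定义,名称与镜像均为假设值
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3               # 期望的 Pod 副本数
  selector:
    matchLabels:
      app: demo
  template:                 # Pod 模板
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: nginx:1.19
```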
<!--
There are also two supporting concepts that you might find relevant:
* [Garbage collection](/docs/concepts/workloads/controllers/garbage-collection/) tidies up objects
from your cluster after their _owning resource_ has been removed.
* The [_time-to-live after finished_ controller](/docs/concepts/workloads/controllers/ttlafterfinished/)
removes Jobs once a defined time has passed since they completed.
-->
还有两个相关的支撑概念,可能对你有用:
* [垃圾收集](/zh/docs/concepts/workloads/controllers/garbage-collection/)机制负责在
  对象的 _属主资源_ 被删除后,清理集群中的这些对象。
* [_结束后存在时间_ 控制器](/zh/docs/concepts/workloads/controllers/ttlafterfinished/)
  会在 Job 结束后经过指定的时间间隔将其删除。
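作为示意,下面的 Job 清单草稿(名称与命令均为假设值)使用 `ttlSecondsAfterFinished` 字段,让 Job 在结束 100 秒后被自动清理(在较旧的集群中,此特性可能需要启用 `TTLAfterFinished` 特性门控):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-ttl-job               # 假设的名称
spec:
  ttlSecondsAfterFinished: 100     # Job 结束 100 秒后由控制器删除
  template:
    spec:
      containers:
      - name: demo
        image: busybox
        command: ["echo", "done"]  # 示意性的一次性任务
      restartPolicy: Never
```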
## {{% heading "whatsnext" %}}
<!--
As well as reading about each resource, you can learn about specific tasks that relate to them:
* [Run a stateless application using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/)
* Run a stateful application either as a [single instance](/docs/tasks/run-application/run-single-instance-stateful-application/)
or as a [replicated set](/docs/tasks/run-application/run-replicated-stateful-application/)
* [Run Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
-->
除了阅读了解每类资源外,你还可以了解与这些资源相关的任务:
* [使用 Deployment 运行一个无状态的应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)
* 以[单实例](/zh/docs/tasks/run-application/run-single-instance-stateful-application/)
或者[多副本集合](/zh/docs/tasks/run-application/run-replicated-stateful-application/)
的形式运行有状态的应用;
* [使用 CronJob 运行自动化的任务](/zh/docs/tasks/job/automated-tasks-with-cron-jobs/)
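作为补充示意,下面是一个最小的 CronJob 清单草稿(名称与命令均为假设值;`apiVersion` 随集群版本不同可能为 `batch/v1beta1` 或 `batch/v1`),它每分钟运行一次任务:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-cron               # 假设的名称
spec:
  schedule: "*/1 * * * *"        # Cron 表达式:每分钟执行一次
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["date"]    # 示意性的任务命令
          restartPolicy: OnFailure
```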
<!--
Once your application is running, you might want to make it available on the internet as
a [Service](/docs/concepts/services-networking/service/) or, for web application only,
using an [Ingress](/docs/concepts/services-networking/ingress).
You can also visit [Configuration](/docs/concepts/configuration/) to learn about Kubernetes'
mechanisms for separating code from configuration.
-->
一旦你的应用处于运行状态,你就可能想要
以[服务](/zh/docs/concepts/services-networking/service/)的形式
使之在互联网上可访问;或者对于 Web 应用而言,使用
[Ingress](/zh/docs/concepts/services-networking/ingress/) 将其暴露到互联网上。
你还可以访问[配置](/zh/docs/concepts/configuration/)页面,
了解 Kubernetes 用来分离代码与配置的机制。
@ -0,0 +1,42 @@
---
title: 对象
id: object
date: 2020-10-12
full_link: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetes-objects
short_description: >
Kubernetes 系统中的实体,代表集群状态的一部分。
aka:
tags:
- fundamental
---
<!--
---
title: Object
id: object
date: 2020-10-12
full_link: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetes-objects
short_description: >
An entity in the Kubernetes system, representing part of the state of your cluster.
aka:
tags:
- fundamental
---
-->
<!--
An entity in the Kubernetes system. The Kubernetes API uses these entities to represent the state
of your cluster.
-->
Kubernetes 系统中的实体。Kubernetes API 用这些实体表示集群的状态。
<!--more-->
<!--
A Kubernetes object is typically a “record of intent”—once you create the object, the Kubernetes
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} works constantly to ensure
that the item it represents actually exists.
By creating an object, you're effectively telling the Kubernetes system what you want that part of
your cluster's workload to look like; this is your cluster's desired state.
-->
Kubernetes 对象通常是一个“意向记录”:一旦你创建了某个对象,Kubernetes
{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}
就会持续工作,确保它所代表的对象确实存在。
创建对象时,你实际上是在告知 Kubernetes 系统:你期望这部分集群工作负载呈现的样子;这就是集群的期望状态。
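举例来说,下面这个最小的对象清单(名称为假设值)就表达了一条对集群状态的期望:你声明应当存在一个名为 demo 的 Namespace,控制平面会持续工作使之成立:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo   # 假设的名称;创建后,控制平面会确保该 Namespace 持续存在
```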
@ -0,0 +1,25 @@
---
title: Turnkey 云解决方案
content_type: concept
weight: 30
---
<!--
---
title: Turnkey Cloud Solutions
content_type: concept
weight: 30
---
-->
<!-- overview -->
<!--
This page provides a list of Kubernetes certified solution providers. From each
provider page, you can learn how to install and setup production
ready clusters.
-->
本页列出 Kubernetes 认证的解决方案供应商。
在每个供应商的页面中,你可以了解如何安装和设置生产就绪的集群。
<!-- body -->
{{< cncf-landscape helpers=true category="certified-kubernetes-hosted" >}}
@ -1,4 +0,0 @@
---
title: Turnkey 云解决方案
weight: 30
---
@ -1,44 +0,0 @@
---
reviewers:
- colemickens
- brendandburns
title: 在阿里云上运行 Kubernetes
---
<!--
---
reviewers:
- colemickens
- brendandburns
title: Running Kubernetes on Alibaba Cloud
---
-->
<!--
## Alibaba Cloud Container Service
The [Alibaba Cloud Container Service](https://www.alibabacloud.com/product/container-service) lets you run and manage Docker applications on a cluster of Alibaba Cloud ECS instances. It supports the popular open source container orchestrators: Docker Swarm and Kubernetes.
To simplify cluster deployment and management, use [Kubernetes Support for Alibaba Cloud Container Service](https://www.alibabacloud.com/product/kubernetes). You can get started quickly by following the [Kubernetes walk-through](https://www.alibabacloud.com/help/doc-detail/86737.htm), and there are some [tutorials for Kubernetes Support on Alibaba Cloud](https://yq.aliyun.com/teams/11/type_blog-cid_200-page_1) in Chinese.
To use custom binaries or open source Kubernetes, follow the instructions below.
-->
## 阿里云容器服务
[阿里云容器服务](https://www.alibabacloud.com/product/container-service)使您可以在阿里云 ECS 实例集群上运行和管理 Docker 应用程序。它支持流行的开源容器编排引擎Docker Swarm 和 Kubernetes。
为了简化集群的部署和管理,请使用[容器服务 Kubernetes 版](https://www.alibabacloud.com/product/kubernetes)。您可以按照 [Kubernetes 演练](https://www.alibabacloud.com/help/doc-detail/86737.htm)快速入门;另外还有一些中文的[容器服务 Kubernetes 版教程](https://yq.aliyun.com/teams/11/type_blog-cid_200-page_1)可供参考。
要使用自定义二进制文件或开源版本的 Kubernetes请按照以下说明进行操作。
<!--
## Custom Deployments
The source code for [Kubernetes with Alibaba Cloud provider implementation](https://github.com/AliyunContainerService/kubernetes) is open source and available on GitHub.
For more information, see "[Quick deployment of Kubernetes - VPC environment on Alibaba Cloud](https://www.alibabacloud.com/forum/read-830)" in English and [Chinese](https://yq.aliyun.com/articles/66474).
-->
## 自定义部署
[阿里云 Kubernetes Cloud Provider 实现](https://github.com/AliyunContainerService/kubernetes) 的源代码是开源的,可在 GitHub 上获得。
有关更多信息,请参阅中文版本[快速部署 Kubernetes - 阿里云上的 VPC 环境](https://yq.aliyun.com/articles/66474)和[英文版本](https://www.alibabacloud.com/forum/read-830)。
@ -1,166 +0,0 @@
---
title: 在 AWS EC2 上运行 Kubernetes
content_type: task
---
<!--
reviewers:
- justinsb
- clove
title: Running Kubernetes on AWS EC2
content_type: task
-->
<!-- overview -->
<!--
This page describes how to install a Kubernetes cluster on AWS.
-->
本页面介绍了如何在 AWS 上安装 Kubernetes 集群。
## {{% heading "prerequisites" %}}
<!--
To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS.
-->
在 AWS 上创建 Kubernetes 集群,你将需要 AWS 的 Access Key ID 和 Secret Access Key。
<!--
### Supported Production Grade Tools
-->
### 支持的生产级别工具
<!--
* [conjure-up](/docs/getting-started-guides/ubuntu/) is an open-source installer for Kubernetes that creates Kubernetes clusters with native AWS integrations on Ubuntu.
-->
* [conjure-up](/zh/docs/setup/) 是 Kubernetes 的开源安装程序,可在 Ubuntu 上创建与原生 AWS 集成的 Kubernetes 集群。
<!--
* [Kubernetes Operations](https://github.com/kubernetes/kops) - Production Grade K8s Installation, Upgrades, and Management. Supports running Debian, Ubuntu, CentOS, and RHEL in AWS.
-->
* [Kubernetes Operations](https://github.com/kubernetes/kops) - 生产级 K8s 的安装、升级和管理。支持在 AWS 运行 Debian、Ubuntu、CentOS 和 RHEL。
<!--
* [kube-aws](https://github.com/kubernetes-incubator/kube-aws), creates and manages Kubernetes clusters with [Flatcar Linux](https://www.flatcar-linux.org/) nodes, using AWS tools: EC2, CloudFormation and Autoscaling.
-->
* [kube-aws](https://github.com/kubernetes-incubator/kube-aws) 使用 [Flatcar Linux](https://www.flatcar-linux.org/) 节点创建和管理 Kubernetes 集群,它使用了 AWS 工具EC2、CloudFormation 和 Autoscaling。
<!--
* [KubeOne](https://github.com/kubermatic/kubeone) is an open source cluster lifecycle management tool that creates, upgrades and manages Kubernetes Highly-Available clusters.
-->
* [KubeOne](https://github.com/kubermatic/kubeone) 是一个开源集群生命周期管理工具,它可用于创建、升级和管理高可用 Kubernetes 集群。
<!-- steps -->
<!--
## Getting started with your cluster
-->
## 集群入门
<!--
### Command line administration tool: kubectl
-->
### 命令行管理工具kubectl
<!--
The cluster startup script will leave you with a `kubernetes` directory on your workstation.
Alternately, you can download the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases).
Next, add the appropriate binary folder to your `PATH` to access kubectl:
-->
集群启动脚本将在你的工作站上为你提供一个 `kubernetes` 目录。
或者,你可以从[此页面](https://github.com/kubernetes/kubernetes/releases)下载最新的 Kubernetes 版本。
接下来,将适当的二进制文件夹添加到你的 `PATH` 以访问 kubectl
```shell
# macOS
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
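可以用 `command -v` 验证 PATH 设置是否生效。下面是一个可独立运行的小示例(其中的临时目录与模拟的 kubectl 可执行文件均为演示用的假设对象,并非真实的 kubectl),演示把目录加入 `PATH` 后命令即可被找到:

```shell
# 创建一个临时目录,放入一个模拟的可执行文件(仅作演示)
bindir="$(mktemp -d)"
printf '#!/bin/sh\necho fake-kubectl\n' > "$bindir/kubectl"
chmod +x "$bindir/kubectl"

# 将该目录加入 PATH(与上文对 kubernetes 目录的做法相同)
export PATH="$bindir:$PATH"

# 验证:command -v 应当解析到刚加入的目录
command -v kubectl
kubectl
```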
<!--
An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/user-guide/kubectl/)
By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
For more information, please read [kubeconfig files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
-->
此工具的最新文档页面位于此处:[kubectl 手册](/zh/docs/reference/kubectl/kubectl/)
默认情况下,`kubectl` 将使用在集群启动期间生成的 `kubeconfig` 文件对 API 进行身份验证。
有关更多信息,请阅读 [kubeconfig 文件](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)。
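作为参考,`kubeconfig` 文件大体上是如下结构的 YAML(其中集群名、服务器地址、用户名以及各处占位内容均为假设值):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: aws-cluster                  # 假设的集群名
  cluster:
    server: https://203.0.113.10     # 假设的 API 服务器地址
    certificate-authority-data: <base64 编码的 CA 证书>
users:
- name: admin                        # 假设的用户名
  user:
    client-certificate-data: <base64 编码的客户端证书>
    client-key-data: <base64 编码的客户端私钥>
contexts:
- name: aws
  context:
    cluster: aws-cluster
    user: admin
current-context: aws                 # kubectl 默认使用的上下文
```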
<!--
### Examples
See [a simple nginx example](/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/)
-->
### 示例
请参阅[一个简单的 nginx 示例](/zh/docs/tasks/run-application/run-stateless-application-deployment/)试用你的新集群。
“Guestbook” 应用程序是另一个入门 Kubernetes 的流行示例:[guestbook 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)。
有关更完整的应用程序,请查看[示例目录](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/)。
<!--
## Scaling the cluster
Adding and removing nodes through `kubectl` is not supported. You can still scale the amount of nodes manually through adjustments of the 'Desired' and 'Max' properties within the [Auto Scaling Group](http://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html), which was created during the installation.
-->
## 集群伸缩
不支持通过 `kubectl` 添加和删除节点。你仍然可以通过调整在安装过程中创建的
[Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html)
中的 “Desired” 和 “Max” 属性来手动伸缩节点数量。
<!--
## Tearing down the cluster
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
`kubernetes` directory:
-->
## 集群拆除
确保你用于配置集群的环境变量仍处于已导出状态,然后在 `kubernetes` 目录中运行以下脚本:
```shell
cluster/kube-down.sh
```
<!--
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ----------------------------
AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community
AWS | Juju | Ubuntu | flannel, calico, canal | [docs](/docs/getting-started-guides/ubuntu) | 100% | Commercial, Community
AWS | KubeOne | Ubuntu, CoreOS, CentOS | canal, weavenet | [docs](https://github.com/kubermatic/kubeone) | 100% | Commercial, Community
-->
## 支持等级
IaaS 提供商 | 配置管理 | 操作系统 | 网络 | 文档 | 符合率 | 支持等级
-------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ----------------------------
AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
AWS | CoreOS | CoreOS | flannel | [docs](/zh/docs/setup/) | | Community
AWS | Juju | Ubuntu | flannel, calico, canal | [docs](/zh/docs/setup/) | 100% | Commercial, Community
AWS | KubeOne | Ubuntu, CoreOS, CentOS | canal, weavenet | [docs](https://github.com/kubermatic/kubeone) | 100% | Commercial, Community
<!--
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.
-->
## 进一步阅读
请参阅 [Kubernetes 文档](/zh/docs/)了解有关管理和使用 Kubernetes 集群的更多详细信息。
@ -1,76 +0,0 @@
---
reviewers:
- colemickens
- brendandburns
title: 在 Azure 上运行 Kubernetes
---
<!--
---
reviewers:
- colemickens
- brendandburns
title: Running Kubernetes on Azure
---
-->
<!--
## Azure Kubernetes Service (AKS)
The [Azure Kubernetes Service](https://azure.microsoft.com/en-us/services/kubernetes-service/) offers simple
deployments for Kubernetes clusters.
For an example of deploying a Kubernetes cluster onto Azure via the Azure Kubernetes Service:
**[Microsoft Azure Kubernetes Service](https://docs.microsoft.com/zh-cn/azure/aks/intro-kubernetes)**
-->
## Azure Kubernetes 服务 (AKS)
[Azure Kubernetes 服务](https://azure.microsoft.com/zh-cn/services/kubernetes-service/)提供了简单的
Kubernetes 集群部署方式。
有关通过 Azure Kubernetes 服务将 Kubernetes 集群部署到 Azure 的示例:
**[微软 Azure Kubernetes 服务](https://docs.microsoft.com/zh-cn/azure/aks/intro-kubernetes)**
<!--
## Custom Deployments: AKS-Engine
The core of the Azure Kubernetes Service is **open source** and available on GitHub for the community
to use and contribute to: **[AKS-Engine](https://github.com/Azure/aks-engine)**. The legacy [ACS-Engine](https://github.com/Azure/acs-engine) codebase has been deprecated in favor of AKS-engine.
AKS-Engine is a good choice if you need to make customizations to the deployment beyond what the Azure Kubernetes
Service officially supports. These customizations include deploying into existing virtual networks, utilizing multiple
agent pools, and more. Some community contributions to AKS-Engine may even become features of the Azure Kubernetes Service.
The input to AKS-Engine is an apimodel JSON file describing the Kubernetes cluster. It is similar to the Azure Resource Manager (ARM) template syntax used to deploy a cluster directly with the Azure Kubernetes Service. The resulting output is an ARM template that can be checked into source control and used to deploy Kubernetes clusters to Azure.
You can get started by following the **[AKS-Engine Kubernetes Tutorial](https://github.com/Azure/aks-engine/blob/master/docs/tutorials/README.md)**.
-->
## 定制部署AKS 引擎
Azure Kubernetes 服务的核心是**开源的**,可在 GitHub 上供社区使用和贡献:**[AKS 引擎](https://github.com/Azure/aks-engine)**。旧版 [ACS 引擎](https://github.com/Azure/acs-engine)代码库已被弃用,由 AKS 引擎取代。
如果您需要在 Azure Kubernetes 服务正式支持的范围之外对部署进行自定义,则 AKS 引擎是一个不错的选择。这些自定义包括部署到现有虚拟网络中,利用多个代理程序池等。一些社区对 AKS 引擎的贡献甚至可能成为 Azure Kubernetes 服务的特性。
AKS 引擎的输入是一个描述 Kubernetes 集群的 apimodel JSON 文件。它和用于直接通过 Azure Kubernetes 服务部署集群的 Azure 资源管理器ARM模板语法相似。产生的输出是一个 ARM 模板,可以将其签入源代码管理,并使用它将 Kubernetes 集群部署到 Azure。
您可以按照 **[AKS 引擎 Kubernetes 教程](https://github.com/Azure/aks-engine/blob/master/docs/tutorials/README.md)**开始使用。
<!--
## CoreOS Tectonic for Azure
The CoreOS Tectonic Installer for Azure is **open source** and available on GitHub for the community to use and contribute to: **[Tectonic Installer](https://github.com/coreos/tectonic-installer)**.
Tectonic Installer is a good choice when you need to make cluster customizations as it is built on [Hashicorp's Terraform](https://www.terraform.io/docs/providers/azurerm/) Azure Resource Manager (ARM) provider. This enables users to customize or integrate using familiar Terraform tooling.
You can get started using the [Tectonic Installer for Azure Guide](https://coreos.com/tectonic/docs/latest/install/azure/azure-terraform.html).
-->
## 适用于 Azure 的 CoreOS Tectonic
适用于 Azure 的 CoreOS Tectonic Installer 是**开源的**,它可以让社区在 GitHub 上使用和参与贡献:**[Tectonic Installer](https://github.com/coreos/tectonic-installer)**。
当您需要对集群进行自定义时,Tectonic Installer 是一个不错的选择,因为它是基于 [Hashicorp 的 Terraform](https://www.terraform.io/docs/providers/azurerm/) Azure 资源管理器(ARM)提供程序构建的。这使用户可以使用熟悉的 Terraform 工具进行自定义或集成。
您可以按照[在 Azure 上安装 Tectonic 指南](https://coreos.com/tectonic/docs/latest/install/azure/azure-terraform.html)开始使用。
@ -1,398 +0,0 @@
---
title: 在谷歌计算引擎上运行 Kubernetes
content_type: task
---
<!--
---
reviewers:
- brendandburns
- jbeda
- mikedanese
- thockin
title: Running Kubernetes on Google Compute Engine
content_type: task
---
-->
<!-- overview -->
<!--
The example below creates a Kubernetes cluster with 3 worker node Virtual Machines and a master Virtual Machine (i.e. 4 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
-->
下面的示例创建了一个 Kubernetes 集群,其中包含 3 个工作节点虚拟机和 1 个主虚拟机(即集群中有 4 个虚拟机)。
这个集群是在你的工作站(或你认为方便的任何地方)设置和控制的。
## {{% heading "prerequisites" %}}
<!--
If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) for hosted cluster installation and management.
-->
如果你想要一个简化的入门体验和 GUI 来管理集群,
请考虑尝试[谷歌 Kubernetes 引擎](https://cloud.google.com/kubernetes-engine/)来安装和管理托管集群。
<!--
For an easy way to experiment with the Kubernetes development environment, click the button below
to open a Google Cloud Shell with an auto-cloned copy of the Kubernetes source repo.
-->
有一个简单的方式可以使用 Kubernetes 开发环境进行实验,
就是点击下面的按钮,打开 Google Cloud Shell其中包含了 Kubernetes 源仓库自动克隆的副本。
<!--
[![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/kubernetes/kubernetes&page=editor&open_in_editor=README.md)
-->
[![在 Cloud Shell 中打开](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/kubernetes/kubernetes&page=editor&open_in_editor=README.md)
<!--
If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
-->
如果你想要使用定制的二进制或者纯开源的 Kubernetes请继续阅读下面的指导。
<!-- ### Prerequisites -->
### 前提条件 {#prerequisites}
<!--
1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](https://console.cloud.google.com) for more details.
1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
1. Enable the [Compute Engine Instance Group Manager API](https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview) in the [Google Cloud developers console](https://console.developers.google.com/apis/library).
1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
1. Make sure you have credentials for GCloud by running `gcloud auth login`.
1. (Optional) In order to make API calls against GCE, you must also run `gcloud auth application-default login`.
1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart.
1. Make sure you can SSH into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart.
-->
1. 你需要一个启用了计费的谷歌云平台账号。
更多细节请访问[谷歌开发者控制台](https://console.cloud.google.com)。
1. 根据需要安装 `gcloud`
`gcloud` 可作为[谷歌云 SDK](https://cloud.google.com/sdk/) 的一部分安装。
1. 在[谷歌云开发者控制台](https://console.developers.google.com/apis/library)
启用[计算引擎实例组管理器 API](https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview)
1. 确保将 gcloud 设置成使用你想要的谷歌云平台项目。
你可以使用 `gcloud config list project` 检查当前项目,
并通过 `gcloud config set project <project-id>` 修改它。
1. 通过运行 `gcloud auth login`,确保你拥有 GCloud 的凭据。
1. (可选)如果需要调用 GCE 的 API你也必须运行 `gcloud auth application-default login`
1. 确保你能通过命令行启动 GCE 虚拟机。
至少确保你可以完成 GCE 快速入门的[创建实例](https://cloud.google.com/compute/docs/instances/#startinstancegcloud)部分。
1. 确保你在没有交互式提示的情况下 SSH 到虚拟机。
查看 GCE 快速入门的[登录实例](https://cloud.google.com/compute/docs/instances/#sshing)部分。
<!-- steps -->
<!-- ## Starting a cluster -->
## 启动集群
<!--
You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):
-->
你可以安装一个客户端,并使用以下命令之一来启动集群(两条命令都列出,是因为你的机器上可能只安装了其中一个工具):
```shell
curl -sS https://get.k8s.io | bash
```
```shell
wget -q -O - https://get.k8s.io | bash
```
<!--
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
-->
这条命令执行完成后,你将会有 1 个主虚拟机和 4 个工作虚拟机,它们一起作为 Kubernetes 集群运行。
<!--
By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](https://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md) services.
-->
默认情况下,有一些容器已经在你的集群上运行。
`fluentd` 这样的容器提供[日志记录](/zh/docs/concepts/cluster-administration/logging/)
`heapster` 提供[监控](https://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md)服务。
<!--
The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
-->
由上述命令运行的脚本创建了一个名称/前缀为“kubernetes”的集群。
它定义了一个特定的集群配置,所以此脚本只能运行一次。
<!--
Alternately, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:
-->
或者,你可以通过[这个页面](https://github.com/kubernetes/kubernetes/releases)下载和安装最新版本的 Kubernetes
然后运行 `<kubernetes>/cluster/kube-up.sh` 脚本启动集群:
```shell
cd kubernetes
cluster/kube-up.sh
```
<!--
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
-->
如果你希望在项目中运行多个集群,希望使用一个不同名称,或者不同数量工作节点的集群,
请查看 `<kubernetes>/cluster/gce/config-default.sh` 文件,以便在启动集群之前进行更细粒度的配置。
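例如,启动脚本会读取一些环境变量来覆盖默认配置(此处以 `NUM_NODES` 为示例变量名,具体可用的变量请以你所用版本的 `config-default.sh` 为准):

```shell
# 在运行 kube-up.sh 之前导出变量,脚本会据此覆盖默认的工作节点数量
# (NUM_NODES 为示例变量名,请以实际脚本中的变量为准)
export NUM_NODES=5
echo "NUM_NODES=${NUM_NODES}"
# cluster/kube-up.sh   # 随后再启动集群
```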
<!--
If you run into trouble, please see the section on [troubleshooting](/docs/setup/production-environment/turnkey/gce/#troubleshooting), post to the
[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on `#gke` Slack channel.
-->
如果你遇到了问题,请参阅[错误排查](#troubleshooting)一节,
发布到 [Kubernetes 论坛](https://discuss.kubernetes.io),或者来 `#gke` Slack 频道中提问。
<!-- The next few steps will show you: -->
接下来的几个步骤会告诉你:
<!--
1. How to set up the command line client on your workstation to manage the cluster
2. Examples of how to use the cluster
3. How to delete the cluster
4. How to start clusters with non-default options (like larger clusters)
-->
1. 如何在你的工作站设置命令行客户端来管理集群
2. 如何使用集群的示例
3. 如何删除集群
4. 如何以非默认选项启动集群(如规模较大的集群)
<!-- ## Installing the Kubernetes command line tools on your workstation -->
## 在你的工作站安装 Kubernetes 命令行工具
<!--
The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
-->
集群启动脚本将在你的工作站上留下一个正在运行的集群和一个 `kubernetes` 目录。
<!--
The [kubectl](/docs/reference/kubectl/kubectl/) tool controls the Kubernetes cluster
manager. It lets you inspect your cluster resources, create, delete, and update
components, and much more. You will use it to look at your new cluster and bring
up example apps.
-->
[kubectl](/zh/docs/reference/kubectl/kubectl/) 工具控制 Kubernetes 集群管理器。
它允许你检查集群资源,创建、删除和更新组件等等。
你将使用它来查看新集群并启动示例应用程序。
<!--
You can use `gcloud` to install the `kubectl` command-line tool on your workstation:
-->
你可以使用 `gcloud` 在工作站上安装 `kubectl` 命令行工具:
```shell
gcloud components install kubectl
```
{{< note >}}
<!--
The kubectl version bundled with `gcloud` may be older than the one
downloaded by the get.k8s.io install script. See [Installing kubectl](/docs/tasks/tools/install-kubectl/)
document to see how you can set up the latest `kubectl` on your workstation.
-->
`gcloud` 绑定的 kubectl 版本可能比 get.k8s.io 安装脚本所下载的版本旧。
查看[安装 kubectl](/zh/docs/tasks/tools/install-kubectl/) 文档,了解如何在工作站上设置最新的 `kubectl`
{{< /note >}}
<!-- ## Getting started with your cluster -->
## 开始使用你的集群
<!-- ### Inspect your cluster -->
### 检查你的集群
<!--
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
-->
一旦 `kubectl` 存在于你的路径中,你就可以使用它来查看集群,例如,运行:
```
kubectl get --all-namespaces services
```
<!--
should show a set of [services](/docs/concepts/services-networking/service/) that look something like this:
-->
应该显示一组[服务](/zh/docs/concepts/services-networking/service/),类似这样:
```
NAMESPACE NAME TYPE CLUSTER_IP EXTERNAL_IP PORT(S) AGE
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1d
kube-system kube-dns ClusterIP 10.0.0.2 <none> 53/TCP,53/UDP 1d
kube-system kube-ui ClusterIP 10.0.0.3 <none> 80/TCP 1d
...
```
<!--
Similarly, you can take a look at the set of [pods](/docs/concepts/workloads/pods/) that were created during cluster startup.
You can do this via the
-->
类似地,你可以查看集群启动时创建的一组 [Pod](/zh/docs/concepts/workloads/pods/)。
可以通过以下命令来查看:
```
kubectl get --all-namespaces pods
```
<!--
You'll see a list of pods that looks something like this (the name specifics will be different):
-->
你将会看到 Pod 的列表,看起来像这样(名称和细节会有所不同):
```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5f4fbb68df-mc8z8 1/1 Running 0 15m
kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-c4og 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-ngua 1/1 Running 0 14m
kube-system kube-ui-v1-curt1 1/1 Running 0 15m
kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m
kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m
```
<!--
Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
-->
一些 Pod 启动可能需要几秒钟(在此期间它们会显示 `Pending`
但是在短时间后请检查它们是否都显示为 `Running`
<!-- ### Run some examples -->
### 运行示例
<!--
Then, see [a simple nginx example](/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.
-->
接下来,请参阅[一个简单的 nginx 示例](/zh/docs/tasks/run-application/run-stateless-application-deployment/)来试用你的新集群。
<!--
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) is a good "getting started" walkthrough.
-->
要获得完整的应用,请查看 [examples 目录](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/)。
[guestbook 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)
是一个很好的“入门”演练。
<!-- ## Tearing down the cluster -->
## 拆除集群
<!-- To remove/delete/teardown the cluster, use the `kube-down.sh` script. -->
要移除/删除/拆除集群,请使用 `kube-down.sh` 脚本。
```shell
cd kubernetes
cluster/kube-down.sh
```
<!--
Likewise, the `kube-up.sh` in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to setup the Kubernetes cluster is now on your workstation.
-->
同样地,同一目录下的 `kube-up.sh` 脚本会让集群重新运行起来。
你不需要再次运行 `curl``wget` 命令:现在 Kubernetes 集群所需的一切都在你的工作站上。
<!-- ## Customizing -->
## 定制
<!--
The script above relies on Google Storage to stage the Kubernetes release. It
then will start (by default) a single master VM along with 3 worker VMs. You
can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`
You can view a transcript of a successful cluster creation
[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
-->
上面的脚本依赖于谷歌存储来保存 Kubernetes 发行版本。
随后,该脚本(默认情况下)会启动 1 个主虚拟机和 3 个工作虚拟机。
你可以通过编辑 `kubernetes/cluster/gce/config-default.sh` 来调整这些参数。
你可以在[这里](https://gist.github.com/satnam6502/fc689d1b46db9772adea)查看成功创建集群的记录。
<!-- ## Troubleshooting -->
## 故障排除 {#troubleshooting}
<!-- ### Project settings -->
### 项目设置
<!--
You need to have the Google Cloud Storage API, and the Google Cloud Storage
JSON API enabled. It is activated by default for new projects. Otherwise, it
can be done in the Google Cloud Console. See the [Google Cloud Storage JSON
API Overview](https://cloud.google.com/storage/docs/json_api/) for more
details.
-->
你需要启用 Google Cloud Storage API 和 Google Cloud Storage JSON API。
默认情况下,对新项目都是激活的。
如果未激活,可以在谷歌云控制台设置。
更多细节,请查看[谷歌云存储 JSON API 概览](https://cloud.google.com/storage/docs/json_api/)。
<!--
Also ensure that-- as listed in the [Prerequisites section](#prerequisites)-- you've enabled the `Compute Engine Instance Group Manager API`, and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions.
-->
也要确保——正如在[前提条件](#prerequisites)中列出的那样——
你已经启用了 `Compute Engine Instance Group Manager API`
并且可以像 [GCE 快速入门](https://cloud.google.com/compute/docs/quickstart)指导那样从命令行启动 GCE 虚拟机。
<!-- ### Cluster initialization hang -->
### 集群初始化过程停滞
<!--
If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.
-->
如果 Kubernetes 启动脚本停滞,等待 API 可达,
你可以 SSH 登录到主虚拟机和工作虚拟机,
通过查看 `/var/log/startupscript.log` 日志来排除故障。
<!--
**Once you fix the issue, you should run `kube-down.sh` to cleanup** after the partial cluster creation, before running `kube-up.sh` to try again.
-->
**一旦解决了这个问题,你应该先运行 `kube-down.sh` 清理掉部分创建的集群**,然后再运行 `kube-up.sh` 重试。
### SSH
<!--
If you're having trouble SSHing into your instances, ensure the GCE firewall
isn't blocking port 22 to your VMs. By default, this should work but if you
have edited firewall rules or created a new non-default network, you'll need to
expose it: `gcloud compute firewall-rules create default-ssh --network=<network-name>
--description "SSH allowed from anywhere" --allow tcp:22`
-->
如果在 SSH 登录实例时遇到困难,确保 GCE 防火墙没有阻塞你虚拟机的 22 端口。
默认情况下应该可用,但是如果你编辑了防火墙规则或者创建了一个新的非默认网络,
你需要公开它:`gcloud compute firewall-rules create default-ssh --network=<network-name> --description "SSH allowed from anywhere" --allow tcp:22`
<!--
Additionally, your GCE SSH key must either have no passcode or you need to be
using `ssh-agent`.
-->
此外,你的 GCE SSH 密钥不能有密码,否则你需要使用 `ssh-agent`
<!-- ### Networking -->
### 网络
<!--
The instances must be able to connect to each other using their private IP. The
script uses the "default" network which should have a firewall rule called
"default-allow-internal" which allows traffic on any port on the private IPs.
If this rule is missing from the default network or if you change the network
being used in `cluster/config-default.sh` create a new rule with the following
field values:
-->
虚拟机实例必须能够使用它们的私有 IP 彼此连接。
该脚本使用 "default" 网络,此网络应该有一个名为 "default-allow-internal" 的防火墙规则,
此规则允许通过私有 IP 上的任何端口进行通信。
如果默认网络中缺少此规则,或者更改了 `cluster/config-default.sh` 中使用的网络,
用以下字段值创建一个新规则:
<!--
* Source Ranges: `10.0.0.0/8`
* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
-->
* 源范围:`10.0.0.0/8`
* 允许的协议和端口:`tcp:1-65535;udp:1-65535;icmp`
<!-- ## Support Level -->
## 支持等级
<!--
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
GCE | Saltstack | Debian | GCE | [docs](/docs/setup/production-environment/turnkey/gce/) | | Project
-->
IaaS 提供商 | 配置管理 | 操作系统 | 网络 | 文档 | 符合率 | 支持等级
---------- | --------- | ------ | ---- | --------------------------------------------------------- | ----- | -------
GCE | Saltstack | Debian | GCE | [docs](/zh/docs/setup/production-environment/turnkey/gce/) | | Project
@ -1,162 +0,0 @@
---
reviewers:
- bradtopol
title: 使用 IBM Cloud Private 在多个云上运行 Kubernetes
---
<!--
---
reviewers:
- bradtopol
title: Running Kubernetes on Multiple Clouds with IBM Cloud Private
---
-->
<!--
IBM® Cloud Private is a turnkey cloud solution and an on-premises turnkey cloud solution. IBM Cloud Private delivers pure upstream Kubernetes with the typical management components that are required to run real enterprise workloads. These workloads include health management, log management, audit trails, and metering for tracking usage of workloads on the platform.
-->
IBM® Cloud Private 是一个一站式云解决方案,也是一个可本地部署的一站式云解决方案。IBM Cloud Private 提供纯上游 Kubernetes,以及运行实际企业工作负载所需的典型管理组件。这些工作负载包括健康管理、日志管理、审计跟踪以及用于跟踪平台上工作负载使用情况的计量。
<!--
IBM Cloud Private is available in a community edition and a fully supported enterprise edition. The community edition is available at no charge from [Docker Hub](https://hub.docker.com/r/ibmcom/icp-inception/). The enterprise edition supports high availability topologies and includes commercial support from IBM for Kubernetes and the IBM Cloud Private management platform. If you want to try IBM Cloud Private, you can use either the hosted trial, the tutorial, or the self-guided demo. You can also try the free community edition. For details, see [Get started with IBM Cloud Private](https://www.ibm.com/cloud/private/get-started).
-->
IBM Cloud Private 提供了社区版和全支持的企业版。可从 [Docker Hub](https://hub.docker.com/r/ibmcom/icp-inception/) 免费获得社区版本。企业版支持高可用性拓扑,并包括 IBM 对 Kubernetes 和 IBM Cloud Private 管理平台的商业支持。如果您想尝试 IBM Cloud Private您可以使用托管试用版、教程或自我指导演示。您也可以尝试免费的社区版。有关详细信息请参阅 [IBM Cloud Private 入门](https://www.ibm.com/cloud/private/get-started)。
<!--
For more information, explore the following resources:
* [IBM Cloud Private](https://www.ibm.com/cloud/private)
* [Reference architecture for IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud)
* [IBM Cloud Private documentation](https://www.ibm.com/support/knowledgecenter/SSBS6K/product_welcome_cloud_private.html)
-->
有关更多信息,请浏览以下资源:
* [IBM Cloud Private](https://www.ibm.com/cloud/private)
* [IBM Cloud Private 参考架构](https://github.com/ibm-cloud-architecture/refarch-privatecloud)
* [IBM Cloud Private 文档](https://www.ibm.com/support/knowledgecenter/SSBS6K/product_welcome_cloud_private.html)
<!--
## IBM Cloud Private and Terraform
The following modules are available where you can deploy IBM Cloud Private by using Terraform:
* AWS: [Deploy IBM Cloud Private to AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws)
* Azure: [Deploy IBM Cloud Private to Azure](https://github.com/ibm-cloud-architecture/terraform-icp-azure)
* IBM Cloud: [Deploy IBM Cloud Private cluster to IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud)
* OpenStack: [Deploy IBM Cloud Private to OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack)
* Terraform module: [Deploy IBM Cloud Private on any supported infrastructure vendor](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy)
* VMware: [Deploy IBM Cloud Private to VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware)
-->
## IBM Cloud Private 和 Terraform
您可以使用以下模块,通过 Terraform 部署 IBM Cloud Private:
* AWS[将 IBM Cloud Private 部署到 AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws)
* Azure[将 IBM Cloud Private 部署到 Azure](https://github.com/ibm-cloud-architecture/terraform-icp-azure)
* IBM Cloud[将 IBM Cloud Private 集群部署到 IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud)
* OpenStack[将IBM Cloud Private 部署到 OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack)
* Terraform 模块:[在任何支持的基础架构供应商上部署 IBM Cloud Private](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy)
* VMware[将 IBM Cloud Private 部署到 VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware)
<!--
## IBM Cloud Private on AWS
-->
## AWS 上的 IBM Cloud Private
<!--
You can deploy an IBM Cloud Private cluster on Amazon Web Services (AWS) by using either AWS CloudFormation or Terraform.
-->
您可以使用 AWS CloudFormation 或 Terraform 在 Amazon Web ServicesAWS上部署 IBM Cloud Private 集群。
<!--
IBM Cloud Private has a Quick Start that automatically deploys IBM Cloud Private into a new virtual private cloud (VPC) on the AWS Cloud. A regular deployment takes about 60 minutes, and a high availability (HA) deployment takes about 75 minutes to complete. The Quick Start includes AWS CloudFormation templates and a deployment guide.
-->
IBM Cloud Private 快速入门可以自动将 IBM Cloud Private 部署到 AWS Cloud 上的新虚拟私有云(VPC)中。常规部署大约需要 60 分钟,而高可用性(HA)部署大约需要 75 分钟完成。快速入门包括 AWS CloudFormation 模板和部署指南。
<!--
This Quick Start is for users who want to explore application modernization and want to accelerate meeting their digital transformation goals, by using IBM Cloud Private and IBM tooling. The Quick Start helps users rapidly deploy a high availability (HA), production-grade, IBM Cloud Private reference architecture on AWS. For all of the details and the deployment guide, see the [IBM Cloud Private on AWS Quick Start](https://aws.amazon.com/quickstart/architecture/ibm-cloud-private/).
-->
这个快速入门适用于希望探索应用程序现代化并希望通过使用 IBM Cloud Private 和 IBM 工具加速实现其数字化转型目标的用户。快速入门可帮助用户在 AWS 上快速部署高可用性(HA)、生产级的 IBM Cloud Private 参考架构。有关所有详细信息和部署指南,请参阅 [IBM Cloud Private 在 AWS 上的快速入门](https://aws.amazon.com/quickstart/architecture/ibm-cloud-private/)。
<!--
IBM Cloud Private can also run on the AWS cloud platform by using Terraform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md).
-->
IBM Cloud Private 也可以通过使用 Terraform 在 AWS 云平台上运行。要在 AWS EC2 环境中部署 IBM Cloud Private请参阅[在 AWS 上安装 IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md)。
<!--
## IBM Cloud Private on Azure
You can enable Microsoft Azure as a cloud provider for IBM Cloud Private deployment and take advantage of all the IBM Cloud Private features on the Azure public cloud. For more information, see [IBM Cloud Private on Azure](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/azure_overview.html).
-->
## Azure 上的 IBM Cloud Private
您可以启用 Microsoft Azure 作为 IBM Cloud Private 部署的云提供者,并利用 Azure 公共云上的所有 IBM Cloud Private 功能。有关更多信息,请参阅 [Azure 上的 IBM Cloud Private](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/azure_overview.html)。
<!--
## IBM Cloud Private with Red Hat OpenShift
-->
## 带有 Red Hat OpenShift 的 IBM Cloud Private
<!--
You can deploy IBM certified software containers that are running on IBM Cloud Private onto Red Hat OpenShift.
-->
您可以将在 IBM Cloud Private 上运行的 IBM 认证的软件容器部署到 Red Hat OpenShift 上。
<!--
Integration capabilities:
* Supports Linux® 64-bit platform in offline-only installation mode
* Single-master configuration
* Integrated IBM Cloud Private cluster management console and catalog
* Integrated core platform services, such as monitoring, metering, and logging
* IBM Cloud Private uses the OpenShift image registry
-->
整合能力:
* 在仅脱机安装模式下支持 Linux® 64 位平台
* 单主控节点配置
* 集成的 IBM Cloud Private 集群管理控制台和目录
* 集成的核心平台服务,例如监控、计量和日志
* IBM Cloud Private 使用 OpenShift 镜像仓库
<!--
For more information see, [IBM Cloud Private on OpenShift](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/openshift/overview.html).
-->
有关更多信息,请参阅 [OpenShift 上的 IBM Cloud Private](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/openshift/overview.html)。
<!--
## IBM Cloud Private on VirtualBox
To install IBM Cloud Private to a VirtualBox environment, see [Installing IBM Cloud Private on VirtualBox](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox).
-->
## VirtualBox 上的 IBM Cloud Private
要将 IBM Cloud Private 安装到 VirtualBox 环境,请参阅[在 VirtualBox 上安装 IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox)。
<!--
## IBM Cloud Private on VMware
-->
## VMware 上的 IBM Cloud Private
<!--
You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. For details, see the following projects:
-->
您可以使用 Ubuntu 或 RHEL 镜像在 VMware 上安装 IBM Cloud Private。有关详细信息请参见以下项目
<!--
* [Installing IBM Cloud Private with Ubuntu](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md)
* [Installing IBM Cloud Private with Red Hat Enterprise](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel)
-->
* [使用 Ubuntu 安装IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md)
* [使用 Red Hat Enterprise 安装 IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel)
<!--
The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud.
-->
IBM Cloud Private Hosted 服务会自动在您的 VMware vCenter Server 实例上部署 IBM Cloud Private Hosted。此服务将微服务和容器的能力带到 IBM Cloud 上的 VMware 环境中。使用此服务,您可以将同样熟悉的 VMware 和 IBM Cloud Private 操作模型和工具从本地扩展到 IBM Cloud。
<!--
For more information, see [IBM Cloud Private Hosted service](https://cloud.ibm.com/docs/vmwaresolutions?topic=vmwaresolutions-icp_overview).
-->
有关更多信息,请参阅 [IBM Cloud Private Hosted 服务](https://cloud.ibm.com/docs/vmwaresolutions?topic=vmwaresolutions-icp_overview)。

View File

@ -1,44 +0,0 @@
---
title: 在腾讯云容器服务上运行 Kubernetes
---
<!--
---
title: Running Kubernetes on Tencent Kubernetes Engine
---
-->
<!--
## Tencent Kubernetes Engine
[Tencent Cloud Tencent Kubernetes Engine (TKE)](https://intl.cloud.tencent.com/product/tke) provides native Kubernetes container management services. You can deploy and manage a Kubernetes cluster with TKE in just a few steps. For detailed directions, see [Deploy Tencent Kubernetes Engine](https://intl.cloud.tencent.com/document/product/457/11741).
TKE is a [Certified Kubernetes product](https://www.cncf.io/certification/software-conformance/).It is fully compatible with the native Kubernetes API.
-->
## 腾讯云容器服务
[腾讯云容器服务(TKE)](https://intl.cloud.tencent.com/product/tke)提供原生的 Kubernetes 容器管理服务。您只需几个步骤即可使用 TKE 部署和管理 Kubernetes 集群。有关详细说明,请参阅[部署腾讯云容器服务](https://intl.cloud.tencent.com/document/product/457/11741)。
TKE 是[认证的 Kubernetes 产品](https://www.cncf.io/certification/software-conformance/)。它与原生 Kubernetes API 完全兼容。
<!--
## Custom Deployment
The core of Tencent Kubernetes Engine is open source and available [on GitHub](https://github.com/TencentCloud/tencentcloud-cloud-controller-manager/).
When using TKE to create a Kubernetes cluster, you can choose managed mode or independent deployment mode. In addition, you can customize the deployment as needed; for example, you can choose an existing Cloud Virtual Machine instance for cluster creation or enable Kube-proxy in IPVS mode.
-->
## 定制部署
腾讯云容器服务的核心是开源的,可在 [GitHub](https://github.com/TencentCloud/tencentcloud-cloud-controller-manager/) 上获取。
使用 TKE 创建 Kubernetes 集群时,可以选择托管模式或独立部署模式。另外,您可以根据需要自定义部署。例如,您可以选择现有的 Cloud Virtual Machine 实例来创建集群,也可以在 IPVS 模式下启用 Kube-proxy。
<!--
## What's Next
To learn more, see the [TKE documentation](https://intl.cloud.tencent.com/document/product/457).
-->
## 下一步
要了解更多信息,请参阅 [TKE 文档](https://intl.cloud.tencent.com/document/product/457)。

View File

@ -201,7 +201,7 @@ JSON/YAML 格式的 Pod 定义文件。
<!--
1. Create a YAML file and store it on a web server so that you can pass the URL of that file to the kubelet.
-->
1. 创建一个 YAML 文件,并保存在保存在 web 服务上,为 kubelet 生成一个 URL。
1. 创建一个 YAML 文件,并保存在 web 服务上,为 kubelet 生成一个 URL。
```yaml
apiVersion: v1

898
content/zh/docs/test.md Normal file
@ -0,0 +1,898 @@
---
title: 测试页面(中文版)
main_menu: false
---
<!--
title: Docs smoke test page
main_menu: false
-->
<!--
This page serves two purposes:
- Demonstrate how the Kubernetes documentation uses Markdown
- Provide a "smoke test" document we can use to test HTML, CSS, and template
changes that affect the overall documentation.
-->
本页面服务于两个目的:
- 展示 Kubernetes 中文版文档中应如何使用 Markdown
- 提供一个测试用文档,用来测试可能影响所有文档的 HTML、CSS 和模板变更
<!--
## Heading levels
The above heading is an H2. The page title renders as an H1. The following
sections show H3-H6.
### H3
This is in an H3 section.
#### H4
This is in an H4 section.
##### H5
This is in an H5 section.
###### H6
This is in an H6 section.
-->
## 标题级别
上面的标题是 H2 级别。页面标题Title会渲染为 H1。以下各节分别展示 H3-H6
的渲染结果。
### H3
此处为 H3 节内容。
#### H4
此处为 H4 节内容。
##### H5
此处为 H5 节内容。
###### H6
此处为 H6 节内容。
<!--
## Inline elements
Inline elements show up within the text of paragraph, list item, admonition, or
other block-level element.
-->
## 内联元素Inline elements
内联元素显示在段落文字、列表条目、提醒信息或者块级别元素之内。
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis
nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu
fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in
culpa qui officia deserunt mollit anim id est laborum.
### 内联文本风格
<!--
- **bold**
- _italic_
- ***bold italic***
- ~~strikethrough~~
- <u>underline</u>
- _<u>underline italic</u>_
- **<u>underline bold</u>**
- ***<u>underline bold italic</u>***
- `monospace text`
- **`monospace bold`**
-->
- **粗体字**
- _斜体字_
- ***粗斜体字***
- ~~删除线~~
- <u>下划线</u>
- _<u>带下划线的斜体</u>_
- ***<u>带下划线的粗斜体</u>***
- `monospace text` <- 等宽字体
- **`monospace bold`** <- 粗等宽字体
## 列表
<!--
Markdown doesn't have strict rules about how to process lists. When we moved
from Jekyll to Hugo, we broke some lists. To fix them, keep the following in
mind:
- Make sure you indent sub-list items **4 spaces** rather than the 2 that you
may be used to. Counter-intuitively, you need to indent block-level content
within a list item an extra 4 spaces too.
- To end a list and start another, you need a HTML comment block on a new line
between the lists, flush with the left-hand border. The first list won't end
otherwise, no matter how many blank lines you put between it and the second.
-->
Markdown 在如何处理列表方面没有严格的规则。在我们从 Jekyll 迁移到 Hugo 时,
我们遇到了一些问题。为了处理这些问题,请注意以下几点:
- 确保你将子列表的条目缩进**四个空格**而不是你可能熟悉的两个空格。
有一点是不那么直观的,你需要将列表中的块级别内容多缩进四个空格。
- 要结束一个列表并开始一个新的列表,你需要在两个列表之间添加一个 HTML 注释块,
并将其置于独立的一行,左边顶边对齐。否则前一个列表不会结束,无论你在它与
第二个列表之间放多少个空行。
<!--
### Bullet lists
- This is a list item
* This is another list item in the same list
- You can mix `-` and `*`
- To make a sub-item, indent two tabstops (4 spaces). **This is different
from Jekyll and Kramdown.**
- This is a sub-sub-item. Indent two more tabstops (4 more spaces).
- Another sub-item.
-->
### 项目符号列表
- 此为列表条目
* 此为另一列表条目,位于同一列表中
- 你可以将 `-``*` 混合使用
- 要开始子列表,缩进两个 TAB(四个空格)。**Jekyll 和 Kramdown
在这点上有所不同**。
- 这是另一个子子条目。进一步多缩进两个空格。
- 另一个子条目
<!-- separate lists -->
<!--
- This is a new list. With Hugo, you need to use a HTML comment to separate two
consecutive lists. **The HTML comment needs to be at the left margin.**
- Bullet lists can have paragraphs or block elements within them.
Indent the content to be one tab stop beyond the text of the bullet
point. **This paragraph and the code block line up with the second `l` in
`Bullet` above.**
```bash
ls -l
```
- And a sub-list after some block-level content
-->
- 这是一个新的列表。使用 Hugo 时,你需要用一行 HTML 注释将两个紧挨着的列表分开。
**这里的 HTML 注释需要按左侧顶边对齐。**
- 项目符号列表中可以包含文字段落或块元素。
段落内容与第一行文字左侧对齐。
**此段文字和下面的代码段都与前一行中的“项”字对齐。**
```bash
ls -l
```
- 在块级内容之后还可以有子列表内容。
<!--
- A bullet list item can contain a numbered list.
1. Numbered sub-list item 1
2. Numbered sub-list item 2
-->
- 项目符号列表条目中还可以包含编号列表。
1. 编号子列表条目一
1. 编号子列表条目二
- 项目符号列表条目中包含编号列表的另一种形式(推荐形式)。让子列表的编号数字
与项目符号列表文字左对齐。
1. 编号子列表条目一,左侧编号与前一行的“项”字左对齐。
1. 编号子列表条目二,条目文字与数字之间多了一个空格。
<!--
### Numbered lists
1. This is a list item
2. This is another list item in the same list. The number you use in Markdown
does not necessarily correlate to the number in the final output. By
convention, we keep them in sync.
3. {{<note>}}
For single-digit numbered lists, using two spaces after the period makes
interior block-level content line up better along tab-stops.
{{</note>}}
-->
### 编号列表
1. 此为列表条目
1. 此为列表中的第二个条目。在 Markdown 源码中所给的编号数字与最终输出的数字
可能不同。建议在紧凑列表中编号都使用 1。如果条目之间有其他内容比如注释
掉的英文)存在,则需要显式给出编号。
2. {{<note>}}
对于单个数字的编号列表,在句点(`.`)后面加两个空格。这样有助于将列表的
内容更好地对齐。
{{</note>}}
<!-- separate lists -->
<!--
1. This is a new list. With Hugo, you need to use a HTML comment to separate
two consecutive lists. **The HTML comment needs to be at the left margin.**
2. Numbered lists can have paragraphs or block elements within them.
Just indent the content to be one tab stop beyond the text of the bullet
point. **This paragraph and the code block line up with the `m` in
`Numbered` above.**
```bash
ls -l
```
- And a sub-list after some block-level content. This is at the same
"level" as the paragraph and code block above, despite being indented
more.
-->
1. 这是一个新的列表。 使用 Hugo 时,你需要用 HTML 注释将两个紧挨着的列表分开。
**HTML 注释需要按左边顶边对齐。**
2. 编号列表条目中也可以包含额外的段落或者块元素。
后续段落应该按编号列表文字的第一行左侧对齐。
**此段落及下面的代码段都与本条目中的第一个字“编”对齐。**
```bash
ls -l
```
- 编号列表条目中可以在块级内容之后有子列表。子列表的符号项要与上层列表条目
文字左侧对齐。
### 中文译文的编号列表格式 1
<!--
1. English item 1
-->
1. 译文条目一
<!--
1. English item 2
-->
2. 译文条目二,由于前述原因,条目 2 与 1 之间存在注释行,如果此条目不显式给出
起始编号,会被 Hugo 当做两个独立的列表。
### 中文译文的编号列表格式 2
<!--
1. English item 1
-->
1. 译文条目一
<!-- trunk of english text -->
中文译文段落。
<!--
```shell
# list services
kubectl get svc
```
-->
带注释的代码段(**注意以上英文注释 `<!--``-->` 的缩进空格数**)。
```shell
# 列举服务
kubectl get svc
```
<!--
1. English item 2
-->
2. 译文条目二,由于前述原因,条目 2 与 1 之间存在注释行,如果此条目不显式给出
起始编号,会被 Hugo 当做两个独立的列表。
<!--
### Tab lists
Tab lists can be used to conditionally display content, e.g., when multiple
options must be documented that require distinct instructions or context.
-->
### 标签列表
标签列表可以用来有条件地显示内容,例如,当有多种选项可供选择时,每个选项
可能需要完全不同的指令或者上下文。
<!--
{{</* tabs name="tab_lists_example" */>}}
{{%/* tab name="Choose one..." */%}}
请注意这里对英文原文短代码的处理。目的是确保其中的 tabs 短代码失效。
由于 Hugo 的局限性,如果不作类似处理,这里的 tabs 尽管已经被包含在
HTML 注释块中,仍然会生效!
Please select an option.
{{%/* /tab */%}}
-->
{{< tabs name="tab_lists_example" >}}
{{% tab name="请选择..." %}}
请选择一个选项。
{{% /tab %}}
<!--
{{%/* tab name="Formatting tab lists" */%}}
-->
{{% tab name="在标签页中格式化列表" %}}
<!--
Tabs may also nest formatting styles.
1. Ordered
1. (Or unordered)
1. Lists
```bash
echo 'Tab lists may contain code blocks!'
```
-->
标签页中也可以包含嵌套的排版风格,其中的英文注释处理也同正文中
的处理基本一致。
1. 编号列表
1. (或者没有编号的)
1. 列表
```bash
echo '标签页里面也可以有代码段!'
```
{{% /tab %}}
<!--
{{%/* tab name="Nested headers" */%}}
-->
{{% tab name="嵌套的子标题" %}}
<!--
### Headers in Tab list
Nested header tags may also be included.
-->
### 在标签页中的子标题
标签页中也可以包含嵌套的子标题。
<!--
{{</* warning */>}}
Headers within tab lists will not appear in the Table of Contents.
{{</* /warning */>}}
-->
{{< warning >}}
标签页中的子标题不会在目录中出现。
{{< /warning >}}
{{% /tab %}}
{{< /tabs >}}
<!--
### Checklists
Checklists are technically bullet lists, but the bullets are suppressed by CSS.
- [ ] This is a checklist item
- [x] This is a selected checklist item
-->
### 检查项列表 Checklists
检查项列表本质上也是一种项目符号列表,只是这里的项目符号部分被 CSS 压制了。
- [ ] 此为第一个检查项
- [x] 此为被选中的检查项
<!--
## Code blocks
You can create code blocks two different ways by surrounding the code block with
three back-tick characters on lines before and after the code block. **Only use
back-ticks (code fences) for code blocks.** This allows you to specify the
language of the enclosed code, which enables syntax highlighting. It is also more
predictable than using indentation.
-->
## 代码段
你可以用两种方式来创建代码块;这里采用的方式是在代码块之前和之后分别加上包含三个
反引号的独立行。**请仅使用反引号(代码围栏)来标记代码段。**
用这种方式标记代码段时,你还可以指定所包含的代码的编程语言,从而启用语法加亮。
这种方式也比使用空格缩进的方式可预测性更好。
<!--
```
this is a code block created by back-ticks
```
-->
```
这是用反引号创建的代码段
```
<!--
The back-tick method has some advantages.
- It works nearly every time
- It is more compact when viewing the source code.
- It allows you to specify what language the code block is in, for syntax
highlighting.
- It has a definite ending. Sometimes, the indentation method breaks with
languages where spacing is significant, like Python or YAML.
-->
反引号标记代码段的方式有以下优点:
- 这种方式几乎总是能正确工作
- 在查看源代码时,内容相对紧凑
- 允许你指定代码块的编程语言,以便启用语法加亮
- 代码段的结束位置有明确标记。有时候,采用缩进空格的方式会使得一些对空格
很敏感的语言(如 Python、YAML很难处理。
<!--
To specify the language for the code block, put it directly after the first
grouping of back-ticks:
-->
要为代码段指定编程语言,可以在第一组反引号之后加上编程语言名称:
```bash
ls -l
```
<!--
Common languages used in Kubernetes documentation code blocks include:
- `bash` / `shell` (both work the same)
- `go`
- `json`
- `yaml`
- `xml`
- `none` (disables syntax highlighting for the block)
-->
Kubernetes 文档中代码块常用语言包括:
- `bash` / `shell` (二者几乎完全相同)
- `go`
- `json`
- `yaml`
- `xml`
- `none` (禁止对代码块执行语法加亮)
<!--
### Code blocks containing Hugo shortcodes
To show raw Hugo shortcodes as in the above example and prevent Hugo
from interpreting them, use C-style comments directly after the `<` and before
the `>` characters. The following example illustrates this (view the Markdown
source for this page).
-->
### 包含 Hugo 短代码的代码块
如果要像上面的例子一样显示 Hugo 短代码Shortcode不希望 Hugo 将其当做短代码来处理,
可以在 `<``>` 之间使用 C 语言风格的注释。
下面的示例展示如何实现这点(查看本页的 Markdown 源码):
```none
{{</* codenew file="pods/storage/gce-volume.yaml" */>}}
```
<!--
## Links
To format a link, put the link text inside square brackets, followed by the
link target in parentheses. [Link to Kubernetes.io](https://kubernetes.io/) or
[Relative link to Kubernetes.io](/)
You can also use HTML, but it is not preferred.
<a href="https://kubernetes.io/">Link to Kubernetes.io</a>
-->
## 链接
要格式化链接,将链接显示文本放在方括号中,后接用圆括号括起来的链接目标。
[指向 Kubernetes.io 的连接](https://kubernetes.io/) 或
[到 Kubernetes.io 的相对链接](/)。
你也可以使用 HTML但这种方式不是推荐的方式。
<a href="https://kubernetes.io/">到 Kubernetes.io 的链接</a>
### 中文链接
中文版本文档中的链接要注意以下两点:
- 指向 Kubernetes 文档的站内链接,需要在英文链接之前添加前缀 `/zh`
例如,原链接目标为 `/docs/foo/bar` 时,译文中的链接目标应为
`/zh/docs/foo/bar`。例如:
- 英文版本链接 [Kubernetes Components](/docs/concepts/overview/components/)
- 对应中文链接 [Kubernetes 组件](/zh/docs/concepts/overview/components/)
- 英文页面子标题会生成对应锚点Anchor例如子标题 `## Using object` 会生成
对应标签 `#using-objects`。在翻译为中文之后,对应锚点可能会失效。对此,有
两种方法处理。假定译文中存在以下子标题:
```
<!--
## Clean up
You can do this ...
-->
## 清理现场
你可以这样 ...
```
并且在本页或其他页面有指向 `#clean-up` 的链接如下:
```
..., please refer to the [clean up](#clean-up) section.
```
第一种处理方法是将链接改为中文锚点,即将引用该子标题的文字全部改为中文锚点。
例如:
```
..., 请参考[清理工作](#清理现场)一节。
```
第二种方式(也是推荐的方式)是将原来可能生成的锚点(尽管在英文原文中未明确
给出)显式标记在译文的子标题上。
```
<!--
## Clean up
You can do this ...
-->
## 清理现场 {#clean-up}
你可以这样 ...
```
之所以优选第二种方式是因为可以避免文档站点中其他引用此子标题的链接失效。
<!--
## Images
To format an image, use similar syntax to [links](#links), but add a leading `!`
character. The square brackets contain the image's alt text. Try to always use
alt text so that people using screen readers can get some benefit from the
image.
-->
## 图片
要显示图片,可以使用与链接类似的语法(`[links](#links)`),不过要在整个链接
之前添加一个感叹号(`!`)。方括号中给出的是图片的替代文本。
请坚持为图片设定替代文本,这样使用屏幕阅读器的人也能够了解图片中包含的是什么。
![pencil icon](/images/pencil.png)
<!--
To specify extended attributes, such as width, title, caption, etc, use the
<a href="https://gohugo.io/content-management/shortcodes/#figure">figure shortcode</a>,
which is preferred to using a HTML `<img>` tag. Also, if you need the image to
also be a hyperlink, use the `link` attribute, rather than wrapping the whole
figure in Markdown link syntax as shown below.
-->
要设置扩展的属性,例如 width、title、caption 等等,可以使用
<a href="https://gohugo.io/content-management/shortcodes/#figure">figure</a>
短代码,而不是使用 HTML 的 `<img>` 标签。
此外,如果你需要让图片本身变成超链接,可以使用短代码的 `link` 属性,而不是
将整个图片放到 Markdown 的链接语法之内。下面是一个例子:
<!--
{{</* figure src="/static/images/pencil.png" title="Pencil icon" caption="Image used to illustrate the figure shortcode" width="200px" */>}}
-->
{{< figure src="/images/pencil.png" title="铅笔图标" caption="用来展示 figure 短代码的图片" width="200px" >}}
<!--
Even if you choose not to use the figure shortcode, an image can also be a link. This
time the pencil icon links to the Kubernetes website. Outer square brackets enclose
the entire image tag, and the link target is in the parentheses at the end.
[![pencil icon](/images/pencil.png)](https://kubernetes.io)
You can also use HTML for images, but it is not preferred.
<img src="/images/pencil.png" alt="pencil icon" />
-->
即使你不想使用 figure 短代码,图片也可以展示为链接。这里,铅笔图标指向
Kubernetes 网站。外层的方括号将整个 image 标签封装起来,链接目标在
末尾的圆括号之间给出。
[![pencil icon](/images/pencil.png)](https://kubernetes.io)
你也可以使用 HTML 来嵌入图片,不过这种方式是不推荐的。
<img src="/images/pencil.png" alt="铅笔图标" />
<!--
## Tables
Simple tables have one row per line, and columns are separated by `|`
characters. The header is separated from the body by cells containing nothing
but at least three `-` characters. For ease of maintenance, try to keep all the
cell separators even, even if you need to use extra space.
-->
## 表格
简单的表格可能每行只有一个独立的数据行,各个列之间用 `|` 隔开。
表格的标题行与表格内容之间用独立的一行隔开,在这一行中每个单元格的内容
只有 `-` 字符,且至少三个。出于方便维护考虑,请尝试将各个单元格间的
分割线对齐,尽管这样意味着你需要多输入几个空格。
<!--
| Heading cell 1 | Heading cell 2 |
|----------------|----------------|
| Body cell 1 | Body cell 2 |
-->
| 标题单元格 1 | 标题单元格 2 |
|----------------|----------------|
| 内容单元格 1 | 内容单元格 2 |
<!--
The header is optional. Any text separated by `|` will render as a table.
-->
标题行是可选的。所有用 `|` 隔开的内容都会被渲染成表格。
<!--
Markdown tables have a hard time with block-level elements within cells, such as
list items, code blocks, or multiple paragraphs. For complex or very wide
tables, use HTML instead.
-->
Markdown 表格在处理块级元素方面还很笨拙。例如在单元格中嵌入列表条目、代码段、
或者在其中划分多个段落方面的能力都比较差。对于复杂的或者很宽的表格,可以使用
HTML。
<table>
<thead>
<tr>
<!-- th>Heading cell 1</th -->
<th>标题单元格 1</th>
<!-- th>Heading cell 2</th -->
<th>标题单元格 2</th>
</tr>
</thead>
<tbody>
<tr>
<!-- td>Body cell 1</td -->
<td>内容单元格 1</td>
<!-- td>Body cell 2</td -->
<td>内容单元格 2</td>
</tr>
</tbody>
</table>
<!--
## Visualizations with Mermaid
You can use [Mermaid JS](https://mermaidjs.github.io) visualizations.
The Mermaid JS version is specified in [/layouts/partials/head.html](https://github.com/kubernetes/website/blob/master/layouts/partials/head.html)
-->
## 使用 Mermaid 来可视化
你可以使用 [Mermaid JS](https://mermaidjs.github.io) 来进行可视化展示。
Mermaid JS 版本在 [/layouts/partials/head.html](https://github.com/kubernetes/website/blob/master/layouts/partials/head.html)
中设置。
<!--
{{</* mermaid */>}}
graph TD;
A->B;
A->C;
B->D;
C->D;
{{</* mermaid */>}}
-->
```
{{</* mermaid */>}}
graph TD;
甲-->乙;
甲-->丙;
乙-->丁;
丙-->丁;
{{</*/ mermaid */>}}
```
<!--
Produces:
-->
会产生:
<!--
{{</* mermaid */>}}
graph TD;
A->B;
A->C;
B->D;
C->D;
{{</*/ mermaid */>}}
-->
{{< mermaid >}}
graph TD;
甲-->乙;
甲-->丙;
乙-->丁;
丙-->丁;
{{</ mermaid >}}
<!--
```
{{</* mermaid */>}}
sequenceDiagram
Alice ->> Bob: Hello Bob, how are you?
Bob->>John: How about you John?
Bob-x Alice: I am good thanks!
Bob-x John: I am good thanks!
Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.
Bob->Alice: Checking with John...
Alice->John: Yes... John, how are you?
{{</*/ mermaid */>}}
```
-->
```
{{</* mermaid */>}}
sequenceDiagram
张三 ->> 李四: 李四,锄禾日当午?
李四-->>王五: 王五,锄禾日当午?
李四--x 张三: 汗滴禾下土!
李四-x 王五: 汗滴禾下土!
Note right of 王五: 李四想啊想啊<br/>一直想啊想,太阳<br/>都下山了,他还没想出来<br/>,文本框都放不下了。
李四-->张三: 跑去问王五...
张三->王五: 好吧... 王五,白日依山尽?
{{</*/ mermaid */>}}
```
<!--
Produces:
-->
产生:
<!--
{{< mermaid >}}
sequenceDiagram
Alice ->> Bob: Hello Bob, how are you?
Bob->>John: How about you John?
Bob-x Alice: I am good thanks!
Bob-x John: I am good thanks!
Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.
Bob->Alice: Checking with John...
Alice->John: Yes... John, how are you?
{{</ mermaid >}}
-->
{{< mermaid >}}
sequenceDiagram
张三 ->> 李四: 李四,锄禾日当午?
李四-->>王五: 王五,锄禾日当午?
李四--x 张三: 汗滴禾下土!
李四-x 王五: 汗滴禾下土!
Note right of 王五: 李四想啊想啊一直想,<br/>想到太阳都下山了,<br/>他还没想出来,<br/>文本框都放不下了。
李四-->张三: 跑去问王五...
张三->王五: 好吧... 王五,白日依山尽?
{{</ mermaid >}}
<!--
<br>More [examples](https://mermaid-js.github.io/mermaid/#/examples) from the official docs.
-->
<br>在官方网站上有更多的[示例](https://mermaid-js.github.io/mermaid/#/examples)。
<!--
## Sidebars and Admonitions
Sidebars and admonitions provide ways to add visual importance to text. Use
them sparingly.
-->
## 侧边栏和提醒框
侧边栏和提醒框可以为文本提供直观的重要性强调效果,可以偶尔一用。
<!--
### Sidebars
A sidebar offsets text visually, but without the visual prominence of
[admonitions](#admonitions).
-->
### 侧边栏Sidebar
侧边栏可以将文字横向平移,只是其显示效果可能不像[提醒](#admonitions)那么明显。
<!--
> This is a sidebar.
>
> You can have paragraphs and block-level elements within a sidebar.
>
> You can even have code blocks.
>
> ```bash
> sudo dmesg
> ```
-->
> 此为侧边栏。
>
> 你可以在侧边栏内排版段落和块级元素。
>
> 你甚至可以在其中包含代码块。
>
> ```bash
> sudo dmesg
> ```
<!--
### Admonitions
Admonitions (notes, warnings, etc) use Hugo shortcodes.
-->
### 提醒框 {#admonitions}
提醒框(说明、警告等等)都是用 Hugo 短代码的形式展现。
<!--
{{< note >}}
Notes catch the reader's attention without a sense of urgency.
You can have multiple paragraphs and block-level elements inside an admonition.
| Or | a | table |
{{< /note >}}
-->
{{< note >}}
说明信息用来引起读者的注意,但不过分强调其紧迫性。
你可以在提醒框内包含多个段落和块级元素。
| 甚至 | 包含 | 表格 |
{{< /note >}}
<!--
{{< caution >}}
The reader should proceed with caution.
{{< /caution >}}
-->
{{< caution >}}
读者继续此操作时要格外小心。
{{< /caution >}}
<!--
{{< warning >}}
Warnings point out something that could cause harm if ignored.
{{< /warning >}}
-->
{{< warning >}}
警告信息试图为读者指出一些不应忽略的、可能引发问题的事情。
{{< /warning >}}
注意,在较老的 Hugo 版本中,直接将 `note`、`warning` 或 `caution` 短代码
括入 HTML 注释当中是有问题的。这些短代码仍然会起作用。目前,在 0.70.0
以上版本中似乎已经修复了这一问题。
<!--
## Includes
To add shortcodes to includes.
-->
## 包含其他页面
要包含其他页面,可使用短代码。
{{< note >}}
{{< include "task-tutorial-prereqs.md" >}}
{{< /note >}}
<!--
## Katacoda Embedded Live Environment
-->
## 嵌入的 Katacoda 环境
{{< kat-button >}}

View File

@ -0,0 +1,10 @@
---
title: "示例:配置 java 微服务"
weight: 10
---
<!--
---
title: "Example: Configuring a Java Microservice"
weight: 10
---
-->

View File

@ -0,0 +1,37 @@
---
title: "互动教程 - 配置 java 微服务"
weight: 20
---
<!--
---
title: "Interactive Tutorial - Configuring a Java Microservice"
weight: 20
---
-->
<!DOCTYPE html>
<html lang="en">
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
<script src="https://katacoda.com/embed.js"></script>
<div class="layout" id="top">
<main class="content katacoda-content">
<div class="katacoda">
<div class="katacoda__alert">
<!-- To interact with the Terminal, please use the desktop/tablet version -->
如需要与终端交互,请使用台式机/平板电脑版
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/9" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;"></div>
</div>
</main>
</div>
</body>
</html>

View File

@ -0,0 +1,93 @@
---
title: "使用 MicroProfile、ConfigMaps、Secrets 实现外部化应用配置"
content_type: tutorial
weight: 10
---
<!--
---
title: "Externalizing config using MicroProfile, ConfigMaps and Secrets"
content_type: tutorial
weight: 10
---
-->
<!-- overview -->
<!--
In this tutorial you will learn how and why to externalize your microservices configuration. Specifically, you will learn how to use Kubernetes ConfigMaps and Secrets to set environment variables and then consume them using MicroProfile Config.
-->
在本教程中,你会学到如何以及为什么要实现外部化微服务应用配置。
具体来说,你将学习如何使用 Kubernetes ConfigMaps 和 Secrets 设置环境变量,
然后在 MicroProfile config 中使用它们。
## {{% heading "prerequisites" %}}
<!--
### Creating Kubernetes ConfigMaps & Secrets
There are several ways to set environment variables for a Docker container in Kubernetes, including: Dockerfile, kubernetes.yml, Kubernetes ConfigMaps, and Kubernetes Secrets. In the tutorial, you will learn how to use the latter two for setting your environment variables whose values will be injected into your microservices. One of the benefits for using ConfigMaps and Secrets is that they can be re-used across multiple containers, including being assigned to different environment variables for the different containers.
-->
### 创建 Kubernetes ConfigMaps 和 Secrets {#creating-kubernetes-configmaps-secrets}
在 Kubernetes 中,为 Docker 容器设置环境变量有几种不同的方式,比如:
Dockerfile、kubernetes.yml、Kubernetes ConfigMaps 和 Kubernetes Secrets。
在本教程中,你将学习如何使用后两种方式来设置环境变量,这些环境变量的值将被注入到你的微服务里。
使用 ConfigMaps 和 Secrets 的一个好处是他们能在多个容器间复用,
比如赋值给不同的容器中的不同环境变量。
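For illustration, here is a minimal sketch of how a ConfigMap value can be surfaced to a container as an environment variable through the Pod spec. The names used below (`app-config`, `APP_NAME`, `my-service`) are hypothetical placeholders, not the manifests from the Interactive Tutorial:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical ConfigMap name
data:
  app.name: my-service
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: my-service:1.0     # hypothetical image
    env:
    - name: APP_NAME          # environment variable seen by the container
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: app.name
```

A Secret can be consumed the same way by swapping `configMapKeyRef` for `secretKeyRef`.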
ConfigMaps are API objects that store non-confidential key-value pairs. In the Interactive Tutorial you will learn how to use a ConfigMap to store the application's name. For more information on ConfigMaps, you can find the documentation [here](/docs/tasks/configure-pod-container/configure-pod-configmap/).

Although Secrets are also used to store key-value pairs, they differ from ConfigMaps in that they are intended for confidential/sensitive information and are stored using Base64 encoding. This makes Secrets the appropriate choice for storing credentials, keys, and tokens, the first of which you'll do in the Interactive Tutorial. For more information on Secrets, you can find the documentation [here](/docs/concepts/configuration/secret/).
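As a quick sketch of the Base64 encoding mentioned above (the value `my-password` is a made-up credential, not one from the tutorial), this shows what a Secret actually stores, and why Base64 is an encoding, not encryption:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SecretEncodingDemo {
    public static void main(String[] args) {
        String credential = "my-password"; // hypothetical value

        // Kubernetes stores Secret values Base64-encoded:
        String encoded = Base64.getEncoder()
                .encodeToString(credential.getBytes(StandardCharsets.UTF_8));
        System.out.println(encoded); // bXktcGFzc3dvcmQ=

        // Anyone who can read the Secret can trivially decode it:
        String decoded = new String(Base64.getDecoder().decode(encoded),
                StandardCharsets.UTF_8);
        System.out.println(decoded); // my-password
    }
}
```

Because the encoding is reversible, access to Secrets should still be restricted (for example with RBAC) rather than treated as protected by the encoding itself.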
### Externalizing Config from Code

Externalized application configuration is useful because configuration usually changes depending on your environment. To accomplish this, we'll use Java's Contexts and Dependency Injection (CDI) and MicroProfile Config. MicroProfile Config is a feature of MicroProfile, a set of open Java technologies for developing and deploying cloud-native microservices.
CDI provides a standard dependency injection capability, enabling an application to be assembled from collaborating, loosely-coupled beans. MicroProfile Config provides apps and microservices a standard way to obtain config properties from various sources, including the application, runtime, and environment. Based on each source's defined priority, the properties are automatically combined into a single set of properties that the application can access via an API. In the Interactive Tutorial, CDI and MicroProfile Config are used together to retrieve the externally provided properties from the Kubernetes ConfigMaps and Secrets and inject them into your application code.

Many open source frameworks and runtimes implement and support MicroProfile Config. Throughout the Interactive Tutorial, you'll be using Open Liberty, a flexible open-source Java runtime for building and running cloud-native apps and microservices. However, any MicroProfile-compatible runtime could be used instead.
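As an aside on how MicroProfile Config locates environment variables: the spec's fallback mapping replaces each non-alphanumeric character in a property name with `_` and upper-cases the result, so a property requested with `@Inject @ConfigProperty(name = "app.greeting")` can be satisfied by an `APP_GREETING` variable set from a ConfigMap or Secret. The helper below is a hypothetical illustration of that mapping, not code from the tutorial:

```java
public class EnvNameMapping {
    // MicroProfile Config's environment-variable fallback rule:
    // replace every non-alphanumeric character in the property
    // name with '_', then upper-case the whole name.
    static String toEnvName(String propertyName) {
        return propertyName.replaceAll("[^A-Za-z0-9]", "_").toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(toEnvName("app.greeting"));      // APP_GREETING
        System.out.println(toEnvName("mp.config.profile")); // MP_CONFIG_PROFILE
    }
}
```

This is why a key set in a ConfigMap under an upper-case, underscore-separated name lines up with a dotted property name in your Java code.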
## {{% heading "objectives" %}}
* Create a Kubernetes ConfigMap and Secret
* Inject microservice configuration using MicroProfile Config
<!-- lessoncontent -->
## Example: Externalizing config using MicroProfile, ConfigMaps and Secrets

### [Start Interactive Tutorial](/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/)

View File

@ -1,7 +1,7 @@
# i18n strings for the English (main) site.
# NOTE: Please keep the entries in alphabetical order when editing
[announcement_title]
other = "<img src=\"images/kccnc-na-virtual-2020-white.svg\" style=\"float: right; height: 80px;\" /><a href=\"https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kuberntes.io&utm_medium=search&utm_campaign=KC_CNC_Virtual\">KubeCon + CloudNativeCon NA 2020</a> <em>virtual</em>."
other = "<img src=\"/images/kccnc-na-virtual-2020-white.svg\" style=\"float: right; height: 80px;\" /><a href=\"https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kuberntes.io&utm_medium=search&utm_campaign=KC_CNC_Virtual\">KubeCon + CloudNativeCon NA 2020</a> <em>virtual</em>."
[announcement_message]
other = "4 days of incredible opportunities to collaborate, learn, and share with the entire community!<br />November 17 – 20, 2020"

View File

@ -241,14 +241,13 @@ a {
}
.fullbutton {
display: block;
display: inline-block;
margin: auto;
margin-top: 2rem;
width: 156px;
background-color: #0662EE;
color: white;
font-size: 18px;
padding: 2%;
padding: 2% 2.5%;
letter-spacing: 0.07em;
font-weight: bold;