Rename Google Container Engine to Google Kubernetes Engine
parent bfc642a066
commit 4f38e0aefd
@@ -19,8 +19,8 @@ toc:
 - title: Hosted Solutions
   section:
-  - title: Running Kubernetes on Google Container Engine
-    path: https://cloud.google.com/container-engine/docs/before-you-begin/
+  - title: Running Kubernetes on Google Kubernetes Engine
+    path: https://cloud.google.com/kubernetes-engine/docs/before-you-begin/
   - title: Running Kubernetes on Azure Container Service
     path: https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough
   - title: Running Kubernetes on IBM Bluemix Container Service
@@ -653,10 +653,10 @@
     },
     {
       type: 3,
-      name: 'Google Container Engine',
+      name: 'Google Kubernetes Engine',
       logo: 'gcp',
-      link: 'https://cloud.google.com/container-engine/',
-      blurb: 'Google - Google Container Engine'
+      link: 'https://cloud.google.com/kubernetes-engine/',
+      blurb: 'Google - Google Kubernetes Engine'
     },
     {
       type: 3,
@@ -26,7 +26,7 @@ css: /css/style_peardeck.css
 <div class="col2">
 <h2>Solution</h2>
-In 2016, the company began moving their code from Heroku to <a href="https://www.docker.com/">Docker</a> containers running on <a href="https://cloud.google.com/container-engine/">Google Container Engine</a>, orchestrated by <a href="http://kubernetes.io/">Kubernetes</a> and monitored with <a href="https://prometheus.io/">Prometheus</a>.
+In 2016, the company began moving their code from Heroku to <a href="https://www.docker.com/">Docker</a> containers running on <a href="https://cloud.google.com/kubernetes-engine/">Google Kubernetes Engine</a>, orchestrated by <a href="http://kubernetes.io/">Kubernetes</a> and monitored with <a href="https://prometheus.io/">Prometheus</a>.
 <br>
 <br>
 <h2>Impact</h2>
@@ -51,22 +51,22 @@ But once it launched, the user base began growing steadily at a rate of 30 perce
 <br><br>
 On top of that, many of Pear Deck’s customers are behind government firewalls and connect through Firebase, not Pear Deck’s servers, making troubleshooting even more difficult.
 <br><br>
-The team began looking around for another solution, and finally decided in early 2016 to start moving the app from Heroku to <a href="https://www.docker.com/">Docker</a> containers running on <a href="https://cloud.google.com/container-engine/">Google Container Engine</a>, orchestrated by <a href="http://kubernetes.io/">Kubernetes</a> and monitored with <a href="https://prometheus.io/">Prometheus</a>.
+The team began looking around for another solution, and finally decided in early 2016 to start moving the app from Heroku to <a href="https://www.docker.com/">Docker</a> containers running on <a href="https://cloud.google.com/kubernetes-engine/">Google Kubernetes Engine</a>, orchestrated by <a href="http://kubernetes.io/">Kubernetes</a> and monitored with <a href="https://prometheus.io/">Prometheus</a>.
 </div>
 </section>

 <div class="banner3">
 <div class="banner3text">
-"When it became clear that Google Container Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch.
+"When it became clear that Google Kubernetes Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch.
 </div>
 </div>

 <section class="section3">
 <div class="fullcol">
-They had considered other options like Google’s App Engine (which they were already using for one service) and Amazon’s <a href="https://aws.amazon.com/ec2/">Elastic Compute Cloud</a> (EC2), while experimenting with running one small service that wasn’t accessible to the Internet in Kubernetes. "When it became clear that Google Container Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch. "We didn’t really consider Terraform and the other competitors because the abstractions offered by Kubernetes just jumped off the page to us."<br><br>
+They had considered other options like Google’s App Engine (which they were already using for one service) and Amazon’s <a href="https://aws.amazon.com/ec2/">Elastic Compute Cloud</a> (EC2), while experimenting with running one small service that wasn’t accessible to the Internet in Kubernetes. "When it became clear that Google Kubernetes Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch. "We didn’t really consider Terraform and the other competitors because the abstractions offered by Kubernetes just jumped off the page to us."<br><br>
 Once the team started porting its Heroku apps into Kubernetes, which was "super easy," he says, the impact was immediate. "Before, to make a new version of the app meant going to Heroku and reconfiguring 10 new services, so basically no one was willing to do it, and we never staged things," he says. "Now we can deploy our exact same configuration in lots of different clusters in 30 seconds. We have a full set up that’s always running, and then any of our developers or designers can stage new versions with one command, including their recent changes. We stage all the time now, and everyone stopped talking about how cool it is because it’s become invisible how great it is."
 <br><br>
-Along with Kubernetes came Prometheus. "Until pretty recently we didn’t have any kind of visibility into aggregate server metrics or performance," says Eynon-Lynch. The team had tried to use GKE’s <a href="https://cloud.google.com/stackdriver/">Stackdriver</a> monitoring, but had problems making it work, and considered <a href="https://newrelic.com/">New Relic</a>. When they started looking at Prometheus in the fall of 2016, "the fit between the abstractions in Prometheus and the way we think about how our system works, was so clear and obvious," he says.<br><br>
+Along with Kubernetes came Prometheus. "Until pretty recently we didn’t have any kind of visibility into aggregate server metrics or performance," says Eynon-Lynch. The team had tried to use Google Kubernetes Engine’s <a href="https://cloud.google.com/stackdriver/">Stackdriver</a> monitoring, but had problems making it work, and considered <a href="https://newrelic.com/">New Relic</a>. When they started looking at Prometheus in the fall of 2016, "the fit between the abstractions in Prometheus and the way we think about how our system works, was so clear and obvious," he says.<br><br>
 The integration with Kubernetes made set-up easy. Once Helm installed Prometheus, "We started getting a graph of the health of all our Kubernetes nodes and pods immediately. I think we were pretty hooked at that point," Eynon-Lynch says. "Then we got our own custom instrumentation working in 15 minutes, and had an actively updated count of requests that we could do, rate on and get a sense of how many users are connected at a given point. And then it was another hour before we had alarms automatically showing up in our Slack channel. All that was in one afternoon. And it was an afternoon of gasping with delight, basically!"
 </div>
 </section>
@@ -56,7 +56,7 @@ title: 创建大规模集群
 ### 管理节点和组件的规格

-在 GCE/GKE 或 AWS平台中, `kube-up` 会根据集群的节点规模合理地设置管理节点的规格。 在其他云平台上,用户需要手动配置。 作为参考,GCE使用的规格为:
+在 GCE/Google Kubernetes Engine 或 AWS平台中, `kube-up` 会根据集群的节点规模合理地设置管理节点的规格。 在其他云平台上,用户需要手动配置。 作为参考,GCE使用的规格为:

 * 1-5 节点: n1-standard-1
 * 6-10 节点: n1-standard-2
@@ -9,7 +9,7 @@ title: 构建高可用集群
 本文描述了如何构建一个高可用(high-availability, HA)的Kubernetes集群。这是一个非常高级的主题。

 对于仅希望使用Kubernetes进行试验的用户,推荐使用更简单的配置工具进行搭建,例如:
-[Minikube](/docs/getting-started-guides/minikube/),或者尝试使用[Google Container Engine](https://cloud.google.com/container-engine/) 来运行Kubernetes。
+[Minikube](/docs/getting-started-guides/minikube/),或者尝试使用[Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) 来运行Kubernetes。

 此外,当前在我们的端到端(e2e)测试环境中,没有对Kubernetes高可用的支持进行连续测试。我们将会增加这个连续测试项,但当前对单节点master的安装测试得更加严格。
@@ -68,4 +68,4 @@ Master 组件通过非安全(没有加密或认证)端口和集群的 apiser
 ### SSH 隧道

-[Google Container Engine](https://cloud.google.com/container-engine/docs/) 使用 SSH 隧道保护 Master -> Cluster 通信路径。在这种配置下,apiserver 发起一个到集群中每个节点的 SSH 隧道(连接到在 22 端口监听的 ssh 服务)并通过这个隧道传输所有到 kubelet、node、pod 或者 service 的流量。这个隧道保证流量不会在集群运行的私有 GCE 网络之外暴露。
+[Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/) 使用 SSH 隧道保护 Master -> Cluster 通信路径。在这种配置下,apiserver 发起一个到集群中每个节点的 SSH 隧道(连接到在 22 端口监听的 ssh 服务)并通过这个隧道传输所有到 kubelet、node、pod 或者 service 的流量。这个隧道保证流量不会在集群运行的私有 GCE 网络之外暴露。
@@ -31,7 +31,7 @@ title: 镜像
 - 使用Google Container Registry
   - 每个集群分别配置
-  - 在Google Compute Engine 或者 Google Container Engine上自动配置
+  - 在Google Compute Engine 或者 Google Kubernetes Engine上自动配置
   - 所有的pod都能读取项目的私有仓库
 - 使用 AWS EC2 Container Registry (ECR)
   - 使用IAM角色和策略来控制对ECR仓库的访问
@@ -51,7 +51,7 @@ title: 镜像
 ### 使用 Google Container Registry
 Kuberetes运行在Google Compute Engine (GCE)时原生支持[Google ContainerRegistry (GCR)]
 (https://cloud.google.com/tools/container-registry/)。如果kubernetes集群运行在GCE
-或者Google Container Engine (GKE)上,使用镜像全名(e.g. gcr.io/my_project/image:tag)即可。
+或者Google Kubernetes Engine 上,使用镜像全名(e.g. gcr.io/my_project/image:tag)即可。

 集群中的所有pod都会有读取这个仓库中镜像的权限。
@@ -112,7 +112,7 @@ Kubelet会获取并且定期刷新ECR的凭证。它需要以下权限
 ### 配置Nodes对私有仓库认证

-**注意:** 如果在Google Container Engine (GKE)上运行集群,每个节点上都会有`.dockercfg`文件,它包含对Google Container Registry的凭证。
+**注意:** 如果在Google Kubernetes Engine 上运行集群,每个节点上都会有`.dockercfg`文件,它包含对Google Container Registry的凭证。
 不需要使用以下方法。

 **注意:** 如果在AWS EC2上运行集群且准备使用EC2 Container Registry (ECR),每个node上的kubelet会管理和更新ECR的登录凭证。不需要使用以下方法。
@@ -175,7 +175,7 @@ $ kubectl describe pods/private-image-test-1 | grep "Failed"
 ### 提前拉取镜像

-**注意:** 如果在Google Container Engine (GKE)上运行集群,每个节点上都会有`.dockercfg`文件,它包含对Google Container Registry的凭证。
+**注意:** 如果在Google Kubernetes Engine 上运行集群,每个节点上都会有`.dockercfg`文件,它包含对Google Container Registry的凭证。
 不需要使用以下方法。

 **注意:** 该方法适用于能够对节点进行配置的情况。该方法在GCE及在其它能自动配置节点的云平台上并不适合。
@@ -193,7 +193,7 @@ $ kubectl describe pods/private-image-test-1 | grep "Failed"
 ### 在pod上指定ImagePullSecrets

-**注意:** GKE,GCE及其他自动创建node的云平台上,推荐使用本方法。
+**注意:** Google Kubernetes Engine,GCE及其他自动创建node的云平台上,推荐使用本方法。

 Kuberentes支持在pod中指定仓库密钥。
@@ -262,7 +262,7 @@ spec:
 也可以在[serviceAccount](/docs/user-guide/service-accounts) 资源中设置imagePullSecrets自动设置`imagePullSecrets`

-`imagePullSecrets`可以和每个node上的`.docker/config.json`一起使用,他们将共同生效。本方法在Google Container Engine (GKE)
+`imagePullSecrets`可以和每个node上的`.docker/config.json`一起使用,他们将共同生效。本方法在Google Kubernetes Engine
 也能正常工作。

 ### 使用场景
@@ -21,7 +21,7 @@ title: 配置你的云平台防火墙

 当以 `spec.type: LoadBalancer` 使用服务时,你可以使用 `spec.loadBalancerSourceRanges` 指定允许访问负载均衡的 IP 段。
-这个字段采用 CIDR 的 IP 段,Kubernetes 会使用这个段配置防火墙。支持这个功能的平台目前有 Google Compute Engine,Google Container Engine 和 AWS。
+这个字段采用 CIDR 的 IP 段,Kubernetes 会使用这个段配置防火墙。支持这个功能的平台目前有 Google Compute Engine,Google Kubernetes Engine 和 AWS。
 如果云服务商不支持这个功能,这个字段会被忽略。
@@ -112,7 +112,7 @@ $ gcloud compute firewall-rules create my-rule --allow=tcp:<port>

-因此,在 Google Compute Engine 或者 Google Container Engine 上开启防火墙端口时请
+因此,在 Google Compute Engine 或者 Google Kubernetes Engine 上开启防火墙端口时请
 小心。你可能无意间把其他服务也暴露给了 internet。
@@ -57,13 +57,13 @@ cluster/gce/upgrade.sh release/stable
 ```

-### 升级 Google Container Engine (GKE) 集群
+### 升级 Google Kubernetes Engine 集群

-Google Container Engine 自动升级 master 组件(例如 `kube-apiserver`、`kube-scheduler`)至最新版本。它还负责 master 运行的操作系统和其它组件。
+Google Kubernetes Engine 自动升级 master 组件(例如 `kube-apiserver`、`kube-scheduler`)至最新版本。它还负责 master 运行的操作系统和其它组件。

-节点升级过程由用户初始化,[GKE 文档](https://cloud.google.com/container-engine/docs/clusters/upgrade) 里有相关描述。
+节点升级过程由用户初始化,[Google Kubernetes Engine 文档](https://cloud.google.com/kubernetes-engine/docs/clusters/upgrade) 里有相关描述。

 ### 在其他平台上升级集群
@@ -80,7 +80,7 @@ Google Container Engine 自动升级 master 组件(例如 `kube-apiserver`、`
 ## 调整集群大小

-如果集群资源短缺,您可以轻松的添加更多的机器,如果集群正运行在[节点自注册模式](/docs/admin/node/#self-registration-of-nodes)下的话。如果正在使用的是 GCE 或者 GKE,这将通过调整管理节点的实例组的大小完成。在 [Google Cloud Console page](https://console.developers.google.com) 的 `Compute > Compute Engine > Instance groups > your group > Edit group` 下修改实例数量或使用 gcloud CLI 都可以完成这个任务。
+如果集群资源短缺,您可以轻松的添加更多的机器,如果集群正运行在[节点自注册模式](/docs/admin/node/#self-registration-of-nodes)下的话。如果正在使用的是 GCE 或者 Google Kubernetes Engine,这将通过调整管理节点的实例组的大小完成。在 [Google Cloud Console page](https://console.developers.google.com) 的 `Compute > Compute Engine > Instance groups > your group > Edit group` 下修改实例数量或使用 gcloud CLI 都可以完成这个任务。

 ```shell
 gcloud compute instance-groups managed resize kubernetes-minion-group --size 42 --zone $ZONE
@@ -96,7 +96,7 @@ gcloud compute instance-groups managed resize kubernetes-minion-group --size 42
 ### 集群自动伸缩

-如果正在使用 GCE 或者 GKE,您可以配置您的集群,使其能够基于 pod 需求自动重新调整大小。
+如果正在使用 GCE 或者 Google Kubernetes Engine,您可以配置您的集群,使其能够基于 pod 需求自动重新调整大小。

 如 [Compute Resource](/docs/concepts/configuration/manage-compute-resources-container/) 所述,用户可以控制预留多少 CPU 和内存来分配给 pod。这个信息被 Kubernetes scheduler 用来寻找一个运行 pod 的地方。如果没有一个节点有足够的空闲容量(或者不能满足其他 pod 的需求),这个 pod 就需要等待某些 pod 结束,或者一个新的节点被添加。
@@ -108,7 +108,7 @@
 如果发现在一段延时时间内(默认10分钟,将来有可能改变)某些节点不再需要,集群 autoscaler 也会缩小集群。

-集群 autoscaler 在每一个实例组(GCE)或节点池(GKE)上配置。
+集群 autoscaler 在每一个实例组(GCE)或节点池(Google Kubernetes Engine)上配置。

 如果您使用 GCE,那么您可以在使用 kube-up.sh 脚本创建集群的时候启用它。要想配置集群 autoscaler,您需要设置三个环境变量:
@@ -126,7 +126,7 @@ KUBE_ENABLE_CLUSTER_AUTOSCALER=true KUBE_AUTOSCALER_MIN_NODES=3 KUBE_AUTOSCALER_
 ```

-在 GKE 上,您可以在创建、更新集群或创建一个特别的节点池(您希望自动伸缩的)时,通过给对应的 `gcloud` 命令传递 `--enable-autoscaling` `--min-nodes` 和 `--max-nodes` 来配置集群 autoscaler。
+在 Google Kubernetes Engine 上,您可以在创建、更新集群或创建一个特别的节点池(您希望自动伸缩的)时,通过给对应的 `gcloud` 命令传递 `--enable-autoscaling` `--min-nodes` 和 `--max-nodes` 来配置集群 autoscaler。

 示例:
@@ -229,7 +229,7 @@ client_address=10.240.0.5
 ```

-然而,如果你的集群运行在 GKE/GCE 上,设置 `service.spec.externalTrafficPolicy` 字段值为 `Local` 可以强制使*没有* endpoints 的节点把他们自己从负载均衡流量的可选节点名单中删除。这是通过故意使它们健康检查失败达到的。
+然而,如果你的集群运行在 Google Kubernetes Engine/GCE 上,设置 `service.spec.externalTrafficPolicy` 字段值为 `Local` 可以强制使*没有* endpoints 的节点把他们自己从负载均衡流量的可选节点名单中删除。这是通过故意使它们健康检查失败达到的。

 形象的:
@@ -56,7 +56,7 @@ When creating a cluster, existing salt scripts:
 ### Size of master and master components

-On GCE/GKE and AWS, `kube-up` automatically configures the proper VM size for your master depending on the number of nodes
+On GCE/Google Kubernetes Engine, and AWS, `kube-up` automatically configures the proper VM size for your master depending on the number of nodes
 in your cluster. On other providers, you will need to configure it manually. For reference, the sizes we use on GCE are

 * 1-5 nodes: n1-standard-1
@@ -154,7 +154,7 @@ to be able to talk to federation apiserver. You can view this by running
 `kubectl config view`.

 Note: Dynamic provisioning for persistent volume currently works only on
-AWS, GKE, and GCE. However, you can edit the created `Deployments` to suit
+AWS, Google Kubernetes Engine, and GCE. However, you can edit the created `Deployments` to suit
 your needs, if required.

 ## Registering Kubernetes clusters with federation
@@ -378,7 +378,7 @@ to be able to talk to federation apiserver. You can view this by running

 Note: `federation-up.sh` creates the federation-apiserver pod with an etcd
 container that is backed by a persistent volume, so as to persist data. This
-currently works only on AWS, GKE, and GCE. You can edit
+currently works only on AWS, Google Kubernetes Engine, and GCE. You can edit
 `federation/manifests/federation-apiserver-deployment.yaml` to suit your needs,
 if required.
@@ -7,7 +7,7 @@ title: Building High-Availability Clusters
 This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
 Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such
 as [Minikube](/docs/getting-started-guides/minikube/)
-or try [Google Container Engine](https://cloud.google.com/container-engine/) for hosted Kubernetes.
+or try [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) for hosted Kubernetes.

 Also, at this time high availability support for Kubernetes is not continuously tested in our end-to-end (e2e) testing. We will
 be working to add this continuous testing, but for now the single-node master installations are more heavily tested.
@@ -99,7 +99,7 @@ public networks.
 ### SSH Tunnels

-[Google Container Engine](https://cloud.google.com/container-engine/docs/) uses
+[Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) uses
 SSH tunnels to protect the Master -> Cluster communication paths. In this
 configuration, the apiserver initiates an SSH tunnel to each node in the
 cluster (connecting to the ssh server listening on port 22) and passes all
@@ -19,7 +19,7 @@ Before choosing a guide, here are some considerations:

 - Do you just want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
 - **If you are designing for high-availability**, learn about configuring [clusters in multiple zones](/docs/concepts/cluster-administration/federation/).
-- Will you be using **a hosted Kubernetes cluster**, such as [Google Container Engine (GKE)](https://cloud.google.com/container-engine/), or **hosting your own cluster**?
+- Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
 - Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters.
 - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best. One option for custom networking is [*OpenVSwitch GRE/VxLAN networking*](/docs/admin/ovs-networking/), which uses OpenVSwitch to set up networking between pods across Kubernetes nodes.
 - Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**?
@@ -39,7 +39,7 @@ Credentials can be provided in several ways:

 - Using Google Container Registry
   - Per-cluster
-  - automatically configured on Google Compute Engine or Google Container Engine
+  - automatically configured on Google Compute Engine or Google Kubernetes Engine
   - all pods can read the project's private registry
 - Using AWS EC2 Container Registry (ECR)
   - use IAM roles and policies to control access to ECR repositories
@@ -60,7 +60,7 @@ Each option is described in more detail below.

 Kubernetes has native support for the [Google Container
 Registry (GCR)](https://cloud.google.com/tools/container-registry/), when running on Google Compute
-Engine (GCE). If you are running your cluster on GCE or Google Container Engine (GKE), simply
+Engine (GCE). If you are running your cluster on GCE or Google Kubernetes Engine, simply
 use the full image name (e.g. gcr.io/my_project/image:tag).

 All pods in a cluster will have read access to images in this registry.
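The hunk above keeps the doc's advice to reference GCR images by their full name. As an illustrative sketch only (the project, image, and tag are placeholders, not taken from this change), a pod that does so might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcr-image-demo            # hypothetical name, for illustration
spec:
  containers:
  - name: app
    # Full GCR image name in the gcr.io/<project>/<image>:<tag> form the text describes.
    image: gcr.io/my_project/image:tag
```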
@@ -128,8 +128,7 @@ Once you have those variables filled in you can

 ### Configuring Nodes to Authenticate to a Private Repository

-**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
-with credentials for Google Container Registry. You cannot use this approach.
+**Note:** if you are running on Google Kubernetes Engine, there will already be a `.dockercfg` on each node with credentials for Google Container Registry. You cannot use this approach.

 **Note:** if you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will
 manage and update the ECR login credentials. You cannot use this approach.
@@ -199,8 +198,7 @@ It should also work for a private registry such as quay.io, but that has not bee

 ### Pre-pulling Images

-**Note:** if you are running on Google Container Engine (GKE), there will already be a `.dockercfg` on each node
-with credentials for Google Container Registry. You cannot use this approach.
+**Note:** if you are running on Google Kubernetes Engine, there will already be a `.dockercfg` on each node with credentials for Google Container Registry. You cannot use this approach.

 **Note:** this approach is suitable if you can control node configuration. It
 will not work reliably on GCE, and any other cloud provider that does automatic
@@ -219,7 +217,7 @@ All pods will have read access to any pre-pulled images.

 ### Specifying ImagePullSecrets on a Pod

-**Note:** This approach is currently the recommended approach for GKE, GCE, and any cloud-providers
+**Note:** This approach is currently the recommended approach for Google Kubernetes Engine, GCE, and any cloud-providers
 where node creation is automated.

 Kubernetes supports specifying registry keys on a pod.
@@ -295,7 +293,7 @@ However, setting of this field can be automated by setting the imagePullSecrets
 in a [serviceAccount](/docs/user-guide/service-accounts) resource.

 You can use this in conjunction with a per-node `.docker/config.json`. The credentials
-will be merged. This approach will work on Google Container Engine (GKE).
+will be merged. This approach will work on Google Kubernetes Engine.

 ### Use Cases
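The surrounding text notes that `imagePullSecrets` can be automated via a serviceAccount resource. As a hedged sketch of that shape (secret name reused from the earlier example; not part of this change):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
imagePullSecrets:
- name: myregistrykey   # pods using this service account pick up the secret automatically
```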
@@ -305,7 +303,7 @@ common use cases and suggested solutions.
 1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
    - Use public images on the Docker hub.
    - No configuration required.
-   - On GCE/GKE, a local mirror is automatically used for improved speed and availability.
+   - On GCE/Google Kubernetes Engine, a local mirror is automatically used for improved speed and availability.
 1. Cluster running some proprietary images which should be hidden to those outside the company, but
    visible to all cluster users.
    - Use a hosted private [Docker registry](https://docs.docker.com/registry/).
@@ -313,7 +311,7 @@ common use cases and suggested solutions.
    - Manually configure .docker/config.json on each node as described above.
    - Or, run an internal private registry behind your firewall with open read access.
      - No Kubernetes configuration is required.
-   - Or, when on GCE/GKE, use the project's Google Container Registry.
+   - Or, when on GCE/Google Kubernetes Engine, use the project's Google Container Registry.
      - It will work better with cluster autoscaling than manual node configuration.
    - Or, on a cluster where changing the node configuration is inconvenient, use `imagePullSecrets`.
 1. Cluster with a proprietary images, a few of which require stricter access control.
@@ -53,7 +53,7 @@ Summary of container benefits:
 * **Environmental consistency across development, testing, and production**:
     Runs the same on a laptop as it does in the cloud.
 * **Cloud and OS distribution portability**:
-    Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Container Engine, and anywhere else.
+    Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, and anywhere else.
 * **Application-centric management**:
     Raises the level of abstraction from running an OS on virtual hardware to run an application on an OS using logical resources.
 * **Loosely coupled, distributed, elastic, liberated [micro-services](https://martinfowler.com/articles/microservices.html)**:
@@ -46,9 +46,9 @@ It can be configured to give services externally-reachable URLs, load balance tr

 Before you start using the Ingress resource, there are a few things you should understand. The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1. You need an Ingress controller to satisfy an Ingress, simply creating the resource will have no effect.

-GCE/GKE deploys an ingress controller on the master. You can deploy any number of custom ingress controllers in a pod. You must annotate each ingress with the appropriate class, as indicated [here](https://git.k8s.io/ingress#running-multiple-ingress-controllers) and [here](https://git.k8s.io/ingress-gce/BETA_LIMITATIONS.md#disabling-glbc).
+GCE/Google Kubernetes Engine deploys an ingress controller on the master. You can deploy any number of custom ingress controllers in a pod. You must annotate each ingress with the appropriate class, as indicated [here](https://git.k8s.io/ingress#running-multiple-ingress-controllers) and [here](https://git.k8s.io/ingress-gce/BETA_LIMITATIONS.md#disabling-glbc).

-Make sure you review the [beta limitations](https://github.com/kubernetes/ingress-gce/blob/master/BETA_LIMITATIONS.md#glbc-beta-limitations) of this controller. In environments other than GCE/GKE, you need to [deploy a controller](https://git.k8s.io/ingress-nginx/README.md) as a pod.
+Make sure you review the [beta limitations](https://github.com/kubernetes/ingress-gce/blob/master/BETA_LIMITATIONS.md#glbc-beta-limitations) of this controller. In environments other than GCE/Google Kubernetes Engine, you need to [deploy a controller](https://git.k8s.io/ingress-nginx/README.md) as a pod.

 ## The Ingress Resource
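The hunk above mentions annotating each Ingress with a class when running multiple controllers. A hedged sketch of that pattern (resource name, backend Service, and the choice of `nginx` as the class are placeholders, not from this change):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress              # hypothetical name
  annotations:
    # Class annotation selecting which ingress controller should handle this Ingress.
    kubernetes.io/ingress.class: "nginx"
spec:
  backend:
    serviceName: demo-service     # placeholder backend Service
    servicePort: 80
```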
@@ -482,7 +482,7 @@ metadata:
 [...]
 ```
 Use `cloud.google.com/load-balancer-type: "internal"` for masters with version 1.7.0 to 1.7.3.
-For more information, see the [docs](https://cloud.google.com/container-engine/docs/internal-load-balancing).
+For more information, see the [docs](https://cloud.google.com/kubernetes-engine/docs/internal-load-balancing).
 {% endcapture %}

 {% capture aws %}
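As context for the annotation this hunk's text names, a minimal Service sketch carrying it (name, selector, and port are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-lb-demo          # hypothetical name
  annotations:
    # Annotation quoted in the text above, for masters running 1.7.0 to 1.7.3.
    cloud.google.com/load-balancer-type: "internal"
spec:
  type: LoadBalancer
  selector:
    app: demo                     # placeholder selector
  ports:
  - port: 80
```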
@@ -50,7 +50,7 @@ export KUBERNETES_PROVIDER=YOUR_PROVIDER; curl -sS https://get.k8s.io | bash
 Possible values for `YOUR_PROVIDER` include:

 * `gce` - Google Compute Engine [default]
-* `gke` - Google Container Engine
+* `gke` - Google Kubernetes Engine
 * `aws` - Amazon EC2
 * `azure` - Microsoft Azure
 * `vagrant` - Vagrant (on local virtual machines)
@@ -14,7 +14,7 @@ The example below creates a Kubernetes cluster with 4 worker node Virtual Machin

 ### Before you start

-If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Container Engine](https://cloud.google.com/container-engine/) (GKE) for hosted cluster installation and management.
+If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/internal-load-balancing) for hosted cluster installation and management.

 For an easy way to experiment with the Kubernetes development environment, [click here](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/kubernetes/kubernetes&page=editor&open_in_editor=README.md)
 to open a Google Cloud Shell with an auto-cloned copy of the Kubernetes source repo.
@@ -76,9 +76,9 @@ For information on using and managing a Kubernetes cluster on GCE, [consult the

-## GKE
+## Google Kubernetes Engine

-To create a Kubernetes cluster on GKE, you will need the Service Account JSON Data from Google.
+To create a Kubernetes cluster on Google Kubernetes Engine, you will need the Service Account JSON Data from Google.

 ### Choose a Provider
@@ -86,7 +86,7 @@ Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitt

 Click **+ADD A CLUSTER NOW**.

-Click to select Google Container Engine (GKE).
+Click to select Google Kubernetes Engine.

 ### Configure Your Provider
@@ -103,10 +103,7 @@ Choose any extra options you may want to include with your cluster, then click *

 You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).

-For information on using and managing a Kubernetes cluster on GKE, consult [the official documentation](/docs/home/).
+For information on using and managing a Kubernetes cluster on Google Kubernetes Engine, consult [the official documentation](/docs/home/).

 ## DigitalOcean
@@ -30,7 +30,7 @@ system (e.g. Puppet) that you have to integrate with.
 If you are not constrained, there are other higher-level tools built to give you
 complete clusters:

-* On GCE, [Google Container Engine](https://cloud.google.com/container-engine/)
+* On GCE, [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/)
   gives you one-click Kubernetes clusters.
 * On Microsoft Azure, [Azure Container Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes)
   gives you managed Kubernetes clusters as a service.
@@ -37,7 +37,7 @@ a Kubernetes cluster from scratch.

 # Hosted Solutions

-* [Google Container Engine](https://cloud.google.com/container-engine) offers managed Kubernetes clusters.
+* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) offers managed Kubernetes clusters.

 * [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/) can easily deploy Kubernetes clusters.
@@ -142,7 +142,7 @@ Below is a table of all of the solutions listed above.
 IaaS Provider | Config. Mgmt. | OS | Networking | Docs | Support Level
 -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ----------------------------
 any | any | multi-support | any CNI | [docs](/docs/setup/independent/create-cluster-kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle))
-GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | Commercial
+Google Kubernetes Engine | | | GCE | [docs](https://cloud.google.com/kubernetes-engine/docs/) | Commercial
 Stackpoint.io | | multi-support | multi-support | [docs](https://stackpoint.io/) | Commercial
 AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial
 KUBE2GO.io | | multi-support | multi-support | [docs](https://kube2go.io) | Commercial
@@ -211,5 +211,5 @@ any | any | any | any | [docs](http://docs.
 [1]: https://gist.github.com/erictune/4cabc010906afbcc5061
 <!-- Vagrant conformance test result -->
 [2]: https://gist.github.com/derekwaynecarr/505e56036cdf010bf6b6
-<!-- GKE conformance test result -->
+<!-- Google Kubernetes Engine conformance test result -->
 [3]: https://gist.github.com/erictune/2f39b22f72565365e59b
@@ -14,7 +14,7 @@ well as any provider specific details that may be necessary.

 When using a Service with `spec.type: LoadBalancer`, you can specify the IP ranges that are allowed to access the load balancer
 by using `spec.loadBalancerSourceRanges`. This field takes a list of IP CIDR ranges, which Kubernetes will use to configure firewall exceptions.
-This feature is currently supported on Google Compute Engine, Google Container Engine and AWS. This field will be ignored if the cloud provider does not support the feature.
+This feature is currently supported on Google Compute Engine, Google Kubernetes Engine and AWS. This field will be ignored if the cloud provider does not support the feature.

 Assuming 10.0.0.0/8 is the internal subnet. In the following example, a load balancer will be created that is only accessible to cluster internal IPs.
 This will not allow clients from outside of your Kubernetes cluster to access the load balancer.
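The example the hunk's context refers to lives further down in the source file; as a hedged reconstruction of that shape (Service name, selector, and port are placeholders), restricting the load balancer to the 10.0.0.0/8 internal subnet looks like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-only-lb          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: demo                     # placeholder selector
  ports:
  - port: 80
  # Only clients in the internal subnet mentioned above may reach the load balancer.
  loadBalancerSourceRanges:
  - 10.0.0.0/8
```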
@@ -89,7 +89,7 @@ Consider:
 the VM's external IP address.

 Consequently, please be careful when opening firewalls in Google Compute Engine
-or Google Container Engine. You may accidentally be exposing other services to
+or Google Kubernetes Engine. You may accidentally be exposing other services to
 the wilds of the internet.

 This will be fixed in an upcoming release of Kubernetes.
@@ -104,7 +104,7 @@ The IP address is listed next to `LoadBalancer Ingress`.
 Due to the implementation of this feature, the source IP seen in the target
 container will *not be the original source IP* of the client. To enable
 preservation of the client IP, the following fields can be configured in the
-service spec (supported in GCE/GKE environments):
+service spec (supported in GCE/Google Kubernetes Engine environments):

 * `service.spec.externalTrafficPolicy` - denotes if this Service desires to route
 external traffic to node-local or cluster-wide endpoints. There are two available
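For context on the field named above, a minimal Service sketch that opts into client-IP preservation (name, selector, and port are placeholders, not from this change):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: source-ip-demo            # hypothetical name
spec:
  type: LoadBalancer
  # Routes external traffic only to node-local endpoints, preserving the client source IP.
  externalTrafficPolicy: Local
  selector:
    app: demo                     # placeholder selector
  ports:
  - port: 80
```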
@@ -49,12 +49,11 @@ Alternatively, to upgrade your entire cluster to the latest stable release:
 cluster/gce/upgrade.sh release/stable
 ```

-### Upgrading Google Container Engine (GKE) clusters
+### Upgrading Google Kubernetes Engine clusters

-Google Container Engine automatically updates master components (e.g. `kube-apiserver`, `kube-scheduler`) to the latest
-version. It also handles upgrading the operating system and other components that the master runs on.
+Google Kubernetes Engine automatically updates master components (e.g. `kube-apiserver`, `kube-scheduler`) to the latest version. It also handles upgrading the operating system and other components that the master runs on.

-The node upgrade process is user-initiated and is described in the [GKE documentation.](https://cloud.google.com/container-engine/docs/clusters/upgrade)
+The node upgrade process is user-initiated and is described in the [Google Kubernetes Engine documentation](https://cloud.google.com/kubernetes-engine//docs/clusters/upgrade).

 ### Upgrading clusters on other platforms
@@ -68,7 +67,7 @@ Different providers, and tools, will manage upgrades differently. It is recomme
 ## Resizing a cluster

 If your cluster runs short on resources you can easily add more machines to it if your cluster is running in [Node self-registration mode](/docs/admin/node/#self-registration-of-nodes).
-If you're using GCE or GKE it's done by resizing Instance Group managing your Nodes. It can be accomplished by modifying number of instances on `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or using gcloud CLI:
+If you're using GCE or Google Kubernetes Engine it's done by resizing Instance Group managing your Nodes. It can be accomplished by modifying number of instances on `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or using gcloud CLI:

 ```shell
 gcloud compute instance-groups managed resize kubernetes-minion-group --size=42 --zone=$ZONE
@@ -80,7 +79,7 @@ In other environments you may need to configure the machine yourself and tell th

 ### Cluster autoscaling

-If you are using GCE or GKE, you can configure your cluster so that it is automatically rescaled based on
+If you are using GCE or Google Kubernetes Engine, you can configure your cluster so that it is automatically rescaled based on
 pod needs.

 As described in [Compute Resource](/docs/concepts/configuration/manage-compute-resources-container/), users can reserve how much CPU and memory is allocated to pods.
@@ -94,7 +93,7 @@ to the other in the cluster, would help. If yes, then it resizes the cluster to
 Cluster autoscaler also scales down the cluster if it notices that one or more nodes are not needed anymore for
 an extended period of time (10min but it may change in the future).

-Cluster autoscaler is configured per instance group (GCE) or node pool (GKE).
+Cluster autoscaler is configured per instance group (GCE) or node pool (Google Kubernetes Engine).

 If you are using GCE then you can either enable it while creating a cluster with kube-up.sh script.
 To configure cluster autoscaler you have to set three environment variables:
@@ -109,7 +108,7 @@ Example:
 KUBE_ENABLE_CLUSTER_AUTOSCALER=true KUBE_AUTOSCALER_MIN_NODES=3 KUBE_AUTOSCALER_MAX_NODES=10 NUM_NODES=5 ./cluster/kube-up.sh
 ```

-On GKE you configure cluster autoscaler either on cluster creation or update or when creating a particular node pool
+On Google Kubernetes Engine you configure cluster autoscaler either on cluster creation or update or when creating a particular node pool
 (which you want to be autoscaled) by passing flags `--enable-autoscaling` `--min-nodes` and `--max-nodes`
 to the corresponding `gcloud` commands.
@@ -28,7 +28,7 @@ The ip-masq-agent configures iptables rules to hide a pod's IP address behind th
 * **Link Local**
   A link-local address is a network address that is valid only for communications within the network segment or the broadcast domain that the host is connected to. Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation.

-The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This essentially hides pod IP addresses behind the cluster node's IP address. In some environments, traffic to "external" addresses must come from a known machine address. For example, in Google Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in GKE, the Pod IP will be rejected for egress. To avoid this, we must hide the Pod IP behind the VM's own IP address - generally known as "masquerade". By default, the agent is configured to treat the three private IP ranges specified by [RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). These ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The agent will also treat link-local (169.254.0.0/16) as a non-masquerade CIDR by default. The agent is configured to reload its configuration from the location */etc/config/ip-masq-agent* every 60 seconds, which is also configurable.
+The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This essentially hides pod IP addresses behind the cluster node's IP address. In some environments, traffic to "external" addresses must come from a known machine address. For example, in Google Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in Google Kubernetes Engine, the Pod IP will be rejected for egress. To avoid this, we must hide the Pod IP behind the VM's own IP address - generally known as "masquerade". By default, the agent is configured to treat the three private IP ranges specified by [RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). These ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The agent will also treat link-local (169.254.0.0/16) as a non-masquerade CIDR by default. The agent is configured to reload its configuration from the location */etc/config/ip-masq-agent* every 60 seconds, which is also configurable.

 
@@ -50,7 +50,7 @@ MASQUERADE all -- anywhere anywhere /* ip-masq-agent:
 ```

-By default, in GCE/GKE starting with Kubernetes version 1.7.0, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in another environment, you can add the ip-masq-agent [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) to your cluster:
+By default, in GCE/Google Kubernetes Engine starting with Kubernetes version 1.7.0, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in another environment, you can add the ip-masq-agent [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) to your cluster:

 {% endcapture %}
@@ -6,7 +6,7 @@ title: Federated Ingress
 This page explains how to use Kubernetes Federated Ingress to deploy
 a common HTTP(S) virtual IP load balancer across a federated service running in
 multiple Kubernetes clusters. As of v1.4, clusters hosted in Google
-Cloud (both GKE and GCE, or both) are supported. This makes it
+Cloud (both Google Kubernetes Engine and GCE, or both) are supported. This makes it
 easy to deploy a service that reliably serves HTTP(S) traffic
 originating from web clients around the globe on a single, static IP
 address. Low network latency, high fault tolerance and easy administration are
@@ -19,7 +19,7 @@ automatically checks the health of the pods comprising the service,
 and avoids sending requests to unresponsive or slow pods (or entire
 unresponsive clusters).

-Federated Ingress is released as an alpha feature, and supports Google Cloud Platform (GKE,
+Federated Ingress is released as an alpha feature, and supports Google Cloud Platform (Google Kubernetes Engine,
 GCE and hybrid scenarios involving both) in Kubernetes v1.4. Work is under way to support other cloud
 providers such as AWS, and other hybrid cloud scenarios (e.g. services
 spanning private on-premises as well as public cloud Kubernetes
@@ -38,9 +38,9 @@ of the potential inaccuracy.

 ## Deployment

-### Google Container Engine
+### Google Kubernetes Engine

-In Google Container Engine (GKE), if cloud logging is enabled, event exporter
+In Google Kubernetes Engine, if cloud logging is enabled, event exporter
 is deployed by default to the clusters with master running version 1.7 and
 higher. To prevent disturbing your workloads, event exporter does not have
 resources set and is in the best effort QOS class, which means that it will
@@ -49,7 +49,7 @@ Running this command spawns a shell. In the shell, start your service. You can t

 {% capture whatsnext %}

-If you're interested in a hands-on tutorial, check out [this tutorial](https://cloud.google.com/community/tutorials/developing-services-with-k8s) that walks through locally developing the Guestbook application on Google Container Engine.
+If you're interested in a hands-on tutorial, check out [this tutorial](https://cloud.google.com/community/tutorials/developing-services-with-k8s) that walks through locally developing the Guestbook application on Google Kubernetes Engine.

 Telepresence has [numerous proxying options](https://www.telepresence.io/reference/methods), depending on your situation.
@@ -14,7 +14,7 @@ This article describes how to set up a cluster to ingest logs into
 them using [Kibana](https://www.elastic.co/products/kibana), as an alternative to
 Stackdriver Logging when running on GCE. Note that Elasticsearch and Kibana
 cannot be setup automatically in the Kubernetes cluster hosted on
-Google Container Engine, you have to deploy it manually.
+Google Kubernetes Engine, you have to deploy it manually.

 To use Elasticsearch and Kibana for cluster logging, you should set the
 following environment variable as shown below when creating your cluster with
@@ -22,9 +22,9 @@ and the instances are managed using a Kubernetes `DaemonSet`. The actual deploym

 ### Deploying to a new cluster

-#### Google Container Engine
+#### Google Kubernetes Engine

-Stackdriver is the default logging solution for clusters deployed on Google Container Engine.
+Stackdriver is the default logging solution for clusters deployed on Google Kubernetes Engine.
 Stackdriver Logging is deployed to a new cluster by default unless you explicitly opt-out.

 #### Other platforms
@@ -186,14 +186,14 @@ the messages from a particular pod.

 The most important pieces of metadata are the resource type and log name.
 The resource type of a container log is `container`, which is named
-`GKE Containers` in the UI (even if the Kubernetes cluster is not on GKE).
+`GKE Containers` in the UI (even if the Kubernetes cluster is not on Google Kubernetes Engine).
 The log name is the name of the container, so that if you have a pod with
 two containers, named `container_1` and `container_2` in the spec, their logs
 will have log names `container_1` and `container_2` respectively.

 System components have resource type `compute`, which is named
 `GCE VM Instance` in the interface. Log names for system components are fixed.
-For a GKE node, every log entry from a system component has one of the following
+For a Google Kubernetes Engine node, every log entry from a system component has one of the following
 log names:

 * docker
@@ -34,7 +34,7 @@ and command-line interfaces (CLIs), such as [`kubectl`](/docs/user-guide/kubectl
 You may also find the Stack Overflow topics relevant:

 * [Kubernetes](http://stackoverflow.com/questions/tagged/kubernetes)
-* [Google Container Engine - GKE](http://stackoverflow.com/questions/tagged/google-container-engine)
+* [Google Kubernetes Engine](http://stackoverflow.com/questions/tagged/google-container-engine)

 ## Help! My question isn't covered! I need help now!
@@ -67,7 +67,7 @@ these channels for localized support and info:

 ### Mailing List

-The Kubernetes / Google Container Engine mailing list is [kubernetes-users@googlegroups.com](https://groups.google.com/forum/#!forum/kubernetes-users)
+The Kubernetes / Google Kubernetes Engine mailing list is [kubernetes-users@googlegroups.com](https://groups.google.com/forum/#!forum/kubernetes-users)

 ### Bugs and Feature requests
@@ -148,11 +148,11 @@ to program the DNS service that you are using. For example, if your
 cluster is running on Google Compute Engine, you must enable the
 Google Cloud DNS API for your project.

-The machines in Google Container Engine (GKE) clusters are created
+The machines in Google Kubernetes Engine clusters are created
 without the Google Cloud DNS API scope by default. If you want to use a
-GKE cluster as a Federation host, you must create it using the `gcloud`
+Google Kubernetes Engine cluster as a Federation host, you must create it using the `gcloud`
 command with the appropriate value in the `--scopes` field. You cannot
-modify a GKE cluster directly to add this scope, but you can create a
+modify a Google Kubernetes Engine cluster directly to add this scope, but you can create a
 new node pool for your cluster and delete the old one. *Note that this
 will cause pods in the cluster to be rescheduled.*
@@ -231,7 +231,7 @@ client_address=10.240.0.5
 ...
 ```

-However, if you're running on GKE/GCE, setting the same `service.spec.externalTrafficPolicy`
+However, if you're running on Google Kubernetes Engine/GCE, setting the same `service.spec.externalTrafficPolicy`
 field to `Local` forces nodes *without* Service endpoints to remove
 themselves from the list of nodes eligible for loadbalanced traffic by
 deliberately failing health checks.
@@ -46,7 +46,7 @@ Download the following configuration files:

 MySQL and Wordpress each use a PersistentVolume to store data. While Kubernetes supports many different [types of PersistentVolumes](/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes), this tutorial covers [hostPath](/docs/concepts/storage/volumes/#hostpath).

-**Note:** If you have a Kubernetes cluster running on Google Container Engine, please follow [this guide](https://cloud.google.com/container-engine/docs/tutorials/persistent-disk).
+**Note:** If you have a Kubernetes cluster running on Google Kubernetes Engine, please follow [this guide](https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk).
 {: .note}

 ### Setting up a hostPath Volume
@@ -14,7 +14,7 @@ external IP address.

 * Install [kubectl](/docs/tasks/tools/install-kubectl/).

-* Use a cloud provider like Google Container Engine or Amazon Web Services to
+* Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to
 create a Kubernetes cluster. This tutorial creates an
 [external load balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/),
 which requires a cloud provider.
@@ -168,7 +168,7 @@ The `redis-slave` and `redis-master` Services you applied are only accessible wi

 If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the container cluster. Minikube can only expose Services through `NodePort`.

-**Note:** Some cloud providers, like Google Compute Engine or Google Container Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, simply delete or comment out `type: NodePort`, and uncomment `type: LoadBalancer`.
+**Note:** Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, simply delete or comment out `type: NodePort`, and uncomment `type: LoadBalancer`.
 {: .note}

 1. Apply the frontend Service from the following `frontend-service.yaml` file: