---
title: Running in multiple zones
weight: 90
---

## Introduction

Kubernetes 1.2 adds support for running a single cluster in multiple failure zones
(GCE calls them simply "zones", AWS calls them "availability zones"; here we'll refer to them as "zones").
This is a lightweight version of a broader Cluster Federation feature (previously referred to by the affectionate
nickname ["Ubernetes"](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/multicluster/federation.md)).
Full Cluster Federation allows combining separate
Kubernetes clusters running in different regions or cloud providers
(or on-premises data centers). However, many
users simply want to run a more available Kubernetes cluster in multiple zones
of their single cloud provider, and this is what the multizone support in 1.2 allows
(this previously went by the nickname "Ubernetes Lite").

Multizone support is deliberately limited: a single Kubernetes cluster can run
in multiple zones, but only within the same region (and cloud provider). Only
GCE and AWS are currently supported automatically (though it is easy to
add similar support for other clouds or even bare metal, by simply arranging
for the appropriate labels to be added to nodes and volumes; see the example below).
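
For example, on bare metal you could arrange for the same labels yourself so that the scheduler treats your racks or data halls as zones. This is only a minimal sketch; the node name and the region/zone values are hypothetical:

```shell
# Hypothetical node name and region/zone values; pick values that describe
# your own failure domains.
kubectl label node node-1 \
  failure-domain.beta.kubernetes.io/region=my-region \
  failure-domain.beta.kubernetes.io/zone=my-zone-1
```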

{{< toc >}}

## Functionality

When nodes are started, the kubelet automatically adds labels to them with
zone information.
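
A quick way to see those labels (assuming the cluster is already up) is to ask kubectl to print them as extra columns:

```shell
# Show the region and zone labels that were added to each node
kubectl get nodes -L failure-domain.beta.kubernetes.io/region,failure-domain.beta.kubernetes.io/zone
```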

Kubernetes will automatically spread the pods in a replication controller
or service across nodes in a single-zone cluster (to reduce the impact of
failures). With multiple-zone clusters, this spreading behavior is
extended across zones (to reduce the impact of zone failures); this is
achieved via `SelectorSpreadPriority`. This is a best-effort
placement, and so if the zones in your cluster are heterogeneous
(e.g. different numbers of nodes, different types of nodes, or
different pod resource requirements), this might prevent perfectly
even spreading of your pods across zones. If desired, you can use
homogeneous zones (same number and types of nodes) to reduce the
probability of unequal spreading.
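
If you want to check how the spreading worked out for a particular workload, you can cross-reference pod placement with node zone labels; a rough sketch (the `app=guestbook` label is just an example, matching the walkthrough later on this page):

```shell
# Which node did each pod land on?
kubectl get pods -l app=guestbook -o wide
# Which zone is each node in?
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
```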

When persistent volumes are created, the `PersistentVolumeLabel`
admission controller automatically adds zone labels to them. The scheduler (via the
`VolumeZonePredicate` predicate) will then ensure that pods that claim a
given volume are only placed into the same zone as that volume, as volumes
cannot be attached across zones.
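
To see the zone label that was added to a particular volume, you can print its labels directly; a minimal sketch (the PV name `pv-example` is hypothetical):

```shell
# Print the labels the PersistentVolumeLabel admission controller added to a PV
kubectl get pv pv-example -o jsonpath='{.metadata.labels}{"\n"}'
```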

## Limitations

There are some important limitations of the multizone support:

* We assume that the different zones are located close to each other in the
network, so we don't perform any zone-aware routing. In particular, traffic
that goes via services might cross zones (even if some pods backing that service
exist in the same zone as the client), and this may incur additional latency and cost.

* Volume zone-affinity will only work with a `PersistentVolume`, and will not
work if you directly specify an EBS volume in the pod spec (for example); see the sketch after this list.

* Clusters cannot span clouds or regions (this functionality will require full
federation support).

* Although your nodes are in multiple zones, kube-up currently builds
a single master node by default. While services are highly
available and can tolerate the loss of a zone, the control plane is
located in a single zone. Users that want a highly available control
plane should follow the [high availability](/docs/admin/high-availability) instructions.
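
To make the volume zone-affinity limitation above concrete, here is a minimal sketch of a pod that specifies an EBS volume inline rather than through a `PersistentVolume`; such a volume carries no zone labels, so the scheduler cannot keep the pod in the volume's zone for you (the volume ID is a placeholder):

```shell
kubectl create -f - <<EOF
kind: Pod
apiVersion: v1
metadata:
  name: inline-ebs-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    # Inline EBS volume: no PersistentVolume object, so no zone labels
    # and no zone-aware scheduling for this pod.
    awsElasticBlockStore:
      volumeID: "<your-ebs-volume-id>"
      fsType: ext4
EOF
```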

### Volume limitations

The following limitations are addressed with [topology-aware volume binding](/docs/concepts/storage/storage-classes/#volume-binding-mode).

* StatefulSet volume zone spreading when using dynamic provisioning is currently not compatible with
pod affinity or anti-affinity policies.

* If the name of the StatefulSet contains dashes ("-"), volume zone spreading
may not provide a uniform distribution of storage across zones.

* When specifying multiple PVCs in a Deployment or Pod spec, the StorageClass
needs to be configured for a specific single zone (as sketched below), or the PVs need to be
statically provisioned in a specific zone. Another workaround is to use a
StatefulSet, which will ensure that all the volumes for a replica are
provisioned in the same zone.
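
As a rough illustration of the single-zone StorageClass workaround, the GCE PD provisioner accepts a `zone` parameter; the class name and zone below are only examples (the AWS EBS provisioner takes an analogous parameter):

```shell
kubectl create -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-pd-us-central1-a
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  # Pin all volumes provisioned with this class to a single zone
  zone: us-central1-a
EOF
```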

## Walkthrough

We're now going to walk through setting up and using a multi-zone
cluster on both GCE & AWS. To do so, you bring up a full cluster
(specifying `MULTIZONE=true`), and then you add nodes in additional zones
by running `kube-up` again (specifying `KUBE_USE_EXISTING_MASTER=true`).

### Bringing up your cluster

Create the cluster as normal, but pass `MULTIZONE=true` to tell the cluster to manage multiple zones; this creates nodes in us-central1-a.

GCE:

```shell
curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a NUM_NODES=3 bash
```

AWS:

```shell
curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash
```

This step brings up a cluster as normal, still running in a single zone
(but `MULTIZONE=true` has enabled multi-zone capabilities).

### Nodes are labeled

View the nodes; you can see that they are labeled with zone information.
They are all in `us-central1-a` (GCE) or `us-west-2a` (AWS) so far. The
labels are `failure-domain.beta.kubernetes.io/region` for the region,
and `failure-domain.beta.kubernetes.io/zone` for the zone:

```shell
> kubectl get nodes --show-labels

NAME STATUS ROLES AGE VERSION LABELS
kubernetes-master Ready,SchedulingDisabled <none> 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
kubernetes-minion-87j9 Ready <none> 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
kubernetes-minion-9vlv Ready <none> 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
kubernetes-minion-a12q Ready <none> 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
```

### Add more nodes in a second zone

Let's add another set of nodes to the existing cluster, reusing the
existing master, running in a different zone (us-central1-b or us-west-2b).
We run kube-up again, but by specifying `KUBE_USE_EXISTING_MASTER=true`
kube-up will not create a new master, but will reuse one that was previously
created instead.

GCE:

```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-b NUM_NODES=3 kubernetes/cluster/kube-up.sh
```

On AWS we also need to specify the network CIDR for the additional
subnet, along with the master internal IP address:

```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
```

View the nodes again; 3 more nodes should have launched and be tagged
in us-central1-b:

```shell
> kubectl get nodes --show-labels

NAME STATUS ROLES AGE VERSION LABELS
kubernetes-master Ready,SchedulingDisabled <none> 16m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
kubernetes-minion-281d Ready <none> 2m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
kubernetes-minion-87j9 Ready <none> 16m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
kubernetes-minion-9vlv Ready <none> 16m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
kubernetes-minion-a12q Ready <none> 17m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
kubernetes-minion-pp2f Ready <none> 2m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f
kubernetes-minion-wf8i Ready <none> 2m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i
```

### Volume affinity

Create a volume using the dynamic volume creation (only PersistentVolumes are supported for zone affinity):

```json
kubectl create -f - <<EOF
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1",
    "annotations": {
      "volume.alpha.kubernetes.io/storage-class": "foo"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "5Gi"
      }
    }
  }
}
EOF
```

{{< note >}}
For version 1.3+ Kubernetes will distribute dynamic PV claims across
the configured zones. For version 1.2, dynamic persistent volumes were
always created in the zone of the cluster master
(here us-central1-a / us-west-2a); that issue
([#23330](https://github.com/kubernetes/kubernetes/issues/23330))
was addressed in 1.3+.
{{< /note >}}

Now let's validate that Kubernetes automatically labeled the zone & region the PV was created in.

```shell
> kubectl get pv --show-labels
NAME CAPACITY ACCESSMODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
pv-gce-mj4gm 5Gi RWO Retain Bound default/claim1 manual 46s failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a
```

So now we will create a pod that uses the persistent volume claim.
Because GCE PDs / AWS EBS volumes cannot be attached across zones,
this means that this pod can only be created in the same zone as the volume:

```yaml
kubectl create -f - <<EOF
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: claim1
EOF
```

Note that the pod was automatically created in the same zone as the volume, as
cross-zone attachments are not generally permitted by cloud providers:

```shell
> kubectl describe pod mypod | grep Node
Node: kubernetes-minion-9vlv/10.240.0.5
> kubectl get node kubernetes-minion-9vlv --show-labels
NAME STATUS AGE VERSION LABELS
kubernetes-minion-9vlv Ready 22m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
```

### Pods are spread across zones

Pods in a replication controller or service are automatically spread
across zones. First, let's launch more nodes in a third zone:

GCE:

```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-f NUM_NODES=3 kubernetes/cluster/kube-up.sh
```

AWS:

```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2c NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.2.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
```

Verify that you now have nodes in 3 zones:

```shell
kubectl get nodes --show-labels
```

Create the guestbook-go example, which includes an RC of size 3, running a simple web app:

```shell
find kubernetes/examples/guestbook-go/ -name '*.json' | xargs -I {} kubectl create -f {}
```

The pods should be spread across all 3 zones:

```shell
> kubectl describe pod -l app=guestbook | grep Node
Node: kubernetes-minion-9vlv/10.240.0.5
Node: kubernetes-minion-281d/10.240.0.8
Node: kubernetes-minion-olsh/10.240.0.11

> kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels
NAME STATUS ROLES AGE VERSION LABELS
kubernetes-minion-9vlv Ready <none> 34m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
kubernetes-minion-281d Ready <none> 20m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
kubernetes-minion-olsh Ready <none> 3m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh
```

Load-balancers span all zones in a cluster; the guestbook-go example
includes an example load-balanced service:

```shell
> kubectl describe service guestbook | grep LoadBalancer.Ingress
LoadBalancer Ingress: 130.211.126.21

> ip=130.211.126.21

> curl -s http://${ip}:3000/env | grep HOSTNAME
"HOSTNAME": "guestbook-44sep",

> (for i in `seq 20`; do curl -s http://${ip}:3000/env | grep HOSTNAME; done) | sort | uniq
"HOSTNAME": "guestbook-44sep",
"HOSTNAME": "guestbook-hum5n",
"HOSTNAME": "guestbook-ppm40",
```

The load balancer correctly targets all the pods, even though they are in multiple zones.

### Shutting down the cluster

When you're done, clean up:

GCE:

```shell
KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-f kubernetes/cluster/kube-down.sh
KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-b kubernetes/cluster/kube-down.sh
KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a kubernetes/cluster/kube-down.sh
```

AWS:

```shell
KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2c kubernetes/cluster/kube-down.sh
KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b kubernetes/cluster/kube-down.sh
KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh
```