First Japanese l10n work for release-1.13 (#12998)
* [ja] add basic files for 1.13 (#11571) * [ja] add basic files for 1.13 * [ja] add some base files * Translate setup/independent/_index.md (#11573) * Translate content/ja/docs/home/_index.md in Japanese (#11569) * Translate content/ja/docs/setup/custom-cloud/_index.md in Japanese (#11572) * Translate content/en/docs/setup/on-premises-vm/_index.md in Japanese (#11574) * Translate content/ja/docs/setup/release/_index.md in Japanese (#11576) * ja-trans: Translate content/ja/docs/tutorials/kubernetes-basics/explore/_index.md (#11580) * Translate content/ja/docs/setup/turnkey/_index.md (#11582) * Translate content/ja/docs/tutorials/kubernetes-basics/update/_index.m… (#11579) * Translate content/ja/docs/tutorials/kubernetes-basics/update/_index.md in Japanese * Fix title * Translated Tutorials/Learn Kubenetes Basics/Deploy an App in Japanese. (#11583) * translate tutorials/kubernetes-basics/expose/_index.md (#11584) * Dev 1.13 ja.1 tutorials kubernetes basics scale (#11577) * Translate content/ja/docs/tutorials/kubernetes-basics/scale/_index.md in Japanese * Fix title * translate deprecated state description (#11578) * Fix the build doesn't pass at dev-1.13-ja.1 (#11609) * delete files not at minimum requirements to pass the build. * copy necessary file for pass build from content/en * translate content/ja/_index.html (#11585) * ja-trans: add docs/_index.md (#11721) * Remove copied docs/index.md by mistake. (#11735) * Translate stable state description (#11642) * translate stable state description * Update content/ja/docs/templates/feature-state-stable.txt Co-Authored-By: auifzysr <38824461+auifzysr@users.noreply.github.com> * apply the suggestion directly * Translate alpha state description (#11753) * [ja] add ja section (#11581) * [ja] translate case-studies (#12060) * [ja] translate case-studies * remove comment * fix /ja/docs/ content (#12062) * Translate content/ja/docs/tutorials/kubernetes-basics/create-cluster/_index.md in Japanese (#12059) * [ja] translate supported doc versions (#12068) * [ja] add ja.toml (#11595) * Remove reviewers block from front matter. 
(#12092) * Translate beta state description (#12023) * [ja] translate setup (#12070) * translate setup * add translation * Update _index.md * Update _index.md * 表記ゆれ * 表記ゆれ * [ja] translate what-is-kubernetes (#12065) * translate what-is-kubernetes * add more translation * finish basic translation * Update content/ja/docs/concepts/overview/what-is-kubernetes.md Co-Authored-By: d-kuro <34958495+d-kuro@users.noreply.github.com> * Update what-is-kubernetes.md * Update content/ja/docs/concepts/overview/what-is-kubernetes.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/concepts/overview/what-is-kubernetes.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/concepts/overview/what-is-kubernetes.md Co-Authored-By: inductor <kohei.ota@zozo.com> * fix new lines * fix review * Update content/ja/docs/concepts/overview/what-is-kubernetes.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update what-is-kubernetes.md * Update what-is-kubernetes.md * rephrase プラクティス to 知見 * Update content/ja/docs/concepts/overview/what-is-kubernetes.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/concepts/overview/what-is-kubernetes.md Co-Authored-By: inductor <kohei.ota@zozo.com> * italic * オーケストレーション * [ja] tutorials/index (#12071) * translate tutorial index * fix page link * add ja to path for kubernetes-basic because it's already in progress of translation * Update _index.md * review * remove typo * Update content/ja/docs/tutorials/_index.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/_index.md Co-Authored-By: inductor <kohei.ota@zozo.com> * [ja] translate cri installation (#12095) * [ja] translate cri installation * Update content/ja/docs/setup/cri.md Co-Authored-By: auifzysr <38824461+auifzysr@users.noreply.github.com> * apply comments * apply comments * [ja]translate tutorials/kubernetes-basics (#12074) * start translation * translate index * wording * wording * cluster-interactive * cluster-intro * update interactive * update some data * fix link * deploy-intro * japanize * fix path for public data * wording * start translation of expose * expose intro * けーしょん * scale-intro * update-intro * fix wrong word * fix wording * translate missing string * Update content/ja/docs/tutorials/kubernetes-basics/_index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/_index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/scale/scale-intro.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/expose/expose-intro.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/expose/expose-interactive.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/_index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * fix wording * Update content/ja/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/scale/scale-interactive.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update 
content/ja/docs/tutorials/kubernetes-basics/expose/expose-interactive.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/explore/explore-interactive.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/explore/explore-intro.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tutorials/kubernetes-basics/scale/scale-intro.html Co-Authored-By: inductor <kohei.ota@zozo.com> * lowercase for kubectl * ja-trans: tutorials/hello-minikube.md (#11648) * trns-ja: tutorials/hello-minikube.md * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update hello-minikube.md 大変、大変遅くなりました。丁寧に見ていただいて感謝です。いただいたコメントを反映しました。 * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi 
<45655192+lkougi@users.noreply.github.com> * Update hello-minikube.md <修正点> ・10行目の「本チュートリアルでは」を削除 ・クラスターをクラスタに統一 * Update hello-minikube.md 10行目の実践を手を動かすに修正 * Update hello-minikube.md 10行目を「手を動かす準備はできていますか?本チュートリアルでは、Node.jsを使った簡単な"Hello World"を実行するKubernetesクラスタをビルドします。」に差し替え。 * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * Update content/ja/docs/tutorials/hello-minikube.md Co-Authored-By: lkougi <45655192+lkougi@users.noreply.github.com> * ja-trans: setup/custom-cloud/coreos/ (#12731) * ja-trans: setup/release/building-from-source/ (#12721) * translate building-from-source * improve translation * ja-trans: translate setup/certificates/ (#12722) * translate certificates.md * change translation about Paths * ja-trans: setup/custom-cloud/kubespray/ (#12733) * ja-trans: setup/node-conformance/ (#12728) * ja-trans: setup/node-conformance/ * Update content/ja/docs/setup/node-conformance.md LGTM Co-Authored-By: makocchi-git <makocchi@gmail.com> * Update content/ja/docs/setup/node-conformance.md LGTM Co-Authored-By: makocchi-git <makocchi@gmail.com> * Update content/ja/docs/setup/node-conformance.md LGTM Co-Authored-By: makocchi-git <makocchi@gmail.com> * ja-trans: setup/cluster-large/ (#12723) * ja-trans: setup/cluster-large/ * translate quota and addon * ja-trans: setup/pick-right-solution/ (#12729) * ja-trans: setup/pick-right-solution/ * revise translating solutions * ending with a noun * ja-trans: setup/custom-cloud/kops/ (#12732) * ja-trans: setup/custom-cloud/kops/ * improve translation * translate build * translate explore and add-ons * ja-trans: setup/independent/control-plane-flags/ (#12745) * ja-trans: setup/minikube/ (#12724) * ja-trans: setup/minikube/ * Update content/ja/docs/setup/minikube.md LGTM Co-Authored-By: makocchi-git <makocchi@gmail.com> * translate features and add-ons * improve translation * improve translation * fix translation style * ja-trans: setup/multiple-zones/ (#12725) * ja-trans: setup/multiple-zones/ * ja-trans: setup/multiple-zones/ (2) * ending with a noun * fix translation style * ja-trans: setup/scratch/ (#12730) * ja-trans: setup/scratch/ * revise translating connectivity * improve translation * Update content/ja/docs/setup/scratch.md LGTM Co-Authored-By: makocchi-git <makocchi@gmail.com> * Update content/ja/docs/setup/scratch.md LGTM Co-Authored-By: makocchi-git <makocchi@gmail.com> * Update content/ja/docs/setup/scratch.md LGTM Co-Authored-By: makocchi-git <makocchi@gmail.com> * Update content/ja/docs/setup/scratch.md LGTM Co-Authored-By: makocchi-git <makocchi@gmail.com> * revise translation * revert some words to English * fix translation style * fix title * ja-trans: setup/independent/create-cluster-kubeadm/ (#12750) * ja-trans: setup/independent/create-cluster-kubeadm/ * translate Instructions * fix translation style * ja-trans: setup/independent/kubelet-integration/ (#12754) * ja-trans: setup/independent/kubelet-integration/ * fix translation style * ja-trans: setup/independent/setup-ha-etcd-with-kubeadm/ (#12755) * ja-trans: setup/independent/setup-ha-etcd-with-kubeadm/ * fix translation style * ja-trans: setup/independent/troubleshooting-kubeadm/ (#12757) * ja-trans: setup/independent/troubleshooting-kubeadm/ * pod -> Pod * ja-trans: setup/on-premises-vm/cloudstack/ (#12772) * ja-trans: setup/independent/high-availability/ (#12753) * ja-trans: setup/independent/high-availability/ * fix translation style * translate Stacked and worker node * ja-trans: 
setup/on-premises-metal/krib/ (#12770) * ja-trans: setup/on-premises-metal/krib/ * Update content/ja/docs/setup/on-premises-metal/krib.md Co-Authored-By: makocchi-git <makocchi@gmail.com> * ja-trans: setup/on-premises-vm/ovirt/ (#12781) * ja-trans: setup/on-premises-vm/dcos/ (#12780) * ja-trans: setup/on-premises-vm/dcos/ * fix translation * Update content/ja/docs/setup/on-premises-vm/dcos.md Co-Authored-By: makocchi-git <makocchi@gmail.com> * ja-trans: setup/turnkey/alibaba-cloud/ (#12786) * ja-trans: setup/turnkey/alibaba-cloud/ * tiny fix * Update content/ja/docs/setup/turnkey/alibaba-cloud.md Co-Authored-By: makocchi-git <makocchi@gmail.com> * fix translation * ja-trans: setup/turnkey/aws/ (#12788) * ja-trans: setup/turnkey/aws/ * translate production grade * fix translation * ja-trans: setup/release/notes/ (#12791) * ja-trans: setup/independent/install-kubeadm.md (#12812) * ja-trans: setup/independent/install-kubeadm.md * ja-trans: fix internal links in setup/independent/install-kubeadm.md * ja-trans: setup/turnkey/clc/ (#12824) * ja-trans: setup/turnkey/clc/ * Update content/ja/docs/setup/turnkey/clc.md Co-Authored-By: makocchi-git <makocchi@gmail.com> * Update content/ja/docs/setup/turnkey/clc.md Co-Authored-By: makocchi-git <makocchi@gmail.com> * ja-trans: setup/turnkey/stackpoint/ (#12853) * ja-trans: concepts/ (#12820) * ja-trans: concepts/ * fix translation * ja: fix formatting in what is kubernetes (#12694) * fix formatting in what is kubernetes * Update content/ja/docs/concepts/overview/what-is-kubernetes.md Co-Authored-By: inductor <kohei.ota@zozo.com> * ? * format (#12866) * ja-trans: setup/turnkey/gce.md (#12813) * ja-trans: setup/turnkey/gce.md * Update content/ja/docs/setup/turnkey/gce.md Co-Authored-By: auifzysr <38824461+auifzysr@users.noreply.github.com> * Update content/ja/docs/setup/turnkey/gce.md Co-Authored-By: auifzysr <38824461+auifzysr@users.noreply.github.com> * ja-trans: modify a word in setup/turnkey/gce.md * Translated docs/setup/turnkey/azure.md. (#12951) * Translated docs/setup/turnkey/azure.md. * Update content/ja/docs/setup/turnkey/azure.md Applied a suggestion. Co-Authored-By: dzeyelid <dzeyelid@gmail.com> * Update content/ja/docs/setup/turnkey/azure.md Applied a suggestion. Co-Authored-By: dzeyelid <dzeyelid@gmail.com> * Update content/ja/docs/setup/turnkey/azure.md Applied suggestion. Co-Authored-By: dzeyelid <dzeyelid@gmail.com> * Applied review suggestions. * Applied review suggestions. * fix language setting order.pull/13027/head
config.toml

@ -154,16 +154,15 @@ contentDir = "content/ko"
 time_format_blog = "2006.01.02"
 language_alternatives = ["en"]

-[languages.no]
+[languages.ja]
 title = "Kubernetes"
 description = "Production-Grade Container Orchestration"
-languageName ="Norsk"
+languageName = "日本語 Japanese"
 weight = 4
-contentDir = "content/no"
+contentDir = "content/ja"

-[languages.no.params]
-time_format_blog = "02.01.2006"
-# A list of language codes to look for untranslated content, ordered from left to right.
+[languages.ja.params]
+time_format_blog = "2006.01.02"
 language_alternatives = ["en"]

 [languages.fr]

@ -189,3 +188,15 @@ contentDir = "content/it"
 time_format_blog = "02.01.2006"
 # A list of language codes to look for untranslated content, ordered from left to right.
 language_alternatives = ["en"]
+
+[languages.no]
+title = "Kubernetes"
+description = "Production-Grade Container Orchestration"
+languageName ="Norsk"
+weight = 7
+contentDir = "content/no"
+
+[languages.no.params]
+time_format_blog = "02.01.2006"
+# A list of language codes to look for untranslated content, ordered from left to right.
+language_alternatives = ["en"]

@ -0,0 +1,3 @@
---
headless: true
---

@ -0,0 +1,62 @@
|
|||
---
|
||||
title: "Production-Grade Container Orchestration"
|
||||
abstract: "自動化されたコンテナのデプロイ・スケール・管理"
|
||||
cid: home
|
||||
---
|
||||
|
||||
{{< deprecationwarning >}}
|
||||
|
||||
{{< blocks/section id="oceanNodes" >}}
|
||||
{{% blocks/feature image="flower" %}}
|
||||
### [Kubernetes]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}})は、デプロイやスケーリングを自動化したり、コンテナ化されたアプリケーションを管理したりするための、オープンソースのシステムです。
|
||||
|
||||
管理や検出を容易にするため、アプリケーションを構成するコンテナを論理的な単位にグルーピングします。Kubernetesは[Googleでの15年にわたる経験](http://queue.acm.org/detail.cfm?id=2898444)を基に構築されており、コミュニティから得られた最良のアイディアや知見を組み合わせています。
|
||||
{{% /blocks/feature %}}
|
||||
|
||||
{{% blocks/feature image="scalable" %}}
|
||||
#### 惑星規模のスケーリング
|
||||
|
||||
Googleが週に何十億ものコンテナを実行することを可能としているのと同じ原則に沿ってデザインされているため、Kubernetesは運用チームの人数を増やさずに規模を拡大することができます。
|
||||
|
||||
{{% /blocks/feature %}}
|
||||
|
||||
{{% blocks/feature image="blocks" %}}
|
||||
#### いつまでも使える
|
||||
|
||||
ローカルのテストであろうとグローバル企業での開発であろうと、Kubernetesの柔軟性により、あなたの要求がどれだけ複雑になろうとも、一貫して簡単にアプリケーションを提供できます。
|
||||
|
||||
{{% /blocks/feature %}}
|
||||
|
||||
{{% blocks/feature image="suitcase" %}}
|
||||
#### どこでも実行できる
|
||||
|
||||
Kubernetesはオープンソースなので、オンプレミスやパブリッククラウド、それらのハイブリッドなどの利点を自由に得ることができ、簡単に移行することができます。
|
||||
|
||||
{{% /blocks/feature %}}
|
||||
|
||||
{{< /blocks/section >}}
|
||||
|
||||
{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}
|
||||
<div class="light-text">
|
||||
<h2>150以上のマイクロサービスアプリケーションをKubernetes上に移行する挑戦</h2>
|
||||
<p>By Sarah Wells, Technical Director for Operations and Reliability, Financial Times</p>
|
||||
<button id="desktopShowVideoButton" onclick="kub.showVideo()">ビデオを見る</button>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<a href="https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2018/" button id="desktopKCButton">2018年11月のKubeCon 上海に参加する</a>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/" button id="desktopKCButton">2018年12月のKubeCon シアトルに参加する</a>
|
||||
</div>
|
||||
<div id="videoPlayer">
|
||||
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
|
||||
<button id="closeButton"></button>
|
||||
</div>
|
||||
{{< /blocks/section >}}
|
||||
|
||||
{{< blocks/kubernetes-features >}}
|
||||
|
||||
{{< blocks/case-studies >}}
|
|
@ -0,0 +1,9 @@
---
title: ケーススタディ
linkTitle: ケーススタディ
bigheader: Kubernetesのユーザーケーススタディ
abstract: 本番環境でKubernetesを動かしているユーザーの一覧
layout: basic
class: gridPage
cid: caseStudies
---

@ -0,0 +1,3 @@
---
title: ドキュメント
---

@ -0,0 +1,75 @@
|
|||
---
|
||||
title: コンセプト
|
||||
main_menu: true
|
||||
content_template: templates/concept
|
||||
weight: 40
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
The Concepts section helps you learn about the parts of the Kubernetes system and the abstractions Kubernetes uses to represent your cluster, and helps you obtain a deeper understanding of how Kubernetes works.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## 概要
|
||||
|
||||
To work with Kubernetes, you use *Kubernetes API objects* to describe your cluster's *desired state*: what applications or other workloads you want to run, what container images they use, the number of replicas, what network and disk resources you want to make available, and more. You set your desired state by creating objects using the Kubernetes API, typically via the command-line interface, `kubectl`. You can also use the Kubernetes API directly to interact with the cluster and set or modify your desired state.
|
||||
|
||||
Once you've set your desired state, the *Kubernetes Control Plane* works to make the cluster's current state match the desired state. To do so, Kubernetes performs a variety of tasks automatically--such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a collection of processes running on your cluster:
|
||||
|
||||
* The **Kubernetes Master** is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) and [kube-scheduler](/docs/admin/kube-scheduler/).
|
||||
* Each individual non-master node in your cluster runs two processes:
|
||||
* **[kubelet](/docs/admin/kubelet/)**, which communicates with the Kubernetes Master.
|
||||
* **[kube-proxy](/docs/admin/kube-proxy/)**, a network proxy which reflects Kubernetes networking services on each node.
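As a quick, non-authoritative check (assuming a kubeadm-style cluster, where the master components run as static Pods), you can see these processes and the registered nodes with kubectl:

```shell
# Control plane components (kube-apiserver, kube-controller-manager,
# kube-scheduler) and per-node agents such as kube-proxy show up here.
kubectl get pods -n kube-system -o wide

# Nodes that the kubelets have registered with the master.
kubectl get nodes -o wide
```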
|
||||
|
||||
## Kubernetesオブジェクト
|
||||
|
||||
Kubernetes contains a number of abstractions that represent the state of your system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what your cluster is doing. These abstractions are represented by objects in the Kubernetes API; see the [Kubernetes Objects overview](/docs/concepts/abstractions/overview/) for more details.
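As an illustrative sketch (not part of the original page), kubectl can list the object kinds your API server exposes and show the documented fields of each abstraction:

```shell
# List every resource kind served by this cluster's API server.
kubectl api-resources

# Inspect the schema of an object, for example the Pod spec.
kubectl explain pod.spec
```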
|
||||
|
||||
The basic Kubernetes objects include:
|
||||
|
||||
* [Pod](/docs/concepts/workloads/pods/pod-overview/)
|
||||
* [Service](/docs/concepts/services-networking/service/)
|
||||
* [Volume](/docs/concepts/storage/volumes/)
|
||||
* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/)
|
||||
|
||||
In addition, Kubernetes contains a number of higher-level abstractions called Controllers. Controllers build upon the basic objects, and provide additional functionality and convenience features. They include:
|
||||
|
||||
* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
|
||||
* [Deployment](/docs/concepts/workloads/controllers/deployment/)
|
||||
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
|
||||
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
|
||||
* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/)
|
||||
|
||||
## Kubernetesコントロールプレーン
|
||||
|
||||
The various parts of the Kubernetes Control Plane, such as the Kubernetes Master and kubelet processes, govern how Kubernetes communicates with your cluster. The Control Plane maintains a record of all of the Kubernetes Objects in the system, and runs continuous control loops to manage those objects' state. At any given time, the Control Plane's control loops will respond to changes in the cluster and work to make the actual state of all the objects in the system match the desired state that you provided.
|
||||
|
||||
For example, when you use the Kubernetes API to create a Deployment object, you provide a new desired state for the system. The Kubernetes Control Plane records that object creation, and carries out your instructions by starting the required applications and scheduling them to cluster nodes--thus making the cluster's actual state match the desired state.
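A minimal sketch of that flow, with an illustrative Deployment name and image, might look like this:

```shell
# Declare the desired state: three replicas of an example nginx Deployment.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-nginx
  template:
    metadata:
      labels:
        app: example-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
EOF

# The Control Plane then works until the observed state matches the request.
kubectl get deployment example-nginx
```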
|
||||
|
||||
### Kubernetesマスター
|
||||
|
||||
The Kubernetes master is responsible for maintaining the desired state for your cluster. When you interact with Kubernetes, such as by using the `kubectl` command-line interface, you're communicating with your cluster's Kubernetes master.
|
||||
|
||||
> The "master" refers to a collection of processes managing the cluster state. Typically these processes are all run on a single node in the cluster, and this node is also referred to as the master. The master can also be replicated for availability and redundancy.
|
||||
|
||||
### Kubernetesノード
|
||||
|
||||
The nodes in a cluster are the machines (VMs, physical servers, etc) that run your applications and cloud workflows. The Kubernetes master controls each node; you'll rarely interact with nodes directly.
|
||||
|
||||
#### オブジェクトメタデータ
|
||||
|
||||
|
||||
* [Annotations](/docs/concepts/overview/working-with-objects/annotations/)
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
If you would like to write a concept page, see
|
||||
[Using Page Templates](/docs/home/contribute/page-templates/)
|
||||
for information about the concept page type and the concept template.
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,5 @@
---
title: "Overview"
weight: 20
---

@ -0,0 +1,101 @@
|
|||
---
|
||||
title: Kubernetesとは何か?
|
||||
content_template: templates/concept
|
||||
weight: 10
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
このページでは、Kubernetesの概要について説明します。
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
Kubernetesは、宣言的な構成管理と自動化を促進し、コンテナ化されたワークロードやサービスを管理するための、ポータブルで拡張性のあるオープンソースプラットホームです。
|
||||
|
||||
Kubernetesは膨大で、急速に成長しているエコシステムを備えており、それらのサービス、サポート、ツールは幅広い形で利用可能です。
|
||||
|
||||
Googleは2014年にKubernetesプロジェクトをオープンソース化しました。Kubernetesは[Googleが大規模な本番ワークロードを動かしてきた10年半の経験](https://research.google.com/pubs/pub43438.html)と、コミュニティから得られた最善のアイデア、知見に基づいています。
|
||||
|
||||
## なぜKubernetesが必要で、どんなことができるのか?
|
||||
|
||||
Kubernetesには多くの機能があります。考えられるものとしては
|
||||
|
||||
- コンテナ基盤
|
||||
- マイクロサービス基盤
|
||||
- ポータブルなクラウド基盤
|
||||
|
||||
など、他にもいろいろ
|
||||
|
||||
Kubernetesは、**コンテナを中心とした**管理基盤です。ユーザーワークロードに代わって、コンピューティング、ネットワーキング、ストレージインフラストラクチャのオーケストレーションを行います。それによって、Platform as a Service(PaaS)の簡単さの大部分を、Infrastructure as a Service(IaaS)の柔軟さとともに提供し、インフラストラクチャプロバイダの垣根を超えたポータビリティを実現します。
|
||||
|
||||
## Kubernetesが基盤になるってどういうこと?
|
||||
|
||||
Kubernetesが多くの機能を提供すると言いつつも、新しい機能から恩恵を受ける新しいシナリオは常にあります。アプリケーション固有のワークフローを効率化して開発者のスピードを早めることができます。最初は許容できるアドホックなオーケストレーションでも、大規模で堅牢な自動化が必要となることはしばしばあります。これが、Kubernetesがアプリケーションのデプロイ、拡張、および管理を容易にするために、コンポーネントとツールのエコシステムを構築するための基盤としても機能するように設計された理由です。
|
||||
|
||||
[ラベル](/docs/concepts/overview/working-with-objects/labels/)を使用すると、ユーザーは自分のリソースを整理できます。[アノテーション](/docs/concepts/overview/working-with-objects/annotations/)を使用すると、ユーザーは自分のワークフローを容易にし、管理ツールが状態をチェックするための簡単な方法を提供するためにカスタムデータを使ってリソースを装飾できるようになります。
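ごく簡単な例を挙げると(Pod名やキーは説明用の仮のものです)、kubectlから次のようにラベルやアノテーションを付与できます。

```shell
# 既存のPodにラベルを付けて、リソースを整理・選択できるようにします。
kubectl label pod my-pod environment=test

# 管理ツール向けの任意のカスタムデータをアノテーションとして付与します。
kubectl annotate pod my-pod example.com/build="2018-12-03"

# ラベルセレクターで対象を絞り込んで一覧を取得します。
kubectl get pods -l environment=test
```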
|
||||
|
||||
さらに、[Kubernetesコントロールプレーン](/docs/concepts/overview/components/)は、開発者やユーザーが使える[API](/docs/reference/using-api/api-overview/)の上で成り立っています。ユーザーは[スケジューラー](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/scheduler.md)などの独自のコントローラーを、汎用の[コマンドラインツール](/docs/user-guide/kubectl-overview/)で使える[独自のAPI](/docs/concepts/api-extension/custom-resources/)を持たせて作成することができます。
|
||||
|
||||
この[デザイン](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md)によって、他の多くのシステムがKubernetes上で構築できるようになりました。
|
||||
|
||||
## Kubernetesにないこと
|
||||
|
||||
Kubernetesは伝統的な何でも入りのPaaSシステムではありません。Kubernetesはハードウェアレベルではなくコンテナレベルで動作するため、PaaS製品が提供するような、共通のいくつかの一般的に適用可能な機能(デプロイ、拡張、負荷分散、ログ記録、監視など)を提供します。ただし、Kubernetesはモノリシックではなく、これらのデフォルトのソリューションは任意に脱着可能です。Kubernetesは開発者の基盤を構築するための構成要素を提供しますが、重要な場合はユーザーの選択と柔軟性を維持します。
|
||||
|
||||
Kubernetesは...
|
||||
|
||||
* サポートするアプリケーションの種類を限定しません。Kubernetesはステートレス、ステートフル、およびデータ処理ワークロードなど、非常に多様なワークロードをサポートするように作られています。アプリケーションをコンテナ内で実行できる場合は、Kubernetes上でもうまく動作するはずです。
|
||||
* ソースコードのデプロイやアプリケーションのビルドを行いません。継続的インテグレーション、デリバリー、デプロイ(CI/CD)ワークフローは、技術選定がそうであるように、組織の文化や好みによって決まるからです。
|
||||
* ミドルウェア(例: message buses)、データ処理フレームワーク(例: Spark)、データベース(例: mysql)、キャッシュ、クラスターストレージシステム(例: Ceph) のような、アプリケーションレベルの機能は組み込みでは提供しません。これらのコンポーネントはKubernetesの上で動作できますし、Open Service Brokerのようなポータブルメカニズムを経由してKubernetes上のアプリケーションからアクセスすることもできます。
|
||||
* ロギング、モニタリング、アラーティングソリューションへの指示は行いません。概念実証(PoC)としていくつかのインテグレーション、およびメトリックを収集およびエクスポートするためのメカニズムを提供します。
|
||||
* 設定言語/システム(例: jsonnet)を提供も強制もしません。任意の形式の宣言仕様の対象となる可能性がある宣言APIを提供します。
|
||||
* 包括的なインフラ構成、保守、管理、またはセルフヒーリングシステムを提供、導入しません。
|
||||
|
||||
さらに、Kubernetesは単なる *オーケストレーションシステム* ではありません。実際、オーケストレーションは不要です。*オーケストレーション* の技術的定義は、定義されたワークフローの実行です。最初にA、次にB、次にCを実行します。対照的に、Kubernetesは現在の状態を提供された望ましい状態に向かって継続的に推進する一連の独立した構成可能な制御プロセスで構成されます。AからCへのアクセス方法は関係ありません。集中管理も必要ありません。これにより、使いやすく、より強力で、堅牢で、回復力があり、そして拡張性のあるシステムが得られます。
|
||||
|
||||
## なぜコンテナなのか?
|
||||
|
||||
なぜコンテナを使うべきかの理由をお探しですか?
|
||||
|
||||
![なぜコンテナなのか?](/images/docs/why_containers.svg)
|
||||
|
||||
アプリケーションをデプロイするための古い方法は、オペレーティングシステムのパッケージマネージャを使用してアプリケーションをホストにインストールすることでした。これには、アプリケーションの実行ファイル、構成、ライブラリ、ライフサイクルがそれぞれ、またホストOS自身と絡み合うというデメリットがありました。予測可能なロールアウトとロールバックを実現するために、不変の仮想マシンイメージを作成することもできますが、VMは重く、移植性がありません。
|
||||
|
||||
新しい方法は、ハードウェア仮想化ではなく、オペレーティングシステムレベルの仮想化に基づいてコンテナを展開することです。各コンテナは互いに、そしてホストから隔離されています。また、独自のファイルシステムを持ち、お互いのプロセスを見ることができず、計算リソースの使用量を制限することができます。これはVMよりも構築が簡単で、基盤となるインフラストラクチャとホストのファイルシステムから分離されているため、クラウドやOSのディストリビューション間で移植性があります。
|
||||
|
||||
コンテナは小さくて速いので、1つのアプリケーションを各コンテナイメージにまとめることができます。この1対1のアプリケーションとイメージの関係により、コンテナの利点が完全に引き出されます。コンテナを使用すると、各アプリケーションを残りのアプリケーションスタックと合成したり、本番インフラストラクチャ環境と結合したりする必要がないため、不変のコンテナイメージをデプロイ時ではなく、ビルド時またはリリース時に作成できます。ビルド/リリース時にコンテナイメージを生成することで、開発から運用に一貫した環境を持ち込むことができます。同様に、コンテナはVMよりもはるかに透過的であるため、監視と管理が容易になります。これは、コンテナのプロセスライフサイクルがコンテナ内のプロセススーパーバイザによって隠されるのではなく、インフラストラクチャによって管理される場合に特に当てはまります。最後に、コンテナごとに1つのアプリケーションを使用すると、コンテナの管理はアプリケーションのデプロイ管理と同等になります。
|
||||
|
||||
コンテナの利点をまとめると:
|
||||
|
||||
* **アジャイルなアプリケーション作成とデプロイ**:
|
||||
VMイメージの使用と比べ、コンテナイメージ作成は容易で効率も高いです。
|
||||
* **継続的な開発、インテグレーション、デプロイ**:
|
||||
迅速で簡単なロールバックで、信頼性の高い頻繁なコンテナイメージのビルドとデプロイを提供します(イメージの不変性にもよります)。
|
||||
* **開発と運用の懸念を分離**:
|
||||
デプロイ時ではなくビルド時またはリリース時にアプリケーションのコンテナイメージを作成することで、アプリケーションをインフラストラクチャから切り離します。
|
||||
* **可観測性**
|
||||
OSレベルの情報や測定基準だけでなく、アプリケーションの正常性やその他のシグナルも明確にします。
|
||||
* **開発、テスト、本番環境に跨った環境の一貫性**:
|
||||
手元のノートPC上でも、クラウド上と同じように動作します。
|
||||
* **クラウドとOSディストリビューションの移植性**:
|
||||
Ubuntu、RHEL、CoreOS、オンプレミス、Google Kubernetes Engine、その他のどこでも動作します。
|
||||
* **アプリケーション中心の管理**:
|
||||
仮想ハードウェア上でのOS実行から、論理リソースを使用したOS上でのアプリケーション実行へと、抽象度のレベルを上げます。
|
||||
* **疎結合で、分散された、伸縮自在の遊離した[マイクロサービス](https://martinfowler.com/articles/microservices.html)**:
|
||||
アプリケーションは小さな独立した欠片に分割され、動的に配置および管理できます。1つの大きな単一目的のマシンで実行されるモノリシックなスタックではありません。
|
||||
* **リソース分割**:
|
||||
アプリケーションパフォーマンスが予測可能です。
|
||||
* **リソースの効率利用**:
|
||||
高効率で高密度です。
|
||||
|
||||
## Kubernetesってどういう意味?K8sって何?
|
||||
|
||||
**Kubernetes** という名前はギリシャ語で *操舵手* や *パイロット* という意味があり、*知事* や[サイバネティックス](http://www.etymonline.com/index.php?term=cybernetics)の語源にもなっています。*K8s* は、8文字の「ubernete」を「8」に置き換えた略語です。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
* [はじめる](/docs/setup/)準備はできましたか?
|
||||
* さらなる詳細については、[Kubernetesのドキュメント](/docs/home/)を御覧ください。
|
||||
{{% /capture %}}
|
||||
|
||||
|
|
@ -0,0 +1,19 @@
---
title: Kubernetesドキュメント
layout: docsportal_home
noedit: true
cid: userJourneys
css: /css/style_user_journeys.css
js: /js/user-journeys/home.js, https://use.fontawesome.com/4bcc658a89.js
display_browse_numbers: true
linkTitle: "ホーム"
main_menu: true
weight: 10
hide_feedback: true
menu:
  main:
    title: "ドキュメント"
    weight: 20
    post: >
      <p>チュートリアル、サンプルやドキュメントのリファレンスを使って Kubernetes の利用方法を学んでください。あなたは<a href="/editdocs/" data-auto-burger-exclude>ドキュメントへコントリビュートをする</a>こともできます!</p>
---

@ -0,0 +1,25 @@
---
title: Kubernetesドキュメントがサポートしているバージョン
content_template: templates/concept
---

{{% capture overview %}}

本ウェブサイトには、現行版とその直前4バージョンのKubernetesドキュメントが含まれています。

{{% /capture %}}

{{% capture body %}}

## 現行版

現在のバージョンは
[{{< param "version" >}}](/)です。

## 以前のバージョン

{{< versions-other >}}

{{% /capture %}}

|
@ -0,0 +1,78 @@
|
|||
---
|
||||
no_issue: true
|
||||
title: セットアップ
|
||||
main_menu: true
|
||||
weight: 30
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
このページを使い、自分のニーズに最も適したソリューションを見つけてください。
|
||||
|
||||
Kubernetesをどこで実行するかは、利用可能なリソースと必要な柔軟性によって異なります。ノートPCからクラウドプロバイダのVM、ベアメタルのラックまで、ほぼどのような場所でもKubernetesを実行できます。単一のコマンドを実行して完全に管理されたクラスタを設定したり、ベアメタルで独自にカスタマイズしたクラスタを作成したりすることもできます。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## ローカルマシンソリューション
|
||||
|
||||
ローカルマシンソリューションは、Kubernetesを使い始めるための簡単な方法です。クラウドリソースと、割当量の消費を気にせずにKubernetesクラスタを作成してテストできます。
|
||||
|
||||
もし以下のようなことを実現したいのであれば、ローカルマシンソリューションを選ぶべきです:
|
||||
|
||||
* Kubernetesの検証や勉強
|
||||
* ローカルでのクラスタの開発やテスト
|
||||
|
||||
[ローカルマシンソリューション](/docs/setup/pick-right-solution/#local-machine-solutions)を選ぶ
|
||||
|
||||
## ホスト型ソリューション
|
||||
|
||||
ホスト型ソリューションは、Kubernetesクラスタを作成および管理するためには便利な方法です。自身で管理せずとも、ホスティングプロバイダがクラスタを管理、運用します。
|
||||
|
||||
もし以下のようなことを実現したいのであれば、ホスト型ソリューションを選ぶべきです:
|
||||
|
||||
* 完全に管理されたソリューションが欲しい
|
||||
* アプリケーションやサービスの開発に集中したい
|
||||
* 専用のSite Reliability Engineering (SRE)チームはないが、高可用性を求めている
|
||||
* クラスタをホストしたり、監視したりするためのリソースがない
|
||||
|
||||
[ホスト型ソリューション](/docs/setup/pick-right-solution/#hosted-solutions)を選ぶ
|
||||
|
||||
## ターンキークラウドソリューション
|
||||
|
||||
このソリューションを使用すると、わずかなコマンドでKubernetesクラスタが作成できます。また、積極的に開発されており、積極的なコミュニティサポートを受けています。さまざまなCloud IaaSプロバイダでホストすることもできますが、努力と引き換えに、より多くの自由と柔軟性を提供します。
|
||||
|
||||
もし以下のようなことを実現したいのであれば、ターンキークラウドソリューションを選ぶべきです:
|
||||
|
||||
* ホスト型ソリューションが許可する以上に、クラスタをもっと制御したい
|
||||
* より多くのオペレーションの所有権を引き受けたい
|
||||
|
||||
[ターンキークラウドソリューション](/docs/setup/pick-right-solution/#turnkey-cloud-solutions)を選ぶ
|
||||
|
||||
## ターンキーオンプレミスソリューション
|
||||
|
||||
このソリューションを使用すると、内部の安全なクラウドネットワーク上に、少ないコマンドでKubernetesクラスタを作成できます。
|
||||
|
||||
もし以下のようなことを実現したいのであれば、ターンキーオンプレミスソリューションを選ぶべきです:
|
||||
|
||||
* プライベートクラウド内にクラスタを配置したい
|
||||
* 専用のSREチームがいる
|
||||
* クラスタをホストし、監視するためのリソースを持っている
|
||||
|
||||
[ターンキーオンプレミスソリューション](/docs/setup/pick-right-solution/#on-premises-turnkey-cloud-solutions)を選ぶ
|
||||
|
||||
## カスタムソリューション
|
||||
|
||||
カスタムソリューションは、クラスタに対して最も自由度が高いですが、専門知識が最も必要になります。このソリューションは、数多くのオペレーティングシステム上のベアメタルからクラウドプロバイダまで、多岐にわたります。
|
||||
|
||||
[カスタムソリューション](/docs/setup/pick-right-solution/#custom-solutions)を選ぶ
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
ソリューションの完全なリストを見るには、[正しいソリューションの選択](/docs/setup/pick-right-solution/) に進んでください。
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,137 @@
|
|||
---
|
||||
title: PKI証明書とその要件
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
Kubernetes requires PKI certificates for authentication over TLS.
|
||||
If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), the certificates that your cluster requires are automatically generated.
|
||||
You can also generate your own certificates -- for example, to keep your private keys more secure by not storing them on the API server.
|
||||
This page explains the certificates that your cluster requires.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## あなたのクラスタではどのように証明書が使われているのか
|
||||
|
||||
Kubernetes requires PKI for the following operations:
|
||||
|
||||
* Client certificates for the kubelet to authenticate to the API server
|
||||
* Server certificate for the API server endpoint
|
||||
* Client certificates for administrators of the cluster to authenticate to the API server
|
||||
* Client certificates for the API server to talk to the kubelets
|
||||
* Client certificate for the API server to talk to etcd
|
||||
* Client certificate/kubeconfig for the controller manager to talk to the API server
|
||||
* Client certificate/kubeconfig for the scheduler to talk to the API server.
|
||||
* Client and server certificates for the [front-proxy][proxy]
|
||||
|
||||
{{< note >}}
|
||||
`front-proxy` certificates are required only if you run kube-proxy to support [an extension API server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/).
|
||||
{{< /note >}}
|
||||
|
||||
etcd also implements mutual TLS to authenticate clients and peers.
|
||||
|
||||
## 証明書の保存場所
|
||||
|
||||
If you install Kubernetes with kubeadm, certificates are stored in `/etc/kubernetes/pki`. All paths in this documentation are relative to that directory.
|
||||
|
||||
## 手動で証明書を設定する
|
||||
|
||||
If you don't want kubeadm to generate the required certificates, you can create them in either of the following ways.
|
||||
|
||||
### 単一ルート認証局
|
||||
|
||||
You can create a single root CA, controlled by an administrator. This root CA can then create multiple intermediate CAs, and delegate all further creation to Kubernetes itself.
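As a rough sketch only (kubeadm normally generates these for you, and the file names and subjects below are illustrative), such a root CA and a client certificate signed by it could be created with openssl:

```shell
# Create a self-signed root CA (illustrative subject, 10-year validity).
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 3650 -out ca.crt

# Create a client certificate signed by that CA, here for an administrator
# (CN/O chosen to match the user account table further down this page).
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -subj "/CN=kubernetes-admin/O=system:masters" -out admin.csr
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out admin.crt
```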
|
||||
|
||||
Required CAs:
|
||||
|
||||
| path | Default CN | description |
|
||||
|------------------------|---------------------------|----------------------------------|
|
||||
| ca.crt,key | kubernetes-ca | Kubernetes general CA |
|
||||
| etcd/ca.crt,key | etcd-ca | For all etcd-related functions |
|
||||
| front-proxy-ca.crt,key | kubernetes-front-proxy-ca | For the [front-end proxy][proxy] |
|
||||
|
||||
### 全ての証明書
|
||||
|
||||
If you don't wish to copy these private keys to your API servers, you can generate all certificates yourself.
|
||||
|
||||
Required certificates:
|
||||
|
||||
| Default CN | Parent CA | O (in Subject) | kind | hosts (SAN) |
|
||||
|-------------------------------|---------------------------|----------------|----------------------------------------|---------------------------------------------|
|
||||
| kube-etcd | etcd-ca | | server, client [<sup>1</sup>][etcdbug] | `localhost`, `127.0.0.1` |
|
||||
| kube-etcd-peer | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
|
||||
| kube-etcd-healthcheck-client | etcd-ca | | client | |
|
||||
| kube-apiserver-etcd-client | etcd-ca | system:masters | client | |
|
||||
| kube-apiserver | kubernetes-ca | | server | `<hostname>`, `<Host_IP>`, `<advertise_IP>`, `[1]` |
|
||||
| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
|
||||
| front-proxy-client | kubernetes-front-proxy-ca | | client | |
|
||||
|
||||
[1]: `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, `kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`
|
||||
|
||||
where `kind` maps to one or more of the [x509 key usage][usage] types:
|
||||
|
||||
| kind | Key usage |
|
||||
|--------|---------------------------------------------------------------------------------|
|
||||
| server | digital signature, key encipherment, server auth |
|
||||
| client | digital signature, key encipherment, client auth |
|
||||
|
||||
### 証明書のパス
|
||||
|
||||
Certificates should be placed in a recommended path (as used by [kubeadm][kubeadm]). Paths should be specified using the given argument regardless of location.
|
||||
|
||||
| Default CN                   | recommended key path         | recommended cert path        | command        | key argument            | cert argument                             |
|------------------------------|------------------------------|------------------------------|----------------|-------------------------|-------------------------------------------|
| etcd-ca                      |                              | etcd/ca.crt                  | kube-apiserver |                         | --etcd-cafile                             |
| etcd-client                  | apiserver-etcd-client.key    | apiserver-etcd-client.crt    | kube-apiserver | --etcd-keyfile          | --etcd-certfile                           |
| kubernetes-ca                |                              | ca.crt                       | kube-apiserver |                         | --client-ca-file                          |
| kube-apiserver               | apiserver.key                | apiserver.crt                | kube-apiserver | --tls-private-key-file  | --tls-cert-file                           |
| apiserver-kubelet-client     | apiserver-kubelet-client.key | apiserver-kubelet-client.crt | kube-apiserver | --kubelet-client-key    | --kubelet-client-certificate              |
| front-proxy-client           | front-proxy-client.key       | front-proxy-client.crt       | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file                  |
|                              |                              |                              |                |                         |                                           |
| etcd-ca                      |                              | etcd/ca.crt                  | etcd           |                         | --trusted-ca-file, --peer-trusted-ca-file |
| kube-etcd                    | etcd/server.key              | etcd/server.crt              | etcd           | --key-file              | --cert-file                               |
| kube-etcd-peer               | etcd/peer.key                | etcd/peer.crt                | etcd           | --peer-key-file         | --peer-cert-file                          |
| etcd-ca                      |                              | etcd/ca.crt                  | etcdctl[2]     |                         | --cacert                                  |
| kube-etcd-healthcheck-client | etcd/healthcheck-client.key  | etcd/healthcheck-client.crt  | etcdctl[2]     | --key                   | --cert                                    |
|
||||
|
||||
[2]: For a liveness probe, if self-hosted
|
||||
|
||||
## ユーザアカウント用に証明書を設定する
|
||||
|
||||
You must manually configure these administrator account and service accounts:
|
||||
|
||||
| filename | credential name | Default CN | O (in Subject) |
|
||||
|-------------------------|----------------------------|--------------------------------|----------------|
|
||||
| admin.conf | default-admin | kubernetes-admin | system:masters |
|
||||
| kubelet.conf | default-auth | system:node:`<nodename>` | system:nodes |
|
||||
| controller-manager.conf | default-controller-manager | system:kube-controller-manager | |
|
||||
| scheduler.conf | default-manager | system:kube-scheduler | |
|
||||
|
||||
1. For each config, generate an x509 cert/key pair with the given CN and O.
|
||||
|
||||
1. Run `kubectl` as follows for each config:
|
||||
|
||||
```shell
|
||||
KUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs
|
||||
KUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs
|
||||
KUBECONFIG=<filename> kubectl config set-context default-system --cluster default-cluster --user <credential-name>
|
||||
KUBECONFIG=<filename> kubectl config use-context default-system
|
||||
```
|
||||
|
||||
These files are used as follows:
|
||||
|
||||
| filename | command | comment |
|
||||
|-------------------------|-------------------------|-----------------------------------------------------------------------|
|
||||
| admin.conf | kubectl | Configures administrator user for the cluster |
|
||||
| kubelet.conf | kubelet | One required for each node in the cluster. |
|
||||
| controller-manager.conf | kube-controller-manager | Must be added to manifest in `manifests/kube-controller-manager.yaml` |
|
||||
| scheduler.conf | kube-scheduler | Must be added to manifest in `manifests/kube-scheduler.yaml` |
|
||||
|
||||
[usage]: https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage
|
||||
[kubeadm]: /docs/reference/setup-tools/kubeadm/kubeadm/
|
||||
[proxy]: /docs/tasks/access-kubernetes-api/configure-aggregation-layer/
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,128 @@
|
|||
---
|
||||
title: 大規模クラスタの構築
|
||||
weight: 80
|
||||
---
|
||||
|
||||
## サポート
|
||||
|
||||
At {{< param "version" >}}, Kubernetes supports clusters with up to 5000 nodes. More specifically, we support configurations that meet *all* of the following criteria:
|
||||
|
||||
* No more than 5000 nodes
|
||||
* No more than 150000 total pods
|
||||
* No more than 300000 total containers
|
||||
* No more than 100 pods per node
|
||||
|
||||
<br>
|
||||
|
||||
{{< toc >}}
|
||||
|
||||
## 構築
|
||||
|
||||
A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
|
||||
|
||||
Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh)).
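For example (a sketch only; the exact variables depend on your provider's `config-default.sh`), the GCE scripts read `NUM_NODES` from the environment, so it can be overridden before bringing the cluster up:

```shell
# config-default.sh typically defaults this with NUM_NODES=${NUM_NODES:-3},
# so an exported value takes effect when kube-up.sh runs.
export NUM_NODES=200
./cluster/kube-up.sh
```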
|
||||
|
||||
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up.
|
||||
|
||||
When setting up a large Kubernetes cluster, the following issues must be considered.
|
||||
|
||||
### クォータの問題
|
||||
|
||||
To avoid running into cloud provider quota issues, when creating a cluster with many nodes, consider:
|
||||
|
||||
* Increase the quota for things like CPU, IPs, etc.
|
||||
* In [GCE, for example,](https://cloud.google.com/compute/docs/resource-quotas) you'll want to increase the quota for:
|
||||
* CPUs
|
||||
* VM instances
|
||||
* Total persistent disk reserved
|
||||
* In-use IP addresses
|
||||
* Firewall Rules
|
||||
* Forwarding rules
|
||||
* Routes
|
||||
* Target pools
|
||||
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.
|
||||
|
||||
### Etcdのストレージ
|
||||
|
||||
To improve performance of large clusters, we store events in a separate dedicated etcd instance.
|
||||
|
||||
When creating a cluster, existing salt scripts:
|
||||
|
||||
* start and configure additional etcd instance
|
||||
* configure api-server to use it for storing events
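As an illustration of the second step (addresses are examples only), kube-apiserver supports routing a single resource type to a dedicated etcd instance via `--etcd-servers-overrides`:

```shell
# Excerpt of kube-apiserver flags: keep all resources in the main etcd
# cluster, but store Event objects in a dedicated etcd on port 4002.
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --etcd-servers-overrides=/events#https://127.0.0.1:4002
```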
|
||||
|
||||
### マスターのサイズと構成要素
|
||||
|
||||
On GCE/Google Kubernetes Engine, and AWS, `kube-up` automatically configures the proper VM size for your master depending on the number of nodes
|
||||
in your cluster. On other providers, you will need to configure it manually. For reference, the sizes we use on GCE are
|
||||
|
||||
* 1-5 nodes: n1-standard-1
|
||||
* 6-10 nodes: n1-standard-2
|
||||
* 11-100 nodes: n1-standard-4
|
||||
* 101-250 nodes: n1-standard-8
|
||||
* 251-500 nodes: n1-standard-16
|
||||
* more than 500 nodes: n1-standard-32
|
||||
|
||||
And the sizes we use on AWS are
|
||||
|
||||
* 1-5 nodes: m3.medium
|
||||
* 6-10 nodes: m3.large
|
||||
* 11-100 nodes: m3.xlarge
|
||||
* 101-250 nodes: m3.2xlarge
|
||||
* 251-500 nodes: c4.4xlarge
|
||||
* more than 500 nodes: c4.8xlarge
|
||||
|
||||
{{< note >}}
|
||||
On Google Kubernetes Engine, the size of the master node adjusts automatically based on the size of your cluster. For more information, see [this blog post](https://cloudplatform.googleblog.com/2017/11/Cutting-Cluster-Management-Fees-on-Google-Kubernetes-Engine.html).
|
||||
|
||||
On AWS, master node sizes are currently set at cluster startup time and do not change, even if you later scale your cluster up or down by manually removing or adding nodes or using a cluster autoscaler.
|
||||
{{< /note >}}
|
||||
|
||||
### アドオンのリソース
|
||||
|
||||
To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
|
||||
|
||||
For example:
|
||||
|
||||
```yaml
|
||||
containers:
|
||||
- name: fluentd-cloud-logging
|
||||
image: k8s.gcr.io/fluentd-gcp:1.16
|
||||
resources:
|
||||
limits:
|
||||
cpu: 100m
|
||||
memory: 200Mi
|
||||
```
|
||||
|
||||
Except for Heapster, these limits are static and are based on data we collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
|
||||
|
||||
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
|
||||
|
||||
* Scale memory and CPU limits for each of the following addons, if used, as you scale up the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
|
||||
* [InfluxDB and Grafana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
|
||||
* [kubedns, dnsmasq, and sidecar](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in)
|
||||
* [Kibana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml)
|
||||
* Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
|
||||
* [elasticsearch](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml)
|
||||
* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well):
|
||||
* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml)
|
||||
* [FluentD with GCP Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
|
||||
|
||||
Heapster's resource limits are set dynamically based on the initial size of your cluster (see [#16185](http://issue.k8s.io/16185)
|
||||
and [#22940](http://issue.k8s.io/22940)). If you find that Heapster is running
|
||||
out of resources, you should adjust the formulas that compute heapster memory request (see those PRs for details).
|
||||
|
||||
For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting).
|
||||
|
||||
In the [future](http://issue.k8s.io/13048), we anticipate to set all cluster addon resource limits based on cluster size, and to dynamically adjust them if you grow or shrink your cluster.
|
||||
We welcome PRs that implement those features.
|
||||
|
||||
### 少数のノードの起動の失敗を許容する
|
||||
|
||||
For various reasons (see [#18969](https://github.com/kubernetes/kubernetes/issues/18969) for more details) running
|
||||
`kube-up.sh` with a very large `NUM_NODES` may fail due to a very small number of nodes not coming up properly.
|
||||
Currently you have two choices: restart the cluster (`kube-down.sh` and then `kube-up.sh` again), or before
|
||||
running `kube-up.sh` set the environment variable `ALLOWED_NOTREADY_NODES` to whatever value you feel comfortable
|
||||
with. This will allow `kube-up.sh` to succeed with fewer than `NUM_NODES` coming up. Depending on the
|
||||
reason for the failure, those additional nodes may join later or the cluster may remain at a size of
|
||||
`NUM_NODES - ALLOWED_NOTREADY_NODES`.
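For example, to let the script succeed even if up to three of the requested nodes never become ready (the number is illustrative):

```shell
# Tolerate up to 3 nodes failing to come up during kube-up.sh.
export ALLOWED_NOTREADY_NODES=3
./cluster/kube-up.sh
```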
|
|
@ -0,0 +1,224 @@
|
|||
---
|
||||
title: CRIのインストール
|
||||
content_template: templates/concept
|
||||
weight: 100
|
||||
---
|
||||
{{% capture overview %}}
|
||||
Kubernetesでは、v1.6.0からデフォルトでCRI(Container Runtime Interface)を利用できます。
|
||||
このページでは、様々なCRIのインストール方法について説明します。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
手順を進めるにあたっては、下記に示しているコマンドを、ご利用のOSのものに従ってrootユーザとして実行してください。
|
||||
環境によっては、それぞれのホストへSSHで接続した後に`sudo -i`を実行することで、rootユーザになることができる場合があります。
|
||||
|
||||
## Docker
|
||||
|
||||
それぞれのマシンに対してDockerをインストールします。
|
||||
バージョン18.06が推奨されていますが、1.11、1.12、1.13、17.03についても動作が確認されています。
|
||||
Kubernetesのリリースノートにある、Dockerの動作確認済み最新バージョンについてもご確認ください。
|
||||
|
||||
システムへDockerをインストールするには、次のコマンドを実行します。
|
||||
|
||||
{{< tabs name="tab-cri-docker-installation" >}}
|
||||
{{< tab name="Ubuntu 16.04" codelang="bash" >}}
|
||||
# UbuntuのリポジトリからDockerをインストールする場合は次を実行します:
|
||||
apt-get update
|
||||
apt-get install -y docker.io
|
||||
|
||||
# または、UbuntuやDebian向けのDockerのリポジトリからDocker CE 18.06をインストールする場合は、次を実行します:
|
||||
|
||||
## 必要なパッケージをインストールします。
|
||||
apt-get update && apt-get install apt-transport-https ca-certificates curl software-properties-common
|
||||
|
||||
## GPGキーをダウンロードします。
|
||||
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
|
||||
|
||||
## dockerパッケージ用のaptリポジトリを追加します。
|
||||
add-apt-repository \
|
||||
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
|
||||
$(lsb_release -cs) \
|
||||
stable"
|
||||
|
||||
## dockerをインストールします。
|
||||
apt-get update && apt-get install docker-ce=18.06.0~ce~3-0~ubuntu
|
||||
|
||||
# デーモンをセットアップします。
|
||||
cat > /etc/docker/daemon.json <<EOF
|
||||
{
|
||||
"exec-opts": ["native.cgroupdriver=systemd"],
|
||||
"log-driver": "json-file",
|
||||
"log-opts": {
|
||||
"max-size": "100m"
|
||||
},
|
||||
"storage-driver": "overlay2"
|
||||
}
|
||||
EOF
|
||||
|
||||
mkdir -p /etc/systemd/system/docker.service.d
|
||||
|
||||
# dockerを再起動します。
|
||||
systemctl daemon-reload
|
||||
systemctl restart docker
|
||||
{{< /tab >}}
|
||||
{{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}}
|
||||
|
||||
# CentOSやRHELのリポジトリからDockerをインストールする場合は、次を実行します:
|
||||
yum install -y docker
|
||||
|
||||
# または、CentOS向けのDockerのリポジトリからDocker CE 18.06をインストールする場合は、次を実行します:
|
||||
|
||||
## 必要なパッケージをインストールします。
|
||||
yum install yum-utils device-mapper-persistent-data lvm2
|
||||
|
||||
## dockerパッケージ用のyumリポジトリを追加します。
|
||||
yum-config-manager \
|
||||
--add-repo \
|
||||
https://download.docker.com/linux/centos/docker-ce.repo
|
||||
|
||||
## dockerをインストールします。
|
||||
yum update && yum install docker-ce-18.06.1.ce
|
||||
|
||||
## /etc/docker ディレクトリを作成します。
|
||||
mkdir /etc/docker
|
||||
|
||||
# デーモンをセットアップします。
|
||||
cat > /etc/docker/daemon.json <<EOF
|
||||
{
|
||||
"exec-opts": ["native.cgroupdriver=systemd"],
|
||||
"log-driver": "json-file",
|
||||
"log-opts": {
|
||||
"max-size": "100m"
|
||||
},
|
||||
"storage-driver": "overlay2",
|
||||
"storage-opts": [
|
||||
"overlay2.override_kernel_check=true"
|
||||
]
|
||||
}
|
||||
EOF
|
||||
|
||||
mkdir -p /etc/systemd/system/docker.service.d
|
||||
|
||||
# dockerを再起動します。
|
||||
systemctl daemon-reload
|
||||
systemctl restart docker
|
||||
{{< /tab >}}
|
||||
{{< /tabs >}}
|
||||
|
||||
詳細については、[Dockerの公式インストールガイド](https://docs.docker.com/engine/installation/)を参照してください。
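補足として(本文にはない確認例ですが)、上記の`daemon.json`で指定したcgroupドライバが実際に有効になっているかどうかは、次のように確認できます。kubeletのcgroupドライバと一致させておくことが推奨されています。

```shell
# "Cgroup Driver: systemd" と表示されれば、daemon.jsonの設定が反映されています。
docker info | grep -i "cgroup driver"
```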
|
||||
|
||||
## CRI-O
|
||||
|
||||
このセクションでは、CRIランタイムとして`CRI-O`を利用するために必要な手順について説明します。
|
||||
|
||||
システムへCRI-Oをインストールするためには以下のコマンドを利用します:
|
||||
|
||||
### 必要な設定の追加
|
||||
|
||||
```shell
|
||||
modprobe overlay
|
||||
modprobe br_netfilter
|
||||
|
||||
# 必要なカーネルパラメータの設定をします。これらの設定値は再起動後も永続化されます。
|
||||
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
|
||||
net.bridge.bridge-nf-call-iptables = 1
|
||||
net.ipv4.ip_forward = 1
|
||||
net.bridge.bridge-nf-call-ip6tables = 1
|
||||
EOF
|
||||
|
||||
sysctl --system
|
||||
```
|
||||
|
||||
{{< tabs name="tab-cri-cri-o-installation" >}}
|
||||
{{< tab name="Ubuntu 16.04" codelang="bash" >}}
|
||||
|
||||
# 必要なパッケージをインストールし、リポジトリを追加
|
||||
apt-get update
|
||||
apt-get install software-properties-common
|
||||
|
||||
add-apt-repository ppa:projectatomic/ppa
|
||||
apt-get update
|
||||
|
||||
# CRI-Oをインストール
|
||||
apt-get install cri-o-1.11
|
||||
|
||||
{{< /tab >}}
|
||||
{{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}}
|
||||
|
||||
# 必要なリポジトリを追加
|
||||
yum-config-manager --add-repo=https://cbs.centos.org/repos/paas7-crio-311-candidate/x86_64/os/
|
||||
|
||||
# CRI-Oをインストール
|
||||
yum install --nogpgcheck cri-o
|
||||
|
||||
{{< /tab >}}
|
||||
{{< /tabs >}}
|
||||
|
||||
### CRI-Oの起動
|
||||
|
||||
```
|
||||
systemctl start crio
|
||||
```
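補足として(本文にはない手順ですが)、OSの起動時にもCRI-Oを自動起動させたい場合は、あわせて次のように設定して状態を確認しておくとよいでしょう。

```shell
# OS起動時にcrioを自動起動するよう設定し、現在の稼働状態を確認します。
systemctl enable crio
systemctl status crio
```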
|
||||
|
||||
詳細については、[CRI-Oインストールガイド](https://github.com/kubernetes-sigs/cri-o#getting-started)を参照してください。

## Containerd

This section contains the necessary steps to use `containerd` as the CRI runtime.

Use the following commands to install containerd on your system.

### Add the required settings

```shell
modprobe overlay
modprobe br_netfilter

# Set up the required kernel parameters. These settings persist across reboots.
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system
```

{{< tabs name="tab-cri-containerd-installation" >}}
{{< tab name="Ubuntu 16.04+" codelang="bash" >}}
apt-get install -y libseccomp2
{{< /tab >}}
{{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}}
yum install -y libseccomp
{{< /tab >}}
{{< /tabs >}}

### Install containerd

[Containerd releases regularly](https://github.com/containerd/containerd/releases). The values used in the commands below follow the latest version available at the time this procedure was written; check [here](https://storage.googleapis.com/cri-containerd-release) for newer versions and for the hashes of the files to download.

```shell
# Export the required environment variables.
export CONTAINERD_VERSION="1.1.2"
export CONTAINERD_SHA256="d4ed54891e90a5d1a45e3e96464e2e8a4770cd380c21285ef5c9895c40549218"

# Download the containerd tarball.
wget https://storage.googleapis.com/cri-containerd-release/cri-containerd-${CONTAINERD_VERSION}.linux-amd64.tar.gz

# Check the hash.
echo "${CONTAINERD_SHA256} cri-containerd-${CONTAINERD_VERSION}.linux-amd64.tar.gz" | sha256sum --check -

# Unpack.
tar --no-overwrite-dir -C / -xzf cri-containerd-${CONTAINERD_VERSION}.linux-amd64.tar.gz

# Start containerd.
systemctl start containerd
```
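
As with the other runtimes, you will likely want containerd to start at boot and to confirm the daemon responds; a small sketch, assuming the `ctr` client from the tarball is on your PATH:

```shell
# Start containerd at boot and confirm that the daemon answers.
systemctl enable containerd
ctr version
```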

## Other CRI runtimes: rktlet and frakti

Refer to the [Frakti QuickStart guide](https://github.com/kubernetes/frakti#quickstart) and the [Rktlet Getting Started guide](https://github.com/kubernetes-incubator/rktlet/blob/master/docs/getting-started-guide.md) for more information.

{{% /capture %}}

@ -0,0 +1,4 @@
|
|||
---
|
||||
title: カスタムクラウドソリューション
|
||||
weight: 50
|
||||
---
|
|
@ -0,0 +1,88 @@
|
|||
---
|
||||
title: AWSまたはGCE上のCoreOS
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
There are multiple guides on running Kubernetes with [CoreOS](https://coreos.com/kubernetes/docs/latest/).
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## Official CoreOS Guides
|
||||
|
||||
These guides are maintained by CoreOS and deploy Kubernetes the "CoreOS Way" with full TLS, the DNS add-on, and more. These guides pass Kubernetes conformance testing and we encourage you to [test this yourself](https://coreos.com/kubernetes/docs/latest/conformance-tests.html).
|
||||
|
||||
* [**AWS Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)
|
||||
|
||||
Guide and CLI tool for setting up a multi-node cluster on AWS.
|
||||
CloudFormation is used to set up a master and multiple workers in auto-scaling groups.
|
||||
|
||||
* [**Bare Metal Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-baremetal.html#automated-provisioning)
|
||||
|
||||
Guide and HTTP/API service for PXE booting and provisioning a multi-node cluster on bare metal.
|
||||
[Ignition](https://coreos.com/ignition/docs/latest/) is used to provision a master and multiple workers on the first boot from disk.
|
||||
|
||||
* [**Vagrant Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html)
|
||||
|
||||
Guide to setting up a multi-node cluster on Vagrant.
|
||||
The deployer can independently configure the number of etcd nodes, master nodes, and worker nodes to bring up a fully HA control plane.
|
||||
|
||||
* [**Vagrant Single-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html)
|
||||
|
||||
The quickest way to set up a Kubernetes development environment locally.
|
||||
As easy as `git clone`, `vagrant up` and configuring `kubectl`.
|
||||
|
||||
* [**Full Step by Step Guide**](https://coreos.com/kubernetes/docs/latest/getting-started.html)
|
||||
|
||||
A generic guide to setting up an HA cluster on any cloud or bare metal, with full TLS.
|
||||
Repeat the master or worker steps to configure more machines of that role.
|
||||
|
||||
## Community Guides
|
||||
|
||||
These guides are maintained by community members, cover specific platforms and use cases, and experiment with different ways of configuring Kubernetes on CoreOS.
|
||||
|
||||
* [**Easy Multi-node Cluster on Google Compute Engine**](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)
|
||||
|
||||
Scripted installation of a single master, multi-worker cluster on GCE.
|
||||
Kubernetes components are managed by [fleet](https://github.com/coreos/fleet).
|
||||
|
||||
* [**Multi-node cluster using cloud-config and Weave on Vagrant**](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md)
|
||||
|
||||
Configure a Vagrant-based cluster of 3 machines with networking provided by Weave.
|
||||
|
||||
* [**Multi-node cluster using cloud-config and Vagrant**](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md)
|
||||
|
||||
Configure a single master, multi-worker cluster locally, running on your choice of hypervisor: VirtualBox, Parallels, or VMware
|
||||
|
||||
* [**Single-node cluster using a small macOS App**](https://github.com/rimusz/kube-solo-osx/blob/master/README.md)
|
||||
|
||||
Guide to running a solo cluster (master + worker) controlled by a macOS menubar application.
|
||||
Uses xhyve + CoreOS under the hood.
|
||||
|
||||
* [**Multi-node cluster with Vagrant and fleet units using a small macOS App**](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md)
|
||||
|
||||
Guide to running a single master, multi-worker cluster controlled by a macOS menubar application.
|
||||
Uses Vagrant under the hood.
|
||||
|
||||
* [**Multi-node cluster using cloud-config, CoreOS and VMware ESXi**](https://github.com/xavierbaude/VMware-coreos-multi-nodes-Kubernetes)
|
||||
|
||||
Configure a single master, single worker cluster on VMware ESXi.
|
||||
|
||||
* [**Single/Multi-node cluster using cloud-config, CoreOS and Foreman**](https://github.com/johscheuer/theforeman-coreos-kubernetes)
|
||||
|
||||
Configure a standalone Kubernetes or a Kubernetes cluster with [Foreman](https://theforeman.org).
|
||||
|
||||
## Support Level
|
||||
|
||||
|
||||
IaaS Provider        | Config. Mgmt | OS     | Networking | Docs                                          | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | -------- | ----------------------------
GCE                  | CoreOS       | CoreOS | flannel    | [docs](/docs/getting-started-guides/coreos)   |          | Community ([@pires](https://github.com/pires))
Vagrant              | CoreOS       | CoreOS | flannel    | [docs](/docs/getting-started-guides/coreos)   |          | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,174 @@
|
|||
---
|
||||
title: kopsを使ったAWS上でのKubernetesのインストール
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
This quickstart shows you how to easily install a Kubernetes cluster on AWS.
|
||||
It uses a tool called [`kops`](https://github.com/kubernetes/kops).
|
||||
|
||||
kops is an opinionated provisioning system:
|
||||
|
||||
* Fully automated installation
|
||||
* Uses DNS to identify clusters
|
||||
* Self-healing: everything runs in Auto-Scaling Groups
|
||||
* Multiple OS support (Debian, Ubuntu 16.04 supported, CentOS & RHEL, Amazon Linux and CoreOS) - see the [images.md](https://github.com/kubernetes/kops/blob/master/docs/images.md)
|
||||
* High-Availability support - see the [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/high_availability.md)
|
||||
* Can directly provision, or generate terraform manifests - see the [terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md)
|
||||
|
||||
If your opinions differ from these you may prefer to build your own cluster using [kubeadm](/docs/admin/kubeadm/) as
|
||||
a building block. kops builds on the kubeadm work.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## クラスタの作成
|
||||
|
||||
### (1/5) kopsのインストール
|
||||
|
||||
#### 要件
|
||||
|
||||
You must have [kubectl](/docs/tasks/tools/install-kubectl/) installed in order for kops to work.
|
||||
|
||||
#### インストール
|
||||
|
||||
Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also easy to build from source):
|
||||
|
||||
On macOS:
|
||||
|
||||
```shell
|
||||
curl -OL https://github.com/kubernetes/kops/releases/download/1.10.0/kops-darwin-amd64
|
||||
chmod +x kops-darwin-amd64
|
||||
mv kops-darwin-amd64 /usr/local/bin/kops
|
||||
# you can also install using Homebrew
|
||||
brew update && brew install kops
|
||||
```
|
||||
|
||||
On Linux:
|
||||
|
||||
```shell
|
||||
wget https://github.com/kubernetes/kops/releases/download/1.10.0/kops-linux-amd64
|
||||
chmod +x kops-linux-amd64
|
||||
mv kops-linux-amd64 /usr/local/bin/kops
|
||||
```
|
||||
|
||||
### (2/5) クラスタ用のroute53ドメインの作成
|
||||
|
||||
kops uses DNS for discovery, both inside the cluster and so that you can reach the kubernetes API server
|
||||
from clients.
|
||||
|
||||
kops has a strong opinion on the cluster name: it should be a valid DNS name. By doing so you will
|
||||
no longer get your clusters confused, you can share clusters with your colleagues unambiguously,
|
||||
and you can reach them without relying on remembering an IP address.
|
||||
|
||||
You can, and probably should, use subdomains to divide your clusters. As our example we will use
|
||||
`useast1.dev.example.com`. The API server endpoint will then be `api.useast1.dev.example.com`.
|
||||
|
||||
A Route53 hosted zone can serve subdomains. Your hosted zone could be `useast1.dev.example.com`,
|
||||
but also `dev.example.com` or even `example.com`. kops works with any of these, so typically
|
||||
you choose for organization reasons (e.g. you are allowed to create records under `dev.example.com`,
|
||||
but not under `example.com`).
|
||||
|
||||
Let's assume you're using `dev.example.com` as your hosted zone. You create that hosted zone using
|
||||
the [normal process](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or
|
||||
with a command such as `aws route53 create-hosted-zone --name dev.example.com --caller-reference 1`.
|
||||
|
||||
You must then set up your NS records in the parent domain, so that records in the domain will resolve. Here,
|
||||
you would create NS records in `example.com` for `dev`. If it is a root domain name you would configure the NS
|
||||
records at your domain registrar (e.g. `example.com` would need to be configured where you bought `example.com`).
|
||||
|
||||
This step is easy to mess up (it is the #1 cause of problems!). If you have the `dig` tool, you can double-check that
your cluster is configured correctly by running:
|
||||
|
||||
`dig NS dev.example.com`
|
||||
|
||||
You should see the 4 NS records that Route53 assigned your hosted zone.
|
||||
|
||||
### (3/5) クラスタの状態を保存するS3バケットの作成
|
||||
|
||||
kops lets you manage your clusters even after installation. To do this, it must keep track of the clusters
|
||||
that you have created, along with their configuration, the keys they are using etc. This information is stored
|
||||
in an S3 bucket. S3 permissions are used to control access to the bucket.
|
||||
|
||||
Multiple clusters can use the same S3 bucket, and you can share an S3 bucket between your colleagues that
|
||||
administer the same clusters - this is much easier than passing around kubecfg files. But anyone with access
|
||||
to the S3 bucket will have administrative access to all your clusters, so you don't want to share it beyond
|
||||
the operations team.
|
||||
|
||||
So typically you have one S3 bucket for each ops team (and often the name will correspond
|
||||
to the name of the hosted zone above!)
|
||||
|
||||
In our example, we chose `dev.example.com` as our hosted zone, so let's pick `clusters.dev.example.com` as
|
||||
the S3 bucket name.
|
||||
|
||||
* Export `AWS_PROFILE` (if you need to select a profile for the AWS CLI to work)

* Create the S3 bucket using `aws s3 mb s3://clusters.dev.example.com`

* You can `export KOPS_STATE_STORE=s3://clusters.dev.example.com` and then kops will use this location by default.
  We suggest putting this in your bash profile or similar; the commands are collected in the sketch below.
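
Collected together, the state-store setup might look like the following sketch; the profile name is a placeholder, and the bucket name is the example used above:

```shell
# Select an AWS CLI profile only if you do not use the default one.
export AWS_PROFILE=my-kops-profile

# Create the S3 bucket that will hold the kops state store.
aws s3 mb s3://clusters.dev.example.com

# Tell kops to use this bucket by default (consider adding this to your bash profile).
export KOPS_STATE_STORE=s3://clusters.dev.example.com
```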
|
||||
|
||||
|
||||
### (4/5) クラスタ設定の構築
|
||||
|
||||
Run "kops create cluster" to create your cluster configuration:
|
||||
|
||||
`kops create cluster --zones=us-east-1c useast1.dev.example.com`
|
||||
|
||||
kops will create the configuration for your cluster. Note that it _only_ creates the configuration, it does
|
||||
not actually create the cloud resources - you'll do that in the next step with a `kops update cluster`. This
|
||||
gives you an opportunity to review the configuration or to change it.
|
||||
|
||||
It prints commands you can use to explore further:
|
||||
|
||||
* List your clusters with: `kops get cluster`
|
||||
* Edit this cluster with: `kops edit cluster useast1.dev.example.com`
|
||||
* Edit your node instance group: `kops edit ig --name=useast1.dev.example.com nodes`
|
||||
* Edit your master instance group: `kops edit ig --name=useast1.dev.example.com master-us-east-1c`
|
||||
|
||||
If this is your first time using kops, do spend a few minutes to try those out! An instance group is a
|
||||
set of instances, which will be registered as kubernetes nodes. On AWS this is implemented via auto-scaling-groups.
|
||||
You can have several instance groups, for example if you wanted nodes that are a mix of spot and on-demand instances, or
|
||||
GPU and non-GPU instances.
|
||||
|
||||
|
||||
### (5/5) AWSにクラスタを作成
|
||||
|
||||
Run "kops update cluster" to create your cluster in AWS:
|
||||
|
||||
`kops update cluster useast1.dev.example.com --yes`
|
||||
|
||||
That takes a few seconds to run, but then your cluster will likely take a few minutes to actually be ready.
|
||||
`kops update cluster` will be the tool you'll use whenever you change the configuration of your cluster; it
|
||||
applies the changes you have made to the configuration to your cluster - reconfiguring AWS or kubernetes as needed.
|
||||
|
||||
For example, after you `kops edit ig nodes`, run `kops update cluster --yes` to apply your configuration, and
sometimes you will also have to run `kops rolling-update cluster` to roll out the configuration immediately; see the sketch below.
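
A sketch of that edit/apply/roll-out loop, using the example cluster name from this guide:

```shell
# Edit the node instance group (for example, change its size), then apply and roll out.
kops edit ig --name=useast1.dev.example.com nodes
kops update cluster useast1.dev.example.com --yes
kops rolling-update cluster useast1.dev.example.com --yes
```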
|
||||
|
||||
Without `--yes`, `kops update cluster` will show you a preview of what it is going to do. This is handy
|
||||
for production clusters!
|
||||
|
||||
### 他のアドオンの参照
|
||||
|
||||
See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, visualization & control of your Kubernetes cluster.
|
||||
|
||||
## クリーンアップ
|
||||
|
||||
* To delete your cluster: `kops delete cluster useast1.dev.example.com --yes`
|
||||
|
||||
## フィードバック
|
||||
|
||||
* Slack Channel: [#kops-users](https://kubernetes.slack.com/messages/kops-users/)
|
||||
* [GitHub Issues](https://github.com/kubernetes/kops/issues)
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
|
||||
* Learn about `kops` [advanced usage](https://github.com/kubernetes/kops)
|
||||
* See the `kops` [docs](https://github.com/kubernetes/kops) section for tutorials, best practices and advanced configuration options.
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,120 @@
|
|||
---
|
||||
title: kubesprayを使ったオンプレミス/クラウドプロバイダへのKubernetesのインストール
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-incubator/kubespray).
|
||||
|
||||
Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:
|
||||
|
||||
* a highly available cluster
|
||||
* composable attributes
|
||||
* support for most popular Linux distributions
|
||||
* Container Linux by CoreOS
|
||||
* Debian Jessie, Stretch, Wheezy
|
||||
* Ubuntu 16.04, 18.04
|
||||
* CentOS/RHEL 7
|
||||
* Fedora/CentOS Atomic
|
||||
* openSUSE Leap 42.3/Tumbleweed
|
||||
* continuous integration tests
|
||||
|
||||
To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](../kops).
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## クラスタの作成
|
||||
|
||||
### (1/5) 下地の要件の確認
|
||||
|
||||
Provision servers with the following [requirements](https://github.com/kubernetes-incubator/kubespray#requirements):
|
||||
|
||||
* **Ansible v2.5 (or newer) and python-netaddr is installed on the machine that will run Ansible commands**
|
||||
* **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
|
||||
* The target servers must have **access to the Internet** in order to pull docker images
|
||||
* The target servers are configured to allow **IPv4 forwarding**
|
||||
* **Your ssh key must be copied** to all the servers part of your inventory
|
||||
* The **firewalls are not managed**. You'll need to implement your own rules as you did previously. In order to avoid any issues during deployment, you should disable your firewall.
* If Kubespray is run from a non-root user account, a correct privilege escalation method should be configured on the target servers, and the `ansible_become` flag or the command parameters `--become` or `-b` should be specified.
|
||||
|
||||
Kubespray provides the following utilities to help provision your environment:
|
||||
|
||||
* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:
|
||||
* [AWS](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/aws)
|
||||
* [OpenStack](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/openstack)
|
||||
|
||||
### (2/5) インベントリファイルの用意
|
||||
|
||||
After you provision your servers, create an [inventory file for Ansible](http://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".
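
Purely as an illustration, a small static inventory could look like the sketch below; the host names, addresses, and file path are placeholders, and the group names follow Kubespray's sample inventory:

```shell
# A hypothetical three-role inventory for a two-node cluster.
cat > inventory/mycluster/hosts.ini <<EOF
node1 ansible_host=192.0.2.11 ip=192.0.2.11
node2 ansible_host=192.0.2.12 ip=192.0.2.12

[kube-master]
node1

[etcd]
node1

[kube-node]
node1
node2

[k8s-cluster:children]
kube-master
kube-node
EOF
```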
|
||||
|
||||
### (3/5) クラスタ作成の計画
|
||||
|
||||
Kubespray provides the ability to customize many aspects of the deployment:
|
||||
|
||||
* Choice of deployment mode: kubeadm or non-kubeadm
|
||||
* CNI (networking) plugins
|
||||
* DNS configuration
|
||||
* Choice of control plane: native/binary or containerized with docker or rkt
|
||||
* Component versions
|
||||
* Calico route reflectors
|
||||
* Component runtime options
|
||||
* docker
|
||||
* rkt
|
||||
* cri-o
|
||||
* Certificate generation methods (**Vault being discontinued**)
|
||||
|
||||
Kubespray customizations can be made to a [variable file](http://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.
|
||||
|
||||
### (4/5) クラスタのデプロイ
|
||||
|
||||
Next, deploy your cluster using [ansible-playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment):
|
||||
|
||||
```shell
|
||||
ansible-playbook -i your/inventory/hosts.ini cluster.yml -b -v \
|
||||
--private-key=~/.ssh/private_key
|
||||
```
|
||||
|
||||
Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/large-deployments.md) for best results.
|
||||
|
||||
### (5/5) デプロイの確認
|
||||
|
||||
Kubespray provides a way to verify inter-pod connectivity and DNS resolution with [Netchecker](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/netcheck.md). Netchecker ensures that the netchecker-agent pods can resolve DNS requests and ping each other within the default namespace. Those pods mimic the behavior of the rest of the workloads and serve as cluster health indicators.
|
||||
|
||||
## クラスタの操作
|
||||
|
||||
Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_.
|
||||
|
||||
### クラスタのスケール
|
||||
|
||||
You can add worker nodes to your cluster by running the scale playbook (see the sketch below). For more information, see "[Adding nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#adding-nodes)".
You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#remove-nodes)".
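
A sketch of running the scale playbook, reusing the inventory and flags from the deployment command above:

```shell
# Add the worker nodes newly declared in the inventory.
ansible-playbook -i your/inventory/hosts.ini scale.yml -b -v \
  --private-key=~/.ssh/private_key
```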
|
||||
|
||||
### クラスタのアップグレード
|
||||
|
||||
You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/upgrades.md)".
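
A sketch of running the upgrade playbook; `kube_version` is the variable the Kubespray docs use to pin the target release, and the value shown is only an example:

```shell
# Upgrade the cluster in place to the requested Kubernetes version.
ansible-playbook -i your/inventory/hosts.ini upgrade-cluster.yml -b -v \
  --private-key=~/.ssh/private_key -e kube_version=v1.13.0
```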
|
||||
|
||||
## クリーンアップ
|
||||
|
||||
You can reset your nodes and wipe out all components installed with Kubespray via the [reset playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/reset.yml).
|
||||
|
||||
{{< caution >}}
|
||||
When running the reset playbook, be sure not to accidentally target your production cluster!
|
||||
{{< /caution >}}
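
A sketch of the reset step, with the same inventory and flags as the deployment command:

```shell
# Wipe every component installed by Kubespray from the nodes in the inventory.
ansible-playbook -i your/inventory/hosts.ini reset.yml -b -v \
  --private-key=~/.ssh/private_key
```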
|
||||
|
||||
## フィードバック
|
||||
|
||||
* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/)
|
||||
* [GitHub Issues](https://github.com/kubernetes-incubator/kubespray/issues)
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/roadmap.md).
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,5 @@
|
|||
---
|
||||
title: "kubeadmによるClusterのブートストラッピング"
|
||||
weight: 30
|
||||
---
|
||||
|
|
@ -0,0 +1,82 @@
|
|||
---
|
||||
title: kubeadmを使ったコントロールプレーンの設定のカスタマイズ
|
||||
content_template: templates/concept
|
||||
weight: 40
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
kubeadmの`ClusterConfiguration`オブジェクトはAPIServer、ControllerManager、およびSchedulerのようなコントロールプレーンの構成要素に渡されたデフォルトのフラグを上書きすることができる `extraArgs`の項目があります。
|
||||
その構成要素は次の項目で定義されています。
|
||||
|
||||
- `apiServer`
|
||||
- `controllerManager`
|
||||
- `scheduler`
|
||||
|
||||
`extraArgs` の項目は `キー: 値` のペアです。コントロールプレーンの構成要素のフラグを上書きするには:
|
||||
|
||||
1. 設定内容に適切な項目を追加
|
||||
2. フラグを追加して項目を上書き
|
||||
|
||||
各設定項目のより詳細な情報は[APIリファレンスのページ](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#ClusterConfiguration)を参照してください。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## APIServerフラグ
|
||||
|
||||
詳細は[kube-apiserverのリファレンスドキュメント](/docs/reference/command-line-tools-reference/kube-apiserver/)を参照してください。
|
||||
|
||||
Example usage:
|
||||
```yaml
|
||||
apiVersion: kubeadm.k8s.io/v1beta1
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: v1.13.0
|
||||
metadata:
|
||||
name: 1.13-sample
|
||||
apiServer:
|
||||
extraArgs:
|
||||
advertise-address: 192.168.0.103
|
||||
anonymous-auth: false
|
||||
enable-admission-plugins: AlwaysPullImages,DefaultStorageClass
|
||||
audit-log-path: /home/johndoe/audit.log
|
||||
```
|
||||
|
||||
## ControllerManagerフラグ
|
||||
|
||||
詳細は[kube-controller-managerのリファレンスドキュメント](/docs/reference/command-line-tools-reference/kube-controller-manager/)を参照してください。
|
||||
|
||||
Example usage:
|
||||
```yaml
|
||||
apiVersion: kubeadm.k8s.io/v1beta1
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: v1.13.0
|
||||
metadata:
|
||||
name: 1.13-sample
|
||||
controllerManager:
|
||||
extraArgs:
|
||||
cluster-signing-key-file: /home/johndoe/keys/ca.key
|
||||
bind-address: 0.0.0.0
|
||||
deployment-controller-sync-period: 50
|
||||
```
|
||||
|
||||
## Schedulerフラグ
|
||||
|
||||
詳細は[kube-schedulerのリファレンスドキュメント](/docs/reference/command-line-tools-reference/kube-scheduler/)を参照してください。
|
||||
|
||||
Example usage:
|
||||
```yaml
|
||||
apiVersion: kubeadm.k8s.io/v1beta1
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: v1.13.0
|
||||
metadata:
|
||||
name: 1.13-sample
|
||||
scheduler:
|
||||
extraArgs:
|
||||
address: 0.0.0.0
|
||||
config: /home/johndoe/schedconfig.yaml
|
||||
kubeconfig: /home/johndoe/kubeconfig.yaml
|
||||
```
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,626 @@
|
|||
---
|
||||
title: kubeadmを使用したシングルマスタークラスターの作成
|
||||
content_template: templates/task
|
||||
weight: 30
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
<img src="https://raw.githubusercontent.com/cncf/artwork/master/kubernetes/certified-kubernetes/versionless/color/certified-kubernetes-color.png" align="right" width="150px">**kubeadm** helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. With kubeadm, your cluster should pass [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification). Kubeadm also supports other cluster
|
||||
lifecycle functions, such as upgrades, downgrades, and managing [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/).
|
||||
|
||||
Because you can install kubeadm on various types of machine (e.g. laptop, server,
|
||||
Raspberry Pi, etc.), it's well suited for integration with provisioning systems
|
||||
such as Terraform or Ansible.
|
||||
|
||||
kubeadm's simplicity means it can serve a wide range of use cases:
|
||||
|
||||
- New users can start with kubeadm to try Kubernetes out for the first time.
|
||||
- Users familiar with Kubernetes can spin up clusters with kubeadm and test their applications.
|
||||
- Larger projects can include kubeadm as a building block in a more complex system that can also include other installer tools.
|
||||
|
||||
kubeadm is designed to be a simple way for new users to start trying
Kubernetes out, possibly for the first time, a way for existing users to
easily test their applications and stitch together a cluster, and a
building block in other ecosystems and/or installer tools with a larger
scope.
|
||||
|
||||
You can install _kubeadm_ very easily on operating systems that support
|
||||
installing deb or rpm packages. The responsible SIG for kubeadm,
|
||||
[SIG Cluster Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle), provides these packages pre-built for you,
|
||||
but you may also build them from source for other OSes.
|
||||
|
||||
|
||||
### kubeadmの成熟度
|
||||
|
||||
| Area                      | Maturity Level |
|---------------------------|----------------|
| Command line UX           | GA             |
| Implementation            | GA             |
| Config file API           | beta           |
| CoreDNS                   | GA             |
| kubeadm alpha subcommands | alpha          |
| High availability         | alpha          |
| DynamicKubeletConfig      | alpha          |
| Self-hosting              | alpha          |
|
||||
|
||||
|
||||
kubeadm's overall feature state is **GA**. Some sub-features, like the configuration
|
||||
file API are still under active development. The implementation of creating the cluster
|
||||
may change slightly as the tool evolves, but the overall implementation should be pretty stable.
|
||||
Any commands under `kubeadm alpha` are, by definition, supported on an alpha level.
|
||||
|
||||
|
||||
### サポート期間
|
||||
|
||||
Kubernetes releases are generally supported for nine months, and during that
|
||||
period a patch release may be issued from the release branch if a severe bug or
|
||||
security issue is found. Here are the latest Kubernetes releases and the support
timeframe, which also applies to `kubeadm`.
|
||||
|
||||
| Kubernetes version | Release month  | End-of-life-month |
|--------------------|----------------|-------------------|
| v1.6.x             | March 2017     | December 2017     |
| v1.7.x             | June 2017      | March 2018        |
| v1.8.x             | September 2017 | June 2018         |
| v1.9.x             | December 2017  | September 2018    |
| v1.10.x            | March 2018     | December 2018     |
| v1.11.x            | June 2018      | March 2019        |
| v1.12.x            | September 2018 | June 2019         |
| v1.13.x            | December 2018  | September 2019    |
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
- One or more machines running a deb/rpm-compatible OS, for example Ubuntu or CentOS
|
||||
- 2 GB or more of RAM per machine. Any less leaves little room for your
|
||||
apps.
|
||||
- 2 CPUs or more on the master
|
||||
- Full network connectivity among all machines in the cluster. A public or
|
||||
private network is fine.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture steps %}}
|
||||
|
||||
## 目的
|
||||
|
||||
* Install a single master Kubernetes cluster or [high availability cluster](https://kubernetes.io/docs/setup/independent/high-availability/)
|
||||
* Install a Pod network on the cluster so that your Pods can
|
||||
talk to each other
|
||||
|
||||
## 説明
|
||||
|
||||
### kubeadmのインストール
|
||||
|
||||
See ["Installing kubeadm"](/docs/setup/independent/install-kubeadm/).
|
||||
|
||||
{{< note >}}
|
||||
If you have already installed kubeadm, run `apt-get update &&
|
||||
apt-get upgrade` or `yum update` to get the latest version of kubeadm.
|
||||
|
||||
When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
|
||||
kubeadm to tell it what to do. This crashloop is expected and normal.
|
||||
After you initialize your master, the kubelet runs normally.
|
||||
{{< /note >}}
|
||||
|
||||
### マスターの初期化
|
||||
|
||||
The master is the machine where the control plane components run, including
|
||||
etcd (the cluster database) and the API server (which the kubectl CLI
|
||||
communicates with).
|
||||
|
||||
1. Choose a pod network add-on, and verify whether it requires any arguments to
|
||||
be passed to kubeadm initialization. Depending on which
|
||||
third-party provider you choose, you might need to set the `--pod-network-cidr` to
|
||||
a provider-specific value. See [Installing a pod network add-on](#pod-network).
|
||||
1. (Optional) Unless otherwise specified, kubeadm uses the network interface associated
|
||||
with the default gateway to advertise the master's IP. To use a different
|
||||
network interface, specify the `--apiserver-advertise-address=<ip-address>` argument
|
||||
to `kubeadm init`. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you
|
||||
must specify an IPv6 address, for example `--apiserver-advertise-address=fd00::101`
|
||||
1. (Optional) Run `kubeadm config images pull` prior to `kubeadm init` to verify
|
||||
connectivity to gcr.io registries.
|
||||
|
||||
Now run:
|
||||
|
||||
```bash
|
||||
kubeadm init <args>
|
||||
```
|
||||
|
||||
### 詳細
|
||||
|
||||
For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/).
|
||||
|
||||
For a complete list of configuration options, see the [configuration file documentation](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
|
||||
|
||||
To customize control plane components, including optional IPv6 assignment to liveness probe for control plane components and etcd server, provide extra arguments to each component as documented in [custom arguments](/docs/admin/kubeadm#custom-args).
|
||||
|
||||
To run `kubeadm init` again, you must first [tear down the cluster](#tear-down).
|
||||
|
||||
If you join a node with a different architecture to your cluster, create a separate
|
||||
Deployment or DaemonSet for `kube-proxy` and `kube-dns` on the node. This is because the Docker images for these
|
||||
components do not currently support multi-architecture.
|
||||
|
||||
`kubeadm init` first runs a series of prechecks to ensure that the machine
|
||||
is ready to run Kubernetes. These prechecks expose warnings and exit on errors. `kubeadm init`
|
||||
then downloads and installs the cluster control plane components. This may take several minutes.
|
||||
The output should look like:
|
||||
|
||||
```none
|
||||
[init] Using Kubernetes version: vX.Y.Z
|
||||
[preflight] Running pre-flight checks
|
||||
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
|
||||
[certificates] Generated ca certificate and key.
|
||||
[certificates] Generated apiserver certificate and key.
|
||||
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
|
||||
[certificates] Generated apiserver-kubelet-client certificate and key.
|
||||
[certificates] Generated sa key and public key.
|
||||
[certificates] Generated front-proxy-ca certificate and key.
|
||||
[certificates] Generated front-proxy-client certificate and key.
|
||||
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
|
||||
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
|
||||
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
|
||||
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
|
||||
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
|
||||
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
|
||||
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
|
||||
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
|
||||
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
|
||||
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
|
||||
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
|
||||
[apiclient] All control plane components are healthy after 39.511972 seconds
|
||||
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
|
||||
[markmaster] Will mark node master as master by adding a label and a taint
|
||||
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
|
||||
[bootstraptoken] Using token: <token>
|
||||
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
|
||||
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
|
||||
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
|
||||
[addons] Applied essential addon: CoreDNS
|
||||
[addons] Applied essential addon: kube-proxy
|
||||
|
||||
Your Kubernetes master has initialized successfully!
|
||||
|
||||
To start using your cluster, you need to run (as a regular user):
|
||||
|
||||
mkdir -p $HOME/.kube
|
||||
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
|
||||
sudo chown $(id -u):$(id -g) $HOME/.kube/config
|
||||
|
||||
You should now deploy a pod network to the cluster.
|
||||
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
|
||||
http://kubernetes.io/docs/admin/addons/
|
||||
|
||||
You can now join any number of machines by running the following on each node
|
||||
as root:
|
||||
|
||||
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
|
||||
```
|
||||
|
||||
To make kubectl work for your non-root user, run these commands, which are
|
||||
also part of the `kubeadm init` output:
|
||||
|
||||
```bash
|
||||
mkdir -p $HOME/.kube
|
||||
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
|
||||
sudo chown $(id -u):$(id -g) $HOME/.kube/config
|
||||
```
|
||||
|
||||
Alternatively, if you are the `root` user, you can run:
|
||||
|
||||
```bash
|
||||
export KUBECONFIG=/etc/kubernetes/admin.conf
|
||||
```
|
||||
|
||||
Make a record of the `kubeadm join` command that `kubeadm init` outputs. You
|
||||
need this command to [join nodes to your cluster](#join-nodes).
|
||||
|
||||
The token is used for mutual authentication between the master and the joining
|
||||
nodes. The token included here is secret. Keep it safe, because anyone with this
|
||||
token can add authenticated nodes to your cluster. These tokens can be listed,
|
||||
created, and deleted with the `kubeadm token` command. See the
|
||||
[kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm-token/).
|
||||
|
||||
### Podネットワークアドオンのインストール {#pod-network}
|
||||
|
||||
{{< caution >}}
|
||||
This section contains important information about installation and deployment order. Read it carefully before proceeding.
|
||||
{{< /caution >}}
|
||||
|
||||
You must install a pod network add-on so that your pods can communicate with
|
||||
each other.
|
||||
|
||||
**The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed.
|
||||
kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).**
|
||||
|
||||
Several projects provide Kubernetes pod networks using CNI, some of which also
|
||||
support [Network Policy](/docs/concepts/services-networking/networkpolicies/). See the [add-ons page](/docs/concepts/cluster-administration/addons/) for a complete list of available network add-ons.
|
||||
- IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0).
|
||||
- [CNI bridge](https://github.com/containernetworking/plugins/blob/master/plugins/main/bridge/README.md) and [local-ipam](https://github.com/containernetworking/plugins/blob/master/plugins/ipam/host-local/README.md) are the only supported IPv6 network plugins in Kubernetes version 1.9.
|
||||
|
||||
Note that kubeadm sets up a more secure cluster by default and enforces use of [RBAC](/docs/reference/access-authn-authz/rbac/).
|
||||
Make sure that your network manifest supports RBAC.
|
||||
|
||||
Also, beware that your Pod network must not overlap with any of the host networks, as this can cause issues.
|
||||
If you find a collision between your network plugin’s preferred Pod network and some of your host networks, you should think of a suitable CIDR replacement and use that during `kubeadm init` with `--pod-network-cidr` and as a replacement in your network plugin’s YAML.
|
||||
|
||||
You can install a pod network add-on with the following command:
|
||||
|
||||
```bash
|
||||
kubectl apply -f <add-on.yaml>
|
||||
```
|
||||
|
||||
You can install only one pod network per cluster.
|
||||
|
||||
{{< tabs name="tabs-pod-install" >}}
|
||||
{{% tab name="Choose one..." %}}
|
||||
Please select one of the tabs to see installation instructions for the respective third-party Pod Network Provider.
|
||||
{{% /tab %}}
|
||||
|
||||
{{% tab name="Calico" %}}
|
||||
For more information about using Calico, see [Quickstart for Calico on Kubernetes](https://docs.projectcalico.org/latest/getting-started/kubernetes/), [Installing Calico for policy and networking](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/calico), and other related resources.
|
||||
|
||||
For Calico to work correctly, you need to pass `--pod-network-cidr=192.168.0.0/16` to `kubeadm init` or update the `calico.yml` file to match your Pod network. Note that Calico works on `amd64` only.
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
|
||||
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Canal" %}}
|
||||
Canal uses Calico for policy and Flannel for networking. Refer to the Calico documentation for the [official getting started guide](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/flannel).
|
||||
|
||||
For Canal to work correctly, `--pod-network-cidr=10.244.0.0/16` has to be passed to `kubeadm init`. Note that Canal works on `amd64` only.
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
|
||||
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/canal.yaml
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
|
||||
{{% tab name="Cilium" %}}
|
||||
For more information about using Cilium with Kubernetes, see [Quickstart for Cilium on Kubernetes](http://docs.cilium.io/en/v1.2/kubernetes/quickinstall/) and [Kubernetes Install guide for Cilium](http://docs.cilium.io/en/v1.2/kubernetes/install/).
|
||||
|
||||
Passing `--pod-network-cidr` option to `kubeadm init` is not required, but highly recommended.
|
||||
|
||||
These commands will deploy Cilium with its own etcd managed by etcd operator.
|
||||
|
||||
```shell
|
||||
# Download required manifests from Cilium repository
|
||||
wget https://github.com/cilium/cilium/archive/v1.2.0.zip
|
||||
unzip v1.2.0.zip
|
||||
cd cilium-1.2.0/examples/kubernetes/addons/etcd-operator
|
||||
|
||||
# Generate and deploy etcd certificates
|
||||
export CLUSTER_DOMAIN=$(kubectl get ConfigMap --namespace kube-system coredns -o yaml | awk '/kubernetes/ {print $2}')
|
||||
tls/certs/gen-cert.sh $CLUSTER_DOMAIN
|
||||
tls/deploy-certs.sh
|
||||
|
||||
# Label kube-dns with fixed identity label
|
||||
kubectl label -n kube-system pod $(kubectl -n kube-system get pods -l k8s-app=kube-dns -o jsonpath='{range .items[]}{.metadata.name}{" "}{end}') io.cilium.fixed-identity=kube-dns
|
||||
|
||||
kubectl create -f ./
|
||||
|
||||
# Wait several minutes for Cilium, coredns and etcd pods to converge to a working state
|
||||
```
|
||||
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Flannel" %}}
|
||||
|
||||
For `flannel` to work correctly, you must pass `--pod-network-cidr=10.244.0.0/16` to `kubeadm init`.
|
||||
|
||||
Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
|
||||
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
|
||||
please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
|
||||
|
||||
Note that `flannel` works on `amd64`, `arm`, `arm64` and `ppc64le`.
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
|
||||
```
|
||||
|
||||
For more information about `flannel`, see [the CoreOS flannel repository on GitHub
|
||||
](https://github.com/coreos/flannel).
|
||||
{{% /tab %}}
|
||||
|
||||
{{% tab name="Kube-router" %}}
|
||||
Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
|
||||
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
|
||||
please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
|
||||
|
||||
Kube-router relies on kube-controller-manager to allocate pod CIDR for the nodes. Therefore, use `kubeadm init` with the `--pod-network-cidr` flag.
|
||||
|
||||
Kube-router provides pod networking, network policy, and high-performing IP Virtual Server(IPVS)/Linux Virtual Server(LVS) based service proxy.
|
||||
|
||||
For information on setting up Kubernetes cluster with Kube-router using kubeadm, please see official [setup guide](https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md).
|
||||
{{% /tab %}}
|
||||
|
||||
{{% tab name="Romana" %}}
|
||||
Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
|
||||
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
|
||||
please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
|
||||
|
||||
The official Romana set-up guide is [here](https://github.com/romana/romana/tree/master/containerize#using-kubeadm).
|
||||
|
||||
Romana works on `amd64` only.
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://raw.githubusercontent.com/romana/romana/master/containerize/specs/romana-kubeadm.yml
|
||||
```
|
||||
{{% /tab %}}
|
||||
|
||||
{{% tab name="Weave Net" %}}
|
||||
Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
|
||||
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
|
||||
please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
|
||||
|
||||
The official Weave Net set-up guide is [here](https://www.weave.works/docs/net/latest/kube-addon/).
|
||||
|
||||
Weave Net works on `amd64`, `arm`, `arm64` and `ppc64le` without any extra action required.
|
||||
Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address
|
||||
if they don't know their PodIP.
|
||||
|
||||
```shell
|
||||
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
|
||||
```
|
||||
{{% /tab %}}
|
||||
|
||||
{{% tab name="JuniperContrail/TungstenFabric" %}}
|
||||
Provides overlay SDN solution, delivering multicloud networking, hybrid cloud networking,
|
||||
simultaneous overlay-underlay support, network policy enforcement, network isolation,
|
||||
service chaining and flexible load balancing.
|
||||
|
||||
There are multiple, flexible ways to install JuniperContrail/TungstenFabric CNI.
|
||||
|
||||
Kindly refer to this quickstart: [TungstenFabric](https://tungstenfabric.github.io/website/)
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
Once a pod network has been installed, you can confirm that it is working by
|
||||
checking that the CoreDNS pod is Running in the output of `kubectl get pods --all-namespaces`.
|
||||
And once the CoreDNS pod is up and running, you can continue by joining your nodes.
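
For example (pod names will differ on your cluster; CoreDNS keeps the legacy `k8s-app=kube-dns` label):

```shell
# Wait until the CoreDNS pods in kube-system report Running.
kubectl get pods --all-namespaces
kubectl get pods -n kube-system -l k8s-app=kube-dns
```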
|
||||
|
||||
If your network is not working or CoreDNS is not in the Running state, check
|
||||
out our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).
|
||||
|
||||
### コントロールプレーンノードの隔離
|
||||
|
||||
By default, your cluster will not schedule pods on the master for security
|
||||
reasons. If you want to be able to schedule pods on the master, e.g. for a
|
||||
single-machine Kubernetes cluster for development, run:
|
||||
|
||||
```bash
|
||||
kubectl taint nodes --all node-role.kubernetes.io/master-
|
||||
```
|
||||
|
||||
With output looking something like:
|
||||
|
||||
```
|
||||
node "test-01" untainted
|
||||
taint "node-role.kubernetes.io/master:" not found
|
||||
taint "node-role.kubernetes.io/master:" not found
|
||||
```
|
||||
|
||||
This will remove the `node-role.kubernetes.io/master` taint from any nodes that
|
||||
have it, including the master node, meaning that the scheduler will then be able
|
||||
to schedule pods everywhere.
|
||||
|
||||
### ノードの追加 {#join-nodes}
|
||||
|
||||
The nodes are where your workloads (containers and pods, etc) run. To add new nodes to your cluster do the following for each machine:
|
||||
|
||||
* SSH to the machine
|
||||
* Become root (e.g. `sudo su -`)
|
||||
* Run the command that was output by `kubeadm init`. For example:
|
||||
|
||||
``` bash
|
||||
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
|
||||
```
|
||||
|
||||
If you do not have the token, you can get it by running the following command on the master node:
|
||||
|
||||
``` bash
|
||||
kubeadm token list
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
``` console
|
||||
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi   23h   2018-06-12T02:51:28Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token
|
||||
```
|
||||
|
||||
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired,
|
||||
you can create a new token by running the following command on the master node:
|
||||
|
||||
``` bash
|
||||
kubeadm token create
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
``` console
|
||||
5didvk.d09sbcov8ph2amjw
|
||||
```
|
||||
|
||||
If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the following command chain on the master node:
|
||||
|
||||
``` bash
|
||||
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
|
||||
openssl dgst -sha256 -hex | sed 's/^.* //'
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
``` console
|
||||
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
To specify an IPv6 tuple for `<master-ip>:<master-port>`, IPv6 address must be enclosed in square brackets, for example: `[fd00::101]:2073`.
|
||||
{{< /note >}}
|
||||
|
||||
The output should look something like:
|
||||
|
||||
```
|
||||
[preflight] Running pre-flight checks
|
||||
|
||||
... (log output of join workflow) ...
|
||||
|
||||
Node join complete:
|
||||
* Certificate signing request sent to master and response
|
||||
received.
|
||||
* Kubelet informed of new secure connection details.
|
||||
|
||||
Run 'kubectl get nodes' on the master to see this machine join.
|
||||
```
|
||||
|
||||
A few seconds later, you should notice this node in the output from `kubectl get
|
||||
nodes` when run on the master.
|
||||
|
||||
### (任意) マスター以外のマシンからのクラスター操作
|
||||
|
||||
In order to get a kubectl on some other computer (e.g. laptop) to talk to your
|
||||
cluster, you need to copy the administrator kubeconfig file from your master
|
||||
to your workstation like this:
|
||||
|
||||
``` bash
|
||||
scp root@<master ip>:/etc/kubernetes/admin.conf .
|
||||
kubectl --kubeconfig ./admin.conf get nodes
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
The example above assumes SSH access is enabled for root. If that is not the
|
||||
case, you can copy the `admin.conf` file to be accessible by some other user
|
||||
and `scp` using that other user instead.
|
||||
|
||||
The `admin.conf` file gives the user _superuser_ privileges over the cluster.
|
||||
This file should be used sparingly. For normal users, it's recommended to
|
||||
generate a unique credential to which you whitelist privileges. You can do
|
||||
this with the `kubeadm alpha kubeconfig user --client-name <CN>`
|
||||
command. That command will print out a KubeConfig file to STDOUT which you
|
||||
should save to a file and distribute to your user. After that, whitelist
|
||||
privileges by using `kubectl create (cluster)rolebinding`.
|
||||
{{< /note >}}
|
||||
|
||||
### (任意) APIサーバーをlocalhostへプロキシ
|
||||
|
||||
If you want to connect to the API Server from outside the cluster you can use
|
||||
`kubectl proxy`:
|
||||
|
||||
```bash
|
||||
scp root@<master ip>:/etc/kubernetes/admin.conf .
|
||||
kubectl --kubeconfig ./admin.conf proxy
|
||||
```
|
||||
|
||||
You can now access the API Server locally at `http://localhost:8001/api/v1`
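
A quick way to confirm the proxy is working, run on the machine where `kubectl proxy` is running:

```shell
# List namespaces through the local proxy endpoint.
curl http://localhost:8001/api/v1/namespaces
```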
|
||||
|
||||
## クラスターの削除 {#tear-down}
|
||||
|
||||
To undo what kubeadm did, you should first [drain the
|
||||
node](/docs/reference/generated/kubectl/kubectl-commands#drain) and make
|
||||
sure that the node is empty before shutting it down.
|
||||
|
||||
Talking to the master with the appropriate credentials, run:
|
||||
|
||||
```bash
|
||||
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
|
||||
kubectl delete node <node name>
|
||||
```
|
||||
|
||||
Then, on the node being removed, reset all kubeadm installed state:
|
||||
|
||||
```bash
|
||||
kubeadm reset
|
||||
```
|
||||
|
||||
If you wish to start over simply run `kubeadm init` or `kubeadm join` with the
|
||||
appropriate arguments.
|
||||
|
||||
More options and information are available in the
[`kubeadm reset` command reference](/docs/reference/setup-tools/kubeadm/kubeadm-reset/).
|
||||
|
||||
## クラスターの維持 {#lifecycle}
|
||||
|
||||
Instructions for maintaining kubeadm clusters (e.g. upgrades, downgrades, etc.) can be found [here](/docs/tasks/administer-cluster/kubeadm).
|
||||
|
||||
## 他アドオンの参照 {#other-addons}
|
||||
|
||||
See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons,
|
||||
including tools for logging, monitoring, network policy, visualization &
|
||||
control of your Kubernetes cluster.
|
||||
|
||||
## 次の手順 {#whats-next}
|
||||
|
||||
* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
|
||||
* Learn about kubeadm's advanced usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm)
|
||||
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
|
||||
* Configure log rotation. You can use **logrotate** for that. When using Docker, you can specify log rotation options for Docker daemon, for example `--log-driver=json-file --log-opt=max-size=10m --log-opt=max-file=5`. See [Configure and troubleshoot the Docker daemon](https://docs.docker.com/engine/admin/) for more details.
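
As an illustration of the Docker-based option, the same log options can be written to `/etc/docker/daemon.json`; the values shown are the ones quoted above, not recommendations, and you should merge them into any existing `daemon.json` (such as the one created during CRI installation) rather than overwrite it:

```shell
# Configure Docker's json-file log driver to rotate container logs.
cat > /etc/docker/daemon.json <<EOF
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
EOF
systemctl restart docker
```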
|
||||
|
||||
## フィードバック {#feedback}
|
||||
|
||||
* For bugs, visit [kubeadm Github issue tracker](https://github.com/kubernetes/kubeadm/issues)
|
||||
* For support, visit kubeadm Slack Channel:
|
||||
[#kubeadm](https://kubernetes.slack.com/messages/kubeadm/)
|
||||
* General SIG Cluster Lifecycle Development Slack Channel:
|
||||
[#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
|
||||
* SIG Cluster Lifecycle [SIG information](#TODO)
|
||||
* SIG Cluster Lifecycle Mailing List:
|
||||
[kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
|
||||
|
||||
## バージョン互換ポリシー {#version-skew-policy}
|
||||
|
||||
The kubeadm CLI tool of version vX.Y may deploy clusters with a control plane of version vX.Y or vX.(Y-1).
|
||||
kubeadm CLI vX.Y can also upgrade an existing kubeadm-created cluster of version vX.(Y-1).
|
||||
|
||||
Because we can't see into the future, kubeadm CLI vX.Y may or may not be able to deploy vX.(Y+1) clusters.
|
||||
|
||||
Example: kubeadm v1.8 can deploy both v1.7 and v1.8 clusters and upgrade v1.7 kubeadm-created clusters to
|
||||
v1.8.
|
||||
|
||||
Please also check our [installation guide](/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)
|
||||
for more information on the version skew between kubelets and the control plane.
|
||||
|
||||
## kubeadmは様々なプラットフォームで動く
|
||||
|
||||
kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x
|
||||
following the [multi-platform
|
||||
proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multi-platform.md).
|
||||
|
||||
Multiplatform container images for the control plane and addons are also supported since v1.12.
|
||||
|
||||
Only some of the network providers offer solutions for all platforms. Please consult the list of
|
||||
network providers above or the documentation from each provider to figure out whether the provider
|
||||
supports your chosen platform.
|
||||
|
||||
## 制限事項 {#limitations}
|
||||
|
||||
Please note: kubeadm is a work in progress and these limitations will be
|
||||
addressed in due course.
|
||||
|
||||
1. The cluster created here has a single master, with a single etcd database
|
||||
running on it. This means that if the master fails, your cluster may lose
|
||||
data and may need to be recreated from scratch. Adding HA support
|
||||
(multiple etcd servers, multiple API servers, etc) to kubeadm is
|
||||
still a work-in-progress.
|
||||
|
||||
Workaround: regularly
|
||||
[back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html). The
|
||||
etcd data directory configured by kubeadm is at `/var/lib/etcd` on the master.
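
As a rough illustration of such a backup, the sketch below takes a snapshot with `etcdctl` (v3 API), assuming the etcd client certificates that kubeadm places under `/etc/kubernetes/pki/etcd/` and an etcd member reachable on `127.0.0.1:2379`; adjust paths, endpoint, and destination for your environment:

```bash
# Illustrative snapshot of the kubeadm-managed etcd, run on the master.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  snapshot save /var/backups/etcd-snapshot.db
```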

## トラブルシューティング {#troubleshooting}

If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).

---
title: Options for Highly Available Topology
content_template: templates/concept
weight: 50
---

{{% capture overview %}}

This page explains the two options for configuring the topology of your highly available (HA) Kubernetes clusters.

You can set up an HA cluster:

- With stacked control plane nodes, where etcd nodes are colocated with control plane nodes
- With external etcd nodes, where etcd runs on separate nodes from the control plane

You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster.

{{% /capture %}}

{{% capture body %}}

## Stacked etcd topology

A stacked HA cluster is a [topology](https://en.wikipedia.org/wiki/Network_topology) where the distributed
data storage cluster provided by etcd is stacked on top of the cluster formed by the nodes managed by
kubeadm that run control plane components.

Each control plane node runs an instance of the `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager`.
The `kube-apiserver` is exposed to worker nodes using a load balancer.

Each control plane node creates a local etcd member and this etcd member communicates only with
the `kube-apiserver` of this node. The same applies to the local `kube-controller-manager`
and `kube-scheduler` instances.

This topology couples the control planes and etcd members on the same nodes. It is simpler to set up than a cluster
with external etcd nodes, and simpler to manage for replication.

However, a stacked cluster runs the risk of failed coupling. If one node goes down, both an etcd member and a control
plane instance are lost, and redundancy is compromised. You can mitigate this risk by adding more control plane nodes.

You should therefore run a minimum of three stacked control plane nodes for an HA cluster.

This is the default topology in kubeadm. A local etcd member is created automatically
on control plane nodes when using `kubeadm init` and `kubeadm join --experimental-control-plane`.

![Stacked etcd topology](/images/kubeadm/kubeadm-ha-topology-stacked-etcd.svg)

## External etcd topology

An HA cluster with external etcd is a [topology](https://en.wikipedia.org/wiki/Network_topology) where the distributed data storage cluster provided by etcd is external to the cluster formed by the nodes that run control plane components.

Like the stacked etcd topology, each control plane node in an external etcd topology runs an instance of the `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager`. And the `kube-apiserver` is exposed to worker nodes using a load balancer. However, etcd members run on separate hosts, and each etcd host communicates with the `kube-apiserver` of each control plane node.

This topology decouples the control plane and etcd member. It therefore provides an HA setup where
losing a control plane instance or an etcd member has less impact and does not affect
the cluster redundancy as much as the stacked HA topology.

However, this topology requires twice the number of hosts as the stacked HA topology.
A minimum of three hosts for control plane nodes and three hosts for etcd nodes are required for an HA cluster with this topology.

![External etcd topology](/images/kubeadm/kubeadm-ha-topology-external-etcd.svg)

{{% /capture %}}

{{% capture whatsnext %}}

- [Set up a highly available cluster with kubeadm](/docs/setup/independent/high-availability/)

{{% /capture %}}

---
title: kubeadmを使用した高可用性クラスターの作成
content_template: templates/task
weight: 60
---

{{% capture overview %}}

This page explains two different approaches to setting up a highly available Kubernetes
cluster using kubeadm:

- With stacked control plane nodes. This approach requires less infrastructure. The etcd members
  and control plane nodes are co-located.
- With an external etcd cluster. This approach requires more infrastructure. The
  control plane nodes and etcd members are separated.

Before proceeding, you should carefully consider which approach best meets the needs of your applications
and environment. [This comparison topic](/docs/setup/independent/ha-topology/) outlines the advantages and disadvantages of each.

Your clusters must run Kubernetes version 1.12 or later. You should also be aware that
setting up HA clusters with kubeadm is still experimental and will be further simplified
in future versions. You might encounter issues with upgrading your clusters, for example.
We encourage you to try either approach, and provide us with feedback in the kubeadm
[issue tracker](https://github.com/kubernetes/kubeadm/issues/new).

Note that the alpha feature gate `HighAvailability` is deprecated in v1.12 and removed in v1.13.

See also [The HA upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha).

{{< caution >}}
This page does not address running your cluster on a cloud provider. In a cloud
environment, neither approach documented here works with Service objects of type
LoadBalancer, or with dynamic PersistentVolumes.
{{< /caution >}}

{{% /capture %}}

{{% capture prerequisites %}}

For both methods you need this infrastructure:

- Three machines that meet [kubeadm's minimum
  requirements](/docs/setup/independent/install-kubeadm/#before-you-begin) for
  the masters
- Three machines that meet [kubeadm's minimum
  requirements](/docs/setup/independent/install-kubeadm/#before-you-begin) for
  the workers
- Full network connectivity between all machines in the cluster (public or
  private network)
- sudo privileges on all machines
- SSH access from one device to all nodes in the system
- `kubeadm` and `kubelet` installed on all machines. `kubectl` is optional.

For the external etcd cluster only, you also need:

- Three additional machines for etcd members

{{< note >}}
The following examples run Calico as the Pod networking provider. If you run another
networking provider, make sure to replace any default values as needed.
{{< /note >}}

{{% /capture %}}

{{% capture steps %}}

## 両手順における最初のステップ

{{< note >}}
**Note**: All commands on any control plane or etcd node should be
run as root.
{{< /note >}}

- Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and
  some like Weave do not. See [the CNI network
  documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network).
  To add a pod CIDR set the `podSubnet: 192.168.0.0/16` field under
  the `networking` object of `ClusterConfiguration`, as sketched below.
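
A minimal sketch of what that looks like, assuming the Calico default CIDR mentioned above and the `kubeadm-config.yaml` file created later on this page:

```bash
# Illustrative: add the pod CIDR to the ClusterConfiguration used by kubeadm.
cat <<EOF >> kubeadm-config.yaml
networking:
  podSubnet: "192.168.0.0/16"
EOF
```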

### kube-apiserver用にロードバランサーを作成

{{< note >}}
There are many configurations for load balancers. The following example is only one
option. Your cluster requirements may need a different configuration.
{{< /note >}}

1. Create a kube-apiserver load balancer with a name that resolves to DNS.

    - In a cloud environment you should place your control plane nodes behind a TCP
      forwarding load balancer. This load balancer distributes traffic to all
      healthy control plane nodes in its target list. The health check for
      an apiserver is a TCP check on the port the kube-apiserver listens on
      (default value `:6443`).

    - It is not recommended to use an IP address directly in a cloud environment.

    - The load balancer must be able to communicate with all control plane nodes
      on the apiserver port. It must also allow incoming traffic on its
      listening port.

    - [HAProxy](http://www.haproxy.org/) can be used as a load balancer; a minimal
      configuration sketch is shown after this list.

    - Make sure the address of the load balancer always matches
      the address of kubeadm's `ControlPlaneEndpoint`.

1. Add the first control plane nodes to the load balancer and test the
   connection:

    ```sh
    nc -v LOAD_BALANCER_IP PORT
    ```

    - A connection refused error is expected because the apiserver is not yet
      running. A timeout, however, means the load balancer cannot communicate
      with the control plane node. If a timeout occurs, reconfigure the load
      balancer to communicate with the control plane node.

1. Add the remaining control plane nodes to the load balancer target group.
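
As referenced above, a minimal HAProxy sketch that TCP-forwards port 6443 to the apiservers. The node IP addresses, timeouts, and health-check settings are placeholders and should be adapted to your environment:

```bash
# Illustrative /etc/haproxy/haproxy.cfg: forward TCP 6443 to every control plane node.
cat <<EOF > /etc/haproxy/haproxy.cfg
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend kube-apiserver
    bind *:6443
    default_backend kube-apiserver

backend kube-apiserver
    option tcp-check
    balance roundrobin
    server master0 10.0.0.10:6443 check
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
EOF
systemctl restart haproxy
```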

### SSHの設定

SSH is required if you want to control all nodes from a single machine.

1. Enable ssh-agent on your main device that has access to all other nodes in
   the system:

    ```
    eval $(ssh-agent)
    ```

1. Add your SSH identity to the session:

    ```
    ssh-add ~/.ssh/path_to_private_key
    ```

1. SSH between nodes to check that the connection is working correctly.

    - When you SSH to any node, make sure to add the `-A` flag:

      ```
      ssh -A 10.0.0.7
      ```

    - When using sudo on any node, make sure to preserve the environment so SSH
      forwarding works:

      ```
      sudo -E -s
      ```

## 積み重なったコントロールプレーンとetcdノード

### 最初のコントロールプレーンノードの手順

1. On the first control plane node, create a configuration file called `kubeadm-config.yaml`:

        apiVersion: kubeadm.k8s.io/v1beta1
        kind: ClusterConfiguration
        kubernetesVersion: stable
        apiServer:
          certSANs:
          - "LOAD_BALANCER_DNS"
        controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"

    - `kubernetesVersion` should be set to the Kubernetes version to use. This
      example uses `stable`.
    - `controlPlaneEndpoint` should match the address or DNS and port of the load balancer.
    - It's recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.

1. Make sure that the node is in a clean state:

    ```sh
    sudo kubeadm init --config=kubeadm-config.yaml
    ```

    You should see something like:

    ```sh
    ...
    You can now join any number of machines by running the following on each node
    as root:

    kubeadm join 192.168.0.200:6443 --token j04n3m.octy8zely83cy2ts --discovery-token-ca-cert-hash sha256:84938d2a22203a8e56a787ec0c6ddad7bc7dbd52ebabc62fd5f4dbea72b14d1f
    ```

1. Copy this output to a text file. You will need it later to join other control plane nodes to the
   cluster.

1. Apply the Weave CNI plugin:

    ```sh
    kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
    ```

1. Type the following and watch the pods of the components get started:

    ```sh
    kubectl get pod -n kube-system -w
    ```

    - It's recommended that you join new control plane nodes only after the first node has finished initializing.

1. Copy the certificate files from the first control plane node to the rest:

    In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the
    other control plane nodes.

    ```sh
    USER=ubuntu # customizable
    CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
    for host in ${CONTROL_PLANE_IPS}; do
        scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
        scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
        scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
        scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
        scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
        scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
        scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
        scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
        scp /etc/kubernetes/admin.conf "${USER}"@$host:
    done
    ```

### 残りのコントロールプレーンノードの手順

1. Move the files created by the previous step where `scp` was used:

    ```sh
    USER=ubuntu # customizable
    mkdir -p /etc/kubernetes/pki/etcd
    mv /home/${USER}/ca.crt /etc/kubernetes/pki/
    mv /home/${USER}/ca.key /etc/kubernetes/pki/
    mv /home/${USER}/sa.pub /etc/kubernetes/pki/
    mv /home/${USER}/sa.key /etc/kubernetes/pki/
    mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
    mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
    mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
    mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
    mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
    ```

    This process writes all the requested files in the `/etc/kubernetes` folder.

1. Start `kubeadm join` on this node using the join command that was previously given to you by `kubeadm init` on
   the first node. It should look something like this:

    ```sh
    sudo kubeadm join 192.168.0.200:6443 --token j04n3m.octy8zely83cy2ts --discovery-token-ca-cert-hash sha256:84938d2a22203a8e56a787ec0c6ddad7bc7dbd52ebabc62fd5f4dbea72b14d1f --experimental-control-plane
    ```

    - Notice the addition of the `--experimental-control-plane` flag. This flag automates joining this
      control plane node to the cluster.

1. Type the following and watch the pods of the components get started:

    ```sh
    kubectl get pod -n kube-system -w
    ```

1. Repeat these steps for the rest of the control plane nodes.

## 外部のetcdノード

### etcdクラスターの構築

- Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/)
  to set up the etcd cluster.

### 最初のコントロールプレーンノードの構築

1. Copy the following files from any node from the etcd cluster to this node:

    ```sh
    export CONTROL_PLANE="ubuntu@10.0.0.7"
    scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
    scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
    scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
    ```

    - Replace the value of `CONTROL_PLANE` with the `user@host` of this machine.

1. Create a file called `kubeadm-config.yaml` with the following contents:

        apiVersion: kubeadm.k8s.io/v1beta1
        kind: ClusterConfiguration
        kubernetesVersion: stable
        apiServer:
          certSANs:
          - "LOAD_BALANCER_DNS"
        controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
        etcd:
          external:
            endpoints:
            - https://ETCD_0_IP:2379
            - https://ETCD_1_IP:2379
            - https://ETCD_2_IP:2379
            caFile: /etc/kubernetes/pki/etcd/ca.crt
            certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
            keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key

    - The difference between stacked etcd and external etcd here is that we are using the `external` field for `etcd` in the kubeadm config. In the case of the stacked etcd topology this is managed automatically.

    - Replace the following variables in the template with the appropriate values for your cluster:

      - `LOAD_BALANCER_DNS`
      - `LOAD_BALANCER_PORT`
      - `ETCD_0_IP`
      - `ETCD_1_IP`
      - `ETCD_2_IP`

1. Run `kubeadm init --config kubeadm-config.yaml` on this node.

1. Write the join command that is returned to a text file for later use.

1. Apply the Weave CNI plugin:

    ```sh
    kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
    ```

### 残りのコントロールプレーンノードの手順

To add the rest of the control plane nodes, follow [these instructions](#steps-for-the-rest-of-the-control-plane-nodes).
The steps are the same as for the stacked etcd setup, with the exception that a local
etcd member is not created.

To summarize:

- Make sure the first control plane node is fully initialized.
- Copy certificates between the first control plane node and the other control plane nodes.
- Join each control plane node with the join command you saved to a text file, plus add the `--experimental-control-plane` flag.

## コントロールプレーン起動後の共通タスク

### Podネットワークのインストール

[Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install
the pod network. Make sure this corresponds to whichever pod CIDR you provided
in the master configuration file.

### ワーカーのインストール

Each worker node can now be joined to the cluster with the command returned from any of the
`kubeadm init` commands. The flag `--experimental-control-plane` should not be added to worker nodes.

{{% /capture %}}

---
title: kubeadmのインストール
content_template: templates/task
weight: 20
---

{{% capture overview %}}

<img src="https://raw.githubusercontent.com/cncf/artwork/master/kubernetes/certified-kubernetes/versionless/color/certified-kubernetes-color.png" align="right" width="150px">This page shows how to install the `kubeadm` toolbox.
For information on how to create a cluster with kubeadm once you have performed this installation process,
see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/) page.

{{% /capture %}}

{{% capture prerequisites %}}

* One or more machines running one of:
  - Ubuntu 16.04+
  - Debian 9
  - CentOS 7
  - RHEL 7
  - Fedora 25/26 (best-effort)
  - HypriotOS v1.0.1+
  - Container Linux (tested with 1800.6.0)
* 2 GB or more of RAM per machine (any less will leave little room for your apps)
* 2 CPUs or more
* Full network connectivity between all machines in the cluster (public or private network is fine)
* Unique hostname, MAC address, and product_uuid for every node. See [here](#MACアドレスとproduct_uuidが全てのノードでユニークであることの検証) for more details.
* Certain ports are open on your machines. See [here](#必須ポートの確認) for more details.
* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly. A minimal example is sketched after this list.
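
A rough sketch of disabling swap on a typical systemd-based Linux host; the `/etc/fstab` layout varies between distributions, so review the file before and after editing it:

```bash
# Turn off swap for the running system.
swapoff -a
# Comment out swap entries so swap stays off after a reboot (illustrative sed pattern; verify the result).
sed -i '/ swap / s/^/#/' /etc/fstab
```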

{{% /capture %}}

{{% capture steps %}}

## MACアドレスとproduct_uuidが全てのノードでユニークであることの検証

* You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a`
* The product_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid`

It is very likely that hardware devices will have unique addresses, although some virtual machines may have
identical values. Kubernetes uses these values to uniquely identify the nodes in the cluster.
If these values are not unique to each node, the installation process
may [fail](https://github.com/kubernetes/kubeadm/issues/31).

## ネットワークアダプタの確認

If you have more than one network adapter, and your Kubernetes components are not reachable on the default
route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter.
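
For example, a hedged sketch of adding such a route with `iproute2`; the service CIDR `10.96.0.0/12` and the interface name `eth1` are placeholders for your own values:

```bash
# Illustrative: send traffic for the cluster's service CIDR out through the second adapter.
ip route add 10.96.0.0/12 dev eth1
```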

## 必須ポートの確認

### マスターノード

| Protocol | Direction | Port Range | Purpose                 | Used By                   |
|----------|-----------|------------|-------------------------|---------------------------|
| TCP      | Inbound   | 6443*      | Kubernetes API server   | All                       |
| TCP      | Inbound   | 2379-2380  | etcd server client API  | kube-apiserver, etcd      |
| TCP      | Inbound   | 10250      | Kubelet API             | Self, Control plane       |
| TCP      | Inbound   | 10251      | kube-scheduler          | Self                      |
| TCP      | Inbound   | 10252      | kube-controller-manager | Self                      |

### ワーカーノード

| Protocol | Direction | Port Range  | Purpose               | Used By                 |
|----------|-----------|-------------|-----------------------|-------------------------|
| TCP      | Inbound   | 10250       | Kubelet API           | Self, Control plane     |
| TCP      | Inbound   | 30000-32767 | NodePort Services**   | All                     |

** Default port range for [NodePort Services](/docs/concepts/services-networking/service/).

Any port numbers marked with * are overridable, so you will need to ensure any
custom ports you provide are also open.

Although etcd ports are included in master nodes, you can also host your own
etcd cluster externally or on custom ports.

The pod network plugin you use (see below) may also require certain ports to be
open. Since this differs with each pod network plugin, please see the
documentation for the plugins about what port(s) those need.
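
If you want to verify from another machine that a required port is actually reachable, a quick hedged check with netcat; the address and port below are placeholders:

```bash
# Illustrative reachability test against a master node's API server port.
nc -zv 10.0.0.10 6443
```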

## ランタイムのインストール

Since v1.6.0, Kubernetes has enabled the use of CRI, Container Runtime Interface, by default.
The container runtime used by default is Docker, which is enabled through the built-in
`dockershim` CRI implementation inside of the `kubelet`.

Other CRI-based runtimes include:

- [containerd](https://github.com/containerd/cri) (CRI plugin built into containerd)
- [cri-o](https://github.com/kubernetes-incubator/cri-o)
- [frakti](https://github.com/kubernetes/frakti)
- [rkt](https://github.com/kubernetes-incubator/rktlet)

Refer to the [CRI installation instructions](/docs/setup/cri) for more information.

## kubeadm、kubelet、kubectlのインストール

You will install these packages on all of your machines:

* `kubeadm`: the command to bootstrap the cluster.

* `kubelet`: the component that runs on all of the machines in your cluster
    and does things like starting pods and containers.

* `kubectl`: the command line util to talk to your cluster.

kubeadm **will not** install or manage `kubelet` or `kubectl` for you, so you will
need to ensure they match the version of the Kubernetes control plane you want
kubeadm to install for you. If you do not, there is a risk of a version skew occurring that
can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the
kubelet and the control plane is supported, but the kubelet version may never exceed the API
server version. For example, kubelets running 1.7.0 should be fully compatible with a 1.8.0 API server,
but not vice versa.

{{< warning >}}
These instructions exclude all Kubernetes packages from any system upgrades.
This is because kubeadm and Kubernetes require
[special attention to upgrade](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/).
{{</ warning >}}

For more information on version skews, please read our
[version skew policy](/docs/setup/independent/create-cluster-kubeadm/#version-skew-policy).

{{< tabs name="k8s_install" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
```bash
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
```
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable kubelet && systemctl start kubelet
```

**Note:**

- Setting SELinux in permissive mode by running `setenforce 0` and `sed ...` effectively disables it.
  This is required to allow containers to access the host filesystem, which is needed by pod networks for example.
  You have to do this until SELinux support is improved in the kubelet.
- Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure
  `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config, e.g.

  ```bash
  cat <<EOF > /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF
  sysctl --system
  ```
{{% /tab %}}
{{% tab name="Container Linux" %}}
Install CNI plugins (required for most pod networks):

```bash
CNI_VERSION="v0.6.0"
mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz
```

Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI))

```bash
CRICTL_VERSION="v1.11.1"
mkdir -p /opt/bin
curl -L "https://github.com/kubernetes-incubator/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C /opt/bin -xz
```

Install `kubeadm`, `kubelet`, `kubectl` and add a `kubelet` systemd service:

```bash
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"

mkdir -p /opt/bin
cd /opt/bin
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}

curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```

Enable and start `kubelet`:

```bash
systemctl enable kubelet && systemctl start kubelet
```
{{% /tab %}}
{{< /tabs >}}

The kubelet is now restarting every few seconds, as it waits in a crashloop for
kubeadm to tell it what to do.

## マスターノードのkubeletによって使用されるcgroupドライバの設定

When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet
and set it in the `/var/lib/kubelet/kubeadm-flags.env` file during runtime.

If you are using a different CRI, you have to modify the file
`/etc/default/kubelet` with your `cgroup-driver` value, like so:

```bash
KUBELET_EXTRA_ARGS=--cgroup-driver=<value>
```

This file will be used by `kubeadm init` and `kubeadm join` to source extra
user defined arguments for the kubelet.

Please mind that you **only** have to do this if the cgroup driver of your CRI
is not `cgroupfs`, because that is the default value in the kubelet already.

Restarting the kubelet is required:

```bash
systemctl daemon-reload
systemctl restart kubelet
```

## トラブルシュート

If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).

{{% /capture %}}

{{% capture whatsnext %}}

* [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/)

{{% /capture %}}

---
title: kubeadmを使用したクラスター内の各kubeletの設定
content_template: templates/concept
weight: 80
---

{{% capture overview %}}

{{< feature-state for_k8s_version="1.11" state="stable" >}}

The lifecycle of the kubeadm CLI tool is decoupled from the
[kubelet](/docs/reference/command-line-tools-reference/kubelet), which is a daemon that runs
on each node within the Kubernetes cluster. The kubeadm CLI tool is executed by the user when Kubernetes is
initialized or upgraded, whereas the kubelet is always running in the background.

Since the kubelet is a daemon, it needs to be maintained by some kind of init
system or service manager. When the kubelet is installed using DEBs or RPMs,
systemd is configured to manage the kubelet. You can use a different service
manager instead, but you need to configure it manually.

Some kubelet configuration details need to be the same across all kubelets involved in the cluster, while
other configuration aspects need to be set on a per-kubelet basis, to accommodate the different
characteristics of a given machine, such as OS, storage, and networking. You can manage the configuration
of your kubelets manually, but [kubeadm now provides a `KubeletConfiguration` API type for managing your
kubelet configurations centrally](#configure-kubelets-using-kubeadm).

{{% /capture %}}

{{% capture body %}}

## Kubeletの設定パターン

The following sections describe patterns for kubelet configuration that are simplified by
using kubeadm, rather than managing the kubelet configuration for each Node manually.

### 各kubeletにクラスターレベルの設定を配布

You can provide the kubelet with default values to be used by `kubeadm init` and `kubeadm join`
commands. Interesting examples include using a different CRI runtime or setting the default subnet
used by services.

If you want your services to use the subnet `10.96.0.0/12` as the default for services, you can pass
the `--service-cidr` parameter to kubeadm:

```bash
kubeadm init --service-cidr 10.96.0.0/12
```

Virtual IPs for services are now allocated from this subnet. You also need to set the DNS address used
by the kubelet, using the `--cluster-dns` flag. This setting needs to be the same for every kubelet
on every manager and Node in the cluster. The kubelet provides a versioned, structured API object
that can configure most parameters in the kubelet and push out this configuration to each running
kubelet in the cluster. This object is called **the kubelet's ComponentConfig**.
The ComponentConfig allows the user to specify flags such as the cluster DNS IP addresses expressed as
a list of values to a camelCased key, illustrated by the following example:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 10.96.0.10
```

For more details on the ComponentConfig have a look at [this section](#configure-kubelets-using-kubeadm).

### インスタンス固有の設定内容を適用

Some hosts require specific kubelet configurations, due to differences in hardware, operating system,
networking, or other host-specific parameters. The following list provides a few examples.

- The path to the DNS resolution file, as specified by the `--resolv-conf` kubelet
  configuration flag, may differ among operating systems, or depending on whether you are using
  `systemd-resolved`. If this path is wrong, DNS resolution will fail on the Node whose kubelet
  is configured incorrectly.

- The Node API object `.metadata.name` is set to the machine's hostname by default,
  unless you are using a cloud provider. You can use the `--hostname-override` flag to override the
  default behavior if you need to specify a Node name different from the machine's hostname.

- Currently, the kubelet cannot automatically detect the cgroup driver used by the CRI runtime,
  but the value of `--cgroup-driver` must match the cgroup driver used by the CRI runtime to ensure
  the health of the kubelet.

- Depending on the CRI runtime your cluster uses, you may need to specify different flags to the kubelet.
  For instance, when using Docker, you need to specify flags such as `--network-plugin=cni`, but if you
  are using an external runtime, you need to specify `--container-runtime=remote` and specify the CRI
  endpoint using the `--container-runtime-endpoint=<path>` flag.

You can specify these flags by configuring an individual kubelet's configuration in your service manager,
such as systemd, as sketched below.
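
For example, a minimal sketch of passing instance-specific flags through the `KUBELET_EXTRA_ARGS` environment file described later on this page; the flag values themselves are placeholders:

```bash
# Illustrative /etc/default/kubelet (or /etc/sysconfig/kubelet on RPM-based systems).
KUBELET_EXTRA_ARGS="--hostname-override=node-1 --resolv-conf=/run/systemd/resolve/resolv.conf"
```

The kubelet has to be restarted for a change to this file to take effect.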

## kubeadmを使用したkubeletの設定

It is possible to configure the kubelet that kubeadm will start if a custom `KubeletConfiguration`
API object is passed with a configuration file like so `kubeadm ... --config some-config-file.yaml`.

By calling `kubeadm config print-defaults --api-objects KubeletConfiguration` you can
see all the default values for this structure.

Also have a look at the [API reference for the
kubelet ComponentConfig](https://godoc.org/k8s.io/kubernetes/pkg/kubelet/apis/config#KubeletConfiguration)
for more information on the individual fields.
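
Putting the two commands above together, a hedged sketch of generating a starting point and feeding an edited copy back to kubeadm; the file names are arbitrary, and in practice the file passed to kubeadm usually also contains your ClusterConfiguration:

```bash
# Illustrative: dump the default KubeletConfiguration as a starting point for edits.
kubeadm config print-defaults --api-objects KubeletConfiguration > kubelet-config.yaml
# After editing, merge it into the configuration file you pass to kubeadm, e.g.:
kubeadm init --config some-config-file.yaml
```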

### `kubeadm init`実行時の流れ

When you call `kubeadm init`, the kubelet configuration is marshalled to disk
at `/var/lib/kubelet/config.yaml`, and also uploaded to a ConfigMap in the cluster. The ConfigMap
is named `kubelet-config-1.X`, where `.X` is the minor version of the Kubernetes version you are
initializing. A kubelet configuration file is also written to `/etc/kubernetes/kubelet.conf` with the
baseline cluster-wide configuration for all kubelets in the cluster. This configuration file
points to the client certificates that allow the kubelet to communicate with the API server. This
addresses the need to
[propagate cluster-level configuration to each kubelet](#propagating-cluster-level-configuration-to-each-kubelet).

To address the second pattern of
[providing instance-specific configuration details](#providing-instance-specific-configuration-details),
kubeadm writes an environment file to `/var/lib/kubelet/kubeadm-flags.env`, which contains a list of
flags to pass to the kubelet when it starts. The flags are presented in the file like this:

```bash
KUBELET_KUBEADM_ARGS="--flag1=value1 --flag2=value2 ..."
```

In addition to the flags used when starting the kubelet, the file also contains dynamic
parameters such as the cgroup driver and whether to use a different CRI runtime socket
(`--cri-socket`).

After marshalling these two files to disk, kubeadm attempts to run the following two
commands, if you are using systemd:

```bash
systemctl daemon-reload && systemctl restart kubelet
```

If the reload and restart are successful, the normal `kubeadm init` workflow continues.

### `kubeadm join`実行時の流れ

When you run `kubeadm join`, kubeadm uses the Bootstrap Token credential to perform
a TLS bootstrap, which fetches the credential needed to download the
`kubelet-config-1.X` ConfigMap and writes it to `/var/lib/kubelet/config.yaml`. The dynamic
environment file is generated in exactly the same way as `kubeadm init`.

Next, `kubeadm` runs the following two commands to load the new configuration into the kubelet:

```bash
systemctl daemon-reload && systemctl restart kubelet
```

After the kubelet loads the new configuration, kubeadm writes the
`/etc/kubernetes/bootstrap-kubelet.conf` KubeConfig file, which contains a CA certificate and Bootstrap
Token. These are used by the kubelet to perform the TLS Bootstrap and obtain a unique
credential, which is stored in `/etc/kubernetes/kubelet.conf`. When this file is written, the kubelet
has finished performing the TLS Bootstrap.

## kubelet用のsystemdファイル

The configuration file installed by the kubeadm DEB or RPM package is written to
`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` and is used by systemd.

```none
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating
# the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably,
# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.
# KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```

This file specifies the default locations for all of the files managed by kubeadm for the kubelet.

- The KubeConfig file to use for the TLS Bootstrap is `/etc/kubernetes/bootstrap-kubelet.conf`,
  but it is only used if `/etc/kubernetes/kubelet.conf` does not exist.
- The KubeConfig file with the unique kubelet identity is `/etc/kubernetes/kubelet.conf`.
- The file containing the kubelet's ComponentConfig is `/var/lib/kubelet/config.yaml`.
- The dynamic environment file that contains `KUBELET_KUBEADM_ARGS` is sourced from `/var/lib/kubelet/kubeadm-flags.env`.
- The file that can contain user-specified flag overrides with `KUBELET_EXTRA_ARGS` is sourced from
  `/etc/default/kubelet` (for DEBs), or `/etc/sysconfig/kubelet` (for RPMs). `KUBELET_EXTRA_ARGS`
  is last in the flag chain and has the highest priority in the event of conflicting settings.

## Kubernetesバイナリとパッケージの内容

The DEB and RPM packages shipped with the Kubernetes releases are:

| Package name | Description |
|--------------|-------------|
| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and [the kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. |
| `kubelet` | Installs the `/usr/bin/kubelet` binary. |
| `kubectl` | Installs the `/usr/bin/kubectl` binary. |
| `kubernetes-cni` | Installs the official CNI binaries into the `/opt/cni/bin` directory. |
| `cri-tools` | Installs the `/usr/bin/crictl` binary from [https://github.com/kubernetes-incubator/cri-tools](https://github.com/kubernetes-incubator/cri-tools). |

{{% /capture %}}

---
title: kubeadmを使用した高可用性etcdクラスターの作成
content_template: templates/task
weight: 70
---

{{% capture overview %}}

Kubeadm defaults to running a single member etcd cluster in a static pod managed
by the kubelet on the control plane node. This is not a high availability setup
as the etcd cluster contains only one member and cannot sustain any members
becoming unavailable. This task walks through the process of creating a high
availability etcd cluster of three members that can be used as an external etcd
when using kubeadm to set up a Kubernetes cluster.

{{% /capture %}}

{{% capture prerequisites %}}

* Three hosts that can talk to each other over ports 2379 and 2380. This
  document assumes these default ports. However, they are configurable through
  the kubeadm config file.
* Each host must [have docker, kubelet, and kubeadm installed][toolbox].
* Some infrastructure to copy files between hosts. For example `ssh` and `scp`
  can satisfy this requirement.

[toolbox]: /docs/setup/independent/install-kubeadm/

{{% /capture %}}

{{% capture steps %}}

## クラスターの構築

The general approach is to generate all certs on one node and only distribute
the *necessary* files to the other nodes.

{{< note >}}
kubeadm contains all the necessary cryptographic machinery to generate
the certificates described below; no other cryptographic tooling is required for
this example.
{{< /note >}}

1. Configure the kubelet to be a service manager for etcd.

    Running etcd is simpler than running Kubernetes, so you must override the
    kubeadm-provided kubelet unit file by creating a new one with a higher
    precedence.

    ```sh
    cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
    Restart=always
    EOF

    systemctl daemon-reload
    systemctl restart kubelet
    ```

1. Create configuration files for kubeadm.

    Generate one kubeadm configuration file for each host that will have an etcd
    member running on it using the following script.

    ```sh
    # Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts
    export HOST0=10.0.0.6
    export HOST1=10.0.0.7
    export HOST2=10.0.0.8

    # Create temp directories to store files that will end up on other hosts.
    mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

    ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
    NAMES=("infra0" "infra1" "infra2")

    for i in "${!ETCDHOSTS[@]}"; do
    HOST=${ETCDHOSTS[$i]}
    NAME=${NAMES[$i]}
    cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
    apiVersion: "kubeadm.k8s.io/v1beta1"
    kind: ClusterConfiguration
    etcd:
        local:
            serverCertSANs:
            - "${HOST}"
            peerCertSANs:
            - "${HOST}"
            extraArgs:
                initial-cluster: infra0=https://${ETCDHOSTS[0]}:2380,infra1=https://${ETCDHOSTS[1]}:2380,infra2=https://${ETCDHOSTS[2]}:2380
                initial-cluster-state: new
                name: ${NAME}
                listen-peer-urls: https://${HOST}:2380
                listen-client-urls: https://${HOST}:2379
                advertise-client-urls: https://${HOST}:2379
                initial-advertise-peer-urls: https://${HOST}:2380
    EOF
    done
    ```

1. Generate the certificate authority

    If you already have a CA then the only action left is copying the CA's `crt` and
    `key` file to `/etc/kubernetes/pki/etcd/ca.crt` and
    `/etc/kubernetes/pki/etcd/ca.key`. After those files have been copied,
    proceed to the next step, "Create certificates for each member".

    If you do not already have a CA then run this command on `$HOST0` (where you
    generated the configuration files for kubeadm).

    ```
    kubeadm init phase certs etcd-ca
    ```

    This creates two files

    - `/etc/kubernetes/pki/etcd/ca.crt`
    - `/etc/kubernetes/pki/etcd/ca.key`

1. Create certificates for each member

    ```sh
    kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
    kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
    cp -R /etc/kubernetes/pki /tmp/${HOST2}/
    # cleanup non-reusable certificates
    find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

    kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
    kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
    cp -R /etc/kubernetes/pki /tmp/${HOST1}/
    find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

    kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
    kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
    # No need to move the certs because they are for HOST0

    # clean up certs that should not be copied off this host
    find /tmp/${HOST2} -name ca.key -type f -delete
    find /tmp/${HOST1} -name ca.key -type f -delete
    ```

1. Copy certificates and kubeadm configs

    The certificates have been generated and now they must be moved to their
    respective hosts.

    ```sh
    USER=ubuntu
    HOST=${HOST1}
    scp -r /tmp/${HOST}/* ${USER}@${HOST}:
    ssh ${USER}@${HOST}
    USER@HOST $ sudo -Es
    root@HOST $ chown -R root:root pki
    root@HOST $ mv pki /etc/kubernetes/
    ```

1. Ensure all expected files exist

    The complete list of required files on `$HOST0` is:

    ```
    /tmp/${HOST0}
    └── kubeadmcfg.yaml
    ---
    /etc/kubernetes/pki
    ├── apiserver-etcd-client.crt
    ├── apiserver-etcd-client.key
    └── etcd
        ├── ca.crt
        ├── ca.key
        ├── healthcheck-client.crt
        ├── healthcheck-client.key
        ├── peer.crt
        ├── peer.key
        ├── server.crt
        └── server.key
    ```

    On `$HOST1`:

    ```
    $HOME
    └── kubeadmcfg.yaml
    ---
    /etc/kubernetes/pki
    ├── apiserver-etcd-client.crt
    ├── apiserver-etcd-client.key
    └── etcd
        ├── ca.crt
        ├── healthcheck-client.crt
        ├── healthcheck-client.key
        ├── peer.crt
        ├── peer.key
        ├── server.crt
        └── server.key
    ```

    On `$HOST2`:

    ```
    $HOME
    └── kubeadmcfg.yaml
    ---
    /etc/kubernetes/pki
    ├── apiserver-etcd-client.crt
    ├── apiserver-etcd-client.key
    └── etcd
        ├── ca.crt
        ├── healthcheck-client.crt
        ├── healthcheck-client.key
        ├── peer.crt
        ├── peer.key
        ├── server.crt
        └── server.key
    ```

1. Create the static pod manifests

    Now that the certificates and configs are in place it's time to create the
    manifests. On each host run the `kubeadm` command to generate a static manifest
    for etcd.

    ```sh
    root@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
    root@HOST1 $ kubeadm init phase etcd local --config=/home/ubuntu/kubeadmcfg.yaml
    root@HOST2 $ kubeadm init phase etcd local --config=/home/ubuntu/kubeadmcfg.yaml
    ```

1. Optional: Check the cluster health

    ```sh
    docker run --rm -it \
    --net host \
    -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:${ETCD_TAG} etcdctl \
    --cert-file /etc/kubernetes/pki/etcd/peer.crt \
    --key-file /etc/kubernetes/pki/etcd/peer.key \
    --ca-file /etc/kubernetes/pki/etcd/ca.crt \
    --endpoints https://${HOST0}:2379 cluster-health
    ...
    cluster is healthy
    ```

    - Set `${ETCD_TAG}` to the version tag of your etcd image. For example `v3.2.24`.
    - Set `${HOST0}` to the IP address of the host you are testing.

{{% /capture %}}

{{% capture whatsnext %}}

Once you have a working three-member etcd cluster, you can continue setting up a
highly available control plane using the [external etcd method with
kubeadm](/docs/setup/independent/high-availability/).

{{% /capture %}}

---
title: kubeadmのトラブルシューティング
content_template: templates/concept
weight: 90
---

{{% capture overview %}}

As with any program, you might run into an error installing or running kubeadm.
This page lists some common failure scenarios and provides steps that can help you understand and fix the problem.

If your problem is not listed below, please follow the following steps:

- If you think your problem is a bug with kubeadm:
  - Go to [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues) and search for existing issues.
  - If no issue exists, please [open one](https://github.com/kubernetes/kubeadm/issues/new) and follow the issue template.

- If you are unsure about how kubeadm works, you can ask on [Slack](http://slack.k8s.io/) in #kubeadm, or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include
  relevant tags like `#kubernetes` and `#kubeadm` so folks can help you.

{{% /capture %}}

{{% capture body %}}

## インストール中に`ebtables`もしくは他の似たような実行プログラムが見つからない

If you see the following warnings while running `kubeadm init`

```sh
[preflight] WARNING: ebtables not found in system path
[preflight] WARNING: ethtool not found in system path
```

Then you may be missing `ebtables`, `ethtool` or a similar executable on your node. You can install them with the following commands:

- For Ubuntu/Debian users, run `apt install ebtables ethtool`.
- For CentOS/Fedora users, run `yum install ebtables ethtool`.

## インストール中にkubeadmがコントロールプレーンを待ち続けて止まる

If you notice that `kubeadm init` hangs after printing out the following line:

```sh
[apiclient] Created API client, waiting for the control plane to become ready
```

This may be caused by a number of problems. The most common are:

- network connection problems. Check that your machine has full network connectivity before continuing.
- the default cgroup driver configuration for the kubelet differs from that used by Docker.
  Check the system log file (e.g. `/var/log/message`) or examine the output from `journalctl -u kubelet`. If you see something like the following:

  ```shell
  error: failed to run Kubelet: failed to create kubelet:
  misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
  ```

  There are two common ways to fix the cgroup driver problem:

  1. Install Docker again following instructions
     [here](/docs/setup/independent/install-kubeadm/#installing-docker).
  1. Change the kubelet config to match the Docker cgroup driver manually; you can refer to
     [Configure cgroup driver used by kubelet on Master Node](/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
     for detailed instructions.

- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.

## 管理コンテナを削除する時にkubeadmが止まる

The following could happen if Docker halts and does not remove any Kubernetes-managed containers:

```bash
sudo kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
(block)
```

A possible solution is to restart the Docker service and then re-run `kubeadm reset`:

```bash
sudo systemctl restart docker.service
sudo kubeadm reset
```

Inspecting the logs for docker may also be useful:

```sh
journalctl -ul docker
```

## Podの状態が`RunContainerError`、`CrashLoopBackOff`、または`Error`

Right after `kubeadm init` there should not be any pods in these states.

- If there are pods in one of these states _right after_ `kubeadm init`, please open an
  issue in the kubeadm repo. `coredns` (or `kube-dns`) should be in the `Pending` state
  until you have deployed the network solution.
- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state
  after deploying the network solution and nothing happens to `coredns` (or `kube-dns`),
  it's very likely that the Pod Network solution that you installed is somehow broken. You
  might have to grant it more RBAC privileges or use a newer version. Please file
  an issue in the Pod Network providers' issue tracker and get the issue triaged there.
- If you install a version of Docker older than 1.12.1, remove the `MountFlags=slave` option
  when booting `dockerd` with `systemd` and restart `docker`. You can see the MountFlags in `/usr/lib/systemd/system/docker.service`.
  MountFlags can interfere with volumes mounted by Kubernetes, and put the Pods in `CrashLoopBackOff` state.
  The error happens when Kubernetes does not find `var/run/secrets/kubernetes.io/serviceaccount` files.
|
||||
|
||||
## `coredns`(もしくは`kube-dns`)が`Pending`状態でスタックする
|
||||
|
||||
This is **expected** and part of the design. kubeadm is network provider-agnostic, so the admin
|
||||
should [install the pod network solution](/docs/concepts/cluster-administration/addons/)
|
||||
of choice. You have to install a Pod Network
before CoreDNS can be fully deployed. Hence the `Pending` state before the network is set up.
|
||||
|
||||
## `HostPort` services do not work
|
||||
|
||||
The `HostPort` and `HostIP` functionality is available depending on your Pod Network
|
||||
provider. Please contact the author of the Pod Network solution to find out whether
|
||||
`HostPort` and `HostIP` functionality are available.
|
||||
|
||||
Calico, Canal, and Flannel CNI providers are verified to support HostPort.
|
||||
|
||||
For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).
|
||||
|
||||
If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of
|
||||
services](/docs/concepts/services-networking/service/#nodeport) or use `HostNetwork=true`.
|
||||
|
||||
## Pods are not accessible via their Service IP
|
||||
|
||||
- Many network add-ons do not yet enable [hairpin mode](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip)
|
||||
which allows pods to access themselves via their Service IP. This is an issue related to
|
||||
[CNI](https://github.com/containernetworking/cni/issues/476). Please contact the network
|
||||
add-on provider to get the latest status of their support for hairpin mode.
|
||||
|
||||
- If you are using VirtualBox (directly or via Vagrant), you will need to
|
||||
ensure that `hostname -i` returns a routable IP address. By default the first
|
||||
interface is connected to a non-routable host-only network. A work around
|
||||
is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
|
||||
for an example.
|
||||
|
||||
## TLS certificate errors
|
||||
|
||||
The following error indicates a possible certificate mismatch.
|
||||
|
||||
```none
|
||||
# kubectl get pods
|
||||
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
|
||||
```
|
||||
|
||||
- Verify that the `$HOME/.kube/config` file contains a valid certificate, and
regenerate a certificate if necessary. The certificates in a kubeconfig file
are base64 encoded. The `base64 -d` command can be used to decode the certificate
and `openssl x509 -text -noout` can be used for viewing the certificate information; see the sketch after this list.
|
||||
- Another workaround is to overwrite the existing `kubeconfig` for the "admin" user:
|
||||
|
||||
```sh
|
||||
mv $HOME/.kube $HOME/.kube.bak
|
||||
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
|
||||
sudo chown $(id -u):$(id -g) $HOME/.kube/config
|
||||
```
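The certificate check from the first workaround can be sketched as follows, assuming the kubeconfig embeds the client certificate inline under a `client-certificate-data` field:

```shell
# Decode the embedded client certificate and print its details.
grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -text -noout
```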
|
||||
|
||||
## Default NIC when using flannel as the pod network in Vagrant
|
||||
|
||||
The following error might indicate that something was wrong in the pod network:
|
||||
|
||||
```sh
|
||||
Error from server (NotFound): the server could not find the requested resource
|
||||
```
|
||||
|
||||
- If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.
|
||||
|
||||
Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.
|
||||
|
||||
This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the `--iface eth1` flag to flannel so that the second interface is chosen.
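As a sketch, the flag is typically added to the flannel container arguments in the kube-flannel DaemonSet manifest; the exact layout and image version depend on the flannel release you deploy:

```yaml
# Excerpt from a kube-flannel DaemonSet manifest (surrounding fields omitted).
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.10.0-amd64   # example version
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1   # use the second (routable) interface inside Vagrant
```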
|
||||
|
||||
## Non-public IP used for containers
|
||||
|
||||
In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster:
|
||||
|
||||
```sh
|
||||
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
|
||||
```
|
||||
|
||||
- This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider.
|
||||
- Digital Ocean assigns a public IP to `eth0` as well as a private one to be used internally as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's `InternalIP` instead of the public one.
|
||||
|
||||
Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will not display the offending alias IP address. Alternatively, an API endpoint specific to Digital Ocean allows you to query for the anchor IP from within the droplet:
|
||||
|
||||
```sh
|
||||
curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
|
||||
```
|
||||
|
||||
The workaround is to tell `kubelet` which IP to use via the `--node-ip` flag. When using Digital Ocean, it can be the public one (assigned to `eth0`) or the private one (assigned to `eth1`) should you want to use the optional private network. The [`KubeletExtraArgs` section of the kubeadm `NodeRegistrationOptions` structure](https://github.com/kubernetes/kubernetes/blob/release-1.13/cmd/kubeadm/app/apis/kubeadm/v1beta1/types.go) can be used for this, as sketched below.
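A minimal sketch of such a kubeadm configuration (the IP address is a placeholder; the same `nodeRegistration` block also exists in `JoinConfiguration` for worker nodes):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "10.19.0.41"   # the IP address the kubelet should advertise for this node
```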
|
||||
|
||||
Then restart `kubelet`:
|
||||
|
||||
```sh
|
||||
systemctl daemon-reload
|
||||
systemctl restart kubelet
|
||||
```
|
||||
|
||||
## `coredns` pods have the `CrashLoopBackOff` or `Error` state
|
||||
|
||||
If you have nodes that are running SELinux with an older version of Docker you might experience a scenario
|
||||
where the `coredns` pods are not starting. To solve that you can try one of the following options:
|
||||
|
||||
- Upgrade to a [newer version of Docker](/docs/setup/independent/install-kubeadm/#installing-docker).
|
||||
- [Disable SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux).
|
||||
- Modify the `coredns` deployment to set `allowPrivilegeEscalation` to `true`:
|
||||
|
||||
```bash
|
||||
kubectl -n kube-system get deployment coredns -o yaml | \
|
||||
sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
|
||||
kubectl apply -f -
|
||||
```
|
||||
|
||||
Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop. [A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters)
|
||||
are available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits.
|
||||
|
||||
{{< warning >}}
|
||||
Disabling SELinux or setting `allowPrivilegeEscalation` to `true` can compromise
|
||||
the security of your cluster.
|
||||
{{< /warning >}}
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,408 @@
|
|||
---
|
||||
title: Running Kubernetes Locally via Minikube
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## Minikube Features
|
||||
|
||||
* Minikube supports Kubernetes features such as:
|
||||
* DNS
|
||||
* NodePorts
|
||||
* ConfigMaps and Secrets
|
||||
* Dashboards
|
||||
* Container Runtime: Docker, [rkt](https://github.com/rkt/rkt), [CRI-O](https://github.com/kubernetes-incubator/cri-o) and [containerd](https://github.com/containerd/containerd)
|
||||
* Enabling CNI (Container Network Interface)
|
||||
* Ingress
|
||||
|
||||
## Installation
|
||||
|
||||
See [Installing Minikube](/docs/tasks/tools/install-minikube/).
|
||||
|
||||
## Quickstart
|
||||
|
||||
Here's a brief demo of Minikube usage.
|
||||
If you want to change the VM driver, add the appropriate `--vm-driver=xxx` flag to `minikube start`. Minikube supports
|
||||
the following drivers:
|
||||
|
||||
* virtualbox
|
||||
* vmwarefusion
|
||||
* kvm2 ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#kvm2-driver))
|
||||
* kvm ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#kvm-driver))
|
||||
* hyperkit ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#hyperkit-driver))
|
||||
* xhyve ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#xhyve-driver)) (deprecated)
|
||||
|
||||
Note that the IP below is dynamic and can change. It can be retrieved with `minikube ip`.
|
||||
|
||||
```shell
|
||||
$ minikube start
|
||||
Starting local Kubernetes cluster...
|
||||
Running pre-create checks...
|
||||
Creating machine...
|
||||
Starting local Kubernetes cluster...
|
||||
|
||||
$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
|
||||
deployment.apps/hello-minikube created
|
||||
$ kubectl expose deployment hello-minikube --type=NodePort
|
||||
service/hello-minikube exposed
|
||||
|
||||
# We have now launched an echoserver pod but we have to wait until the pod is up before curling/accessing it
|
||||
# via the exposed service.
|
||||
# To check whether the pod is up and running we can use the following:
|
||||
$ kubectl get pod
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
hello-minikube-3383150820-vctvh 0/1 ContainerCreating 0 3s
|
||||
# We can see that the pod is still being created from the ContainerCreating status
|
||||
$ kubectl get pod
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
hello-minikube-3383150820-vctvh 1/1 Running 0 13s
|
||||
# We can see that the pod is now Running and we will now be able to curl it:
|
||||
$ curl $(minikube service hello-minikube --url)
|
||||
|
||||
|
||||
Hostname: hello-minikube-7c77b68cff-8wdzq
|
||||
|
||||
Pod Information:
|
||||
-no pod information available-
|
||||
|
||||
Server values:
|
||||
server_version=nginx: 1.13.3 - lua: 10008
|
||||
|
||||
Request Information:
|
||||
client_address=172.17.0.1
|
||||
method=GET
|
||||
real path=/
|
||||
query=
|
||||
request_version=1.1
|
||||
request_scheme=http
|
||||
request_uri=http://192.168.99.100:8080/
|
||||
|
||||
Request Headers:
|
||||
accept=*/*
|
||||
host=192.168.99.100:30674
|
||||
user-agent=curl/7.47.0
|
||||
|
||||
Request Body:
|
||||
-no body in request-
|
||||
|
||||
|
||||
$ kubectl delete services hello-minikube
|
||||
service "hello-minikube" deleted
|
||||
$ kubectl delete deployment hello-minikube
|
||||
deployment.extensions "hello-minikube" deleted
|
||||
$ minikube stop
|
||||
Stopping local Kubernetes cluster...
|
||||
Stopping "minikube"...
|
||||
```
|
||||
|
||||
### Alternative container runtimes
|
||||
|
||||
#### containerd
|
||||
|
||||
To use [containerd](https://github.com/containerd/containerd) as the container runtime, run:
|
||||
|
||||
```bash
|
||||
$ minikube start \
|
||||
--network-plugin=cni \
|
||||
--container-runtime=containerd \
|
||||
--bootstrapper=kubeadm
|
||||
```
|
||||
|
||||
Or you can use the extended version:
|
||||
|
||||
```bash
|
||||
$ minikube start \
|
||||
--network-plugin=cni \
|
||||
--extra-config=kubelet.container-runtime=remote \
|
||||
--extra-config=kubelet.container-runtime-endpoint=unix:///run/containerd/containerd.sock \
|
||||
--extra-config=kubelet.image-service-endpoint=unix:///run/containerd/containerd.sock \
|
||||
--bootstrapper=kubeadm
|
||||
```
|
||||
|
||||
#### CRI-O
|
||||
|
||||
To use [CRI-O](https://github.com/kubernetes-incubator/cri-o) as the container runtime, run:
|
||||
|
||||
```bash
|
||||
$ minikube start \
|
||||
--network-plugin=cni \
|
||||
--container-runtime=cri-o \
|
||||
--bootstrapper=kubeadm
|
||||
```
|
||||
|
||||
Or you can use the extended version:
|
||||
|
||||
```bash
|
||||
$ minikube start \
|
||||
--network-plugin=cni \
|
||||
--extra-config=kubelet.container-runtime=remote \
|
||||
--extra-config=kubelet.container-runtime-endpoint=/var/run/crio.sock \
|
||||
--extra-config=kubelet.image-service-endpoint=/var/run/crio.sock \
|
||||
--bootstrapper=kubeadm
|
||||
```
|
||||
|
||||
#### rkt container engine
|
||||
|
||||
To use [rkt](https://github.com/rkt/rkt) as the container runtime run:
|
||||
|
||||
```shell
|
||||
$ minikube start \
|
||||
--network-plugin=cni \
|
||||
--container-runtime=rkt
|
||||
```
|
||||
|
||||
This will use an alternative minikube ISO image containing both rkt and Docker, and enables CNI networking.
|
||||
|
||||
### Driver plugins
|
||||
|
||||
See [DRIVERS](https://git.k8s.io/minikube/docs/drivers.md) for details on supported drivers and how to install
|
||||
plugins, if required.
|
||||
|
||||
### Reusing the Docker daemon to use local images
|
||||
|
||||
When using a single VM for Kubernetes, it's really handy to reuse Minikube's built-in Docker daemon. Doing so means you don't have to build a Docker registry on your host machine and push images into it; you can just build inside the same Docker daemon as Minikube, which speeds up local experiments. Just make sure you tag your Docker image with something other than `latest` and use that tag when you pull the image. Otherwise, if you do not specify a version for your image, it is assumed to be `:latest` (with a pull policy of `Always`), which may eventually result in `ErrImagePull` because you may not have pushed any version of your Docker image to the default Docker registry (usually DockerHub) yet.
|
||||
|
||||
To be able to work with the Docker daemon on your Mac/Linux host, use the `docker-env` command in your shell:
|
||||
|
||||
```shell
|
||||
eval $(minikube docker-env)
|
||||
```
|
||||
|
||||
You should now be able to use Docker on the command line of your host Mac/Linux machine, talking to the Docker daemon inside the Minikube VM:
|
||||
|
||||
```shell
|
||||
docker ps
|
||||
```
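For example, a hypothetical image built against the Minikube daemon with an explicit tag can then be referenced directly from the cluster:

```shell
# Build inside the Minikube Docker daemon; the image name and tag are placeholders.
docker build -t my-image:v1 .

# Run it without pulling from a remote registry.
kubectl run local-demo --image=my-image:v1 --image-pull-policy=IfNotPresent
```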
|
||||
|
||||
On CentOS 7, Docker may report the following error:
|
||||
|
||||
```shell
|
||||
Could not read CA certificate "/etc/docker/ca.pem": open /etc/docker/ca.pem: no such file or directory
|
||||
```
|
||||
|
||||
The fix is to update /etc/sysconfig/docker to ensure that Minikube's environment changes are respected:
|
||||
|
||||
```shell
|
||||
< DOCKER_CERT_PATH=/etc/docker
|
||||
---
|
||||
> if [ -z "${DOCKER_CERT_PATH}" ]; then
|
||||
> DOCKER_CERT_PATH=/etc/docker
|
||||
> fi
|
||||
```
|
||||
|
||||
Remember to set `imagePullPolicy` to something other than `Always`, otherwise Kubernetes won't use the images you built locally.
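A minimal sketch of a Pod spec that keeps Kubernetes from pulling the (hypothetical) locally built image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-image-demo
spec:
  containers:
  - name: demo
    image: my-image:v1            # locally built tag, deliberately not :latest
    imagePullPolicy: IfNotPresent
```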
|
||||
|
||||
## Managing your Cluster
|
||||
|
||||
### Starting a Cluster
|
||||
|
||||
The `minikube start` command can be used to start your cluster.
|
||||
This command creates and configures a Virtual Machine that runs a single-node Kubernetes cluster.
|
||||
This command also configures your [kubectl](/docs/user-guide/kubectl-overview/) installation to communicate with this cluster.
|
||||
|
||||
If you are behind a web proxy, you will need to pass this information to the `minikube start` command:
|
||||
|
||||
```shell
|
||||
https_proxy=<my proxy> minikube start --docker-env http_proxy=<my proxy> --docker-env https_proxy=<my proxy> --docker-env no_proxy=192.168.99.0/24
|
||||
```
|
||||
|
||||
Unfortunately just setting the environment variables will not work.
|
||||
|
||||
Minikube will also create a "minikube" context, and set it to default in kubectl.
|
||||
To switch back to this context later, run this command: `kubectl config use-context minikube`.
|
||||
|
||||
#### Specifying the Kubernetes version
|
||||
|
||||
You can specify the specific version of Kubernetes for Minikube to use by
|
||||
adding the `--kubernetes-version` string to the `minikube start` command. For
|
||||
example, to run version `v1.7.3`, you would run the following:
|
||||
|
||||
```
|
||||
minikube start --kubernetes-version v1.7.3
|
||||
```
|
||||
|
||||
### Configuring Kubernetes
|
||||
|
||||
Minikube has a "configurator" feature that allows users to configure the Kubernetes components with arbitrary values.
|
||||
To use this feature, you can use the `--extra-config` flag on the `minikube start` command.
|
||||
|
||||
This flag is repeated, so you can pass it several times with several different values to set multiple options.
|
||||
|
||||
This flag takes a string of the form `component.key=value`, where `component` is one of the strings from the below list, `key` is a value on the
|
||||
configuration struct and `value` is the value to set.
|
||||
|
||||
Valid keys can be found by examining the documentation for the Kubernetes `componentconfigs` for each component.
|
||||
Here is the documentation for each supported configuration:
|
||||
|
||||
* [kubelet](https://godoc.org/k8s.io/kubernetes/pkg/kubelet/apis/config#KubeletConfiguration)
|
||||
* [apiserver](https://godoc.org/k8s.io/kubernetes/cmd/kube-apiserver/app/options#ServerRunOptions)
|
||||
* [proxy](https://godoc.org/k8s.io/kubernetes/pkg/proxy/apis/config#KubeProxyConfiguration)
|
||||
* [controller-manager](https://godoc.org/k8s.io/kubernetes/pkg/controller/apis/config#KubeControllerManagerConfiguration)
|
||||
* [etcd](https://godoc.org/github.com/coreos/etcd/etcdserver#ServerConfig)
|
||||
* [scheduler](https://godoc.org/k8s.io/kubernetes/pkg/scheduler/apis/config#KubeSchedulerConfiguration)
|
||||
|
||||
#### Examples
|
||||
|
||||
To change the `MaxPods` setting to 5 on the Kubelet, pass this flag: `--extra-config=kubelet.MaxPods=5`.
|
||||
|
||||
This feature also supports nested structs. To change the `LeaderElection.LeaderElect` setting to `true` on the scheduler, pass this flag: `--extra-config=scheduler.LeaderElection.LeaderElect=true`.
|
||||
|
||||
To set the `AuthorizationMode` on the `apiserver` to `RBAC`, you can use: `--extra-config=apiserver.authorization-mode=RBAC`.
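Because the flag can be repeated, the three examples above could be combined into a single invocation, for instance:

```shell
minikube start \
  --extra-config=kubelet.MaxPods=5 \
  --extra-config=scheduler.LeaderElection.LeaderElect=true \
  --extra-config=apiserver.authorization-mode=RBAC
```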
|
||||
|
||||
### Stopping a Cluster
|
||||
The `minikube stop` command can be used to stop your cluster.
|
||||
This command shuts down the Minikube Virtual Machine, but preserves all cluster state and data.
|
||||
Starting the cluster again will restore it to its previous state.
|
||||
|
||||
### Deleting a Cluster
|
||||
The `minikube delete` command can be used to delete your cluster.
|
||||
This command shuts down and deletes the Minikube Virtual Machine. No data or state is preserved.
|
||||
|
||||
## Interacting with Your Cluster
|
||||
|
||||
### Kubectl
|
||||
|
||||
The `minikube start` command creates a [kubectl context](/docs/reference/generated/kubectl/kubectl-commands#-em-set-context-em-) called "minikube".
|
||||
This context contains the configuration to communicate with your Minikube cluster.
|
||||
|
||||
Minikube sets this context to default automatically, but if you need to switch back to it in the future, run:
|
||||
|
||||
`kubectl config use-context minikube`,

or pass the context on each command: `kubectl get pods --context=minikube`.
|
||||
|
||||
### Dashboard
|
||||
|
||||
To access the [Kubernetes Dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard/), run this command in a shell after starting Minikube to get the address:
|
||||
|
||||
```shell
|
||||
minikube dashboard
|
||||
```
|
||||
|
||||
### Services
|
||||
|
||||
To access a service exposed via a node port, run this command in a shell after starting Minikube to get the address:
|
||||
|
||||
```shell
|
||||
minikube service [-n NAMESPACE] [--url] NAME
|
||||
```
|
||||
|
||||
## Networking
|
||||
|
||||
The Minikube VM is exposed to the host system via a host-only IP address, that can be obtained with the `minikube ip` command.
|
||||
Any services of type `NodePort` can be accessed over that IP address, on the NodePort.
|
||||
|
||||
To determine the NodePort for your service, you can use a `kubectl` command like this:
|
||||
|
||||
`kubectl get service $SERVICE --output='jsonpath="{.spec.ports[0].nodePort}"'`
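Putting the two together, a sketch for reaching a (hypothetical) service named `hello-minikube` from the host:

```shell
NODE_PORT=$(kubectl get service hello-minikube --output='jsonpath={.spec.ports[0].nodePort}')
curl http://$(minikube ip):$NODE_PORT
```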
|
||||
|
||||
## Persistent Volumes
|
||||
Minikube supports [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) of type `hostPath`.
|
||||
These PersistentVolumes are mapped to a directory inside the Minikube VM.
|
||||
|
||||
The Minikube VM boots into a tmpfs, so most directories will not be persisted across reboots (`minikube stop`).
|
||||
However, Minikube is configured to persist files stored under the following host directories:
|
||||
|
||||
* `/data`
|
||||
* `/var/lib/minikube`
|
||||
* `/var/lib/docker`
|
||||
|
||||
Here is an example PersistentVolume config to persist data in the `/data` directory:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolume
|
||||
metadata:
|
||||
name: pv0001
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
capacity:
|
||||
storage: 5Gi
|
||||
hostPath:
|
||||
path: /data/pv0001/
|
||||
```
|
||||
|
||||
## Mounted Host Folders
|
||||
Some drivers will mount a host folder within the VM so that you can easily share files between the VM and the host. These folders are not configurable at the moment and differ depending on the driver and OS you are using.
|
||||
|
||||
{{< note >}}
|
||||
Host folder sharing is not implemented in the KVM driver yet.
|
||||
{{< /note >}}
|
||||
|
||||
| Driver | OS | HostFolder | VM |
|
||||
| --- | --- | --- | --- |
|
||||
| VirtualBox | Linux | /home | /hosthome |
|
||||
| VirtualBox | macOS | /Users | /Users |
|
||||
| VirtualBox | Windows | C://Users | /c/Users |
|
||||
| VMware Fusion | macOS | /Users | /Users |
|
||||
| Xhyve | macOS | /Users | /Users |
|
||||
|
||||
## Private Container Registries
|
||||
|
||||
To access a private container registry, follow the steps on [this page](/docs/concepts/containers/images/).
|
||||
|
||||
We recommend you use `ImagePullSecrets`, but if you would like to configure access on the Minikube VM you can place the `.dockercfg` in the `/home/docker` directory or the `config.json` in the `/home/docker/.docker` directory.
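As a sketch of the `ImagePullSecrets` route (registry address and credentials are placeholders), the secret can be created with `kubectl` and then referenced from a Pod's `imagePullSecrets`:

```shell
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
```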
|
||||
|
||||
## Add-ons
|
||||
|
||||
In order to have Minikube properly start or restart custom addons,
|
||||
place the addons you wish to be launched with Minikube in the `~/.minikube/addons`
|
||||
directory. Addons in this folder will be moved to the Minikube VM and
|
||||
launched each time Minikube is started or restarted.
|
||||
|
||||
## Using Minikube with an HTTP Proxy
|
||||
|
||||
Minikube creates a Virtual Machine that includes Kubernetes and a Docker daemon.
|
||||
When Kubernetes attempts to schedule containers using Docker, the Docker daemon may require external network access to pull containers.
|
||||
|
||||
If you are behind an HTTP proxy, you may need to supply Docker with the proxy settings.
|
||||
To do this, pass the required environment variables as flags during `minikube start`.
|
||||
|
||||
For example:
|
||||
|
||||
```shell
|
||||
$ minikube start --docker-env http_proxy=http://$YOURPROXY:PORT \
|
||||
--docker-env https_proxy=https://$YOURPROXY:PORT
|
||||
```
|
||||
|
||||
If your Virtual Machine address is 192.168.99.100, then chances are your proxy settings will prevent `kubectl` from directly reaching it.
|
||||
To bypass the proxy configuration for this IP address, you should modify your `no_proxy` settings. You can do so with:
|
||||
|
||||
```shell
|
||||
$ export no_proxy=$no_proxy,$(minikube ip)
|
||||
```
|
||||
|
||||
## Known Issues
|
||||
* Features that require a Cloud Provider will not work in Minikube. These include:
|
||||
* LoadBalancers
|
||||
* Features that require multiple nodes. These include:
|
||||
* Advanced scheduling policies
|
||||
|
||||
## Design
|
||||
|
||||
Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmachine) for provisioning VMs, and [kubeadm](https://github.com/kubernetes/kubeadm) to provision a Kubernetes cluster.
|
||||
|
||||
For more information about Minikube, see the [proposal](https://git.k8s.io/community/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md).
|
||||
|
||||
## Additional Links
|
||||
|
||||
* **Goals and Non-Goals**: For the goals and non-goals of the Minikube project, please see our [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md).
|
||||
* **Development Guide**: See [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) for an overview of how to send pull requests.
|
||||
* **Building Minikube**: For instructions on how to build/test Minikube from source, see the [build guide](https://git.k8s.io/minikube/docs/contributors/build_guide.md).
|
||||
* **Adding a New Dependency**: For instructions on how to add a new dependency to Minikube see the [adding dependencies guide](https://git.k8s.io/minikube/docs/contributors/adding_a_dependency.md).
|
||||
* **Adding a New Addon**: For instruction on how to add a new addon for Minikube see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md).
|
||||
* **Updating Kubernetes**: For instructions on how to update Kubernetes see the [updating Kubernetes guide](https://git.k8s.io/minikube/docs/contributors/updating_kubernetes.md).
|
||||
|
||||
## Community
|
||||
|
||||
Contributions, questions, and comments are all welcomed and encouraged! Minikube developers hang out on [Slack](https://kubernetes.slack.com) in the #minikube channel (get an invitation [here](http://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list please prefix your subject with "minikube: ".
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,333 @@
|
|||
---
|
||||
title: Running in Multiple Zones
|
||||
weight: 90
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
Kubernetes 1.2 adds support for running a single cluster in multiple failure zones
|
||||
(GCE calls them simply "zones", AWS calls them "availability zones", here we'll refer to them as "zones").
|
||||
This is a lightweight version of a broader Cluster Federation feature (previously referred to by the affectionate
|
||||
nickname ["Ubernetes"](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/multicluster/federation.md)).
|
||||
Full Cluster Federation allows combining separate
|
||||
Kubernetes clusters running in different regions or cloud providers
|
||||
(or on-premises data centers). However, many
|
||||
users simply want to run a more available Kubernetes cluster in multiple zones
|
||||
of their single cloud provider, and this is what the multizone support in 1.2 allows
|
||||
(this previously went by the nickname "Ubernetes Lite").
|
||||
|
||||
Multizone support is deliberately limited: a single Kubernetes cluster can run
|
||||
in multiple zones, but only within the same region (and cloud provider). Only
|
||||
GCE and AWS are currently supported automatically (though it is easy to
|
||||
add similar support for other clouds or even bare metal, by simply arranging
|
||||
for the appropriate labels to be added to nodes and volumes).
|
||||
|
||||
|
||||
{{< toc >}}
|
||||
|
||||
## Functionality
|
||||
|
||||
When nodes are started, the kubelet automatically adds labels to them with
|
||||
zone information.
|
||||
|
||||
Kubernetes will automatically spread the pods in a replication controller
|
||||
or service across nodes in a single-zone cluster (to reduce the impact of
|
||||
failures.) With multiple-zone clusters, this spreading behavior is
|
||||
extended across zones (to reduce the impact of zone failures.) (This is
|
||||
achieved via `SelectorSpreadPriority`). This is a best-effort
|
||||
placement, and so if the zones in your cluster are heterogeneous
|
||||
(e.g. different numbers of nodes, different types of nodes, or
|
||||
different pod resource requirements), this might prevent perfectly
|
||||
even spreading of your pods across zones. If desired, you can use
|
||||
homogeneous zones (same number and types of nodes) to reduce the
|
||||
probability of unequal spreading.
|
||||
|
||||
When persistent volumes are created, the `PersistentVolumeLabel`
|
||||
admission controller automatically adds zone labels to them. The scheduler (via the
|
||||
`VolumeZonePredicate` predicate) will then ensure that pods that claim a
|
||||
given volume are only placed into the same zone as that volume, as volumes
|
||||
cannot be attached across zones.
|
||||
|
||||
## Limitations
|
||||
|
||||
There are some important limitations of the multizone support:
|
||||
|
||||
* We assume that the different zones are located close to each other in the
|
||||
network, so we don't perform any zone-aware routing. In particular, traffic
|
||||
that goes via services might cross zones (even if some pods backing that service
|
||||
exist in the same zone as the client), and this may incur additional latency and cost.
|
||||
|
||||
* Volume zone-affinity will only work with a `PersistentVolume`, and will not
|
||||
work if you directly specify an EBS volume in the pod spec (for example).
|
||||
|
||||
* Clusters cannot span clouds or regions (this functionality will require full
|
||||
federation support).
|
||||
|
||||
* Although your nodes are in multiple zones, kube-up currently builds
|
||||
a single master node by default. While services are highly
|
||||
available and can tolerate the loss of a zone, the control plane is
|
||||
located in a single zone. Users that want a highly available control
|
||||
plane should follow the [high availability](/docs/admin/high-availability) instructions.
|
||||
|
||||
### Volume limitations
|
||||
The following limitations are addressed with [topology-aware volume binding](/docs/concepts/storage/storage-classes/#volume-binding-mode).
|
||||
|
||||
* StatefulSet volume zone spreading when using dynamic provisioning is currently not compatible with
|
||||
pod affinity or anti-affinity policies.
|
||||
|
||||
* If the name of the StatefulSet contains dashes ("-"), volume zone spreading
|
||||
may not provide a uniform distribution of storage across zones.
|
||||
|
||||
* When specifying multiple PVCs in a Deployment or Pod spec, the StorageClass
|
||||
needs to be configured for a specific single zone, or the PVs need to be
|
||||
statically provisioned in a specific zone. Another workaround is to use a
|
||||
StatefulSet, which will ensure that all the volumes for a replica are
|
||||
provisioned in the same zone.
|
||||
|
||||
## Walkthrough
|
||||
|
||||
We're now going to walk through setting up and using a multi-zone
|
||||
cluster on both GCE & AWS. To do so, you bring up a full cluster
|
||||
(specifying `MULTIZONE=true`), and then you add nodes in additional zones
|
||||
by running `kube-up` again (specifying `KUBE_USE_EXISTING_MASTER=true`).
|
||||
|
||||
### Bringing up your cluster
|
||||
|
||||
Create the cluster as normal, but pass `MULTIZONE=true` to tell the cluster to manage multiple zones; the initial nodes are created in us-central1-a (GCE) or us-west-2a (AWS).
|
||||
|
||||
GCE:
|
||||
|
||||
```shell
|
||||
curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a NUM_NODES=3 bash
|
||||
```
|
||||
|
||||
AWS:
|
||||
|
||||
```shell
|
||||
curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash
|
||||
```
|
||||
|
||||
This step brings up a cluster as normal, still running in a single zone
|
||||
(but `MULTIZONE=true` has enabled multi-zone capabilities).
|
||||
|
||||
### Nodes are labeled
|
||||
|
||||
View the nodes; you can see that they are labeled with zone information.
|
||||
They are all in `us-central1-a` (GCE) or `us-west-2a` (AWS) so far. The
|
||||
labels are `failure-domain.beta.kubernetes.io/region` for the region,
|
||||
and `failure-domain.beta.kubernetes.io/zone` for the zone:
|
||||
|
||||
```shell
|
||||
> kubectl get nodes --show-labels
|
||||
|
||||
|
||||
NAME STATUS ROLES AGE VERSION LABELS
|
||||
kubernetes-master Ready,SchedulingDisabled <none> 6m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
|
||||
kubernetes-minion-87j9 Ready <none> 6m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
|
||||
kubernetes-minion-9vlv Ready <none> 6m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
|
||||
kubernetes-minion-a12q Ready <none> 6m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
|
||||
```
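As a small usage example, the same zone label can be used as a selector to list only the nodes in one zone:

```shell
kubectl get nodes -l failure-domain.beta.kubernetes.io/zone=us-central1-a
```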
|
||||
|
||||
### Add more nodes in a second zone
|
||||
|
||||
Let's add another set of nodes to the existing cluster, reusing the
|
||||
existing master, running in a different zone (us-central1-b or us-west-2b).
|
||||
We run kube-up again, but by specifying `KUBE_USE_EXISTING_MASTER=true`
|
||||
kube-up will not create a new master, but will reuse one that was previously
|
||||
created instead.
|
||||
|
||||
GCE:
|
||||
|
||||
```shell
|
||||
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-b NUM_NODES=3 kubernetes/cluster/kube-up.sh
|
||||
```
|
||||
|
||||
On AWS we also need to specify the network CIDR for the additional
|
||||
subnet, along with the master internal IP address:
|
||||
|
||||
```shell
|
||||
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
|
||||
```
|
||||
|
||||
|
||||
View the nodes again; 3 more nodes should have launched and be tagged
|
||||
in us-central1-b:
|
||||
|
||||
```shell
|
||||
> kubectl get nodes --show-labels
|
||||
|
||||
NAME STATUS ROLES AGE VERSION LABELS
|
||||
kubernetes-master Ready,SchedulingDisabled <none> 16m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
|
||||
kubernetes-minion-281d Ready <none> 2m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
|
||||
kubernetes-minion-87j9 Ready <none> 16m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
|
||||
kubernetes-minion-9vlv Ready <none> 16m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
|
||||
kubernetes-minion-a12q Ready <none> 17m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
|
||||
kubernetes-minion-pp2f Ready <none> 2m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f
|
||||
kubernetes-minion-wf8i Ready <none> 2m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i
|
||||
```
|
||||
|
||||
### Volume affinity
|
||||
|
||||
Create a volume using the dynamic volume creation (only PersistentVolumes are supported for zone affinity):
|
||||
|
||||
```json
|
||||
kubectl create -f - <<EOF
|
||||
{
|
||||
"kind": "PersistentVolumeClaim",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "claim1",
|
||||
"annotations": {
|
||||
"volume.alpha.kubernetes.io/storage-class": "foo"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"accessModes": [
|
||||
"ReadWriteOnce"
|
||||
],
|
||||
"resources": {
|
||||
"requests": {
|
||||
"storage": "5Gi"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
For version 1.3+ Kubernetes will distribute dynamic PV claims across
|
||||
the configured zones. For version 1.2, dynamic persistent volumes were
|
||||
always created in the zone of the cluster master
|
||||
(here us-central1-a / us-west-2a); that issue
|
||||
([#23330](https://github.com/kubernetes/kubernetes/issues/23330))
|
||||
was addressed in 1.3+.
|
||||
{{< /note >}}
|
||||
|
||||
Now let's validate that Kubernetes automatically labeled the zone & region the PV was created in.
|
||||
|
||||
```shell
|
||||
> kubectl get pv --show-labels
|
||||
NAME CAPACITY ACCESSMODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
|
||||
pv-gce-mj4gm 5Gi RWO Retain Bound default/claim1 manual 46s failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a
|
||||
```
|
||||
|
||||
So now we will create a pod that uses the persistent volume claim.
|
||||
Because GCE PDs / AWS EBS volumes cannot be attached across zones,
|
||||
this means that this pod can only be created in the same zone as the volume:
|
||||
|
||||
```yaml
|
||||
kubectl create -f - <<EOF
|
||||
kind: Pod
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: mypod
|
||||
spec:
|
||||
containers:
|
||||
- name: myfrontend
|
||||
image: nginx
|
||||
volumeMounts:
|
||||
- mountPath: "/var/www/html"
|
||||
name: mypd
|
||||
volumes:
|
||||
- name: mypd
|
||||
persistentVolumeClaim:
|
||||
claimName: claim1
|
||||
EOF
|
||||
```
|
||||
|
||||
Note that the pod was automatically created in the same zone as the volume, as
|
||||
cross-zone attachments are not generally permitted by cloud providers:
|
||||
|
||||
```shell
|
||||
> kubectl describe pod mypod | grep Node
|
||||
Node: kubernetes-minion-9vlv/10.240.0.5
|
||||
> kubectl get node kubernetes-minion-9vlv --show-labels
|
||||
NAME STATUS AGE VERSION LABELS
|
||||
kubernetes-minion-9vlv Ready 22m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
|
||||
```
|
||||
|
||||
### Pods are spread across zones
|
||||
|
||||
Pods in a replication controller or service are automatically spread
|
||||
across zones. First, let's launch more nodes in a third zone:
|
||||
|
||||
GCE:
|
||||
|
||||
```shell
|
||||
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-f NUM_NODES=3 kubernetes/cluster/kube-up.sh
|
||||
```
|
||||
|
||||
AWS:
|
||||
|
||||
```shell
|
||||
KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2c NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.2.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
|
||||
```
|
||||
|
||||
Verify that you now have nodes in 3 zones:
|
||||
|
||||
```shell
|
||||
kubectl get nodes --show-labels
|
||||
```
|
||||
|
||||
Create the guestbook-go example, which includes an RC of size 3, running a simple web app:
|
||||
|
||||
```shell
|
||||
find kubernetes/examples/guestbook-go/ -name '*.json' | xargs -I {} kubectl create -f {}
|
||||
```
|
||||
|
||||
The pods should be spread across all 3 zones:
|
||||
|
||||
```shell
|
||||
> kubectl describe pod -l app=guestbook | grep Node
|
||||
Node: kubernetes-minion-9vlv/10.240.0.5
|
||||
Node: kubernetes-minion-281d/10.240.0.8
|
||||
Node: kubernetes-minion-olsh/10.240.0.11
|
||||
|
||||
> kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels
|
||||
NAME STATUS ROLES AGE VERSION LABELS
|
||||
kubernetes-minion-9vlv Ready <none> 34m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
|
||||
kubernetes-minion-281d Ready <none> 20m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
|
||||
kubernetes-minion-olsh Ready <none> 3m v1.12.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh
|
||||
```
|
||||
|
||||
|
||||
Load-balancers span all zones in a cluster; the guestbook-go example
|
||||
includes an example load-balanced service:
|
||||
|
||||
```shell
|
||||
> kubectl describe service guestbook | grep LoadBalancer.Ingress
|
||||
LoadBalancer Ingress: 130.211.126.21
|
||||
|
||||
> ip=130.211.126.21
|
||||
|
||||
> curl -s http://${ip}:3000/env | grep HOSTNAME
|
||||
"HOSTNAME": "guestbook-44sep",
|
||||
|
||||
> (for i in `seq 20`; do curl -s http://${ip}:3000/env | grep HOSTNAME; done) | sort | uniq
|
||||
"HOSTNAME": "guestbook-44sep",
|
||||
"HOSTNAME": "guestbook-hum5n",
|
||||
"HOSTNAME": "guestbook-ppm40",
|
||||
```
|
||||
|
||||
The load balancer correctly targets all the pods, even though they are in multiple zones.
|
||||
|
||||
### Shutting down the cluster
|
||||
|
||||
When you're done, clean up:
|
||||
|
||||
GCE:
|
||||
|
||||
```shell
|
||||
KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-f kubernetes/cluster/kube-down.sh
|
||||
KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-b kubernetes/cluster/kube-down.sh
|
||||
KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a kubernetes/cluster/kube-down.sh
|
||||
```
|
||||
|
||||
AWS:
|
||||
|
||||
```shell
|
||||
KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2c kubernetes/cluster/kube-down.sh
|
||||
KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b kubernetes/cluster/kube-down.sh
|
||||
KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh
|
||||
```
|
|
@ -0,0 +1,97 @@
|
|||
---
|
||||
title: Validate Node Setup
|
||||
---
|
||||
|
||||
{{< toc >}}
|
||||
|
||||
## Node Conformance Test
|
||||
|
||||
*Node conformance test* is a containerized test framework that provides a system
|
||||
verification and functionality test for a node. The test validates whether the
|
||||
node meets the minimum requirements for Kubernetes; a node that passes the test
|
||||
is qualified to join a Kubernetes cluster.
|
||||
|
||||
## Limitations
|
||||
|
||||
In Kubernetes version 1.5, node conformance test has the following limitations:
|
||||
|
||||
* Node conformance test only supports Docker as the container runtime.
|
||||
|
||||
## Node Prerequisites
|
||||
|
||||
To run node conformance test, a node must satisfy the same prerequisites as a
|
||||
standard Kubernetes node. At a minimum, the node should have the following
|
||||
daemons installed:
|
||||
|
||||
* Container Runtime (Docker)
|
||||
* Kubelet
|
||||
|
||||
## Running the Node Conformance Test
|
||||
|
||||
To run the node conformance test, perform the following steps:
|
||||
|
||||
1. Point your Kubelet to localhost `--api-servers="http://localhost:8080"`,
|
||||
because the test framework starts a local master to test Kubelet. There are some
other Kubelet flags you may care about:
|
||||
* `--pod-cidr`: If you are using `kubenet`, you should specify an arbitrary CIDR
|
||||
to Kubelet, for example `--pod-cidr=10.180.0.0/24`.
|
||||
* `--cloud-provider`: If you are using `--cloud-provider=gce`, you should
|
||||
remove the flag to run the test.
|
||||
|
||||
2. Run the node conformance test with command:
|
||||
|
||||
```shell
|
||||
# $CONFIG_DIR is the pod manifest path of your Kubelet.
|
||||
# $LOG_DIR is the test output path.
|
||||
sudo docker run -it --rm --privileged --net=host \
|
||||
-v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
|
||||
k8s.gcr.io/node-test:0.2
|
||||
```
|
||||
|
||||
## Running the Node Conformance Test for Other Architectures
|
||||
|
||||
Kubernetes also provides node conformance test docker images for other
|
||||
architectures:
|
||||
|
||||
Arch | Image |
|
||||
--------|:-----------------:|
|
||||
amd64 | node-test-amd64 |
|
||||
arm | node-test-arm |
|
||||
arm64 | node-test-arm64 |
|
||||
|
||||
## Running Selected Tests
|
||||
|
||||
To run specific tests, overwrite the environment variable `FOCUS` with the
|
||||
regular expression of tests you want to run.
|
||||
|
||||
```shell
|
||||
sudo docker run -it --rm --privileged --net=host \
|
||||
-v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
|
||||
-e FOCUS=MirrorPod \ # Only run MirrorPod test
|
||||
k8s.gcr.io/node-test:0.2
|
||||
```
|
||||
|
||||
To skip specific tests, overwrite the environment variable `SKIP` with the
|
||||
regular expression of tests you want to skip.
|
||||
|
||||
```shell
|
||||
sudo docker run -it --rm --privileged --net=host \
|
||||
-v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
|
||||
-e SKIP=MirrorPod \ # Run all conformance tests but skip MirrorPod test
|
||||
k8s.gcr.io/node-test:0.2
|
||||
```
|
||||
|
||||
Node conformance test is a containerized version of [node e2e test](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/e2e-node-tests.md).
|
||||
By default, it runs all conformance tests.
|
||||
|
||||
Theoretically, you can run any node e2e test if you configure the container and
|
||||
mount required volumes properly. But **it is strongly recommended to only run conformance
tests**, because running non-conformance tests requires much more complex configuration.
|
||||
|
||||
## Caveats
|
||||
|
||||
* The test leaves some docker images on the node, including the node conformance
|
||||
test image and images of containers used in the functionality
|
||||
test.
|
||||
* The test leaves dead containers on the node. These containers are created
|
||||
during the functionality test.
|
|
@ -0,0 +1,95 @@
|
|||
---
|
||||
title: Installing Kubernetes with Digital Rebar Provision (DRP) via KRIB
|
||||
krib-version: 2.4
|
||||
author: Rob Hirschfeld (zehicle)
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
This guide helps you install a Kubernetes cluster hosted on bare metal with [Digital Rebar Provision](https://github.com/digitalrebar/provision) using only its Content packages and *kubeadm*.
|
||||
|
||||
Digital Rebar Provision (DRP) is an integrated Golang DHCP, bare metal provisioning (PXE/iPXE) and workflow automation platform. While [DRP can be used to invoke](https://provision.readthedocs.io/en/tip/doc/integrations/ansible.html) [kubespray](../kubespray), it also offers a self-contained Kubernetes installation known as [KRIB (Kubernetes Rebar Integrated Bootstrap)](https://github.com/digitalrebar/provision-content/tree/master/krib).
|
||||
|
||||
{{< note >}}
|
||||
KRIB is not a _stand-alone_ installer: Digital Rebar templates drive a standard *[kubeadm](/docs/admin/kubeadm/)* configuration that manages the Kubernetes installation with the [Digital Rebar cluster pattern](https://provision.readthedocs.io/en/tip/doc/arch/cluster.html#rs-cluster-pattern) to elect leaders _without external supervision_.
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
KRIB features:
|
||||
|
||||
* zero-touch, self-configuring cluster without pre-configuration or inventory
|
||||
* very fast, no-ssh required automation
|
||||
* bare metal, on-premises focused platform
|
||||
* highly available cluster options (including splitting etcd from the controllers)
|
||||
* dynamic generation of a TLS infrastructure
|
||||
* composable attributes and automatic detection of hardware by profile
|
||||
* options for persistent, immutable and image-based deployments
|
||||
* support for Ubuntu 18.04, CentOS/RHEL 7 and others
|
||||
|
||||
## Creating a cluster
|
||||
|
||||
Review the [Digital Rebar documentation](https://provision.readthedocs.io/en/tip/README.html) for details about installing the platform.
|
||||
|
||||
The Digital Rebar Provision Golang binary should be installed on a Linux-like system with 16 GB of RAM or larger (Packet.net Tiny and Raspberry Pi are also acceptable).
|
||||
|
||||
### (1/5) Discover servers
|
||||
|
||||
Following the [Digital Rebar installation](https://provision.readthedocs.io/en/tip/doc/quickstart.html), allow one or more servers to boot through the _Sledgehammer_ discovery process to register with the API. This will automatically install the Digital Rebar runner and allow for the next steps.
|
||||
|
||||
### (2/5) Install KRIB Content and the Cert Plugin
|
||||
|
||||
Upload the KRIB Content bundle (or build from [source](https://github.com/digitalrebar/provision-content/tree/master/krib)) and the Cert Plugin for your DRP platform (e.g.: [amd64 Linux v2.4.0](https://s3-us-west-2.amazonaws.com/rebar-catalog/certs/v2.4.0-0-02301d35f9f664d6c81d904c92a9c81d3fd41d2c/amd64/linux/certs)). Both are freely available via the [RackN UX](https://portal.rackn.io).
|
||||
|
||||
### (3/5) Start your cluster deployment
|
||||
|
||||
{{< note >}}
|
||||
KRIB documentation is dynamically generated from the source and will be more up to date than this guide.
|
||||
{{< /note >}}
|
||||
|
||||
Following the [KRIB documentation](https://provision.readthedocs.io/en/tip/doc/content-packages/krib.html), create a Profile for your cluster and assign your target servers into the cluster Profile. The Profile must set the `krib/cluster-name` and `etcd/cluster-name` Params to be the name of the Profile. Cluster configuration choices can be made by adding additional Params to the Profile; however, safe defaults are provided for all Params.
|
||||
|
||||
Once all target servers are assigned to the cluster Profile, start a KRIB installation Workflow by assigning one of the included Workflows to all cluster servers. For example, selecting `krib-live-cluster` will perform an immutable deployment into the Sledgehammer discovery operating system. You may use one of the pre-created read-only Workflows or choose to build your own custom variation.
|
||||
|
||||
For basic installs, no further action is required. Advanced users may choose to assign the controllers, etcd servers or other configuration values in the relevant Params.
|
||||
|
||||
### (4/5) Monitor the deployment
|
||||
|
||||
Digital Rebar Provision provides detailed logging and live updates during the installation process. Workflow events are available via a websocket connection or by monitoring the Jobs list.
|
||||
|
||||
During the installation, KRIB writes cluster configuration data back into the cluster Profile.
|
||||
|
||||
### (5/5) Access the cluster
|
||||
|
||||
The cluster is available for access via *kubectl* once the `krib/cluster-admin-conf` Param has been set. This Param contains the `kubeconfig` information necessary to access the cluster.
|
||||
|
||||
For example, if you named the cluster Profile `krib` then the following commands would allow you to connect to the installed cluster from your local terminal.
|
||||
|
||||
```shell
drpcli profiles get krib params krib/cluster-admin-conf > admin.conf
export KUBECONFIG=admin.conf
kubectl get nodes
```
|
||||
|
||||
|
||||
After the `krib/cluster-admin-conf` Param is set, the installation continues by installing the Kubernetes UI and Helm. You may interact with the cluster as soon as the `admin.conf` file is available.
|
||||
|
||||
## Cluster operations
|
||||
|
||||
KRIB provides additional Workflows to manage your cluster. Please see the [KRIB documentation](https://provision.readthedocs.io/en/tip/doc/content-packages/krib.html) for an updated list of advanced cluster operations.
|
||||
|
||||
### Scaling the cluster
|
||||
|
||||
You can add servers into your cluster by adding the cluster Profile to the server and running the appropriate Workflow.
|
||||
|
||||
### Cleaning up the cluster (for developers)
|
||||
|
||||
You can reset your cluster and wipe out all configuration and TLS certificates using the `krib-reset-cluster` Workflow on any of the servers in the cluster.
|
||||
|
||||
{{< caution >}}
|
||||
When running the reset Workflow, be sure not to accidentally target your production cluster!
|
||||
{{< /caution >}}
|
||||
|
||||
## Feedback
|
||||
|
||||
* Slack Channel: [#community](https://rackn.slack.com/messages/community/)
|
||||
* [GitHub Issues](https://github.com/digitalrebar/provision/issues)
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
title: On-Premises VMs
|
||||
weight: 60
|
||||
---
|
|
@ -0,0 +1,120 @@
|
|||
---
|
||||
title: Cloudstack
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
[CloudStack](https://cloudstack.apache.org/) is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the Cloud being used and what images are made available. CloudStack also has a Vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt based recipes.
|
||||
|
||||
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
|
||||
|
||||
This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack based Cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## Prerequisites
|
||||
|
||||
```shell
|
||||
sudo apt-get install -y python-pip libssl-dev
|
||||
sudo pip install cs
|
||||
sudo pip install sshpubkeys
|
||||
sudo apt-get install software-properties-common
|
||||
sudo apt-add-repository ppa:ansible/ansible
|
||||
sudo apt-get update
|
||||
sudo apt-get install ansible
|
||||
```
|
||||
|
||||
On the CloudStack server you also have to install libselinux-python:
|
||||
|
||||
```shell
|
||||
yum install libselinux-python
|
||||
```
|
||||
|
||||
[_cs_](https://github.com/exoscale/cs) is a python module for the CloudStack API.
|
||||
|
||||
Set your CloudStack endpoint, API keys and HTTP method used.
|
||||
|
||||
You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`.
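For example, as a sketch (all values are placeholders for your own deployment):

```shell
export CLOUDSTACK_ENDPOINT=https://cloud.example.com/client/api
export CLOUDSTACK_KEY=<your api access key>
export CLOUDSTACK_SECRET=<your api secret key>
export CLOUDSTACK_METHOD=post
```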
|
||||
|
||||
Or create a `~/.cloudstack.ini` file:
|
||||
|
||||
```none
|
||||
[cloudstack]
|
||||
endpoint = <your cloudstack api endpoint>
|
||||
key = <your api access key>
|
||||
secret = <your api secret key>
|
||||
method = post
|
||||
```
|
||||
|
||||
We need to use the HTTP POST method to pass the _large_ userdata to the CoreOS instances.
|
||||
|
||||
### Clone the playbook
|
||||
|
||||
```shell
|
||||
git clone https://github.com/apachecloudstack/k8s
|
||||
cd kubernetes-cloudstack
|
||||
```
|
||||
|
||||
### Create a Kubernetes cluster
|
||||
|
||||
You simply need to run the playbook.
|
||||
|
||||
```shell
|
||||
ansible-playbook k8s.yml
|
||||
```
|
||||
|
||||
Some variables can be edited in the `k8s.yml` file.
|
||||
|
||||
```none
|
||||
vars:
|
||||
ssh_key: k8s
|
||||
k8s_num_nodes: 2
|
||||
k8s_security_group_name: k8s
|
||||
k8s_node_prefix: k8s2
|
||||
k8s_template: <templatename>
|
||||
k8s_instance_type: <serviceofferingname>
|
||||
```
|
||||
|
||||
This will start a Kubernetes master node and a number of compute nodes (by default 2).
|
||||
The `instance_type` and `template` are cloud specific; edit them to specify your CloudStack template and instance type (i.e. service offering).
|
||||
|
||||
Check the tasks and templates in `roles/k8s` if you want to modify anything.
|
||||
|
||||
Once the playbook has finished, it will print out the IP of the Kubernetes master:
|
||||
|
||||
```none
|
||||
TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********
|
||||
```
|
||||
|
||||
SSH to it as the _core_ user, using the key that was created:
|
||||
|
||||
```shell
|
||||
ssh -i ~/.ssh/id_rsa_k8s core@<master IP>
|
||||
```
|
||||
|
||||
And you can list the machines in your cluster:
|
||||
|
||||
```shell
|
||||
fleetctl list-machines
|
||||
```
|
||||
|
||||
```none
|
||||
MACHINE IP METADATA
|
||||
a017c422... <node #1 IP> role=node
|
||||
ad13bf84... <master IP> role=master
|
||||
e9af8293... <node #2 IP> role=node
|
||||
```
|
||||
|
||||
## Support Level
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/setup/on-premises-vm/cloudstack/) | | Community ([@Guiques](https://github.com/ltupin/))
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/setup/pick-right-solution/#table-of-solutions) chart.
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,25 @@
|
|||
---
|
||||
title: DC/OS上のKubernetes
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
Mesosphereは[DC/OS](https://mesosphere.com/product/)上にKubernetesを構築する為の簡単な選択肢を提供します。それは
|
||||
|
||||
* 純粋なアップストリームのKubernetes
|
||||
* シングルクリッククラスター構築
|
||||
* デフォルトで高可用であり安全
|
||||
* Kubernetesが高速なデータプラットフォーム(例えばAkka、Cassandra、Kafka、Spark)と共に稼働
|
||||
|
||||
です。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## 公式Mesosphereガイド
|
||||
|
||||
DC/OS入門の正規のソースは[クイックスタートリポジトリ](https://github.com/mesosphere/dcos-kubernetes-quickstart)にあります。
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,70 @@
|
|||
---
|
||||
title: oVirt
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## oVirtクラウドプロバイダーによる構築
|
||||
|
||||
The oVirt cloud provider allows you to easily discover and automatically add new VM instances as nodes to your Kubernetes cluster.
|
||||
At the moment there are no community-supported or pre-loaded VM images including Kubernetes, but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes Kubernetes may work as well.
|
||||
|
||||
It is mandatory to [install the ovirt-guest-agent] in the guests for the VM IP address and hostname to be reported to ovirt-engine and ultimately to Kubernetes.
|
||||
|
||||
Once the Kubernetes template is available, it is possible to start instantiating VMs that can be discovered by the cloud provider.
|
||||
|
||||
[import]: https://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
|
||||
[install]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#create-virtual-machines
|
||||
[generate a template]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#using-templates
|
||||
[install the ovirt-guest-agent]: https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/
|
||||
|
||||
## oVirtクラウドプロバイダーの使用
|
||||
|
||||
The oVirt Cloud Provider requires access to the oVirt REST API to gather the proper information. The required credentials should be specified in the `ovirt-cloud.conf` file:
|
||||
|
||||
```none
|
||||
[connection]
|
||||
uri = https://localhost:8443/ovirt-engine/api
|
||||
username = admin@internal
|
||||
password = admin
|
||||
```
|
||||
|
||||
In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to Kubernetes:
|
||||
|
||||
```none
|
||||
[filters]
|
||||
# Search query used to find nodes
|
||||
vms = tag=kubernetes
|
||||
```
|
||||
|
||||
In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to Kubernetes.
|
||||
|
||||
The `ovirt-cloud.conf` file must then be specified when starting kube-controller-manager:
|
||||
|
||||
```shell
|
||||
kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ...
|
||||
```
|
||||
|
||||
## oVirtクラウドプロバイダーのスクリーンキャスト
|
||||
|
||||
This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster.
|
||||
|
||||
[![Screencast](https://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](https://www.youtube.com/watch?v=JyyST4ZKne8)
|
||||
|
||||
## サポートレベル
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
oVirt | | | | [docs](/docs/setup/on-premises-vm/ovirt/) | | Community ([@simon3z](https://github.com/simon3z))
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/setup/pick-right-solution/#table-of-solutions) chart.
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,260 @@
|
|||
---
|
||||
title: 正しいソリューションの選択
|
||||
weight: 10
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
Kubernetes can run on various platforms: from your laptop, to VMs on a cloud provider, to a rack of
|
||||
bare metal servers. The effort required to set up a cluster varies from running a single command to
|
||||
crafting your own customized cluster. Use this guide to choose a solution that fits your needs.
|
||||
|
||||
If you just want to "kick the tires" on Kubernetes, use the [local Docker-based solutions](#local-machine-solutions).
|
||||
|
||||
When you are ready to scale up to more machines and higher availability, a [hosted solution](#hosted-solutions) is the easiest to create and maintain.
|
||||
|
||||
[Turnkey cloud solutions](#turnkey-cloud-solutions) require only a few commands to create
|
||||
and cover a wide range of cloud providers. [On-Premises turnkey cloud solutions](#on-premises-turnkey-cloud-solutions) have the simplicity of the turnkey cloud solution combined with the security of your own private network.
|
||||
|
||||
If you already have a way to configure hosting resources, use [kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to easily bring up a cluster with a single command per machine.
|
||||
|
||||
[Custom solutions](#custom-solutions) vary from step-by-step instructions to general advice for setting up
|
||||
a Kubernetes cluster from scratch.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## ローカルマシンを使ったソリューション
|
||||
|
||||
* [Minikube](/docs/setup/minikube/) is a method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account.
|
||||
|
||||
* [microk8s](https://microk8s.io/) provides a single command installation of the latest Kubernetes release on a local machine for development and testing. Setup is quick (~30 seconds) and supports many plugins, including Istio, with a single command.
|
||||
|
||||
* [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) can use VirtualBox on your machine to deploy Kubernetes to one or more VMs for development and test scenarios. Scales to full multi-node cluster.
|
||||
|
||||
* [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers) provides Terraform/Packer/Bash-based Infrastructure as Code (IaC) scripts to create a seven-node (1 boot, 1 master, 1 management, 1 proxy, and 3 worker) LXD cluster on a Linux host.
|
||||
|
||||
* [Kubeadm-dind](https://github.com/kubernetes-sigs/kubeadm-dind-cluster) is a multi-node (while minikube is single-node) Kubernetes cluster which only requires a Docker daemon. It uses the docker-in-docker technique to spawn the Kubernetes cluster.
|
||||
|
||||
* [Ubuntu on LXD](/docs/getting-started-guides/ubuntu/local/) supports a nine-instance deployment on localhost.
|
||||
|
||||
## ホスティングを使ったソリューション
|
||||
|
||||
* [AppsCode.com](https://appscode.com/products/cloud-deployment/) provides managed Kubernetes clusters for various public clouds, including AWS and Google Cloud Platform.
|
||||
|
||||
* [APPUiO](https://appuio.ch) runs an OpenShift public cloud platform, supporting any Kubernetes workload. Additionally APPUiO offers Private Managed OpenShift Clusters, running on any public or private cloud.
|
||||
|
||||
* [Amazon Elastic Container Service for Kubernetes](https://aws.amazon.com/eks/) offers managed Kubernetes service.
|
||||
|
||||
* [Azure Kubernetes Service](https://azure.microsoft.com/services/container-service/) offers managed Kubernetes clusters.
|
||||
|
||||
* [Giant Swarm](https://giantswarm.io/product/) offers managed Kubernetes clusters in their own datacenter, on-premises, or on public clouds.
|
||||
|
||||
* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) offers managed Kubernetes clusters.
|
||||
|
||||
* [IBM Cloud Kubernetes Service](https://console.bluemix.net/docs/containers/container_index.html) offers managed Kubernetes clusters with isolation choice, operational tools, integrated security insight into images and containers, and integration with Watson, IoT, and data.
|
||||
|
||||
* [Kubermatic](https://www.loodse.com) provides managed Kubernetes clusters for various public clouds, including AWS and Digital Ocean, as well as on-premises with OpenStack integration.
|
||||
|
||||
* [Kublr](https://kublr.com) offers enterprise-grade secure, scalable, highly reliable Kubernetes clusters on AWS, Azure, GCP, and on-premise. It includes out-of-the-box backup and disaster recovery, multi-cluster centralized logging and monitoring, and built-in alerting.
|
||||
|
||||
* [Madcore.Ai](https://madcore.ai) is a DevOps-focused CLI tool for deploying Kubernetes infrastructure on AWS. It sets up a master, auto-scaling group nodes with spot instances, ingress-ssl-lego, Heapster, and Grafana.
|
||||
|
||||
* [OpenShift Dedicated](https://www.openshift.com/dedicated/) offers managed Kubernetes clusters powered by OpenShift.
|
||||
|
||||
* [OpenShift Online](https://www.openshift.com/features/) provides free hosted access for Kubernetes applications.
|
||||
|
||||
* [Oracle Container Engine for Kubernetes](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengoverview.htm) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud.
|
||||
|
||||
* [Platform9](https://platform9.com/products/kubernetes/) offers managed Kubernetes on-premises or on any public cloud, and provides 24/7 health monitoring and alerting. (Kube2go, a web-UI driven Kubernetes cluster deployment service Platform9 released, has been integrated to Platform9 Sandbox.)
|
||||
|
||||
* [Stackpoint.io](https://stackpoint.io) provides Kubernetes infrastructure automation and management for multiple public clouds.
|
||||
|
||||
* [SysEleven MetaKube](https://www.syseleven.io/products-services/managed-kubernetes/) offers managed Kubernetes as a service powered by its OpenStack public cloud. It includes lifecycle management, administration dashboards, monitoring, autoscaling, and much more.
|
||||
|
||||
* [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) is an enterprise Kubernetes-as-a-Service offering in the VMware Cloud Services portfolio that provides easy to use, secure by default, cost effective, SaaS-based Kubernetes clusters.
|
||||
|
||||
## すぐに利用できるクラウドを使ったソリューション
|
||||
|
||||
These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a
|
||||
few commands. These solutions are actively developed and have active community support.
|
||||
|
||||
* [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
|
||||
* [Alibaba Cloud](/docs/setup/turnkey/alibaba-cloud/)
|
||||
* [APPUiO](https://appuio.ch)
|
||||
* [AWS](/docs/setup/turnkey/aws/)
|
||||
* [Azure](/docs/setup/turnkey/azure/)
|
||||
* [CenturyLink Cloud](/docs/setup/turnkey/clc/)
|
||||
* [Conjure-up Kubernetes with Ubuntu on AWS, Azure, Google Cloud, Oracle Cloud](/docs/getting-started-guides/ubuntu/)
|
||||
* [Gardener](https://gardener.cloud/)
|
||||
* [Google Compute Engine (GCE)](/docs/setup/turnkey/gce/)
|
||||
* [IBM Cloud](https://github.com/patrocinio/kubernetes-softlayer)
|
||||
* [Kontena Pharos](https://kontena.io/pharos/)
|
||||
* [Kubermatic](https://cloud.kubermatic.io)
|
||||
* [Kublr](https://kublr.com/)
|
||||
* [Madcore.Ai](https://madcore.ai/)
|
||||
* [Oracle Container Engine for K8s](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm)
|
||||
* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
|
||||
* [Giant Swarm](https://giantswarm.io)
|
||||
* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
|
||||
* [Stackpoint.io](/docs/setup/turnkey/stackpoint/)
|
||||
* [Tectonic by CoreOS](https://coreos.com/tectonic)
|
||||
|
||||
## すぐに利用できるオンプレミスを使ったソリューション
|
||||
These solutions allow you to create Kubernetes clusters on your internal, secure, cloud network with only a
|
||||
few commands.
|
||||
|
||||
* [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
|
||||
* [APPUiO](https://appuio.ch)
|
||||
* [GKE On-Prem | Google Cloud](https://cloud.google.com/gke-on-prem/)
|
||||
* [IBM Cloud Private](https://www.ibm.com/cloud-computing/products/ibm-cloud-private/)
|
||||
* [Kontena Pharos](https://kontena.io/pharos/)
|
||||
* [Kubermatic](https://www.loodse.com)
|
||||
* [Kublr](https://kublr.com/)
|
||||
* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
|
||||
* [Giant Swarm](https://giantswarm.io)
|
||||
* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
|
||||
* [SUSE CaaS Platform](https://www.suse.com/products/caas-platform)
|
||||
* [SUSE Cloud Application Platform](https://www.suse.com/products/cloud-application-platform/)
|
||||
|
||||
## カスタムソリューション
|
||||
|
||||
Kubernetes can run on a wide range of Cloud providers and bare-metal environments, and with many
|
||||
base operating systems.
|
||||
|
||||
If you can find a guide below that matches your needs, use it. It may be a little out of date, but
|
||||
it will be easier than starting from scratch. If you do want to start from scratch, either because you
|
||||
have special requirements, or just because you want to understand what is underneath a Kubernetes
|
||||
cluster, try the [Getting Started from Scratch](/docs/setup/scratch/) guide.
|
||||
|
||||
If you are interested in supporting Kubernetes on a new platform, see
|
||||
[Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md).
|
||||
|
||||
### 全般
|
||||
|
||||
If you already have a way to configure hosting resources, use
|
||||
[kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to easily bring up a cluster
|
||||
with a single command per machine.
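As a rough sketch only (the exact flags and the join command come from the kubeadm guide linked above), bootstrapping with kubeadm looks like this:

```shell
# On the machine chosen as the control plane:
kubeadm init

# On each additional machine, paste the join command that `kubeadm init` printed, e.g.:
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```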
|
||||
|
||||
### クラウド
|
||||
|
||||
These solutions are combinations of cloud providers and operating systems not covered by the above solutions.
|
||||
|
||||
* [CoreOS on AWS or GCE](/docs/setup/custom-cloud/coreos/)
|
||||
* [Gardener](https://gardener.cloud/)
|
||||
* [Kublr](https://kublr.com/)
|
||||
* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
|
||||
* [Kubespray](/docs/setup/custom-cloud/kubespray/)
|
||||
* [Rancher Kubernetes Engine (RKE)](https://github.com/rancher/rke)
|
||||
|
||||
### オンプレミスの仮想マシン
|
||||
|
||||
* [CloudStack](/docs/setup/on-premises-vm/cloudstack/) (uses Ansible, CoreOS and flannel)
|
||||
* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel)
|
||||
* [oVirt](/docs/setup/on-premises-vm/ovirt/)
|
||||
* [Vagrant](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel)
|
||||
* [VMware](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel)
|
||||
* [VMware vSphere](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/)
|
||||
* [VMware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel)
|
||||
|
||||
### ベアメタル
|
||||
|
||||
* [CoreOS](/docs/setup/custom-cloud/coreos/)
|
||||
* [Digital Rebar](/docs/setup/on-premises-metal/krib/)
|
||||
* [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config/)
|
||||
* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/)
|
||||
* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
|
||||
|
||||
### 統合
|
||||
|
||||
These solutions provide integration with third-party schedulers, resource managers, and/or lower level platforms.
|
||||
|
||||
* [DCOS](/docs/setup/on-premises-vm/dcos/)
|
||||
* Community Edition DCOS uses AWS
|
||||
* Enterprise Edition DCOS supports cloud hosting, on-premises VMs, and bare metal
|
||||
|
||||
## ソリューションの表
|
||||
|
||||
Below is a table of all of the solutions listed above.
|
||||
|
||||
IaaS Provider | Config. Mgmt. | OS | Networking | Docs | Support Level
|
||||
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ----------------------------
|
||||
any | any | multi-support | any CNI | [docs](/docs/setup/independent/create-cluster-kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle))
|
||||
Google Kubernetes Engine | | | GCE | [docs](https://cloud.google.com/kubernetes-engine/docs/) | Commercial
|
||||
Stackpoint.io | | multi-support | multi-support | [docs](https://stackpoint.io/) | Commercial
|
||||
AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial
|
||||
Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madcore.ai) | Community ([@madcore-ai](https://github.com/madcore-ai))
|
||||
Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial
|
||||
Kublr | custom | multi-support | multi-support | [docs](http://docs.kublr.com/) | Commercial
|
||||
Kubermatic | | multi-support | multi-support | [docs](http://docs.kubermatic.io/) | Commercial
|
||||
IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://console.bluemix.net/docs/containers/) | Commercial
|
||||
Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial
|
||||
GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | Project
|
||||
Azure Kubernetes Service | | Ubuntu | Azure | [docs](https://docs.microsoft.com/en-us/azure/aks/) | Commercial
|
||||
Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/setup/turnkey/azure/) | [Community (Microsoft)](https://github.com/Azure/acs-engine)
|
||||
Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config/) | Project
|
||||
Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
|
||||
libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
|
||||
KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
|
||||
DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
|
||||
AWS | CoreOS | CoreOS | flannel | [docs](/docs/setup/turnkey/aws/) | Community
|
||||
GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires))
|
||||
Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
|
||||
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa))
|
||||
VMware vSphere | any | multi-support | multi-support | [docs](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) | [Community](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/contactus.html)
|
||||
Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap))
|
||||
lxd | Juju | Ubuntu | flannel/canal | [docs](/docs/getting-started-guides/ubuntu/local/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
|
||||
AWS | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
|
||||
Azure | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
|
||||
GCE | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
|
||||
Oracle Cloud | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
|
||||
Rackspace | custom | CoreOS | flannel/calico/canal | [docs](https://developer.rackspace.com/docs/rkaas/latest/) | [Commercial](https://www.rackspace.com/managed-kubernetes)
|
||||
VMware vSphere | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
|
||||
Bare Metal | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
|
||||
AWS | Saltstack | Debian | AWS | [docs](/docs/setup/turnkey/aws/) | Community ([@justinsb](https://github.com/justinsb))
|
||||
AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops/) | Community ([@justinsb](https://github.com/justinsb))
|
||||
Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
|
||||
oVirt | | | | [docs](/docs/setup/on-premises-vm/ovirt/) | Community ([@simon3z](https://github.com/simon3z))
|
||||
any | any | any | any | [docs](/docs/setup/scratch/) | Community ([@erictune](https://github.com/erictune))
|
||||
any | any | any | any | [docs](http://docs.projectcalico.org/v2.2/getting-started/kubernetes/installation/) | Commercial and Community
|
||||
any | RKE | multi-support | flannel or canal | [docs](https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/) | [Commercial](https://rancher.com/what-is-rancher/overview/) and [Community](https://github.com/rancher/rancher)
|
||||
any | [Gardener Cluster-Operator](https://kubernetes.io/blog/2018/05/17/gardener/) | multi-support | multi-support | [docs](https://gardener.cloud) | [Project/Community](https://github.com/gardener) and [Commercial]( https://cloudplatform.sap.com/)
|
||||
Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | [docs](https://www.aliyun.com/product/containerservice) | Commercial
|
||||
Agile Stacks | Terraform | CoreOS | multi-support | [docs](https://www.agilestacks.com/products/kubernetes) | Commercial
|
||||
IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://console.bluemix.net/docs/containers/container_index.html) | Commercial
|
||||
Digital Rebar | kubeadm | any | metal | [docs](/docs/setup/on-premises-metal/krib/) | Community ([@digitalrebar](https://github.com/digitalrebar))
|
||||
VMware Cloud PKS | | Photon OS | Canal | [docs](https://docs.vmware.com/en/VMware-Kubernetes-Engine/index.html) | Commercial
|
||||
|
||||
{{< note >}}
|
||||
The above table is ordered by the version tested/used on nodes, followed by support level.
|
||||
{{< /note >}}
|
||||
|
||||
### カラムの定義
|
||||
|
||||
* **IaaS Provider** is the product or organization which provides the virtual or physical machines (nodes) that Kubernetes runs on.
|
||||
* **OS** is the base operating system of the nodes.
|
||||
* **Config. Mgmt.** is the configuration management system that helps install and maintain Kubernetes on the
|
||||
nodes.
|
||||
* **Networking** is what implements the [networking model](/docs/concepts/cluster-administration/networking/). Those with networking type
|
||||
_none_ may not support more than a single node, or may support multiple VM nodes in a single physical node.
|
||||
* **Conformance** indicates whether a cluster created with this configuration has passed the project's conformance
|
||||
tests for supporting the API and base features of Kubernetes v1.0.0.
|
||||
* **Support Levels**
|
||||
* **Project**: Kubernetes committers regularly use this configuration, so it usually works with the latest release
|
||||
of Kubernetes.
|
||||
* **Commercial**: A commercial offering with its own support arrangements.
|
||||
* **Community**: Actively supported by community contributions. May not work with recent releases of Kubernetes.
|
||||
* **Inactive**: Not actively maintained. Not recommended for first-time Kubernetes users, and may be removed.
|
||||
* **Notes** has other relevant information, such as the version of Kubernetes used.
|
||||
|
||||
<!-- reference style links below here -->
|
||||
<!-- GCE conformance test result -->
|
||||
[1]: https://gist.github.com/erictune/4cabc010906afbcc5061
|
||||
<!-- Vagrant conformance test result -->
|
||||
[2]: https://gist.github.com/derekwaynecarr/505e56036cdf010bf6b6
|
||||
<!-- Google Kubernetes Engine conformance test result -->
|
||||
[3]: https://gist.github.com/erictune/2f39b22f72565365e59b
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,5 @@
|
|||
---
|
||||
title: "Kubernetesのダウンロード"
|
||||
weight: 20
|
||||
---
|
||||
|
|
@ -0,0 +1,21 @@
|
|||
---
|
||||
title: ソースからのビルド
|
||||
---
|
||||
|
||||
あなたはソースからリリースをビルドすることもできますし、既にビルドされたリリースをダウンロードすることも可能です。もしあなたがKubernetesを開発する予定が無いのであれば、[リリースノート](/docs/setup/release/notes/)内の現在リリースされている既にビルドされたバージョンを使用することを推奨します。
|
||||
|
||||
Kubernetes のソースコードは[kubernetes/kubernetes](https://github.com/kubernetes/kubernetes)のリポジトリからダウンロードすることが可能です。
|
||||
|
||||
## ソースからのビルド
|
||||
|
||||
もしあなたが単にソースからリリースをビルドするだけなのであれば、完全なGOの環境を準備する必要はなく、全てのビルドはDockerコンテナの中で行われます。
|
||||
|
||||
リリースをビルドすることは簡単です。
|
||||
|
||||
```shell
|
||||
git clone https://github.com/kubernetes/kubernetes.git
|
||||
cd kubernetes
|
||||
make release
|
||||
```
|
||||
|
||||
リリース手段の詳細な情報はkubernetes/kubernetes内の[`build`](http://releases.k8s.io/{{< param "githubbranch" >}}/build/)ディレクトリを参照して下さい。
|
|
@ -0,0 +1,872 @@
|
|||
---
|
||||
title: ゼロからのカスタムクラスターの作成
|
||||
---
|
||||
|
||||
This guide is for people who want to craft a custom Kubernetes cluster. If you
|
||||
can find an existing Getting Started Guide that meets your needs on [this
|
||||
list](/docs/setup/), then we recommend using it, as you will be able to benefit
|
||||
from the experience of others. However, if you have specific IaaS, networking,
|
||||
configuration management, or operating system requirements not met by any of
|
||||
those guides, then this guide will provide an outline of the steps you need to
|
||||
take. Note that it requires considerably more effort than using one of the
|
||||
pre-defined guides.
|
||||
|
||||
This guide is also useful for those wanting to understand at a high level some of the
|
||||
steps that existing cluster setup scripts are making.
|
||||
|
||||
{{< toc >}}
|
||||
|
||||
## 設計と準備
|
||||
|
||||
### 学び
|
||||
|
||||
1. You should be familiar with using Kubernetes already. We suggest you set
|
||||
up a temporary cluster by following one of the other Getting Started Guides.
|
||||
This will help you become familiar with the CLI ([kubectl](/docs/user-guide/kubectl/)) and concepts ([pods](/docs/user-guide/pods/), [services](/docs/concepts/services-networking/service/), etc.) first.
|
||||
1. You should have `kubectl` installed on your desktop. This will happen as a side
|
||||
effect of completing one of the other Getting Started Guides. If not, follow the instructions
|
||||
[here](/docs/tasks/kubectl/install/).
|
||||
|
||||
### クラウドプロバイダー
|
||||
|
||||
Kubernetes has the concept of a Cloud Provider, which is a module which provides
|
||||
an interface for managing TCP Load Balancers, Nodes (Instances) and Networking Routes.
|
||||
The interface is defined in `pkg/cloudprovider/cloud.go`. It is possible to
|
||||
create a custom cluster without implementing a cloud provider (for example if using
|
||||
bare-metal), and not all parts of the interface need to be implemented, depending
|
||||
on how flags are set on various components.
|
||||
|
||||
### ノード
|
||||
|
||||
- You can use virtual or physical machines.
|
||||
- While you can build a cluster with 1 machine, in order to run all the examples and tests you
|
||||
need at least 4 nodes.
|
||||
- Many Getting-started-guides make a distinction between the master node and regular nodes. This
|
||||
is not strictly necessary.
|
||||
- Nodes will need to run some version of Linux with the x86_64 architecture. It may be possible
|
||||
to run on other OSes and Architectures, but this guide does not try to assist with that.
|
||||
- Apiserver and etcd together are fine on a machine with 1 core and 1GB RAM for clusters with 10s of nodes.
|
||||
Larger or more active clusters may benefit from more cores.
|
||||
- Other nodes can have any reasonable amount of memory and any number of cores. They need not
|
||||
have identical configurations.
|
||||
|
||||
### ネットワーク
|
||||
|
||||
#### ネットワークの接続性
|
||||
Kubernetes has a distinctive [networking model](/docs/concepts/cluster-administration/networking/).
|
||||
|
||||
Kubernetes allocates an IP address to each pod. When creating a cluster, you
|
||||
need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest
|
||||
approach is to allocate a different block of IPs to each node in the cluster as
|
||||
the node is added. A process in one pod should be able to communicate with
|
||||
another pod using the IP of the second pod. This connectivity can be
|
||||
accomplished in two ways:
|
||||
|
||||
- **Using an overlay network**
|
||||
- An overlay network obscures the underlying network architecture from the
|
||||
pod network through traffic encapsulation (for example vxlan).
|
||||
- Encapsulation reduces performance, though exactly how much depends on your solution.
|
||||
- **Without an overlay network**
|
||||
- Configure the underlying network fabric (switches, routers, etc.) to be aware of pod IP addresses.
|
||||
- This does not require the encapsulation provided by an overlay, and so can achieve
|
||||
better performance.
|
||||
|
||||
Which method you choose depends on your environment and requirements. There are various ways
|
||||
to implement one of the above options:
|
||||
|
||||
- **Use a network plugin which is called by Kubernetes**
|
||||
- Kubernetes supports the [CNI](https://github.com/containernetworking/cni) network plugin interface.
|
||||
- There are a number of solutions which provide plugins for Kubernetes (listed alphabetically):
|
||||
- [Calico](http://docs.projectcalico.org/)
|
||||
- [Flannel](https://github.com/coreos/flannel)
|
||||
- [Open vSwitch (OVS)](http://openvswitch.org/)
|
||||
- [Romana](http://romana.io/)
|
||||
- [Weave](http://weave.works/)
|
||||
- [More found here](/docs/admin/networking#how-to-achieve-this/)
|
||||
- You can also write your own.
|
||||
- **Compile support directly into Kubernetes**
|
||||
- This can be done by implementing the "Routes" interface of a Cloud Provider module.
|
||||
- The Google Compute Engine ([GCE](/docs/setup/turnkey/gce/)) and [AWS](/docs/setup/turnkey/aws/) guides use this approach.
|
||||
- **Configure the network external to Kubernetes**
|
||||
- This can be done by manually running commands, or through a set of externally maintained scripts.
|
||||
- You have to implement this yourself, but it can give you an extra degree of flexibility.
|
||||
|
||||
You will need to select an address range for the Pod IPs.
|
||||
|
||||
- Various approaches:
|
||||
- GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` for each
|
||||
Kubernetes cluster from that space, which leaves room for several clusters.
|
||||
Each node gets a further subdivision of this space.
|
||||
- AWS: use one VPC for whole organization, carve off a chunk for each
|
||||
cluster, or use different VPC for different clusters.
|
||||
- Allocate one CIDR subnet for each node's PodIPs, or a single large CIDR
|
||||
from which smaller CIDRs are automatically allocated to each node.
|
||||
- You need max-pods-per-node * max-number-of-nodes IPs in total. A `/24` per
|
||||
node supports 254 pods per machine and is a common choice. If IPs are
|
||||
scarce, a `/26` (62 pods per machine) or even a `/27` (30 pods) may be sufficient.
|
||||
- For example, use `10.10.0.0/16` as the range for the cluster, with up to 256 nodes
|
||||
using `10.10.0.0/24` through `10.10.255.0/24`, respectively.
|
||||
- Need to make these routable or connect with overlay.
|
||||
|
||||
Kubernetes also allocates an IP to each [service](/docs/concepts/services-networking/service/). However,
|
||||
service IPs do not necessarily need to be routable. The kube-proxy takes care
|
||||
of translating Service IPs to Pod IPs before traffic leaves the node. You do
|
||||
need to allocate a block of IPs for services. Call this
|
||||
`SERVICE_CLUSTER_IP_RANGE`. For example, you could set
|
||||
`SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"`, allowing 65534 distinct services to
|
||||
be active at once. Note that you can grow the end of this range, but you
|
||||
cannot move it without disrupting the services and pods that already use it.
|
||||
|
||||
Also, you need to pick a static IP for the master node.
|
||||
|
||||
- Call this `MASTER_IP`.
|
||||
- Open any firewalls to allow access to the apiserver ports 80 and/or 443.
|
||||
- Enable ipv4 forwarding sysctl, `net.ipv4.ip_forward = 1`
|
||||
|
||||
#### ネットワークポリシー
|
||||
|
||||
Kubernetes enables the definition of fine-grained network policy between Pods using the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) resource.
|
||||
|
||||
Not all networking providers support the Kubernetes NetworkPolicy API, see [Using Network Policy](/docs/tasks/configure-pod-container/declare-network-policy/) for more information.
|
||||
|
||||
### クラスターの名前
|
||||
|
||||
You should pick a name for your cluster. Pick a short name for each cluster
|
||||
which is unique from future cluster names. This will be used in several ways:
|
||||
|
||||
- by kubectl to distinguish between various clusters you have access to. You will probably want a
|
||||
second one sometime later, such as for testing new Kubernetes releases, running in a different
|
||||
region of the world, etc.
|
||||
- Kubernetes clusters can create cloud provider resources (for example, AWS ELBs) and different clusters
|
||||
need to distinguish which resources each created. Call this `CLUSTER_NAME`.
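As a minimal sketch, the values chosen in this section and the previous one can be recorded as shell variables for use in later steps (all values below are placeholders):

```shell
# Placeholder values; replace with the cluster name, master IP, and service range you chose.
export CLUSTER_NAME=mycluster
export MASTER_IP=192.0.2.10
export SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"
```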
|
||||
|
||||
### ソフトウェアバイナリ
|
||||
|
||||
You will need binaries for:
|
||||
|
||||
- etcd
|
||||
- A container runner, one of:
|
||||
- docker
|
||||
- rkt
|
||||
- Kubernetes
|
||||
- kubelet
|
||||
- kube-proxy
|
||||
- kube-apiserver
|
||||
- kube-controller-manager
|
||||
- kube-scheduler
|
||||
|
||||
#### Kubernetesのバイナリのダウンロードと展開
|
||||
|
||||
A Kubernetes binary release includes all the Kubernetes binaries as well as the supported release of etcd.
|
||||
You can use a Kubernetes binary release (recommended) or build your Kubernetes binaries following the instructions in the
|
||||
[Developer Documentation](https://git.k8s.io/community/contributors/devel/). Only using a binary release is covered in this guide.
|
||||
|
||||
Download the [latest binary release](https://github.com/kubernetes/kubernetes/releases/latest) and unzip it.
|
||||
Server binary tarballs are no longer included in the Kubernetes final tarball, so you will need to locate and run
|
||||
`./kubernetes/cluster/get-kube-binaries.sh` to download and extract the client and server binaries.
|
||||
Then locate `./kubernetes/server/bin`, which contains all the necessary binaries.
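For example, assuming the release tarball has already been downloaded as `kubernetes.tar.gz`, the steps look roughly like this:

```shell
tar -xzf kubernetes.tar.gz
cd kubernetes
./cluster/get-kube-binaries.sh   # downloads and extracts the client and server binaries
ls server/bin                    # kube-apiserver, kube-controller-manager, kubelet, kube-proxy, ...
```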
|
||||
|
||||
#### イメージの選択
|
||||
|
||||
You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so
|
||||
you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler,
|
||||
we recommend that you run these as containers, so you need an image to be built.
|
||||
|
||||
You have several choices for Kubernetes images:
|
||||
|
||||
- Use images hosted on Google Container Registry (GCR):
|
||||
- For example `k8s.gcr.io/hyperkube:$TAG`, where `TAG` is the latest
|
||||
release tag, which can be found on the [latest releases page](https://github.com/kubernetes/kubernetes/releases/latest).
|
||||
- Ensure $TAG is the same tag as the release tag you are using for kubelet and kube-proxy.
|
||||
- The [hyperkube](https://releases.k8s.io/{{< param "githubbranch" >}}/cmd/hyperkube) binary is an all in one binary
|
||||
- `hyperkube kubelet ...` runs the kubelet, `hyperkube apiserver ...` runs an apiserver, etc.
|
||||
- Build your own images.
|
||||
- Useful if you are using a private registry.
|
||||
- The release contains files such as `./kubernetes/server/bin/kube-apiserver.tar` which
|
||||
can be converted into docker images using a command like
|
||||
`docker load -i kube-apiserver.tar`
|
||||
- You can verify whether the image was loaded successfully with the right repository and tag using a command like `docker images`
|
||||
|
||||
We recommend that you use the etcd version which is provided in the Kubernetes binary distribution. The Kubernetes binaries in the release
|
||||
were tested extensively with this version of etcd and not with any other version.
|
||||
The recommended version number can also be found as the value of `TAG` in `kubernetes/cluster/images/etcd/Makefile`.
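For instance, assuming the Makefile defines `TAG` as a plain variable assignment, the value can be read out with something like:

```shell
grep '^TAG' kubernetes/cluster/images/etcd/Makefile
```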
|
||||
|
||||
For the minimum recommended version of etcd, refer to
|
||||
[Configuring and Updating etcd](/docs/tasks/administer-cluster/configure-upgrade-etcd/)
|
||||
|
||||
The remainder of the document assumes that the image identifiers have been chosen and stored in corresponding env vars. Examples (replace with latest tags and appropriate registry):
|
||||
|
||||
- `HYPERKUBE_IMAGE=k8s.gcr.io/hyperkube:$TAG`
|
||||
- `ETCD_IMAGE=k8s.gcr.io/etcd:$ETCD_VERSION`
|
||||
|
||||
### セキュリティモデル
|
||||
|
||||
There are two main options for security:
|
||||
|
||||
- Access the apiserver using HTTP.
|
||||
- Use a firewall for security.
|
||||
- This is easier to setup.
|
||||
- Access the apiserver using HTTPS
|
||||
- Use https with certs, and credentials for user.
|
||||
- This is the recommended approach.
|
||||
- Configuring certs can be tricky.
|
||||
|
||||
If following the HTTPS approach, you will need to prepare certs and credentials.
|
||||
|
||||
#### 証明書の準備
|
||||
|
||||
You need to prepare several certs:
|
||||
|
||||
- The master needs a cert to act as an HTTPS server.
|
||||
- The kubelets optionally need certs to identify themselves as clients of the master, and when
|
||||
serving its own API over HTTPS.
|
||||
|
||||
Unless you plan to have a real CA generate your certs, you will need
|
||||
to generate a root cert and use that to sign the master, kubelet, and
|
||||
kubectl certs. How to do this is described in the [authentication
|
||||
documentation](/docs/concepts/cluster-administration/certificates/).
|
||||
|
||||
You will end up with the following files (we will use these variables later on)
|
||||
|
||||
- `CA_CERT`
|
||||
  - put it on the node where the apiserver runs, for example in `/srv/kubernetes/ca.crt`.
|
||||
- `MASTER_CERT`
|
||||
- signed by CA_CERT
|
||||
  - put it on the node where the apiserver runs, for example in `/srv/kubernetes/server.crt`
- `MASTER_KEY`
  - put it on the node where the apiserver runs, for example in `/srv/kubernetes/server.key`
|
||||
- `KUBELET_CERT`
|
||||
- optional
|
||||
- `KUBELET_KEY`
|
||||
- optional
|
||||
|
||||
#### 認証情報の準備
|
||||
|
||||
The admin user (and any users) need:
|
||||
|
||||
- a token or a password to identify them.
|
||||
  - tokens are just long alphanumeric strings, for example 32 chars, generated like this:
|
||||
- `TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/[:space:]" | dd bs=32 count=1 2>/dev/null)`
|
||||
|
||||
Your tokens and passwords need to be stored in a file for the apiserver
|
||||
to read. This guide uses `/var/lib/kube-apiserver/known_tokens.csv`.
|
||||
The format for this file is described in the [authentication documentation](/docs/reference/access-authn-authz/authentication/#static-token-file).
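As a sketch (the linked documentation is authoritative), each line holds a token, a user name, a user UID, and optionally a quoted list of groups:

```none
token,user,uid,"group1,group2,group3"
```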
|
||||
|
||||
For distributing credentials to clients, the convention in Kubernetes is to put the credentials
|
||||
into a [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/).
|
||||
|
||||
The kubeconfig file for the administrator can be created as follows:
|
||||
|
||||
- If you have already used Kubernetes with a non-custom cluster (for example, used a Getting Started
|
||||
Guide), you will already have a `$HOME/.kube/config` file.
|
||||
- You need to add certs, keys, and the master IP to the kubeconfig file:
|
||||
- If using the firewall-only security option, set the apiserver this way:
|
||||
- `kubectl config set-cluster $CLUSTER_NAME --server=http://$MASTER_IP --insecure-skip-tls-verify=true`
|
||||
- Otherwise, do this to set the apiserver ip, client certs, and user credentials.
|
||||
- `kubectl config set-cluster $CLUSTER_NAME --certificate-authority=$CA_CERT --embed-certs=true --server=https://$MASTER_IP`
|
||||
- `kubectl config set-credentials $USER --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN`
|
||||
- Set your cluster as the default cluster to use:
|
||||
- `kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NAME --user=$USER`
|
||||
- `kubectl config use-context $CONTEXT_NAME`
|
||||
|
||||
Next, make a kubeconfig file for the kubelets and kube-proxy. There are a couple of options for how
|
||||
many distinct files to make:
|
||||
|
||||
1. Use the same credential as the admin
|
||||
  - This is the simplest to set up.
|
||||
1. One token and kubeconfig file for all kubelets, one for all kube-proxy, one for admin.
|
||||
- This mirrors what is done on GCE today
|
||||
1. Different credentials for every kubelet, etc.
|
||||
- We are working on this but all the pieces are not ready yet.
|
||||
|
||||
You can make the files by copying the `$HOME/.kube/config` or by using the following template:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Config
|
||||
users:
|
||||
- name: kubelet
|
||||
user:
|
||||
token: ${KUBELET_TOKEN}
|
||||
clusters:
|
||||
- name: local
|
||||
cluster:
|
||||
certificate-authority: /srv/kubernetes/ca.crt
|
||||
contexts:
|
||||
- context:
|
||||
cluster: local
|
||||
user: kubelet
|
||||
name: service-account-context
|
||||
current-context: service-account-context
|
||||
```
|
||||
|
||||
Put the kubeconfig(s) on every node. The examples later in this
|
||||
guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
|
||||
`/var/lib/kubelet/kubeconfig`.
|
||||
|
||||
## ノードの基本的なソフトウェアの設定とインストール
|
||||
|
||||
This section discusses how to configure machines to be Kubernetes nodes.
|
||||
|
||||
You should run three daemons on every node:
|
||||
|
||||
- docker or rkt
|
||||
- kubelet
|
||||
- kube-proxy
|
||||
|
||||
You will also need to do assorted other configuration on top of a
|
||||
base OS install.
|
||||
|
||||
Tip: One possible starting point is to set up a cluster using an existing Getting
|
||||
Started Guide. After getting a cluster running, you can then copy the init.d scripts or systemd unit files from that
|
||||
cluster, and then modify them for use on your custom cluster.
|
||||
|
||||
### Docker
|
||||
|
||||
The minimum required Docker version will vary as the kubelet version changes. The newest stable release is a good choice. Kubelet will log a warning and refuse to start pods if the version is too old, so pick a version and try it.
|
||||
|
||||
If you previously had Docker installed on a node without setting Kubernetes-specific
|
||||
options, you may have a Docker-created bridge and iptables rules. You may want to remove these
|
||||
as follows before proceeding to configure Docker for Kubernetes.
|
||||
|
||||
```shell
|
||||
iptables -t nat -F
|
||||
ip link set docker0 down
|
||||
ip link delete docker0
|
||||
```
|
||||
|
||||
The way you configure docker will depend on whether you have chosen the routable-vip or overlay-network approach for your network.
Some suggested docker options (a combined sketch follows this list):
|
||||
|
||||
- create your own bridge for the per-node CIDR ranges, call it cbr0, and set `--bridge=cbr0` option on docker.
|
||||
- set `--iptables=false` so docker will not manipulate iptables for host-ports (too coarse on older docker versions, may be fixed in newer versions)
|
||||
so that kube-proxy can manage iptables instead of docker.
|
||||
- `--ip-masq=false`
|
||||
- if you have setup PodIPs to be routable, then you want this false, otherwise, docker will
|
||||
rewrite the PodIP source-address to a NodeIP.
|
||||
- some environments (for example GCE) still need you to masquerade out-bound traffic when it leaves the cloud environment. This is very environment specific.
|
||||
- if you are using an overlay network, consult those instructions.
|
||||
- `--mtu=`
|
||||
- may be required when using Flannel, because of the extra packet size due to udp encapsulation
|
||||
- `--insecure-registry $CLUSTER_SUBNET`
|
||||
- to connect to a private registry, if you set one up, without using SSL.
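Putting these together, a hedged sketch of the resulting configuration for the routable-PodIP approach, assuming your distribution's docker service reads `DOCKER_OPTS` from `/etc/default/docker`:

```shell
# /etc/default/docker (location and variable name vary by OS and docker packaging)
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
```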
|
||||
|
||||
You may want to increase the number of open files for docker:
|
||||
|
||||
- `DOCKER_NOFILE=1000000`
|
||||
|
||||
Where this config goes depends on your node OS. For example, GCE's Debian-based distro uses `/etc/default/docker`.
|
||||
|
||||
Ensure docker is working correctly on your system before proceeding with the rest of the
|
||||
installation, by following examples given in the Docker documentation.
|
||||
|
||||
### rkt
|
||||
|
||||
[rkt](https://github.com/coreos/rkt) is an alternative to Docker. You only need to install one of Docker or rkt.
|
||||
The minimum version required is [v0.5.6](https://github.com/coreos/rkt/releases/tag/v0.5.6).
|
||||
|
||||
[systemd](http://www.freedesktop.org/wiki/Software/systemd/) is required on your node to run rkt. The
|
||||
minimum version required to match rkt v0.5.6 is
|
||||
[systemd 215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html).
|
||||
|
||||
[rkt metadata service](https://github.com/coreos/rkt/blob/master/Documentation/networking/overview.md) is also required
|
||||
for rkt networking support. You can start the rkt metadata service with a command like
|
||||
`sudo systemd-run rkt metadata-service`
|
||||
|
||||
Then you need to configure your kubelet with flag:
|
||||
|
||||
- `--container-runtime=rkt`
|
||||
|
||||
### kubelet
|
||||
|
||||
All nodes should run kubelet. See [Software Binaries](#software-binaries).
|
||||
|
||||
Arguments to consider (a combined sketch follows this list):
|
||||
|
||||
- If following the HTTPS security approach:
|
||||
- `--kubeconfig=/var/lib/kubelet/kubeconfig`
|
||||
- Otherwise, if taking the firewall-based security approach
|
||||
- `--config=/etc/kubernetes/manifests`
|
||||
- `--cluster-dns=` to the address of the DNS server you will setup (see [Starting Cluster Services](#starting-cluster-services).)
|
||||
- `--cluster-domain=` to the dns domain prefix to use for cluster DNS addresses.
|
||||
- `--docker-root=`
|
||||
- `--root-dir=`
|
||||
- `--pod-cidr=` The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master.
|
||||
- `--register-node` (described in [Node](/docs/admin/node/) documentation.)
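A combined sketch for the HTTPS approach (the DNS address and cluster domain are placeholder values you choose yourself):

```shell
kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --cluster-dns=10.0.0.10 \
  --cluster-domain=cluster.local \
  --register-node=true
```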
|
||||
|
||||
### kube-proxy
|
||||
|
||||
All nodes should run kube-proxy. (Running kube-proxy on a "master" node is not
|
||||
strictly required, but being consistent is easier.) Obtain a binary as described for
|
||||
kubelet.
|
||||
|
||||
Arguments to consider:
|
||||
|
||||
- If following the HTTPS security approach:
|
||||
- `--master=https://$MASTER_IP`
|
||||
- `--kubeconfig=/var/lib/kube-proxy/kubeconfig`
|
||||
- Otherwise, if taking the firewall-based security approach
|
||||
- `--master=http://$MASTER_IP`
|
||||
|
||||
Note that on some Linux platforms, you may need to manually install the
|
||||
`conntrack` package which is a dependency of kube-proxy, or else kube-proxy
|
||||
cannot be started successfully.
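For example, on a Debian- or Ubuntu-based node this would typically be (the package name may differ on other distributions):

```shell
sudo apt-get install -y conntrack
```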
|
||||
|
||||
For more details about debugging kube-proxy problems, refer to
|
||||
[Debug Services](/docs/tasks/debug-application-cluster/debug-service/)
|
||||
|
||||
### ネットワーク
|
||||
|
||||
Each node needs to be allocated its own CIDR range for pod networking.
|
||||
Call this `NODE_X_POD_CIDR`.
|
||||
|
||||
A bridge called `cbr0` needs to be created on each node. The bridge is explained
|
||||
further in the [networking documentation](/docs/concepts/cluster-administration/networking/). The bridge itself
|
||||
needs an address from `$NODE_X_POD_CIDR` - by convention the first IP. Call
|
||||
this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`,
|
||||
then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix
|
||||
because of how this is used later.
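A minimal sketch of creating the bridge with iproute2, assuming `NODE_X_BRIDGE_ADDR` is already set as described above:

```shell
ip link add name cbr0 type bridge
ip addr add ${NODE_X_BRIDGE_ADDR} dev cbr0   # e.g. 10.0.0.1/16, keeping the CIDR suffix
ip link set dev cbr0 up
```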
|
||||
|
||||
If you have turned off Docker's IP masquerading to allow pods to talk to each
|
||||
other, then you may need to do masquerading just for destination IPs outside
|
||||
the cluster network. For example:
|
||||
|
||||
```shell
|
||||
iptables -t nat -A POSTROUTING ! -d ${CLUSTER_SUBNET} -m addrtype ! --dst-type LOCAL -j MASQUERADE
|
||||
```
|
||||
|
||||
This will rewrite the source address from
|
||||
the PodIP to the Node IP for traffic bound outside the cluster, and kernel
|
||||
[connection tracking](http://www.iptables.info/en/connection-state.html)
|
||||
will ensure that responses destined to the node still reach
|
||||
the pod.
|
||||
|
||||
NOTE: This is environment specific. Some environments will not need
|
||||
any masquerading at all. Others, such as GCE, will not allow pod IPs to send
|
||||
traffic to the internet, but have no problem with them inside your GCE Project.
|
||||
|
||||
### その他
|
||||
|
||||
- Enable auto-upgrades for your OS package manager, if desired.
|
||||
- Configure log rotation for all node components (for example using [logrotate](http://linux.die.net/man/8/logrotate)).
|
||||
- Set up liveness monitoring (for example using [supervisord](http://supervisord.org/)).
|
||||
- Setup volume plugin support (optional)
|
||||
- Install any client binaries for optional volume types, such as `glusterfs-client` for GlusterFS
|
||||
volumes.
|
||||
|
||||
### 設定管理ツールの使用
|
||||
|
||||
The previous steps all involved "conventional" system administration techniques for setting up
|
||||
machines. You may want to use a Configuration Management system to automate the node configuration
|
||||
process. There are examples of Ansible, Juju, and CoreOS Cloud Config in the
|
||||
various Getting Started Guides.
|
||||
|
||||
## クラスターのブートストラッピング
|
||||
|
||||
While the basic node services (kubelet, kube-proxy, docker) are typically started and managed using
|
||||
traditional system administration/automation approaches, the remaining *master* components of Kubernetes are
|
||||
all configured and managed *by Kubernetes*:
|
||||
|
||||
- Their options are specified in a Pod spec (yaml or json) rather than an /etc/init.d file or
|
||||
systemd unit.
|
||||
- They are kept running by Kubernetes rather than by init.
|
||||
|
||||
### etcd
|
||||
|
||||
You will need to run one or more instances of etcd.
|
||||
|
||||
- Highly available and easy to restore - Run 3 or 5 etcd instances, with their logs written to a directory backed
|
||||
by durable storage (RAID, GCE PD)
|
||||
- Not highly available, but easy to restore - Run one etcd instance, with its log written to a directory backed
|
||||
by durable storage (RAID, GCE PD).
|
||||
|
||||
  {{< note >}}May result in an operational outage if the single etcd instance fails.{{< /note >}}
|
||||
- Highly available - Run 3 or 5 etcd instances with non durable storage.
|
||||
|
||||
  {{< note >}}Logs can be written to non-durable storage because the data is replicated across instances.{{< /note >}}
|
||||
|
||||
See [cluster-troubleshooting](/docs/admin/cluster-troubleshooting/) for more discussion on factors affecting cluster
|
||||
availability.
|
||||
|
||||
To run an etcd instance:
|
||||
|
||||
1. Copy [`cluster/gce/manifests/etcd.manifest`](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/manifests/etcd.manifest)
|
||||
1. Make any modifications needed
|
||||
1. Start the pod by putting it into the kubelet manifest directory
|
||||
|
||||
### Apiserver、Controller Manager、およびScheduler
|
||||
|
||||
The apiserver, controller manager, and scheduler will each run as a pod on the master node.
|
||||
|
||||
For each of these components, the steps to start them running are similar:
|
||||
|
||||
1. Start with a provided template for a pod.
|
||||
1. Set the `HYPERKUBE_IMAGE` to the values chosen in [Selecting Images](#selecting-images).
|
||||
1. Determine which flags are needed for your cluster, using the advice below each template.
|
||||
1. Set the flags to be individual strings in the command array (for example $ARGN below)
|
||||
1. Start the pod by putting the completed template into the kubelet manifest directory.
|
||||
1. Verify that the pod is started.
|
||||
|
||||
#### Apiserver podテンプレート
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "kube-apiserver"
|
||||
},
|
||||
"spec": {
|
||||
"hostNetwork": true,
|
||||
"containers": [
|
||||
{
|
||||
"name": "kube-apiserver",
|
||||
"image": "${HYPERKUBE_IMAGE}",
|
||||
"command": [
|
||||
"/hyperkube",
|
||||
"apiserver",
|
||||
"$ARG1",
|
||||
"$ARG2",
|
||||
...
|
||||
"$ARGN"
|
||||
],
|
||||
"ports": [
|
||||
{
|
||||
"name": "https",
|
||||
"hostPort": 443,
|
||||
"containerPort": 443
|
||||
},
|
||||
{
|
||||
"name": "local",
|
||||
"hostPort": 8080,
|
||||
"containerPort": 8080
|
||||
}
|
||||
],
|
||||
"volumeMounts": [
|
||||
{
|
||||
"name": "srvkube",
|
||||
"mountPath": "/srv/kubernetes",
|
||||
"readOnly": true
|
||||
},
|
||||
{
|
||||
"name": "etcssl",
|
||||
"mountPath": "/etc/ssl",
|
||||
"readOnly": true
|
||||
}
|
||||
],
|
||||
"livenessProbe": {
|
||||
"httpGet": {
|
||||
"scheme": "HTTP",
|
||||
"host": "127.0.0.1",
|
||||
"port": 8080,
|
||||
"path": "/healthz"
|
||||
},
|
||||
"initialDelaySeconds": 15,
|
||||
"timeoutSeconds": 15
|
||||
}
|
||||
}
|
||||
],
|
||||
"volumes": [
|
||||
{
|
||||
"name": "srvkube",
|
||||
"hostPath": {
|
||||
"path": "/srv/kubernetes"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "etcssl",
|
||||
"hostPath": {
|
||||
"path": "/etc/ssl"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Here are some apiserver flags you may need to set:
|
||||
|
||||
- `--cloud-provider=` see [cloud providers](#cloud-providers)
|
||||
- `--cloud-config=` see [cloud providers](#cloud-providers)
|
||||
- `--address=${MASTER_IP}` *or* `--bind-address=127.0.0.1` and `--address=127.0.0.1` if you want to run a proxy on the master node.
|
||||
- `--service-cluster-ip-range=$SERVICE_CLUSTER_IP_RANGE`
|
||||
- `--etcd-servers=http://127.0.0.1:4001`
|
||||
- `--tls-cert-file=/srv/kubernetes/server.cert`
|
||||
- `--tls-private-key-file=/srv/kubernetes/server.key`
|
||||
- `--enable-admission-plugins=$RECOMMENDED_LIST`
|
||||
- See [admission controllers](/docs/reference/access-authn-authz/admission-controllers/) for recommended arguments.
|
||||
- `--allow-privileged=true`, only if you trust your cluster user to run pods as root.
|
||||
|
||||
If you are following the firewall-only security approach, then use these arguments:
|
||||
|
||||
- `--token-auth-file=/dev/null`
|
||||
- `--insecure-bind-address=$MASTER_IP`
|
||||
- `--advertise-address=$MASTER_IP`
|
||||
|
||||
If you are using the HTTPS approach, then set:
|
||||
|
||||
- `--client-ca-file=/srv/kubernetes/ca.crt`
|
||||
- `--token-auth-file=/srv/kubernetes/known_tokens.csv`
|
||||
- `--basic-auth-file=/srv/kubernetes/basic_auth.csv`
|
||||
|
||||
This pod mounts several node filesystem directories using `hostPath` volumes. Their purposes are:
|
||||
|
||||
- The `/etc/ssl` mount allows the apiserver to find the SSL root certs so it can
|
||||
authenticate external services, such as a cloud provider.
|
||||
- This is not required if you do not use a cloud provider (bare-metal for example).
|
||||
- The `/srv/kubernetes` mount allows the apiserver to read certs and credentials stored on the
|
||||
node disk. These could instead be stored on a persistent disk, such as a GCE PD, or baked into the image.
|
||||
- Optionally, you may want to mount `/var/log` as well and redirect output there (not shown in template).
|
||||
- Do this if you prefer your logs to be accessible from the root filesystem with tools like journalctl.
|
||||
|
||||
*TODO* document proxy-ssh setup.
|
||||
|
||||
##### クラウドプロバイダー
|
||||
|
||||
The apiserver supports several cloud providers.
|
||||
|
||||
- Options for the `--cloud-provider` flag are `aws`, `azure`, `cloudstack`, `fake`, `gce`, `mesos`, `openstack`, `ovirt`, `rackspace`, `vsphere`, or unset.
|
||||
- Leave it unset for bare metal setups.
|
||||
- Support for a new IaaS is added by contributing code [here](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/cloudprovider/providers)
|
||||
|
||||
Some cloud providers require a config file. If so, you need to put the config file into the apiserver image or mount it through hostPath.
|
||||
|
||||
- `--cloud-config=` should be set if the cloud provider requires a config file.
|
||||
- Used by `aws`, `gce`, `mesos`, `openstack`, `ovirt` and `rackspace`.
|
||||
- You must put the config file into the apiserver image or mount it through hostPath.
|
||||
- Cloud config file syntax is [Gcfg](https://code.google.com/p/gcfg/).
|
||||
- AWS format defined by type [AWSCloudConfig](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/cloudprovider/providers/aws/aws.go)
|
||||
- There is a similar type in the corresponding file for other cloud providers.
|
||||
|
||||
#### Scheduler podテンプレート
|
||||
|
||||
Complete this template for the scheduler pod:
|
||||
|
||||
```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-scheduler"
  },
  "spec": {
    "hostNetwork": true,
    "containers": [
      {
        "name": "kube-scheduler",
        "image": "$HYPERKUBE_IMAGE",
        "command": [
          "/hyperkube",
          "scheduler",
          "--master=127.0.0.1:8080",
          "$SCHEDULER_FLAG1",
          ...
          "$SCHEDULER_FLAGN"
        ],
        "livenessProbe": {
          "httpGet": {
            "scheme": "HTTP",
            "host": "127.0.0.1",
            "port": 10251,
            "path": "/healthz"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15
        }
      }
    ]
  }
}
```
|
||||
|
||||
Typically, no additional flags are required for the scheduler.
|
||||
|
||||
Optionally, you may want to mount `/var/log` as well and redirect output there.
|
||||
|
||||
#### Controller Manager podテンプレート
|
||||
|
||||
Template for controller manager pod:
|
||||
|
||||
```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-controller-manager"
  },
  "spec": {
    "hostNetwork": true,
    "containers": [
      {
        "name": "kube-controller-manager",
        "image": "$HYPERKUBE_IMAGE",
        "command": [
          "/hyperkube",
          "controller-manager",
          "$CNTRLMNGR_FLAG1",
          ...
          "$CNTRLMNGR_FLAGN"
        ],
        "volumeMounts": [
          {
            "name": "srvkube",
            "mountPath": "/srv/kubernetes",
            "readOnly": true
          },
          {
            "name": "etcssl",
            "mountPath": "/etc/ssl",
            "readOnly": true
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "scheme": "HTTP",
            "host": "127.0.0.1",
            "port": 10252,
            "path": "/healthz"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15
        }
      }
    ],
    "volumes": [
      {
        "name": "srvkube",
        "hostPath": {
          "path": "/srv/kubernetes"
        }
      },
      {
        "name": "etcssl",
        "hostPath": {
          "path": "/etc/ssl"
        }
      }
    ]
  }
}
```
|
||||
|
||||
Flags to consider using with controller manager:
|
||||
|
||||
- `--cluster-cidr=`, the CIDR range for pods in the cluster.
|
||||
- `--allocate-node-cidrs=`, if you are using `--cloud-provider=`, allocate and set the CIDRs for pods on the cloud provider.
|
||||
- `--cloud-provider=` and `--cloud-config` as described in apiserver section.
|
||||
- `--service-account-private-key-file=/srv/kubernetes/server.key`, used by the [service account](/docs/user-guide/service-accounts) feature.
|
||||
- `--master=127.0.0.1:8080`
|
||||
|
||||
#### Apiserver、Scheduler、およびController Managerの起動と確認
|
||||
|
||||
Place each completed pod template into the kubelet config dir
|
||||
(whatever the kubelet's `--config=` argument is set to, typically
|
||||
`/etc/kubernetes/manifests`). The order does not matter: scheduler and
|
||||
controller manager will retry reaching the apiserver until it is up.
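
For example, assuming the default manifest directory and that you named the completed templates as below (the filenames are placeholders):

```shell
# Install all three completed control-plane pod templates at once.
sudo cp kube-apiserver.manifest kube-scheduler.manifest \
        kube-controller-manager.manifest /etc/kubernetes/manifests/
```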
|
||||
|
||||
Use `ps` or `docker ps` to verify that each process has started. For example, verify that kubelet has started a container for the apiserver like this:
|
||||
|
||||
```shell
|
||||
$ sudo docker ps | grep apiserver
|
||||
5783290746d5 k8s.gcr.io/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
|
||||
```
|
||||
|
||||
Then try to connect to the apiserver:
|
||||
|
||||
```shell
|
||||
$ echo $(curl -s http://localhost:8080/healthz)
|
||||
ok
|
||||
$ curl -s http://localhost:8080/api
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
If you have selected the `--register-node=true` option for kubelets, they will now begin self-registering with the apiserver.
|
||||
You should soon be able to see all your nodes by running the `kubectl get nodes` command.
|
||||
Otherwise, you will need to manually create node objects.
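
A minimal sketch of creating a node object by hand (`node1.example.com` is a placeholder for your node's name):

```shell
# Register the node with the apiserver manually.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Node
metadata:
  name: node1.example.com
  labels:
    kubernetes.io/hostname: node1.example.com
EOF
```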
|
||||
|
||||
### クラスターサービスの開始
|
||||
|
||||
You will want to complete your Kubernetes cluster by adding cluster-wide
|
||||
services. These are sometimes called *addons*, and [an overview
|
||||
of their purpose is in the admin guide](/docs/admin/cluster-components/#addons).
|
||||
|
||||
Notes for setting up each cluster service are given below:
|
||||
|
||||
* Cluster DNS:
|
||||
* Required for many Kubernetes examples
|
||||
* [Setup instructions](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/)
|
||||
* [Admin Guide](/docs/concepts/services-networking/dns-pod-service/)
|
||||
* Cluster-level Logging
|
||||
* [Cluster-level Logging Overview](/docs/user-guide/logging/overview/)
|
||||
* [Cluster-level Logging with Elasticsearch](/docs/user-guide/logging/elasticsearch/)
|
||||
* [Cluster-level Logging with Stackdriver Logging](/docs/user-guide/logging/stackdriver/)
|
||||
* Container Resource Monitoring
|
||||
* [Setup instructions](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/)
|
||||
* GUI
|
||||
* [Setup instructions](https://github.com/kubernetes/dashboard)
|
||||
|
||||
## トラブルシューティング
|
||||
|
||||
### validate-clusterを実行
|
||||
|
||||
`cluster/validate-cluster.sh` is used by `cluster/kube-up.sh` to determine if
|
||||
the cluster start succeeded.
|
||||
|
||||
Example usage and output:
|
||||
|
||||
```shell
|
||||
KUBECTL_PATH=$(which kubectl) NUM_NODES=3 KUBERNETES_PROVIDER=local cluster/validate-cluster.sh
|
||||
Found 3 node(s).
|
||||
NAME STATUS AGE VERSION
|
||||
node1.local Ready 1h v1.6.9+a3d1dfa6f4335
|
||||
node2.local Ready 1h v1.6.9+a3d1dfa6f4335
|
||||
node3.local Ready 1h v1.6.9+a3d1dfa6f4335
|
||||
Validate output:
|
||||
NAME STATUS MESSAGE ERROR
|
||||
controller-manager Healthy ok
|
||||
scheduler Healthy ok
|
||||
etcd-1 Healthy {"health": "true"}
|
||||
etcd-2 Healthy {"health": "true"}
|
||||
etcd-0 Healthy {"health": "true"}
|
||||
Cluster validation succeeded
|
||||
```
|
||||
|
||||
### podsとservicesの検査
|
||||
|
||||
Try to run through the "Inspect your cluster" section in one of the other Getting Started Guides, such as [GCE](/docs/setup/turnkey/gce/#inspect-your-cluster).
|
||||
You should see some services. You should also see "mirror pods" for the apiserver, scheduler and controller-manager, plus any add-ons you started.
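
For example, you can list the mirror pods like this (the namespace depends on your pod templates; with no namespace set they appear in `default`):

```shell
# Mirror pod names include the name of the node that runs the static pod.
kubectl get pods --all-namespaces -o wide | grep -E 'kube-(apiserver|scheduler|controller-manager)'
```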
|
||||
|
||||
### 例を試す
|
||||
|
||||
At this point you should be able to run through one of the basic examples, such as the [nginx example](/examples/application/deployment.yaml).
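
For instance, assuming your nodes can pull public images, the manifest linked above can be applied directly:

```shell
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
kubectl get pods -l app=nginx
```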
|
||||
|
||||
### 適合テストの実行
|
||||
|
||||
You may want to try to run the [Conformance test](http://releases.k8s.io/{{< param "githubbranch" >}}/test/e2e_node/conformance/run_test.sh). Any failures may give a hint as to areas that need more attention.
|
||||
|
||||
### ネットワーク
|
||||
|
||||
The nodes must be able to connect to each other using their private IP. Verify this by
|
||||
pinging or SSH-ing from one node to another.
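
For example, from one node (`10.240.0.3` below is just a placeholder for another node's private IP):

```shell
ping -c 3 10.240.0.3
ssh 10.240.0.3 'echo connectivity ok'
```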
|
||||
|
||||
### 困った時は
|
||||
|
||||
If you run into trouble, see the section on [troubleshooting](/docs/setup/turnkey/gce/#troubleshooting), post to the
|
||||
[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on [Slack](/docs/troubleshooting#slack).
|
||||
|
||||
## サポートレベル
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
|
||||
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
|
||||
any | any | any | any | [docs](/docs/getting-started-guides/scratch/) | | Community ([@erictune](https://github.com/erictune))
|
||||
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
title: すぐに利用できるクラウドソリューション
|
||||
weight: 40
|
||||
---
|
|
@ -0,0 +1,17 @@
|
|||
---
|
||||
title: Alibaba CloudでKubernetesを動かす
|
||||
---
|
||||
|
||||
## Alibaba Cloud Container Service
|
||||
|
||||
[Alibaba Cloud Container Service](https://www.aliyun.com/product/containerservice)はAlibaba Cloud ECSインスタンスのクラスター上でDockerアプリケーションを起動して管理します。著名なオープンソースのコンテナオーケストレーターであるDocker SwarmおよびKubernetesをサポートしています。
|
||||
|
||||
クラスターの構築と管理を簡素化する為に、[Alibaba Cloud Container Serviceの為のKubernetesサポート](https://www.aliyun.com/solution/kubernetes/)を使用します。[Kubernetes walk-through](https://help.aliyun.com/document_detail/53751.html)に従ってすぐに始めることができ、中国語の[Alibaba CloudにおけるKubernetesサポートの為のチュートリアル](https://yq.aliyun.com/teams/11/type_blog-cid_200-page_1)もあります。
|
||||
|
||||
カスタムバイナリもしくはオープンソースKubernetesを使用する場合は、以下の手順に従って下さい。
|
||||
|
||||
## 構築のカスタム
|
||||
|
||||
[Alibaba Cloudプロバイダーが実装されたKubernetesのソースコード](https://github.com/AliyunContainerService/kubernetes)はオープンソースであり、GitHubから入手可能です。
|
||||
|
||||
さらなる情報は英語の[Kubernetesのクイックデプロイメント - Alibaba CloudのVPC環境](https://www.alibabacloud.com/forum/read-830)および[中国語](https://yq.aliyun.com/articles/66474)をご覧下さい。
|
|
@ -0,0 +1,89 @@
|
|||
---
|
||||
title: AWS EC2上でKubernetesを動かす
|
||||
content_template: templates/task
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
This page describes how to install a Kubernetes cluster on AWS.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS.
|
||||
|
||||
### サポートされているプロダクショングレードのツール
|
||||
|
||||
* [conjure-up](/docs/getting-started-guides/ubuntu/) is an open-source installer for Kubernetes that creates Kubernetes clusters with native AWS integrations on Ubuntu.
|
||||
|
||||
* [Kubernetes Operations](https://github.com/kubernetes/kops) - Production Grade K8s Installation, Upgrades, and Management. Supports running Debian, Ubuntu, CentOS, and RHEL in AWS.
|
||||
|
||||
* [CoreOS Tectonic](https://coreos.com/tectonic/) includes the open-source [Tectonic Installer](https://github.com/coreos/tectonic-installer) that creates Kubernetes clusters with Container Linux nodes on AWS.
|
||||
|
||||
* CoreOS originated and the Kubernetes Incubator maintains [a CLI tool, kube-aws](https://github.com/kubernetes-incubator/kube-aws), that creates and manages Kubernetes clusters with [Container Linux](https://coreos.com/why/) nodes, using AWS tools: EC2, CloudFormation and Autoscaling.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture steps %}}
|
||||
|
||||
## クラスターの利用を開始する
|
||||
|
||||
### コマンドライン管理ツール: kubectl
|
||||
|
||||
The cluster startup script will leave you with a `kubernetes` directory on your workstation.
|
||||
Alternatively, you can download the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases).
|
||||
|
||||
Next, add the appropriate binary folder to your `PATH` to access kubectl:
|
||||
|
||||
```shell
|
||||
# macOS
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
|
||||
|
||||
# Linux
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
|
||||
```
|
||||
|
||||
An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/user-guide/kubectl/)
|
||||
|
||||
By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
|
||||
For more information, please read [kubeconfig files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
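
For example, a quick sanity check that `kubectl` is picking up the generated kubeconfig and can reach the cluster:

```shell
kubectl config view --minify
kubectl cluster-info
kubectl get nodes
```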
|
||||
|
||||
### 例
|
||||
|
||||
See [a simple nginx example](/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.
|
||||
|
||||
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)
|
||||
|
||||
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/)
|
||||
|
||||
## クラスターのスケーリング
|
||||
|
||||
Adding and removing nodes through `kubectl` is not supported. You can still scale the amount of nodes manually through adjustments of the 'Desired' and 'Max' properties within the [Auto Scaling Group](http://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html), which was created during the installation.
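
As an illustration, the same adjustment can be made with the AWS CLI; the Auto Scaling group name below is a placeholder (check the AWS console or `aws autoscaling describe-auto-scaling-groups` for the real name created by the installer):

```shell
# Scale the worker Auto Scaling group to 5 nodes.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name kubernetes-minion-group \
  --desired-capacity 5 \
  --max-size 5
```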
|
||||
|
||||
## クラスターの解体
|
||||
|
||||
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
|
||||
`kubernetes` directory:
|
||||
|
||||
```shell
|
||||
cluster/kube-down.sh
|
||||
```
|
||||
|
||||
## サポートレベル
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
|
||||
-------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ----------------------------
|
||||
AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
|
||||
AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community
|
||||
AWS | Juju | Ubuntu | flannel, calico, canal | [docs](/docs/getting-started-guides/ubuntu) | 100% | Commercial, Community
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
|
||||
|
||||
## 参考文献
|
||||
|
||||
Please see the [Kubernetes docs](/docs/) for more details on administering
|
||||
and using a Kubernetes cluster.
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,33 @@
|
|||
---
|
||||
title: Azure 上で Kubernetes を動かす
|
||||
---
|
||||
|
||||
## Azure Kubernetes Service (AKS)
|
||||
|
||||
[Azure Kubernetes Service](https://azure.microsoft.com/ja-jp/services/kubernetes-service/)は、Kubernetesクラスターのためのシンプルなデプロイ機能を提供します。
|
||||
|
||||
Azure Kubernetes Serviceを利用してAzure上にKubernetesクラスターをデプロイする例:
|
||||
|
||||
**[Microsoft Azure Kubernetes Service](https://docs.microsoft.com/ja-jp/azure/aks/intro-kubernetes)**
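
参考までに、Azure CLI(`az`)を使用した最小構成の作成例を示します(リソースグループ名・クラスター名・リージョンは説明用の仮の値です):

```shell
az group create --name myResourceGroup --location japaneast
az aks create --resource-group myResourceGroup --name myAKSCluster \
  --node-count 1 --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```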
|
||||
|
||||
## デプロイのカスタマイズ: ACS-Engine
|
||||
|
||||
Azure Kubernetes Serviceのコア部分は**オープンソース**であり、コミュニティのためにGitHub上で公開され、利用およびコントリビュートすることができます: **[ACS-Engine](https://github.com/Azure/acs-engine)**.
|
||||
|
||||
ACS-Engineは、Azure Kubernetes Serviceが公式にサポートしている機能を超えてデプロイをカスタマイズしたい場合に適した選択肢です。
|
||||
既存の仮想ネットワークへのデプロイや、複数のagent poolを利用するなどのカスタマイズをすることができます。
|
||||
コミュニティによるACS-Engineへのコントリビュートが、Azure Kubernetes Serviceに組み込まれる場合もあります。
|
||||
|
||||
ACS-Engineへの入力は、Azure Kubernetes Serviceを使用してクラスターを直接デプロイすることに利用されるARMテンプレートの構文に似ています。
|
||||
処理結果はAzure Resource Managerテンプレートとして出力され、ソース管理に組み込んだり、AzureにKubernetesクラスターをデプロイするために使うことができます。
|
||||
|
||||
**[ACS-Engine Kubernetes Walkthrough](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md)** を参照して、すぐに始めることができます。
|
||||
|
||||
## Azure上でCoreOS Tectonicを動かす
|
||||
|
||||
Azureで利用できるCoreOS Tectonic Installerは**オープンソース**であり、コミュニティのためにGitHub上で公開され、利用およびコントリビュートすることができます: **[Tectonic Installer](https://github.com/coreos/tectonic-installer)**.
|
||||
|
||||
Tectonic Installerは、 [Hashicorp が提供する Terraform](https://www.terraform.io/docs/providers/azurerm/)のAzure Resource Manager(ARM)プロバイダーを用いてクラスターをカスタマイズしたい場合に適した選択肢です。
|
||||
これを利用することにより、Terraformと親和性の高いツールを使用してカスタマイズしたり連携したりすることができます。
|
||||
|
||||
[Tectonic Installer for Azure Guide](https://coreos.com/tectonic/docs/latest/install/azure/azure-terraform.html)を参照して、すぐに始めることができます。
|
|
@ -0,0 +1,342 @@
|
|||
---
|
||||
title: CenturyLink Cloud上でKubernetesを動かす
|
||||
---
|
||||
|
||||
{: toc}
|
||||
|
||||
These scripts handle the creation, deletion and expansion of Kubernetes clusters on CenturyLink Cloud.
|
||||
|
||||
You can accomplish all these tasks with a single command. We have made the Ansible playbooks used to perform these tasks available [here](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md).
|
||||
|
||||
## ヘルプの検索
|
||||
|
||||
If you run into any problems or want help with anything, we are here to help. Reach out to us in any of the following ways:
|
||||
|
||||
- Submit a GitHub issue
|
||||
- Send an email to Kubernetes AT ctl DOT io
|
||||
- Visit [http://info.ctl.io/kubernetes](http://info.ctl.io/kubernetes)
|
||||
|
||||
## 仮想マシンもしくは物理サーバーのクラスター、その選択
|
||||
|
||||
- We support Kubernetes clusters on both Virtual Machines and Physical Servers. If you want to use physical servers for the worker nodes (minions), simply use the `--minion_type=bareMetal` flag.
|
||||
- For more information on physical servers, visit: [https://www.ctl.io/bare-metal/](https://www.ctl.io/bare-metal/)
|
||||
- Physical servers are only available in the VA1 and GB3 data centers.
|
||||
- VMs are available in all 13 of our public cloud locations.
|
||||
|
||||
## 必要条件
|
||||
|
||||
The requirements to run this script are:
|
||||
|
||||
- A linux administrative host (tested on ubuntu and macOS)
|
||||
- python 2 (tested on 2.7.11)
|
||||
- pip (installed with python as of 2.7.9)
|
||||
- git
|
||||
- A CenturyLink Cloud account with rights to create new hosts
|
||||
- An active VPN connection to the CenturyLink Cloud from your linux host
|
||||
|
||||
## スクリプトのインストール
|
||||
|
||||
After you have all the requirements met, please follow these instructions to install this script.
|
||||
|
||||
1) Clone this repository and cd into it.
|
||||
|
||||
```shell
|
||||
git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc
|
||||
```
|
||||
|
||||
2) Install all requirements, including
|
||||
|
||||
* Ansible
|
||||
* CenturyLink Cloud SDK
|
||||
* Ansible Modules
|
||||
|
||||
```shell
|
||||
sudo pip install -r ansible/requirements.txt
|
||||
```
|
||||
|
||||
3) Create the credentials file from the template and use it to set your ENV variables
|
||||
|
||||
```shell
|
||||
cp ansible/credentials.sh.template ansible/credentials.sh
|
||||
vi ansible/credentials.sh
|
||||
source ansible/credentials.sh
|
||||
|
||||
```
|
||||
|
||||
4) Grant your machine access to the CenturyLink Cloud network by using a VM inside the network or [ configuring a VPN connection to the CenturyLink Cloud network.](https://www.ctl.io/knowledge-base/network/how-to-configure-client-vpn/)
|
||||
|
||||
|
||||
#### スクリプトのインストールの例: Ubuntu 14の手順
|
||||
|
||||
If you use Ubuntu 14, for your convenience we have provided a step-by-step
|
||||
guide to install the requirements and install the script.
|
||||
|
||||
```shell
|
||||
# system
|
||||
apt-get update
|
||||
apt-get install -y git python python-crypto
|
||||
curl -O https://bootstrap.pypa.io/get-pip.py
|
||||
python get-pip.py
|
||||
|
||||
# installing this repository
|
||||
mkdir -p ~/k8s-on-clc
|
||||
cd ~/k8s-on-clc
|
||||
git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc.git
|
||||
cd adm-kubernetes-on-clc/
|
||||
pip install -r requirements.txt
|
||||
|
||||
# getting started
|
||||
cd ansible
|
||||
cp credentials.sh.template credentials.sh; vi credentials.sh
|
||||
source credentials.sh
|
||||
```
|
||||
|
||||
|
||||
|
||||
## クラスターの作成
|
||||
|
||||
To create a new Kubernetes cluster, simply run the ```kube-up.sh``` script. A complete
|
||||
list of script options and some examples are listed below.
|
||||
|
||||
```shell
|
||||
CLC_CLUSTER_NAME=[name of kubernetes cluster]
|
||||
cd ./adm-kubernetes-on-clc
|
||||
bash kube-up.sh -c="$CLC_CLUSTER_NAME"
|
||||
```
|
||||
|
||||
It takes about 15 minutes to create the cluster. Once the script completes, it
|
||||
will output some commands that will help you set up kubectl on your machine to
|
||||
point to the new cluster.
|
||||
|
||||
When the cluster creation is complete, the configuration files for it are stored
|
||||
locally on your administrative host, in the following directory:
|
||||
|
||||
```shell
|
||||
> CLC_CLUSTER_HOME=$HOME/.clc_kube/$CLC_CLUSTER_NAME/
|
||||
```
|
||||
|
||||
|
||||
#### クラスターの作成: スクリプトのオプション
|
||||
|
||||
```shell
|
||||
Usage: kube-up.sh [OPTIONS]
|
||||
Create servers in the CenturyLinkCloud environment and initialize a Kubernetes cluster
|
||||
Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in
|
||||
order to access the CenturyLinkCloud API
|
||||
|
||||
All options (both short and long form) require arguments, and must include "="
|
||||
between option name and option value.
|
||||
|
||||
-h (--help) display this help and exit
|
||||
-c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names
|
||||
-t= (--minion_type=) standard -> VM (default), bareMetal -> physical
|
||||
-d= (--datacenter=) VA1 (default)
|
||||
-m= (--minion_count=) number of kubernetes minion nodes
|
||||
-mem= (--vm_memory=) number of GB ram for each minion
|
||||
-cpu= (--vm_cpu=) number of virtual cps for each minion node
|
||||
-phyid= (--server_conf_id=) physical server configuration id, one of
|
||||
physical_server_20_core_conf_id
|
||||
physical_server_12_core_conf_id
|
||||
physical_server_4_core_conf_id (default)
|
||||
-etcd_separate_cluster=yes create a separate cluster of three etcd nodes,
|
||||
otherwise run etcd on the master node
|
||||
```
|
||||
|
||||
## クラスターの拡張
|
||||
|
||||
To expand an existing Kubernetes cluster, run the ```add-kube-node.sh```
|
||||
script. A complete list of script options and some examples are listed [below](#cluster-expansion-script-options).
|
||||
This script must be run from the same host that created the cluster (or a host
|
||||
that has the cluster artifact files stored in ```~/.clc_kube/$cluster_name```).
|
||||
|
||||
```shell
|
||||
cd ./adm-kubernetes-on-clc
|
||||
bash add-kube-node.sh -c="name_of_kubernetes_cluster" -m=2
|
||||
```
|
||||
|
||||
#### クラスターの拡張: スクリプトのオプション
|
||||
|
||||
```shell
|
||||
Usage: add-kube-node.sh [OPTIONS]
|
||||
Create servers in the CenturyLinkCloud environment and add to an
|
||||
existing CLC kubernetes cluster
|
||||
|
||||
Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in
|
||||
order to access the CenturyLinkCloud API
|
||||
|
||||
-h (--help) display this help and exit
|
||||
-c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names
|
||||
-m= (--minion_count=) number of kubernetes minion nodes to add
|
||||
```
|
||||
|
||||
## クラスターの削除
|
||||
|
||||
There are two ways to delete an existing cluster:
|
||||
|
||||
1) Use our python script:
|
||||
|
||||
```shell
|
||||
python delete_cluster.py --cluster=clc_cluster_name --datacenter=DC1
|
||||
```
|
||||
|
||||
2) Use the CenturyLink Cloud UI. To delete a cluster, log into the CenturyLink
|
||||
Cloud control portal and delete the parent server group that contains the
|
||||
Kubernetes Cluster. We hope to add a scripted option to do this soon.
|
||||
|
||||
## 例
|
||||
|
||||
Create a cluster named k8s_1, with 1 master node and 3 worker minions (on physical machines), in VA1:
|
||||
|
||||
```shell
|
||||
bash kube-up.sh --clc_cluster_name=k8s_1 --minion_type=bareMetal --minion_count=3 --datacenter=VA1
|
||||
```
|
||||
|
||||
Create a cluster named k8s_2, with an HA etcd cluster on 3 VMs and 6 worker minions (on VMs), in VA1:
|
||||
|
||||
```shell
|
||||
bash kube-up.sh --clc_cluster_name=k8s_2 --minion_type=standard --minion_count=6 --datacenter=VA1 --etcd_separate_cluster=yes
|
||||
```
|
||||
|
||||
Create a cluster named k8s_3, with 1 master node and 10 worker minions (on VMs) with higher mem/cpu, in VA1:
|
||||
|
||||
```shell
|
||||
bash kube-up.sh --clc_cluster_name=k8s_3 --minion_type=standard --minion_count=10 --datacenter=VA1 -mem=6 -cpu=4
|
||||
```
|
||||
|
||||
|
||||
|
||||
## クラスターの機能とアーキテクチャ
|
||||
|
||||
We configure the Kubernetes cluster with the following features:
|
||||
|
||||
* KubeDNS: DNS resolution and service discovery
|
||||
* Heapster/InfluxDB: For metric collection. Needed for Grafana and auto-scaling.
|
||||
* Grafana: Kubernetes/Docker metric dashboard
|
||||
* KubeUI: Simple web interface to view Kubernetes state
|
||||
* Kube Dashboard: New web interface to interact with your cluster
|
||||
|
||||
We use the following to create the Kubernetes cluster:
|
||||
|
||||
* Kubernetes 1.1.7
|
||||
* Ubuntu 14.04
|
||||
* Flannel 0.5.4
|
||||
* Docker 1.9.1-0~trusty
|
||||
* Etcd 2.2.2
|
||||
|
||||
## 任意のアドオン
|
||||
|
||||
* Logging: We offer an integrated centralized logging ELK platform so that all
|
||||
Kubernetes and docker logs get sent to the ELK stack. To install the ELK stack
|
||||
and configure Kubernetes to send logs to it, follow [the log
|
||||
aggregation documentation](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/log_aggregration.md). Note: We don't install this by default as
|
||||
the footprint isn't trivial.
|
||||
|
||||
## クラスターの管理
|
||||
|
||||
The most widely used tool for managing a Kubernetes cluster is the command-line
|
||||
utility ```kubectl```. If you do not already have a copy of this binary on your
|
||||
administrative machine, you may run the script ```install_kubectl.sh``` which will
|
||||
download it and install it in ```/usr/bin/local```.
|
||||
|
||||
The script requires that the environment variable ```CLC_CLUSTER_NAME``` be defined.
|
||||
|
||||
```install_kubectl.sh``` also writes a configuration file which will embed the necessary
|
||||
authentication certificates for the particular cluster. The configuration file is
|
||||
written to the ```${CLC_CLUSTER_HOME}/kube``` directory.
|
||||
|
||||
```shell
|
||||
export KUBECONFIG=${CLC_CLUSTER_HOME}/kube/config
|
||||
kubectl version
|
||||
kubectl cluster-info
|
||||
```
|
||||
|
||||
### プログラムでクラスターへアクセス
|
||||
|
||||
It's possible to use the locally stored client certificates to access the apiserver. For example, you may want to use any of the [Kubernetes API client libraries](/docs/reference/using-api/client-libraries/) to program against your Kubernetes cluster in the programming language of your choice.
|
||||
|
||||
To demonstrate how to use these locally stored certificates, we provide the following example of using ```curl``` to communicate to the master apiserver via https:
|
||||
|
||||
```shell
|
||||
curl \
|
||||
--cacert ${CLC_CLUSTER_HOME}/pki/ca.crt \
|
||||
--key ${CLC_CLUSTER_HOME}/pki/kubecfg.key \
|
||||
--cert ${CLC_CLUSTER_HOME}/pki/kubecfg.crt https://${MASTER_IP}:6443
|
||||
```
|
||||
|
||||
But please note, this *does not* work out of the box with the ```curl``` binary
|
||||
distributed with macOS.
|
||||
|
||||
### ブラウザーを使ったクラスターへのアクセス
|
||||
|
||||
We install [the kubernetes dashboard](/docs/tasks/web-ui-dashboard/). When you
|
||||
create a cluster, the script should output URLs for these interfaces like this:
|
||||
|
||||
kubernetes-dashboard is running at ```https://${MASTER_IP}:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy```.
|
||||
|
||||
Note on Authentication to the UIs: The cluster is set up to use basic
|
||||
authentication for the user _admin_. Hitting the URL at
|
||||
```https://${MASTER_IP}:6443``` will require accepting the self-signed certificate
|
||||
from the apiserver, and then presenting the admin password written to file at:
|
||||
|
||||
```> _${CLC_CLUSTER_HOME}/kube/admin_password.txt_```
|
||||
|
||||
|
||||
### 設定ファイル
|
||||
|
||||
Various configuration files are written into the home directory *CLC_CLUSTER_HOME* under
|
||||
```.clc_kube/${CLC_CLUSTER_NAME}``` in several subdirectories. You can use these files
|
||||
to access the cluster from machines other than the one you created the cluster from.
|
||||
|
||||
* ```config/```: Ansible variable files containing parameters describing the master and minion hosts
|
||||
* ```hosts/```: hosts files listing access information for the ansible playbooks
|
||||
* ```kube/```: ```kubectl``` configuration files, and the basic-authentication password for admin access to the Kubernetes API
|
||||
* ```pki/```: public key infrastructure files enabling TLS communication in the cluster
|
||||
* ```ssh/```: SSH keys for root access to the hosts
|
||||
|
||||
|
||||
## ```kubectl``` usage examples
|
||||
|
||||
There are a great many features of _kubectl_. Here are a few examples:
|
||||
|
||||
List existing nodes, pods, services and more, in all namespaces, or in just one:
|
||||
|
||||
```shell
|
||||
kubectl get nodes
|
||||
kubectl get --all-namespaces pods
|
||||
kubectl get --all-namespaces services
|
||||
kubectl get --namespace=kube-system replicationcontrollers
|
||||
```
|
||||
|
||||
The Kubernetes API server exposes services on web URLs, which are protected by requiring
|
||||
client certificates. If you run a kubectl proxy locally, ```kubectl``` will provide
|
||||
the necessary certificates and serve locally over http.
|
||||
|
||||
```shell
|
||||
kubectl proxy -p 8001
|
||||
```
|
||||
|
||||
Then, you can access URLs like ```http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/``` without the need for client certificates in your browser.
|
||||
|
||||
|
||||
## どのKubernetesの機能がCenturyLink Cloud上で動かないのか
|
||||
|
||||
These are the known items that don't work on CenturyLink Cloud but do work on other cloud providers:
|
||||
|
||||
- At this time, there is no support for services of type [LoadBalancer](/docs/tasks/access-application-cluster/create-external-load-balancer/). We are actively working on this and hope to publish the changes sometime around April 2016.
|
||||
|
||||
- At this time, there is no support for persistent storage volumes provided by
|
||||
CenturyLink Cloud. However, customers can bring their own persistent storage
|
||||
offering. We ourselves use Gluster.
|
||||
|
||||
|
||||
## Ansibleのファイル
|
||||
|
||||
If you want more information about our Ansible files, please [read this file](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md)
|
||||
|
||||
## 参考文献
|
||||
|
||||
Please see the [Kubernetes docs](/docs/) for more details on administering
|
||||
and using a Kubernetes cluster.
|
||||
|
||||
|
||||
|
|
@ -0,0 +1,224 @@
|
|||
---
|
||||
title: Google Compute Engine上でKubernetesを動かす
|
||||
content_template: templates/task
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) for hosted cluster installation and management.
|
||||
|
||||
For an easy way to experiment with the Kubernetes development environment, click the button below
|
||||
to open a Google Cloud Shell with an auto-cloned copy of the Kubernetes source repo.
|
||||
|
||||
[![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/kubernetes/kubernetes&page=editor&open_in_editor=README.md)
|
||||
|
||||
If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
|
||||
|
||||
### 前提条件
|
||||
|
||||
1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](https://console.cloud.google.com) for more details.
|
||||
1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
|
||||
1. Enable the [Compute Engine Instance Group Manager API](https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview) in the [Google Cloud developers console](https://console.developers.google.com/apis/library).
|
||||
1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
|
||||
1. Make sure you have credentials for GCloud by running `gcloud auth login`.
|
||||
1. (Optional) In order to make API calls against GCE, you must also run `gcloud auth application-default login`.
|
||||
1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart.
|
||||
1. Make sure you can SSH into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture steps %}}
|
||||
|
||||
## クラスターの起動
|
||||
|
||||
You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):
|
||||
|
||||
|
||||
```shell
|
||||
curl -sS https://get.k8s.io | bash
|
||||
```
|
||||
|
||||
or
|
||||
|
||||
```shell
|
||||
wget -q -O - https://get.k8s.io | bash
|
||||
```
|
||||
|
||||
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
|
||||
|
||||
By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](https://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md) services.
|
||||
|
||||
The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
|
||||
|
||||
Alternatively, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:
|
||||
|
||||
```shell
|
||||
cd kubernetes
|
||||
cluster/kube-up.sh
|
||||
```
|
||||
|
||||
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
|
||||
|
||||
If you run into trouble, please see the section on [troubleshooting](/docs/setup/turnkey/gce/#troubleshooting), post to the
|
||||
[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on [Slack](/docs/troubleshooting/#slack).
|
||||
|
||||
The next few steps will show you:
|
||||
|
||||
1. How to set up the command line client on your workstation to manage the cluster
|
||||
1. Examples of how to use the cluster
|
||||
1. How to delete the cluster
|
||||
1. How to start clusters with non-default options (like larger clusters)
|
||||
|
||||
## ワークステーション上でのKubernetesコマンドラインツールのインストール
|
||||
|
||||
The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
|
||||
|
||||
The [kubectl](/docs/user-guide/kubectl/) tool controls the Kubernetes cluster
|
||||
manager. It lets you inspect your cluster resources, create, delete, and update
|
||||
components, and much more. You will use it to look at your new cluster and bring
|
||||
up example apps.
|
||||
|
||||
You can use `gcloud` to install the `kubectl` command-line tool on your workstation:
|
||||
|
||||
```shell
|
||||
gcloud components install kubectl
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
The kubectl version bundled with `gcloud` may be older than the one
|
||||
downloaded by the get.k8s.io install script. See the [Installing kubectl](/docs/tasks/kubectl/install/)
|
||||
document to see how you can set up the latest `kubectl` on your workstation.
|
||||
{{< /note >}}
|
||||
|
||||
## クラスターの利用を開始する
|
||||
|
||||
### クラスターの様子を見る
|
||||
|
||||
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
|
||||
|
||||
```shell
|
||||
kubectl get --all-namespaces services
|
||||
```
|
||||
|
||||
should show a set of [services](/docs/user-guide/services) that look something like this:
|
||||
|
||||
```shell
|
||||
NAMESPACE NAME TYPE CLUSTER_IP EXTERNAL_IP PORT(S) AGE
|
||||
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1d
|
||||
kube-system kube-dns ClusterIP 10.0.0.2 <none> 53/TCP,53/UDP 1d
|
||||
kube-system kube-ui ClusterIP 10.0.0.3 <none> 80/TCP 1d
|
||||
...
|
||||
```
|
||||
|
||||
Similarly, you can take a look at the set of [pods](/docs/user-guide/pods) that were created during cluster startup.
|
||||
You can do this via the
|
||||
|
||||
```shell
|
||||
kubectl get --all-namespaces pods
|
||||
```
|
||||
|
||||
command.
|
||||
|
||||
You'll see a list of pods that looks something like this (the name specifics will be different):
|
||||
|
||||
```shell
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system coredns-5f4fbb68df-mc8z8 1/1 Running 0 15m
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-c4og 1/1 Running 0 14m
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-ngua 1/1 Running 0 14m
|
||||
kube-system kube-ui-v1-curt1 1/1 Running 0 15m
|
||||
kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m
|
||||
kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m
|
||||
```
|
||||
|
||||
Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
|
||||
|
||||
### いくつかの例の実行
|
||||
|
||||
Then, see [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster.
|
||||
|
||||
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) is a good "getting started" walkthrough.
|
||||
|
||||
## クラスターの解体
|
||||
|
||||
To remove/delete/teardown the cluster, use the `kube-down.sh` script.
|
||||
|
||||
```shell
|
||||
cd kubernetes
|
||||
cluster/kube-down.sh
|
||||
```
|
||||
|
||||
Likewise, the `kube-up.sh` in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to set up the Kubernetes cluster is now on your workstation.
|
||||
|
||||
## カスタマイズ
|
||||
|
||||
The script above relies on Google Storage to stage the Kubernetes release. It
|
||||
will then start (by default) a single master VM along with 4 worker VMs. You
|
||||
can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`.
|
||||
You can view a transcript of a successful cluster creation
|
||||
[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
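
For example (the variable names below are an assumption about this release's `config-default.sh`, which reads overrides from the environment):

```shell
# Start a smaller cluster with a different machine type for the worker nodes.
NUM_NODES=2 NODE_SIZE=n1-standard-2 cluster/kube-up.sh
```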
|
||||
|
||||
## トラブルシューティング
|
||||
|
||||
### プロジェクトの設定
|
||||
|
||||
You need to have the Google Cloud Storage API, and the Google Cloud Storage
|
||||
JSON API enabled. It is activated by default for new projects. Otherwise, it
|
||||
can be done in the Google Cloud Console. See the [Google Cloud Storage JSON
|
||||
API Overview](https://cloud.google.com/storage/docs/json_api/) for more
|
||||
details.
|
||||
|
||||
Also ensure that, as listed in the [Prerequisites section](#前提条件), you've enabled the `Compute Engine Instance Group Manager API` and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions.
|
||||
|
||||
### クラスター初期化のハング
|
||||
|
||||
If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.
|
||||
|
||||
**Once you fix the issue, you should run `kube-down.sh` to cleanup** after the partial cluster creation, before running `kube-up.sh` to try again.
|
||||
|
||||
### SSH
|
||||
|
||||
If you're having trouble SSHing into your instances, ensure the GCE firewall
|
||||
isn't blocking port 22 to your VMs. By default, this should work but if you
|
||||
have edited firewall rules or created a new non-default network, you'll need to
|
||||
expose it: `gcloud compute firewall-rules create default-ssh --network=<network-name>
|
||||
--description "SSH allowed from anywhere" --allow tcp:22`
|
||||
|
||||
Additionally, your GCE SSH key must either have no passcode or you need to be
|
||||
using `ssh-agent`.
|
||||
|
||||
### ネットワーク
|
||||
|
||||
The instances must be able to connect to each other using their private IP. The
|
||||
script uses the "default" network which should have a firewall rule called
|
||||
"default-allow-internal" which allows traffic on any port on the private IPs.
|
||||
If this rule is missing from the default network or if you change the network
|
||||
being used in `cluster/config-default.sh` create a new rule with the following
|
||||
field values:
|
||||
|
||||
* Source Ranges: `10.0.0.0/8`
|
||||
* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
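
For example, a sketch of recreating such a rule with `gcloud` (replace `default` if you changed the network):

```shell
gcloud compute firewall-rules create default-allow-internal \
  --network=default \
  --source-ranges=10.0.0.0/8 \
  --allow=tcp:1-65535,udp:1-65535,icmp
```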
|
||||
|
||||
## サポートレベル
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
|
||||
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
|
||||
GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | | Project
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
|
||||
|
||||
## 参考文献
|
||||
|
||||
Please see the [Kubernetes docs](/docs/) for more details on administering
|
||||
and using a Kubernetes cluster.
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,187 @@
|
|||
---
|
||||
title: Stackpoint.ioを利用して複数のクラウド上でKubernetesを動かす
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
[StackPointCloud](https://stackpoint.io/) is the universal control plane for Kubernetes Anywhere. StackPointCloud allows you to deploy and manage a Kubernetes cluster to the cloud provider of your choice in 3 steps using a web-based interface.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## AWS
|
||||
|
||||
To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select Amazon Web Services (AWS).
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your Access Key ID and a Secret Access Key from AWS. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on AWS, [consult the Kubernetes documentation](/docs/getting-started-guides/aws/).
|
||||
|
||||
|
||||
## GCE
|
||||
|
||||
To create a Kubernetes cluster on GCE, you will need the Service Account JSON Data from Google.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select Google Compute Engine (GCE).
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your Service Account JSON Data from Google. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on GCE, [consult the Kubernetes documentation](/docs/getting-started-guides/gce/).
|
||||
|
||||
|
||||
## Google Kubernetes Engine
|
||||
|
||||
To create a Kubernetes cluster on Google Kubernetes Engine, you will need the Service Account JSON Data from Google.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select Google Kubernetes Engine.
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your Service Account JSON Data from Google. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on Google Kubernetes Engine, consult [the official documentation](/docs/home/).
|
||||
|
||||
|
||||
## DigitalOcean
|
||||
|
||||
To create a Kubernetes cluster on DigitalOcean, you will need a DigitalOcean API Token.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select DigitalOcean.
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your DigitalOcean API Token. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on DigitalOcean, consult [the official documentation](/docs/home/).
|
||||
|
||||
|
||||
## Microsoft Azure
|
||||
|
||||
To create a Kubernetes cluster on Microsoft Azure, you will need an Azure Subscription ID, Username/Email, and Password.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select Microsoft Azure.
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your Azure Subscription ID, Username/Email, and Password. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on Azure, [consult the Kubernetes documentation](/docs/getting-started-guides/azure/).
|
||||
|
||||
|
||||
## Packet
|
||||
|
||||
To create a Kubernetes cluster on Packet, you will need a Packet API Key.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select Packet.
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your Packet API Key. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on Packet, consult [the official documentation](/docs/home/).
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,7 @@
|
|||
この機能は、現在 *alpha版* です。
|
||||
|
||||
* バージョン名には `alpha` がつきます(例:`v1alpha1`)。
|
||||
* 現在もバグが多く含まれる可能性があり、この機能を利用するとバグが顕在化することがあります。そのため、現時点ではデフォルトで無効化されています。
|
||||
* 予告なく、随時この機能のサポートを中止する場合があります。
|
||||
* 予告なく、今後のリリースにおいて、互換性のないAPIの仕様変更が入る場合があります。
|
||||
* 一時的な検証目的の利用に留めてください。現時点ではバグが顕在化するリスクが高く、また長期的なサポートも保証されていません。
|
|
@ -0,0 +1,8 @@
|
|||
この機能は、現在 *beta版* です。
|
||||
|
||||
* バージョン名には `beta` がつきます(例:`v2beta3`)。
|
||||
* コードが十分にテストされているため、この機能は安全に有効化できます。デフォルトでも有効化されています。
|
||||
* 今後も継続して、この機能は包括的にサポートされる見通しですが、細かい部分が変更になる場合があります。
|
||||
* 今後のbeta版または安定版のリリースにおいては、オブジェクトのデータの形式や意味の両方あるいはいずれかについて、互換性のない変更が入る場合があります。その際は、次期バージョンへの移行手順も提供します。その移行にあたっては、APIオブジェクトの削除・改変・再作成が必要になる場合があります。特に改変には、多少の検討が必要になることがあります。また、それを適用する際には、この機能に依存するアプリケーションの一時停止が必要になる場合があります。
|
||||
* 今後のリリースにおいて互換性のない変更が入る可能性があります。そのため、業務用途外の検証としてのみ利用が推奨されています。ただし、個別にアップグレード可能な環境が複数ある場合は、この制限事項の限りではありません。
|
||||
* **beta版の機能の積極的な試用とフィードバックにご協力をお願いします!一度beta版から安定版になると、それ以降は変更を加えることが困難になります。**
|
|
@ -0,0 +1,2 @@
|
|||
|
||||
この機能は、現在 *非推奨* の状態です。詳細については、[Kubernetes Deprecation Policy](/docs/reference/deprecation-policy/)を参照してください。
|
|
@ -0,0 +1,5 @@
|
|||
|
||||
この機能は、現在 *安定版* です。
|
||||
|
||||
* バージョン名は、 `vX` (`X`はバージョン番号を示す整数) という規則でつけられています。
|
||||
* 安定版となっている機能は、これ以降のバージョンにおいても長期にわたって利用可能です。
|
|
@ -0,0 +1,13 @@
|
|||
---
headless: true

resources:
- src: "*alpha*"
  title: "alpha"
- src: "*beta*"
  title: "beta"
- src: "*deprecated*"
  title: "deprecated"
- src: "*stable*"
  title: "stable"
---
|
|
@ -0,0 +1,70 @@
|
|||
---
|
||||
title: チュートリアル
|
||||
main_menu: true
|
||||
weight: 60
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
本セクションにはチュートリアルが含まれています。チュートリアルでは、単一の[タスク](/docs/tasks/)よりも大きな目標を達成する方法を示します。通常、チュートリアルにはいくつかのセクションがあり、各セクションには一連のステップがあります。各チュートリアルを進める前に、後で参照できるように[標準化された用語集](/docs/reference/glossary/)ページをブックマークしておくことをお勧めします。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## 基本
|
||||
|
||||
* [Kubernetesの基本](/ja/docs/tutorials/kubernetes-basics/)は、Kubernetesのシステムを理解し、基本的な機能を試すのに役立つ、詳細な対話式のチュートリアルです。
|
||||
|
||||
* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
|
||||
|
||||
* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)
|
||||
|
||||
* [Hello Minikube](/docs/tutorials/hello-minikube/)
|
||||
|
||||
## 設定
|
||||
|
||||
* [ConfigMapを用いたRedisの設定](/docs/tutorials/configuration/configure-redis-using-configmap/)
|
||||
|
||||
## ステートレスアプリケーション
|
||||
|
||||
* [クラスタ内のアプリケーションにアクセスするために外部IPアドレスを公開する](/docs/tutorials/stateless-application/expose-external-ip-address/)
|
||||
|
||||
* [例: Redisを使用したPHPゲストブックアプリケーションのデプロイ](/docs/tutorials/stateless-application/guestbook/)
|
||||
|
||||
## ステートフルアプリケーション
|
||||
|
||||
* [StatefulSetの基本](/docs/tutorials/stateful-application/basic-stateful-set/)
|
||||
|
||||
* [例: 永続化ボリュームを使ったWordPressとMySQLのデプロイ](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
|
||||
|
||||
* [例: Stateful Setsを使ったCassandraのデプロイ](/docs/tutorials/stateful-application/cassandra/)
|
||||
|
||||
* [CP(一貫性+分断耐性)分散システムZooKeeperの実行](/docs/tutorials/stateful-application/zookeeper/)
|
||||
|
||||
## CI/CDパイプライン
|
||||
|
||||
* [Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/5/set-cicd-pipeline-kubernetes-part-1-overview)
|
||||
|
||||
* [Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2)](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/6/set-cicd-pipeline-jenkins-pod-kubernetes-part-2)
|
||||
|
||||
* [Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/run-and-scale-distributed-crossword-puzzle-app-cicd-kubernetes-part-3)
|
||||
|
||||
* [Set Up CI/CD for a Distributed Crossword Puzzle App on Kubernetes (Part 4)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4)
|
||||
|
||||
## クラスタ
|
||||
|
||||
* [AppArmor](/docs/tutorials/clusters/apparmor/)
|
||||
|
||||
## サービス
|
||||
|
||||
* [Source IPを使う](/docs/tutorials/services/source-ip/)
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
チュートリアルを書きたい場合は、[ページテンプレートの使用](/docs/contribute/style/page-templates/)を参照し、チュートリアルのページタイプとチュートリアルテンプレートについてご確認ください。
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,261 @@
|
|||
---
|
||||
title: Hello Minikube
|
||||
content_template: templates/tutorial
|
||||
weight: 5
|
||||
menu:
  main:
    title: "Get Started"
    weight: 10
    post: >
      <p>手を動かす準備はできていますか?本チュートリアルでは、Node.jsを使った簡単な"Hello World"を実行するKubernetesクラスタをビルドします。</p>
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
このチュートリアルでは、[Minikube](/docs/getting-started-guides/minikube)とKatacodaを使用して、Kubernetes上でシンプルなHello WorldのNode.jsアプリケーションを動かす方法を紹介します。Katacodaはブラウザで無償のKubernetes環境を提供します。
|
||||
|
||||
{{< note >}}
|
||||
[Minikubeをローカルにインストール](/docs/tasks/tools/install-minikube/)している場合もこのチュートリアルを進めることが可能です。
|
||||
{{< /note >}}
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture objectives %}}
|
||||
|
||||
* Minikubeへのhello worldアプリケーションのデプロイ
|
||||
* アプリケーションの実行
|
||||
* アプリケーションログの確認
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
このチュートリアルは下記のファイルからビルドされるコンテナイメージを提供します:
|
||||
|
||||
{{< codenew language="js" file="minikube/server.js" >}}
|
||||
|
||||
{{< codenew language="conf" file="minikube/Dockerfile" >}}
|
||||
|
||||
`docker build`コマンドについての詳細な情報は、[Dockerのドキュメント](https://docs.docker.com/engine/reference/commandline/build/)を参照してください。
|
||||
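参考までに、上記のDockerfileからローカルでコンテナイメージをビルドする場合のコマンド例を以下に示します(イメージ名`hello-node:v1`は説明用の仮の名前です):

```shell
# (任意)Minikube内のDockerデーモンを利用する場合は、先に環境変数を設定します
eval $(minikube docker-env)

# Dockerfileのあるディレクトリでイメージをビルドします(イメージ名は説明用の例です)
docker build -t hello-node:v1 .
```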
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture lessoncontent %}}
|
||||
|
||||
## Minikubeクラスタの作成
|
||||
|
||||
1. **Launch Terminal** をクリックしてください
|
||||
|
||||
{{< kat-button >}}
|
||||
|
||||
{{< note >}}Minikubeをローカルにインストール済みの場合は、`minikube start`を実行してください。{{< /note >}}
|
||||
|
||||
2. ブラウザーでKubernetesダッシュボードを開いてください:
|
||||
|
||||
```shell
|
||||
minikube dashboard
|
||||
```
|
||||
|
||||
3. Katacoda環境のみ:ターミナルペーン上部の+ボタンをクリックしてから **Select port to view on Host 1** をクリックしてください。
|
||||
|
||||
4. Katacoda環境のみ:30000を入力し、**Display Port**をクリックしてください。
|
||||
|
||||
## Deploymentの作成
|
||||
|
||||
Kubernetesの[*Pod*](/docs/concepts/workloads/pods/pod/)は、コンテナの管理やネットワーキングの目的でまとめられた、1つ以上のコンテナのグループです。このチュートリアルのPodがもつコンテナは1つのみです。Kubernetesの[*Deployment*](/docs/concepts/workloads/controllers/deployment/)はPodの状態を確認し、Podのコンテナが停止した場合には再起動します。DeploymentはPodの作成やスケールを管理するために推奨される方法です。
|
||||
|
||||
1. `kubectl create` コマンドを使用してPodを管理するDeploymentを作成してください。Podは提供されたDockerイメージを元にコンテナを実行します。
|
||||
|
||||
```shell
|
||||
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node --port=8080
|
||||
```
|
||||
|
||||
2. Deploymentを確認します:
|
||||
|
||||
```shell
|
||||
kubectl get deployments
|
||||
```
|
||||
|
||||
出力:
|
||||
|
||||
```shell
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
hello-node 1 1 1 1 1m
|
||||
```
|
||||
|
||||
3. Podを確認します:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
出力:
|
||||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m
|
||||
```
|
||||
|
||||
4. クラスタイベントを確認します:
|
||||
|
||||
```shell
|
||||
kubectl get events
|
||||
```
|
||||
|
||||
5. `kubectl` で設定を確認します:
|
||||
|
||||
```shell
|
||||
kubectl config view
|
||||
```
|
||||
|
||||
{{< note >}} `kubectl`コマンドの詳細な情報は[kubectl overview](/docs/user-guide/kubectl-overview/)を参照してください。{{< /note >}}
|
||||
|
||||
## Serviceの作成
|
||||
|
||||
通常、PodはKubernetesクラスタ内部のIPアドレスからのみアクセスすることができます。`hello-node`コンテナにKubernetesの仮想ネットワークの外部からアクセスするためには、Kubernetesの[*Service*](/docs/concepts/services-networking/service/)としてPodを公開する必要があります。
|
||||
|
||||
1. `kubectl expose` コマンドを使用してPodをインターネットに公開します:
|
||||
|
||||
```shell
|
||||
kubectl expose deployment hello-node --type=LoadBalancer
|
||||
```
|
||||
|
||||
`--type=LoadBalancer`フラグはServiceをクラスタ外部に公開したいことを示しています。
|
||||
|
||||
2. 作成したServiceを確認します:
|
||||
|
||||
```shell
|
||||
kubectl get services
|
||||
```
|
||||
|
||||
出力:
|
||||
|
||||
```shell
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
hello-node LoadBalancer 10.108.144.78 <pending> 8080:30369/TCP 21s
|
||||
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23m
|
||||
```
|
||||
|
||||
ロードバランサーをサポートするクラウドプロバイダーでは、Serviceにアクセスするための外部IPアドレスが提供されます。
|
||||
Minikube では、`LoadBalancer`タイプは`minikube service`コマンドを使用した接続可能なServiceを作成します。
|
||||
|
||||
3. 次のコマンドを実行します:
|
||||
|
||||
```shell
|
||||
minikube service hello-node
|
||||
```
|
||||
|
||||
4. Katacoda環境のみ:ターミナル画面上部の+ボタンをクリックして **Select port to view on Host 1** をクリックしてください。
|
||||
|
||||
5. Katacoda環境のみ:8080を入力し、**Display Port**をクリックしてください。
|
||||
|
||||
"Hello World"メッセージが表示されるアプリケーションのブラウザウィンドウが開きます。
|
||||
|
||||
## アドオンの有効化
|
||||
|
||||
Minikubeには、ローカルのKubernetes環境で有効化、無効化、公開できるビルトインのアドオンがあります。
|
||||
|
||||
1. サポートされているアドオンをリストアップします:
|
||||
|
||||
```shell
|
||||
minikube addons list
|
||||
```
|
||||
|
||||
出力:
|
||||
|
||||
```shell
|
||||
addon-manager: enabled
|
||||
coredns: disabled
|
||||
dashboard: enabled
|
||||
default-storageclass: enabled
|
||||
efk: disabled
|
||||
freshpod: disabled
|
||||
heapster: disabled
|
||||
ingress: disabled
|
||||
kube-dns: enabled
|
||||
metrics-server: disabled
|
||||
nvidia-driver-installer: disabled
|
||||
nvidia-gpu-device-plugin: disabled
|
||||
registry: disabled
|
||||
registry-creds: disabled
|
||||
storage-provisioner: enabled
|
||||
```
|
||||
|
||||
2. ここでは例として`heapster`のアドオンを有効化します:
|
||||
|
||||
```shell
|
||||
minikube addons enable heapster
|
||||
```
|
||||
|
||||
出力:
|
||||
|
||||
```shell
|
||||
heapster was successfully enabled
|
||||
```
|
||||
|
||||
3. 作成されたPodとServiceを確認します:
|
||||
|
||||
```shell
|
||||
kubectl get pod,svc -n kube-system
|
||||
```
|
||||
|
||||
出力:
|
||||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
pod/heapster-9jttx 1/1 Running 0 26s
|
||||
pod/influxdb-grafana-b29w8 2/2 Running 0 26s
|
||||
pod/kube-addon-manager-minikube 1/1 Running 0 34m
|
||||
pod/kube-dns-6dcb57bcc8-gv7mw 3/3 Running 0 34m
|
||||
pod/kubernetes-dashboard-5498ccf677-cgspw 1/1 Running 0 34m
|
||||
pod/storage-provisioner 1/1 Running 0 34m
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
service/heapster ClusterIP 10.96.241.45 <none> 80/TCP 26s
|
||||
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 34m
|
||||
service/kubernetes-dashboard NodePort 10.109.29.1 <none> 80:30000/TCP 34m
|
||||
service/monitoring-grafana NodePort 10.99.24.54 <none> 80:30002/TCP 26s
|
||||
service/monitoring-influxdb ClusterIP 10.111.169.94 <none> 8083/TCP,8086/TCP 26s
|
||||
```
|
||||
|
||||
4. `heapster`を無効化します:
|
||||
|
||||
```shell
|
||||
minikube addons disable heapster
|
||||
```
|
||||
|
||||
出力:
|
||||
|
||||
```shell
|
||||
heapster was successfully disabled
|
||||
```
|
||||
|
||||
## クリーンアップ
|
||||
|
||||
クラスタに作成したリソースをクリーンアップします:
|
||||
|
||||
```shell
|
||||
kubectl delete service hello-node
|
||||
kubectl delete deployment hello-node
|
||||
```
|
||||
|
||||
(オプション)Minikubeの仮想マシン(VM)を停止します:
|
||||
|
||||
```shell
|
||||
minikube stop
|
||||
```
|
||||
|
||||
(オプション)MinikubeのVMを削除します:
|
||||
|
||||
```shell
|
||||
minikube delete
|
||||
```
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
* [Deploymentオブジェクト](/docs/concepts/workloads/controllers/deployment/)について学ぶ。
|
||||
* [アプリケーションのデプロイ](/docs/user-guide/deploying-applications/)について学ぶ。
|
||||
* [Serviceオブジェクト](/docs/concepts/services-networking/service/)について学ぶ。
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,114 @@
|
|||
---
|
||||
title: Kubernetesの基本を学ぶ
|
||||
linkTitle: Kubernetesの基本を学ぶ
|
||||
weight: 10
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-9">
|
||||
<h2>Kubernetesの基本</h2>
|
||||
<p>このチュートリアルでは、Kubernetesクラスタオーケストレーションシステムの基本について学びます。各モジュールには、Kubernetesの主な機能と概念に関する背景情報と、インタラクティブなオンラインチュートリアルが含まれています。これらの対話型チュートリアルでは、簡単なクラスタとその<a href="/ja/docs/concepts/overview/what-is-kubernetes/#why-containers">コンテナ化されたアプリケーション</a> を自分で管理できます。</p>
|
||||
<p>この対話型のチュートリアルでは、以下のことを学ぶことができます:</p>
|
||||
<ul>
|
||||
<li>コンテナ化されたアプリケーションをクラスタにデプロイ</li>
|
||||
<li>Deploymentのスケーリング</li>
|
||||
<li>新しいソフトウェアのバージョンでコンテナ化されたアプリケーションをアップデート</li>
|
||||
<li>コンテナ化されたアプリケーションのデバッグ</li>
|
||||
</ul>
|
||||
<p>このチュートリアルでは、Katacodaを使用して、Webブラウザ上の仮想ターミナルでMinikubeを実行します。Minikubeは、どこでも実行できるKubernetesの小規模なローカル環境です。ソフトウェアをインストールしたり、何かを設定したりする必要はありません。各対話型チュートリアルは、Webブラウザ自体の上で直接実行されます。</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-9">
|
||||
<h2>Kubernetesはどんなことができるの?</h2>
|
||||
<p>モダンなWebサービスでは、ユーザはアプリケーションが24時間365日利用可能であることを期待しており、開発者はそれらのアプリケーションの新しいバージョンを1日に数回デプロイすることを期待しています。コンテナ化は、パッケージソフトウェアがこれらの目標を達成するのを助け、アプリケーションをダウンタイムなしで簡単かつ迅速にリリース、アップデートできるようにします。Kubernetesを使用すると、コンテナ化されたアプリケーションをいつでもどこでも好きなときに実行できるようになり、それらが機能するために必要なリソースとツールを見つけやすくなります。<a href="/ja/docs/concepts/overview/what-is-kubernetes/">Kubernetes</a>は、コンテナオーケストレーションにおけるGoogleのこれまでの経験と、コミュニティから得られた最善のアイデアを組み合わせて設計された、プロダクションレディなオープンソースプラットフォームです。</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div id="basics-modules" class="content__modules">
|
||||
<h2>Kubernetesの基本モジュール</h2>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<div class="row">
|
||||
<div class="col-md-4">
|
||||
<div class="thumbnail">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_01.svg?v=1469803628347" alt=""></a>
|
||||
<div class="caption">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><h5>1. Kubernetesクラスタの作成</h5></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="thumbnail">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_02.svg?v=1469803628347" alt=""></a>
|
||||
<div class="caption">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><h5>2. アプリケーションのデプロイ</h5></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="thumbnail">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/explore/explore-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_03.svg?v=1469803628347" alt=""></a>
|
||||
<div class="caption">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/explore/explore-intro/"><h5>3. デプロイしたアプリケーションの探索</h5></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<div class="col-md-12">
|
||||
<div class="row">
|
||||
<div class="col-md-4">
|
||||
<div class="thumbnail">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/expose/expose-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_04.svg?v=1469803628347" alt=""></a>
|
||||
<div class="caption">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/expose/expose-intro/"><h5>4. アプリケーションの公開</h5></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="thumbnail">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/scale/scale-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_05.svg?v=1469803628347" alt=""></a>
|
||||
<div class="caption">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/scale/scale-intro/"><h5>5. アプリケーションのスケールアップ</h5></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="thumbnail">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/update/update-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_06.svg?v=1469803628347" alt=""></a>
|
||||
<div class="caption">
|
||||
<a href="/ja/docs/tutorials/kubernetes-basics/update/update-intro/"><h5>6. アプリケーションのアップデート</h5></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/" role="button">チュートリアルを始める<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
title: クラスタの作成
|
||||
weight: 10
|
||||
---
|
|
@ -0,0 +1,37 @@
|
|||
---
|
||||
title: 対話型チュートリアル - クラスタの作成
|
||||
weight: 20
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content katacoda-content">
|
||||
|
||||
<div class="katacoda">
|
||||
<div class="katacoda__alert">
|
||||
ターミナルを使うには、PCまたはタブレットをお使いください
|
||||
</div>
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/1" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;"></div>
|
||||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/" role="button">モジュール2へ進む<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,108 @@
|
|||
---
|
||||
title: Minikubeを使ったクラスタの作成
|
||||
weight: 10
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
||||
<div class="row">
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>目標</h3>
|
||||
<ul>
|
||||
<li>Kubernetesクラスタとは何かを学ぶ</li>
|
||||
<li>Minikubeとは何かを学ぶ</li>
|
||||
<li>Kubernetesクラスタを、オンラインのターミナルを使って動かす</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>Kubernetesクラスタ</h3>
|
||||
<p>
|
||||
<b>Kubernetesは、単一のユニットとして機能するように接続された、可用性の高いコンピュータのクラスタをまとめあげます。</b>Kubernetesの抽象化により、コンテナ化されたアプリケーションを個々のマシンに特に結び付けることなくクラスタにデプロイできます。この新しいデプロイモデルを利用するには、アプリケーションを個々のホストから切り離す方法でアプリケーションをパッケージ化(つまり、コンテナ化)する必要があります。コンテナ化されたアプリケーションは、アプリケーションがホストに深く統合されたパッケージとして特定のマシンに直接インストールされていた従来のデプロイモデルよりも柔軟で、より迅速に利用可能です。<b>Kubernetesはより効率的な方法で、クラスタ全体のアプリケーションコンテナの配布とスケジューリングを自動化します。</b>Kubernetesは<a href="https://github.com/kubernetes/kubernetes">オープンソース</a>のプラットフォームであり、プロダクションレディです。
|
||||
</p>
|
||||
<p>Kubernetesクラスタは以下の2種類のリソースで構成されています:
|
||||
<ul>
|
||||
<li><b>マスター</b>がクラスタを管理する</li>
|
||||
<li><b>Node</b>がアプリケーションを動かすワーカーとなる</li>
|
||||
</ul>
|
||||
</p>
|
||||
</div>
|
||||
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_lined">
|
||||
<h3>まとめ:</h3>
|
||||
<ul>
|
||||
<li>Kubernetesクラスタ</li>
|
||||
<li>Minikube</li>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>
|
||||
Kubernetesは、コンピュータクラスタ内およびコンピュータクラスタ間でのアプリケーションコンテナの配置(スケジューリング)および実行を調整する、プロダクショングレードのオープンソースプラットフォームです。
|
||||
</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2 style="color: #3771e3;">クラスタダイアグラム</h2>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_01_cluster.svg"></p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><b>マスターはクラスタの管理を担当します。</b>マスターは、アプリケーションのスケジューリング、望ましい状態の維持、アプリケーションのスケーリング、新しい更新のロールアウトなど、クラスタ内のすべての動作をまとめあげます。</p>
|
||||
<p><b>Nodeは、Kubernetesクラスタのワーカーマシンとして機能するVMまたは物理マシンです。</b>各NodeにはKubeletがあり、これはNodeを管理し、Kubernetesマスターと通信するためのエージェントです。Nodeには<a href="https://www.docker.com/">Docker</a>や<a href="https://coreos.com/rkt/">rkt</a>などのコンテナ操作を処理するためのツールもあるはずです。プロダクションのトラフィックを処理するKubernetesクラスタには、最低3つのNodeが必要です。</p>
|
||||
</div>
|
||||
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i> マスターはクラスタを管理するために、Nodeは実行中のアプリケーションをホストするために使用されます。 </i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>Kubernetesにアプリケーションをデプロイするときは、マスターにアプリケーションコンテナを起動するように指示します。マスターはコンテナがクラスタのNodeで実行されるようにスケジュールします。<b>Nodeは、マスターが公開しているKubernetes APIを使用してマスターと通信します。</b>エンドユーザーは、Kubernetes APIを直接使用して対話することもできます。</p>
|
||||
|
||||
<p>Kubernetesクラスタは、物理マシンまたは仮想マシンのどちらにも配置できます。Kubernetes開発を始めるために<a href="https://github.com/kubernetes/minikube">Minikube</a>を使うことができます。Minikubeは、ローカルマシン上にVMを作成し、1つのNodeのみを含む単純なクラスタをデプロイする軽量なKubernetes実装です。Minikubeは、Linux、macOS、およびWindowsシステムで利用可能です。Minikube CLIは、起動、停止、ステータス、削除など、クラスタを操作するための基本的なブートストラップ操作を提供します。ただし、このチュートリアルでは、Minikubeがプリインストールされた状態で提供されているオンラインのターミナルを使用します。</p>
|
||||
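							<p>参考として、Minikubeをローカルで利用する場合の基本的なCLI操作の一例を以下に示します(本チュートリアルではオンラインのターミナルを使用するため、手元での実行は任意です):</p>
							<pre><code>
# ローカルにMinikubeをインストールしている場合の基本操作の一例です
minikube start     # ローカルに単一NodeのKubernetesクラスタを作成して起動します
minikube status    # クラスタの状態を確認します
minikube stop      # クラスタを停止します
minikube delete    # クラスタを削除します
</code></pre>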
|
||||
<p>Kubernetesが何であるかがわかったので、オンラインチュートリアルに行き、最初のクラスタを動かしましょう!</p>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/" role="button">対話型のチュートリアルを始める <span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
title: アプリケーションのデプロイ
|
||||
weight: 20
|
||||
---
|
|
@ -0,0 +1,41 @@
|
|||
---
|
||||
title: 対話型チュートリアル - アプリケーションのデプロイ
|
||||
weight: 20
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content katacoda-content">
|
||||
|
||||
<br>
|
||||
<div class="katacoda">
|
||||
<div class="katacoda__alert">
|
||||
ターミナルを使うには、PCまたはタブレットをお使いください
|
||||
</div>
|
||||
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/7" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
</div>
|
||||
|
||||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/explore/explore-intro/" role="button">モジュール3へ進む<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,107 @@
|
|||
---
|
||||
title: kubectlを使ったDeploymentの作成
|
||||
weight: 10
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
||||
<div class="row">
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>目標</h3>
|
||||
<ul>
|
||||
<li>アプリケーションのデプロイについて学ぶ</li>
|
||||
<li>kubectlを使って、Kubernetes上にはじめてのアプリケーションをデプロイする</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>Kubernetes Deployments</h3>
|
||||
<p>
|
||||
実行中のKubernetesクラスタを入手すると、その上にコンテナ化アプリケーションをデプロイすることができます。そのためには、Kubernetesの<b>Deployment</b> の設定を作成します。DeploymentはKubernetesにあなたのアプリケーションのインスタンスを作成し、更新する方法を指示します。Deploymentを作成すると、Kubernetesマスターは指定されたアプリケーションインスタンスをクラスタ内の個々のNodeにスケジュールします。
|
||||
</p>
|
||||
|
||||
<p>アプリケーションインスタンスが作成されると、Kubernetes Deploymentコントローラーは、それらのインスタンスを継続的に監視します。インスタンスをホストしているNodeが停止、削除された場合、Deploymentコントローラーがそれを置き換えます。<b>これは、マシンの故障やメンテナンスに対処するためのセルフヒーリングの仕組みを提供しています。</b></p>
|
||||
|
||||
<p>オーケストレーションが入る前の世界では、インストールスクリプトを使用してアプリケーションを起動することはよくありましたが、マシン障害が発生した場合に復旧する事はできませんでした。アプリケーションのインスタンスを作成し、それらをNode間で実行し続けることで、Kubernetes Deploymentsはアプリケーションの管理に根本的に異なるアプローチを提供します。</p>
|
||||
|
||||
</div>
|
||||
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_lined">
|
||||
<h3>まとめ:</h3>
|
||||
<ul>
|
||||
<li>Deployments</li>
|
||||
<li>kubectl</li>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>
|
||||
Deploymentは、アプリケーションのインスタンスを作成および更新する責務があります。
|
||||
</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2 style="color: #3771e3;">Kubernetes上にはじめてのアプリケーションをデプロイする</h2>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_02_first_app.svg"></p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
|
||||
<p>Kubernetesのコマンドラインインターフェイスである<b>kubectl</b>を使用して、Deploymentを作成、管理できます。kubectlはKubernetes APIを使用してクラスタと対話します。このモジュールでは、Kubernetesクラスタでアプリケーションを実行するDeploymentを作成するために必要な、最も一般的なkubectlコマンドについて学びます。</p>
|
||||
|
||||
<p>Deploymentを作成するときは、アプリケーションのコンテナイメージと実行するレプリカの数を指定する必要があります。Deploymentを更新することで、あとでその情報を変更できます。チュートリアルのモジュール<a href="/ja/docs/tutorials/kubernetes-basics/scale/scale-intro/">5</a>と<a href="/ja/docs/tutorials/kubernetes-basics/update/update-intro/">6</a>では、Deploymentをどのようにスケール、更新できるかについて説明します。</p>
|
||||
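            <p>以下は、kubectlでコンテナイメージを指定してDeploymentを作成し、その状態を確認する操作の一例です(Deployment名とイメージ名は説明用の仮の値です):</p>
            <pre><code>
# Deployment名とイメージ名は説明用の例です
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
kubectl get deployments
</code></pre>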
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>Kubernetesにデプロイするには、アプリケーションをサポートされているコンテナ形式のいずれかにパッケージ化する必要があります。</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
|
||||
<p>最初のDeploymentには、Dockerコンテナにパッケージされた<a href="https://nodejs.org">Node.js</a>アプリケーションを使用します。Node.jsアプリケーションを作成してDockerコンテナをデプロイするには、<a href="/ja/docs/tutorials/hello-minikube/#create-your-node-js-application">Hello Minikubeチュートリアル</a>の指示に従ってください。</p>
|
||||
|
||||
<p>Deploymentが何であるかがわかったので、オンラインチュートリアルに行き、最初のアプリケーションをデプロイしましょう!</p>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/" role="button">対話型のチュートリアルを始める <span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
title: アプリケーションの探索
|
||||
weight: 30
|
||||
---
|
|
@ -0,0 +1,41 @@
|
|||
---
|
||||
title: 対話型チュートリアル - デプロイしたアプリケーションの探索
|
||||
weight: 20
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content katacoda-content">
|
||||
|
||||
<br>
|
||||
<div class="katacoda">
|
||||
|
||||
<div class="katacoda__alert">
|
||||
ターミナルを使うには、PCまたはタブレットをお使いください
|
||||
</div>
|
||||
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/4" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
</div>
|
||||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/expose/expose-intro/" role="button">モジュール4へ進む<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,141 @@
|
|||
---
|
||||
title: PodとNodeについて
|
||||
weight: 10
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
||||
<div class="row">
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>目標</h3>
|
||||
<ul>
|
||||
<li>KubernetesのPodについて学ぶ</li>
|
||||
<li>KubernetesのNodeについて学ぶ</li>
|
||||
<li>デプロイされたアプリケーションのトラブルシューティング</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<div class="col-md-8">
|
||||
<h2>Kubernetes Pod</h2>
|
||||
<p>モジュール<a href="/ja/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/">2</a>でDeploymentを作成したときに、KubernetesはアプリケーションインスタンスをホストするためのPodを作成しました。Podは、1つ以上のアプリケーションコンテナ(Dockerやrktなど)のグループとそれらのコンテナの共有リソースを表すKubernetesの抽象概念です。Podには以下のものが含まれます:</p>
|
||||
<ul>
|
||||
<li>共有ストレージ(ボリューム)</li>
|
||||
<li>ネットワーキング(クラスタに固有のIPアドレス)</li>
|
||||
<li>コンテナのイメージバージョンや使用するポートなどの、各コンテナをどう動かすかに関する情報</li>
|
||||
</ul>
|
||||
<p>Podは、アプリケーション固有の「論理ホスト」をモデル化し、比較的密接に結合されたさまざまなアプリケーションコンテナを含むことができます。 たとえば、Podには、Node.jsアプリケーションを含むコンテナと、Node.js Webサーバによって公開されるデータを供給する別のコンテナの両方を含めることができます。Pod内のコンテナはIPアドレスとポートスペースを共有し、常に同じ場所に配置され、同じスケジュールに入れられ、同じNode上の共有コンテキストで実行されます。</p>
|
||||
<p>Podは、Kubernetesプラットフォームの原子単位です。 Kubernetes上にDeploymentを作成すると、そのDeploymentはその中にコンテナを持つPodを作成します(コンテナを直接作成するのではなく)。 各Podは、スケジュールされているNodeに関連付けられており、終了(再起動ポリシーに従って)または削除されるまでそこに残ります。 Nodeに障害が発生した場合、同じPodがクラスタ内の他の使用可能なNodeにスケジュールされます。</p>
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_lined">
|
||||
<h3>まとめ:</h3>
|
||||
<ul>
|
||||
<li>Pod</li>
|
||||
<li>Node</li>
|
||||
<li>kubectlの主要なコマンド</li>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>
|
||||
Podは1つ以上のアプリケーションコンテナ(Dockerやrktなど)のグループであり、共有ストレージ(ボリューム)、IPアドレス、それらの実行方法に関する情報が含まれています。
|
||||
</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2 style="color: #3771e3;">Podの概要</h2>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_03_pods.svg"></p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2>Node</h2>
|
||||
<p>Podは常に<b>Node</b>上で動作します。NodeはKubernetesにおけるワーカーマシンであり、クラスタに応じて仮想マシンまたは物理マシンのどちらかになります。各Nodeはマスターによって管理されます。Nodeは複数のPodを持つことができ、Kubernetesマスターはクラスタ内のNode間でPodのスケジュールを自動的に処理します。マスターの自動スケジューリングは各Nodeで利用可能なリソースを考慮に入れます。</p>
|
||||
|
||||
<p>すべてのKubernetesのNodeでは少なくとも以下のものが動作します。</p>
|
||||
<ul>
|
||||
<li>Kubelet: KubernetesマスターとNode間の通信を担当するプロセス。マシン上で実行されているPodとコンテナを管理します。</li>
|
||||
<li>レジストリからコンテナイメージを取得し、コンテナを解凍し、アプリケーションを実行することを担当する、Docker、rktのようなコンテナランタイム。</li>
|
||||
</ul>
|
||||
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i> コンテナ同士が密接に結合され、ディスクなどのリソースを共有する必要がある場合は、コンテナを1つのPodにまとめてスケジュールする必要があります。 </i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2 style="color: #3771e3;">Nodeの概要</h2>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_03_nodes.svg"></p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2>kubectlを使ったトラブルシューティング</h2>
|
||||
<p>モジュール<a href="/ja/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/">2</a>では、kubectlのコマンドラインインターフェースを使用しました。モジュール3でもこれを使用して、デプロイされたアプリケーションとその環境に関する情報を入手します。最も一般的な操作は、次のkubectlコマンドで実行できます。</p>
|
||||
<ul>
|
||||
<li><b>kubectl get</b> - リソースの一覧を表示</li>
|
||||
<li><b>kubectl describe</b> - 単一リソースに関する詳細情報を表示</li>
|
||||
<li><b>kubectl logs</b> - 単一Pod上の単一コンテナ内のログを表示</li>
|
||||
<li><b>kubectl exec</b> - 単一Pod上の単一コンテナ内でコマンドを実行</li>
|
||||
</ul>
|
||||
|
||||
<p>これらのコマンドを使用して、アプリケーションがいつデプロイされたか、それらの現在の状況、実行中の場所、および構成を確認することができます。</p>
|
||||
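            <p>たとえば、以下のように使用できます($POD_NAMEは確認対象のPod名に置き換えてください):</p>
            <pre><code>
# $POD_NAMEは確認対象のPod名に置き換えてください
kubectl get pods
kubectl describe pod $POD_NAME
kubectl logs $POD_NAME
kubectl exec $POD_NAME -- env
</code></pre>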
|
||||
<p>クラスタのコンポーネントとコマンドラインの詳細についてわかったので、次にデプロイしたアプリケーションを探索してみましょう。</p>
|
||||
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i> NodeはKubernetesではワーカーマシンであり、クラスタに応じてVMまたは物理マシンになります。 複数のPodを1つのNodeで実行できます。 </i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/explore/explore-interactive/" role="button">対話型のチュートリアルを始める <span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
title: アプリケーションの公開
|
||||
weight: 40
|
||||
---
|
|
@ -0,0 +1,38 @@
|
|||
---
|
||||
title: 対話型チュートリアル - アプリケーションの公開
|
||||
weight: 20
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content katacoda-content">
|
||||
|
||||
<div class="katacoda">
|
||||
<div class="katacoda__alert">
|
||||
ターミナルを使うには、PCまたはタブレットをお使いください
|
||||
</div>
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/8" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
</div>
|
||||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/scale/scale-intro/" role="button">モジュール5へ進む<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,114 @@
|
|||
---
|
||||
title: Serviceを使ったアプリケーションの公開
|
||||
weight: 10
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h3>目標</h3>
|
||||
<ul>
|
||||
<li>KubernetesにおけるServiceについて理解する</li>
|
||||
<li>ラベルとLabelSelectorオブジェクトがServiceにどう関係しているかを理解する</li>
|
||||
<li>Serviceを使って、Kubernetesクラスタの外にアプリケーションを公開する</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>Kubernetes Serviceの概要</h3>
|
||||
|
||||
<p>Kubernetes Podの寿命は永続的ではありません。実際、<a href="/ja/docs/concepts/workloads/pods/pod-overview/">Pod</a>には<a href="/ja/docs/concepts/workloads/pods/pod-lifecycle/">ライフサイクル</a>があります。ワーカーのNodeが停止すると、そのNodeで実行されているPodも失われます。そうなると、<a href="/ja/docs/user-guide/replication-controller/#what-is-a-replicationcontroller">ReplicationController</a>は、新しいPodを作成してアプリケーションを実行し続けるために、クラスタを動的に目的の状態に戻すことができます。別の例として、3つのレプリカを持つ画像処理バックエンドを考えます。それらのレプリカは交換可能です。フロントエンドシステムはバックエンドのレプリカを気にしたり、Podが失われて再作成されたとしても配慮すべきではありません。ただし、Kubernetesクラスタ内の各Podは、同じNode上のPodであっても一意のIPアドレスを持っているため、アプリケーションが機能し続けるように、Pod間の変更を自動的に調整する方法が必要です。</p>
|
||||
|
||||
<p>KubernetesのServiceは、Podの論理セットと、それらにアクセスするためのポリシーを定義する抽象概念です。Serviceによって、依存Pod間の疎結合が可能になります。Serviceは、すべてのKubernetesオブジェクトのように、YAML<a href="/ja/docs/concepts/configuration/overview/#general-config-tips">(推奨)</a>またはJSONを使って定義されます。Serviceが対象とするPodのセットは通常、<i>LabelSelector</i>によって決定されます(なぜ仕様に<code>セレクタ</code>を含めずにServiceが必要になるのかについては下記を参照してください)。</p>
|
||||
|
||||
<p>各Podには固有のIPアドレスがありますが、それらのIPは、Serviceなしではクラスタの外部に公開されません。Serviceによって、アプリケーションはトラフィックを受信できるようになります。ServiceSpecで<code>type</code>を指定することで、Serviceをさまざまな方法で公開することができます。</p>
|
||||
<ul>
|
||||
<li><i>ClusterIP</i> (既定値) - クラスタ内の内部IPでServiceを公開します。この型では、Serviceはクラスタ内からのみ到達可能になります。</li>
|
||||
<li><i>NodePort</i> - NATを使用して、クラスタ内の選択された各Nodeの同じポートにServiceを公開します。<code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>を使用してクラスタの外部からServiceにアクセスできるようにします。これはClusterIPのスーパーセットです。</li>
|
||||
<li><i>LoadBalancer</i> - 現在のクラウドに外部ロードバランサを作成し(サポートされている場合)、Serviceに固定の外部IPを割り当てます。これはNodePortのスーパーセットです。</li>
|
||||
<li><i>ExternalName</i> - 仕様の<code>externalName</code>で指定した名前のCNAMEレコードを返すことによって、任意の名前を使ってServiceを公開します。プロキシは使用されません。このタイプはv1.7以上の<code>kube-dns</code>を必要とします。</li>
|
||||
</ul>
|
||||
<p>さまざまな種類のServiceに関する詳細情報は<a href="/ja/docs/tutorials/services/source-ip/">Using Source IP</a> tutorialにあります。<a href="/ja/docs/concepts/services-networking/connect-applications-service">アプリケーションとServiceの接続</a>も参照してください。</p>
|
||||
<p>加えて、Serviceには、仕様に<code>selector</code>を定義しないというユースケースがいくつかあります。<code>selector</code>を指定せずに作成したServiceについて、対応するEndpointsオブジェクトは作成されません。これによって、ユーザーは手動でServiceを特定のエンドポイントにマッピングできます。セレクタを使用しないもう1つのケースは、<code>type: ExternalName</code>を厳密に使用している場合です。</p>
|
||||
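                    <p>参考として、既存のDeploymentをNodePortタイプのServiceとして公開し、作成されたServiceを確認する操作の一例を以下に示します(Deployment名とポート番号は説明用の例です):</p>
                    <pre><code>
# Deployment名とポート番号は説明用の例です
kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port=8080
kubectl get services
</code></pre>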
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_lined">
|
||||
<h3>まとめ</h3>
|
||||
<ul>
|
||||
<li>Podを外部トラフィックに公開する</li>
|
||||
<li>複数のPod間でトラフィックを負荷分散する</li>
|
||||
<li>ラベルを使う</li>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>Kubernetes Serviceは、Podの論理セットを定義し、それらのPodに対する外部トラフィックの公開、負荷分散、およびサービス検出を可能にする抽象化層です。</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h3>Serviceとラベル</h3>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_04_services.svg" width="150%" height="150%"></p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>Serviceは、一連のPodにトラフィックをルーティングします。Serviceは、アプリケーションに影響を与えることなく、KubernetesでPodが死んだり複製したりすることを可能にする抽象概念です。(アプリケーションのフロントエンドおよびバックエンドコンポーネントなどの)依存Pod間の検出とルーティングは、Kubernetes Serviceによって処理されます。</p>
|
||||
<p>Serviceは、ラベルとセレクタを使用して一連のPodを照合します。これは、Kubernetes内のオブジェクトに対する論理操作を可能にするグループ化のプリミティブです。ラベルはオブジェクトに付けられたkey/valueのペアであり、さまざまな方法で使用できます。</p>
|
||||
<ul>
|
||||
<li>開発、テスト、および本番用のオブジェクトを指定する</li>
|
||||
<li>バージョンタグを埋め込む</li>
|
||||
<li>タグを使用してオブジェクトを分類する</li>
|
||||
</ul>
|
||||
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>kubectlの<br> <code> --expose </code>を使用して、Deploymentの作成と同時にServiceを作成できます。</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_04_labels.svg"></p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>ラベルは、作成時またはそれ以降にオブジェクトにアタッチでき、いつでも変更可能です。Serviceを使用してアプリケーションを公開し、いくつかのラベルを適用してみましょう。</p>
|
||||
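                    <p>参考として、Podへラベルを適用し、そのラベルでPodを絞り込む操作の一例を以下に示します($POD_NAMEおよびラベルのキーと値は説明用の例です):</p>
                    <pre><code>
# $POD_NAMEとラベルのキー・値は説明用の例です
kubectl label pod $POD_NAME app=v1
kubectl get pods -l app=v1
</code></pre>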
</div>
|
||||
</div>
|
||||
<br>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/expose/expose-interactive/" role="button">対話型のチュートリアルを始める<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
</main>
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
title: アプリケーションのスケーリング
|
||||
weight: 50
|
||||
---
|
|
@ -0,0 +1,40 @@
|
|||
---
|
||||
title: 対話型チュートリアル - アプリケーションのスケーリング
|
||||
weight: 20
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content katacoda-content">
|
||||
|
||||
<div class="katacoda">
|
||||
<div class="katacoda__alert">
|
||||
ターミナルを使うには、PCまたはタブレットをお使いください
|
||||
</div>
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/5" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
</div>
|
||||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/update/update-intro/" role="button">モジュール6へ進む<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
<a class="scrolltop" href="#top"></a>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,121 @@
|
|||
---
|
||||
title: アプリケーションの複数インスタンスを実行
|
||||
weight: 10
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
||||
<div class="row">
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>目標</h3>
|
||||
<ul>
|
||||
<li>kubectlを使用してアプリケーションをスケールする</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>アプリケーションのスケーリング</h3>
|
||||
|
||||
<p>前回のモジュールでは、<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Deployment</a>を作成し、それを<a href="https://kubernetes.io/docs/concepts/services-networking/service/">Service</a>経由で公開しました。該当のDeploymentでは、アプリケーションを実行するためのPodを1つだけ作成しました。トラフィックが増加した場合、ユーザーの需要に対応するためにアプリケーションをスケールする必要があります。</p>
|
||||
|
||||
<p><b>スケーリング</b>は、Deploymentのレプリカの数を変更することによって実現可能です。</p>
|
||||
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_lined">
|
||||
<h3>まとめ</h3>
|
||||
<ul>
|
||||
<li>Deploymentのスケーリング</li>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>kubectl runコマンドの--replicasパラメーターを使用することで、最初から複数のインスタンスを含むDeploymentを作成できます。</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2 style="color: #3771e3;">スケーリングの概要</h2>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-1"></div>
|
||||
<div class="col-md-8">
|
||||
<div id="myCarousel" class="carousel" data-ride="carousel" data-interval="3000">
|
||||
<ol class="carousel-indicators">
|
||||
<li data-target="#myCarousel" data-slide-to="0" class="active"></li>
|
||||
<li data-target="#myCarousel" data-slide-to="1"></li>
|
||||
</ol>
|
||||
<div class="carousel-inner" role="listbox">
|
||||
<div class="item active">
|
||||
<img src="/docs/tutorials/kubernetes-basics/public/images/module_05_scaling1.svg">
|
||||
</div>
|
||||
|
||||
<div class="item">
|
||||
<img src="/docs/tutorials/kubernetes-basics/public/images/module_05_scaling2.svg">
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<a class="left carousel-control" href="#myCarousel" role="button" data-slide="prev">
|
||||
<span class="sr-only ">前</span>
|
||||
</a>
|
||||
<a class="right carousel-control" href="#myCarousel" role="button" data-slide="next">
|
||||
<span class="sr-only">次</span>
|
||||
</a>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
|
||||
<p>Deploymentをスケールアウトすると、新しいPodが作成され、使用可能なリソースを持つNodeにスケジュールされます。スケールすると、Podの数が増えて新たな望ましい状態になります。KubernetesはPodの<a href="http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/">オートスケーリング</a>もサポートしていますが、このチュートリアルでは範囲外です。スケーリングを0に設定することも可能で、指定されたDeploymentのすべてのPodを終了させます。</p>
|
||||
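                    <p>以下は、Deploymentのレプリカ数を変更してスケールし、その結果を確認する操作の一例です(Deployment名とレプリカ数は説明用の例です):</p>
                    <pre><code>
# Deployment名とレプリカ数は説明用の例です
kubectl scale deployments/kubernetes-bootcamp --replicas=4
kubectl get deployments
kubectl get pods -o wide
</code></pre>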
|
||||
<p>アプリケーションの複数インスタンスを実行するには、それらすべてにトラフィックを分散する方法が必要になります。Serviceには、公開されたDeploymentのすべてのPodにネットワークトラフィックを分散する統合ロードバランサがあります。Serviceは、エンドポイントを使用して実行中のPodを継続的に監視し、トラフィックが使用可能なPodにのみ送信されるようにします。</p>
|
||||
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>スケーリングは、Deploymentのレプリカの数を変更することによって実現可能です。</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>アプリケーションの複数のインスタンスを実行すると、ダウンタイムなしでローリングアップデートを実行できます。それについては、次のモジュールで学習します。それでは、オンラインのターミナルを使って、アプリケーションをスケールしてみましょう。</p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/scale/scale-interactive/" role="button">対話型のチュートリアルを始める <span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
title: アプリケーションのアップデート
|
||||
weight: 60
|
||||
---
|
|
@ -0,0 +1,37 @@
|
|||
---
|
||||
title: 対話型チュートリアル - アプリケーションのアップデート
|
||||
weight: 20
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content katacoda-content">
|
||||
|
||||
<div class="katacoda">
|
||||
<div class="katacoda__alert">
|
||||
ターミナルを使うには、PCまたはタブレットをお使いください
|
||||
</div>
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/6" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
</div>
|
||||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/" role="button">Kubernetesの基本に戻る<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,136 @@
|
|||
---
|
||||
title: ローリングアップデートの実行
|
||||
weight: 10
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="ja">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="https://fonts.googleapis.com/css?family=Roboto+Slab:300,400,700" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
||||
<div class="row">
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>目標</h3>
|
||||
<ul>
|
||||
<li>kubectlを使ってローリングアップデートを実行する</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>アプリケーションのアップデート</h3>
|
||||
|
||||
<p>ユーザーはアプリケーションが常に利用可能であることを期待し、開発者はそれらの新しいバージョンを1日に数回デプロイすることが期待されます。Kubernetesでは、アプリケーションのアップデートをローリングアップデートで行います。<b>ローリングアップデート</b>では、Podインスタンスを新しいインスタンスで段階的にアップデートすることで、ダウンタイムなしでDeploymentをアップデートできます。新しいPodは、利用可能なリソースを持つNodeにスケジュールされます。</p>
|
||||
|
||||
<p>前回のモジュールでは、複数のインスタンスを実行するようにアプリケーションをデプロイしました。これは、アプリケーションの可用性に影響を与えずにアップデートを行うための要件です。デフォルトでは、アップデート中に使用できなくなる可能性があるPodの最大数と作成できる新しいPodの最大数は1です。どちらのオプションも、Podの数または全体数に対する割合(%)のいずれかに設定できます。Kubernetesでは、アップデートはバージョン管理されており、Deploymentにおけるアップデートは以前の(stable)バージョンに戻すことができます。</p>
|
||||
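                    <p>以下は、Deploymentのコンテナイメージを更新してローリングアップデートを開始し、必要に応じて以前のバージョンへロールバックする操作の一例です(Deployment名、コンテナ名、イメージ名は説明用の例です):</p>
                    <pre><code>
# Deployment名、コンテナ名、イメージ名は説明用の例です
kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2
kubectl rollout status deployments/kubernetes-bootcamp
kubectl rollout undo deployments/kubernetes-bootcamp
</code></pre>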
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_lined">
|
||||
<h3>まとめ</h3>
|
||||
<ul>
|
||||
<li>アプリケーションのアップデート</li>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>ローリングアップデートでは、Podを新しいインスタンスで段階的にアップデートすることで、ダウンタイムなしでDeploymentをアップデートできます。</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2 style="color: #3771e3;">ローリングアップデートの概要</h2>
|
||||
</div>
|
||||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-1"></div>
|
||||
<div class="col-md-8">
|
||||
<div id="myCarousel" class="carousel" data-ride="carousel" data-interval="3000">
|
||||
<ol class="carousel-indicators">
|
||||
<li data-target="#myCarousel" data-slide-to="0" class="active"></li>
|
||||
<li data-target="#myCarousel" data-slide-to="1"></li>
|
||||
<li data-target="#myCarousel" data-slide-to="2"></li>
|
||||
<li data-target="#myCarousel" data-slide-to="3"></li>
|
||||
</ol>
|
||||
<div class="carousel-inner" role="listbox">
|
||||
<div class="item active">
|
||||
<img src="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates1.svg" >
|
||||
</div>
|
||||
|
||||
<div class="item">
|
||||
<img src="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates2.svg">
|
||||
</div>
|
||||
|
||||
<div class="item">
|
||||
<img src="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates3.svg">
|
||||
</div>
|
||||
|
||||
<div class="item">
|
||||
<img src="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates4.svg">
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<a class="left carousel-control" href="#myCarousel" role="button" data-slide="prev">
|
||||
<span class="sr-only ">前</span>
|
||||
</a>
|
||||
<a class="right carousel-control" href="#myCarousel" role="button" data-slide="next">
|
||||
<span class="sr-only">次</span>
|
||||
</a>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
|
||||
<p>アプリケーションのスケーリングと同様に、Deploymentがパブリックに公開されている場合、Serviceはアップデート中に利用可能なPodのみにトラフィックを負荷分散します。 利用可能なPodは、アプリケーションのユーザーが利用できるインスタンスです。</p>
|
||||
|
||||
<p>ローリングアップデートでは、次の操作が可能です。</p>
|
||||
<ul>
|
||||
<li>コンテナイメージのアップデートを介した、ある環境から別の環境へのアプリケーションの昇格</li>
|
||||
<li>以前のバージョンへのロールバック</li>
|
||||
<li>ダウンタイムなしでのアプリケーションのCI/CD</li>
|
||||
|
||||
</ul>
|
||||
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>Deploymentがパブリックに公開されている場合、Serviceはアップデート中に利用可能なPodにのみトラフィックを負荷分散します。 </i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>次の対話型チュートリアルでは、アプリケーションを新しいバージョンにアップデートし、ロールバックも実行します。</p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/ja/docs/tutorials/kubernetes-basics/update/update-interactive/" role="button">対話型のチュートリアルを始める <span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,13 @@
|
|||
Note: These tests are importing code from kubernetes that isn't really
|
||||
meant to be used outside the repo. This causes vendoring problems. As
|
||||
a result, we have to work around those with these lines in the travis
|
||||
config:
|
||||
|
||||
```
|
||||
- rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery
|
||||
- rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/apiserver
|
||||
- rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/client-go
|
||||
- cp -r $GOPATH/src/k8s.io/kubernetes/vendor/* $GOPATH/src/
|
||||
- rm -rf $GOPATH/src/k8s.io/kubernetes/vendor/*
|
||||
- cp -r $GOPATH/src/k8s.io/kubernetes/staging/src/* $GOPATH/src/
|
||||
```
|
|
@ -0,0 +1,69 @@
|
|||
# This is an example of how to set up cloud-controller-manager as a DaemonSet in your cluster.
|
||||
# It assumes that your masters can run pods and have the role node-role.kubernetes.io/master.
|
||||
# Note that this DaemonSet will not work straight out of the box for your cloud; this is
|
||||
# meant to be a guideline.
|
||||
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: cloud-controller-manager
|
||||
namespace: kube-system
|
||||
---
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: system:cloud-controller-manager
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: cluster-admin
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: cloud-controller-manager
|
||||
namespace: kube-system
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: cloud-controller-manager
|
||||
name: cloud-controller-manager
|
||||
namespace: kube-system
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: cloud-controller-manager
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: cloud-controller-manager
|
||||
spec:
|
||||
serviceAccountName: cloud-controller-manager
|
||||
containers:
|
||||
- name: cloud-controller-manager
|
||||
# for in-tree providers we use k8s.gcr.io/cloud-controller-manager
|
||||
# this can be replaced with any other image for out-of-tree providers
|
||||
image: k8s.gcr.io/cloud-controller-manager:v1.8.0
|
||||
command:
|
||||
- /usr/local/bin/cloud-controller-manager
|
||||
- --cloud-provider=<YOUR_CLOUD_PROVIDER> # Add your own cloud provider here!
|
||||
- --leader-elect=true
|
||||
- --use-service-account-credentials
|
||||
# these flags will vary for every cloud provider
|
||||
- --allocate-node-cidrs=true
|
||||
- --configure-cloud-routes=true
|
||||
- --cluster-cidr=172.17.0.0/16
|
||||
tolerations:
|
||||
# this is required so CCM can bootstrap itself
|
||||
- key: node.cloudprovider.kubernetes.io/uninitialized
|
||||
value: "true"
|
||||
effect: NoSchedule
|
||||
# this is to have the daemonset runnable on master nodes
|
||||
# the taint may vary depending on your cluster setup
|
||||
- key: node-role.kubernetes.io/master
|
||||
effect: NoSchedule
|
||||
# this is to restrict CCM to only run on master nodes
|
||||
# the node selector may vary depending on your cluster setup
|
||||
nodeSelector:
|
||||
node-role.kubernetes.io/master: ""
|
|
@ -0,0 +1,13 @@
|
|||
kind: InitializerConfiguration
|
||||
apiVersion: admissionregistration.k8s.io/v1alpha1
|
||||
metadata:
|
||||
name: pvlabel.kubernetes.io
|
||||
initializers:
|
||||
- name: pvlabel.kubernetes.io
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
apiVersions:
|
||||
- "*"
|
||||
resources:
|
||||
- persistentvolumes
|
|
@ -0,0 +1,14 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: busybox
|
||||
namespace: default
|
||||
spec:
|
||||
containers:
|
||||
- name: busybox
|
||||
image: busybox:1.28
|
||||
command:
|
||||
- sleep
|
||||
- "3600"
|
||||
imagePullPolicy: IfNotPresent
|
||||
restartPolicy: Always
|
|
@ -0,0 +1,33 @@
|
|||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: dns-autoscaler
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: dns-autoscaler
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: dns-autoscaler
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: dns-autoscaler
|
||||
spec:
|
||||
containers:
|
||||
- name: autoscaler
|
||||
image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.1.1
|
||||
resources:
|
||||
requests:
|
||||
cpu: "20m"
|
||||
memory: "10Mi"
|
||||
command:
|
||||
- /cluster-proportional-autoscaler
|
||||
- --namespace=kube-system
|
||||
- --configmap=dns-autoscaler
|
||||
- --target=<SCALE_TARGET>
|
||||
# When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
|
||||
# If using small nodes, "nodesPerReplica" should dominate.
|
||||
- --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"min":1}}
|
||||
- --logtostderr=true
|
||||
- --v=2
|
|
@ -0,0 +1,25 @@
|
|||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: fluentd-config
|
||||
data:
|
||||
fluentd.conf: |
|
||||
<source>
|
||||
type tail
|
||||
format none
|
||||
path /var/log/1.log
|
||||
pos_file /var/log/1.log.pos
|
||||
tag count.format1
|
||||
</source>
|
||||
|
||||
<source>
|
||||
type tail
|
||||
format none
|
||||
path /var/log/2.log
|
||||
pos_file /var/log/2.log.pos
|
||||
tag count.format2
|
||||
</source>
|
||||
|
||||
<match **>
|
||||
type google_cloud
|
||||
</match>
|
|
@ -0,0 +1,39 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: counter
|
||||
spec:
|
||||
containers:
|
||||
- name: count
|
||||
image: busybox
|
||||
args:
|
||||
- /bin/sh
|
||||
- -c
|
||||
- >
|
||||
i=0;
|
||||
while true;
|
||||
do
|
||||
echo "$i: $(date)" >> /var/log/1.log;
|
||||
echo "$(date) INFO $i" >> /var/log/2.log;
|
||||
i=$((i+1));
|
||||
sleep 1;
|
||||
done
|
||||
volumeMounts:
|
||||
- name: varlog
|
||||
mountPath: /var/log
|
||||
- name: count-agent
|
||||
image: k8s.gcr.io/fluentd-gcp:1.30
|
||||
env:
|
||||
- name: FLUENTD_ARGS
|
||||
value: -c /etc/fluentd-config/fluentd.conf
|
||||
volumeMounts:
|
||||
- name: varlog
|
||||
mountPath: /var/log
|
||||
- name: config-volume
|
||||
mountPath: /etc/fluentd-config
|
||||
volumes:
|
||||
- name: varlog
|
||||
emptyDir: {}
|
||||
- name: config-volume
|
||||
configMap:
|
||||
name: fluentd-config
|
|
@ -0,0 +1,38 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: counter
|
||||
spec:
|
||||
containers:
|
||||
- name: count
|
||||
image: busybox
|
||||
args:
|
||||
- /bin/sh
|
||||
- -c
|
||||
- >
|
||||
i=0;
|
||||
while true;
|
||||
do
|
||||
echo "$i: $(date)" >> /var/log/1.log;
|
||||
echo "$(date) INFO $i" >> /var/log/2.log;
|
||||
i=$((i+1));
|
||||
sleep 1;
|
||||
done
|
||||
volumeMounts:
|
||||
- name: varlog
|
||||
mountPath: /var/log
|
||||
- name: count-log-1
|
||||
image: busybox
|
||||
args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
|
||||
volumeMounts:
|
||||
- name: varlog
|
||||
mountPath: /var/log
|
||||
- name: count-log-2
|
||||
image: busybox
|
||||
args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log']
|
||||
volumeMounts:
|
||||
- name: varlog
|
||||
mountPath: /var/log
|
||||
volumes:
|
||||
- name: varlog
|
||||
emptyDir: {}
|
|
@ -0,0 +1,26 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: counter
|
||||
spec:
|
||||
containers:
|
||||
- name: count
|
||||
image: busybox
|
||||
args:
|
||||
- /bin/sh
|
||||
- -c
|
||||
- >
|
||||
i=0;
|
||||
while true;
|
||||
do
|
||||
echo "$i: $(date)" >> /var/log/1.log;
|
||||
echo "$(date) INFO $i" >> /var/log/2.log;
|
||||
i=$((i+1));
|
||||
sleep 1;
|
||||
done
|
||||
volumeMounts:
|
||||
- name: varlog
|
||||
mountPath: /var/log
|
||||
volumes:
|
||||
- name: varlog
|
||||
emptyDir: {}
|
|
@ -0,0 +1,10 @@
|
|||
{
|
||||
"kind": "Namespace",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "development",
|
||||
"labels": {
|
||||
"name": "development"
|
||||
}
|
||||
}
|
||||
}
|
|
@ -0,0 +1,10 @@
|
|||
{
|
||||
"kind": "Namespace",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "production",
|
||||
"labels": {
|
||||
"name": "production"
|
||||
}
|
||||
}
|
||||
}
|
|
@ -0,0 +1,13 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1.5"
|
||||
requests:
|
||||
cpu: "500m"
|
|
@ -0,0 +1,13 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo-4
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-4-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "800m"
|
||||
requests:
|
||||
cpu: "100m"
|
|
@ -0,0 +1,8 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo-4
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-4-ctr
|
||||
image: vish/stress
|
|
@ -0,0 +1,13 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "800m"
|
||||
requests:
|
||||
cpu: "500m"
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: cpu-min-max-demo-lr
|
||||
spec:
|
||||
limits:
|
||||
- max:
|
||||
cpu: "800m"
|
||||
min:
|
||||
cpu: "200m"
|
||||
type: Container
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-cpu-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: default-cpu-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-cpu-demo-3
|
||||
spec:
|
||||
containers:
|
||||
- name: default-cpu-demo-3-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
requests:
|
||||
cpu: "0.75"
|