Remove on-premises-vm section for ja
parent 150cef66c4
commit 9e49f5c840
@@ -1,4 +0,0 @@
---
title: On-Premises VM
weight: 40
---
@ -1,118 +0,0 @@
|
|||
---
|
||||
title: Cloudstack
|
||||
content_type: concept
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
[CloudStack](https://cloudstack.apache.org/) is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). There are several ways to deploy Kubernetes on CloudStack, depending on the cloud being used and which images are made available. CloudStack also has a Vagrant plugin available, so Vagrant can be used to deploy Kubernetes either with the existing shell provisioner or with new Salt-based recipes.

[CoreOS](https://coreos.com) templates for CloudStack are built [nightly](https://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](https://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.

This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack-based cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.

<!-- body -->

## Prerequisites

```shell
sudo apt-get install -y python-pip libssl-dev
sudo pip install cs
sudo pip install sshpubkeys
sudo apt-get install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
```

On the CloudStack server you also have to install libselinux-python:

```shell
yum install libselinux-python
```

[_cs_](https://github.com/exoscale/cs) is a Python module for the CloudStack API.

Set your CloudStack endpoint, API keys, and the HTTP method to use.

You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`.
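For the environment-variable route, a minimal sketch — the endpoint and keys below are placeholders, not real credentials:

```shell
# Placeholder values; substitute your own cloud's endpoint and API keys.
export CLOUDSTACK_ENDPOINT="https://cloud.example.com/client/api"
export CLOUDSTACK_KEY="your-api-key"
export CLOUDSTACK_SECRET="your-api-secret"
export CLOUDSTACK_METHOD="post"
echo "method: $CLOUDSTACK_METHOD"
```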

Or create a `~/.cloudstack.ini` file:

```none
[cloudstack]
endpoint = <your cloudstack api endpoint>
key = <your api access key>
secret = <your api secret key>
method = post
```

We need to use the HTTP POST method to pass the _large_ user data to the CoreOS instances.

### Clone the playbook

```shell
git clone https://github.com/apachecloudstack/k8s
cd k8s
```

### Create a Kubernetes cluster

You simply need to run the playbook:

```shell
ansible-playbook k8s.yml
```

Some variables can be edited in the `k8s.yml` file.

```none
vars:
  ssh_key: k8s
  k8s_num_nodes: 2
  k8s_security_group_name: k8s
  k8s_node_prefix: k8s2
  k8s_template: <templatename>
  k8s_instance_type: <serviceofferingname>
```
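Instead of editing `k8s.yml` in place, the same variables can also be kept in a separate extra-vars file and passed with `ansible-playbook k8s.yml -e @myvars.yml` (the file name and values here are only illustrative):

```none
# myvars.yml (hypothetical) — overrides for the playbook defaults
k8s_num_nodes: 3
k8s_node_prefix: staging
```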

This will start a Kubernetes master node and a number of compute nodes (two by default).
The `instance_type` and `template` are cloud-specific; edit them to match the template and instance type (that is, service offering) of your CloudStack cloud.

Check the tasks and templates in `roles/k8s` if you want to modify anything.

Once the playbook has finished, it will print out the IP of the Kubernetes master:

```none
TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********
```

SSH to it using the key that was created and the _core_ user:

```shell
ssh -i ~/.ssh/id_rsa_k8s core@<master IP>
```

And you can list the machines in your cluster:

```shell
fleetctl list-machines
```

```none
MACHINE     IP            METADATA
a017c422... <node #1 IP>  role=node
ad13bf84... <master IP>   role=master
e9af8293... <node #2 IP>  role=node
```
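If you want to script against this output, the master's IP can be pulled out with standard text tools; a small sketch using sample data (the IPs below are made up):

```shell
# Sample `fleetctl list-machines` output; in practice, pipe in the real command.
sample='MACHINE     IP        METADATA
a017c422... 10.0.0.11 role=node
ad13bf84... 10.0.0.10 role=master
e9af8293... 10.0.0.12 role=node'

# Print the IP of the machine whose metadata marks it as master.
echo "$sample" | awk '$3 == "role=master" {print $2}'
```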

## Support Level

IaaS Provider | Config. Mgmt | OS     | Networking | Docs                                                                  | Conforms | Support Level
------------- | ------------ | ------ | ---------- | --------------------------------------------------------------------- | -------- | ----------------------------
CloudStack    | Ansible      | CoreOS | flannel    | [docs](/docs/setup/production-environment/on-premises-vm/cloudstack/) |          | Community ([@Guiques](https://github.com/ltupin/))


@@ -1,25 +0,0 @@
---
title: Kubernetes on DC/OS
content_type: concept
---

<!-- overview -->

Mesosphere provides an easy option for building Kubernetes on top of [DC/OS](https://mesosphere.com/product/). It offers:

* Pure upstream Kubernetes
* Single-click cluster provisioning
* Highly available and secure by default
* Kubernetes running alongside fast data platforms (for example Akka, Cassandra, Kafka, and Spark)

<!-- body -->

## Official Mesosphere guide

The canonical source for getting started with DC/OS is the [quickstart repository](https://github.com/mesosphere/dcos-kubernetes-quickstart).


@@ -1,68 +0,0 @@
---
title: oVirt
content_type: concept
---

<!-- overview -->

oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center.

<!-- body -->

## Deploying with the oVirt cloud provider

The oVirt cloud provider makes it easy to discover and automatically add new VM instances as nodes to your Kubernetes cluster.
At the moment there are no community-supported or pre-loaded VM images that include Kubernetes, but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes Kubernetes may work as well.

It is mandatory to [install the ovirt-guest-agent] in the guests for the VM IP address and hostname to be reported to ovirt-engine and ultimately to Kubernetes.

Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.

[import]: https://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
[install]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#create-virtual-machines
[generate a template]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#using-templates
[install the ovirt-guest-agent]: https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/

## Using the oVirt cloud provider

The oVirt cloud provider requires access to the oVirt REST API to gather the proper information; the required credentials should be specified in the `ovirt-cloud.conf` file:

```none
[connection]
uri = https://localhost:8443/ovirt-engine/api
username = admin@internal
password = admin
```

In the same file it is possible to specify (using the `filters` section) the search query used to identify the VMs to be reported to Kubernetes:

```none
[filters]
# Search query used to find nodes
vms = tag=kubernetes
```

In the above example, all VMs tagged with the `kubernetes` label will be reported as nodes to Kubernetes.
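The `[connection]` and `[filters]` sections live in the same `ovirt-cloud.conf` file. A complete example, written to a temporary path for illustration (all values are placeholders):

```shell
# Write a complete ovirt-cloud.conf combining both sections (placeholder values).
cat > /tmp/ovirt-cloud.conf <<'EOF'
[connection]
uri = https://localhost:8443/ovirt-engine/api
username = admin@internal
password = admin

[filters]
# Search query used to find nodes
vms = tag=kubernetes
EOF

# Confirm the filter query made it into the file.
grep '^vms' /tmp/ovirt-cloud.conf
```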

The `ovirt-cloud.conf` file must then be passed to kube-controller-manager:

```shell
kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ...
```

## oVirt cloud provider screencast

This short screencast demonstrates how the oVirt cloud provider can be used to dynamically add VMs to your Kubernetes cluster.

[![Screencast](https://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](https://www.youtube.com/watch?v=JyyST4ZKne8)

## Support Level

IaaS Provider | Config. Mgmt | OS | Networking | Docs                                                             | Conforms | Support Level
------------- | ------------ | -- | ---------- | ---------------------------------------------------------------- | -------- | ----------------------------
oVirt         |              |    |            | [docs](/docs/setup/production-environment/on-premises-vm/ovirt/) |          | Community ([@simon3z](https://github.com/simon3z))