Merge pull request #40025 from ystkfujii/cleanup/drop_turnkey
[ja] Drop turnkey cloud solutions
commit e3cd33d943
@@ -1,4 +0,0 @@
---
title: Turnkey cloud solutions
weight: 30
---
@@ -1,17 +0,0 @@
---
title: Running Kubernetes on Alibaba Cloud
---

## Alibaba Cloud Container Service

[Alibaba Cloud Container Service](https://www.alibabacloud.com/product/container-service) lets you run and manage Docker applications on a cluster of Alibaba Cloud ECS instances or in a serverless fashion. It supports the popular open source container orchestrators Docker Swarm and Kubernetes.

To simplify cluster deployment and management, use [Kubernetes Support for Alibaba Cloud Container Service](https://www.alibabacloud.com/product/kubernetes). You can get started quickly by following the [Kubernetes walk-through](https://www.alibabacloud.com/help/doc-detail/86737.htm), and there are [tutorials for Kubernetes Support on Alibaba Cloud](https://yq.aliyun.com/teams/11/type_blog-cid_200-page_1) in Chinese.

To use custom binaries or open source Kubernetes, follow the instructions below.

## Custom deployments

The source code for [Kubernetes with the Alibaba Cloud provider implementation](https://github.com/AliyunContainerService/kubernetes) is open source and available on GitHub.

For more information, see "[Quick deployment of Kubernetes - VPC environment on Alibaba Cloud](https://www.alibabacloud.com/forum/read-830)" in English.
@@ -1,82 +0,0 @@
---
title: Running Kubernetes on AWS EC2
content_type: task
---

<!-- overview -->

This page describes how to install a Kubernetes cluster on AWS.

## {{% heading "prerequisites" %}}

To create a Kubernetes cluster on AWS, you will need an access key ID and a secret access key from AWS.
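As a minimal sketch, most AWS tooling reads those credentials from environment variables. The key values below are AWS's documented placeholder examples, not real credentials:

```shell
# Placeholder values; substitute the access key ID and secret access key
# generated for your IAM user in the AWS console.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# Tools such as kops and the AWS CLI pick these variables up automatically.
echo "credentials exported for key ${AWS_ACCESS_KEY_ID}"
```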
### Supported production-grade tools

* [conjure-up](https://docs.conjure-up.io/stable/en/cni/k8s-and-aws) is an open source installer for Kubernetes that creates Kubernetes clusters with native AWS integrations on Ubuntu.

* [Kubernetes Operations](https://github.com/kubernetes/kops) - production-grade Kubernetes installation, upgrades, and management. Supports running Debian, Ubuntu, CentOS, and RHEL on AWS.

* [kube-aws](https://github.com/kubernetes-incubator/kube-aws) creates and manages Kubernetes clusters with [Flatcar Linux](https://www.flatcar-linux.org/) nodes, using EC2, CloudFormation, and Auto Scaling.

* [KubeOne](https://github.com/kubermatic/kubeone) is an open source cluster lifecycle management tool that creates, upgrades, and manages highly available Kubernetes clusters.

<!-- steps -->

## Getting started with your cluster

### Command line administration tool: kubectl

The cluster startup script will leave you with a `kubernetes` directory on your workstation. Alternately, you can download the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases).

Next, add the appropriate binary folder to your `PATH` to access kubectl:

```shell
# macOS
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH

# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```

An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/reference/kubectl/kubectl/)

By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
For more information, see [kubeconfig files](/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).

### Examples

See [a simple nginx example](/ja/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.

The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/examples/tree/master/guestbook/)

For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/master/).

## Scaling the cluster

Adding and removing nodes through `kubectl` is not supported. You can still scale the number of nodes manually by adjusting the 'Desired' and 'Max' properties of the [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html) that was created during the installation.
## Tearing down the cluster

Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the `kubernetes` directory:

```shell
cluster/kube-down.sh
```

## Support Level

IaaS Provider        | Config. Mgmt | OS            | Networking  | Docs                                          | Conforms | Support Level
-------------------- | ------------ | ------------- | ----------- | --------------------------------------------- | ---------| ----------------------------
AWS                  | kops         | Debian        | k8s (VPC)   | [docs](https://github.com/kubernetes/kops)    |          | Community ([@justinsb](https://github.com/justinsb))
AWS                  | CoreOS       | CoreOS        | flannel     | -                                             |          | Community
AWS                  | Juju         | Ubuntu        | flannel, calico, canal | -                                  | 100%     | Commercial, Community
AWS                  | KubeOne      | Ubuntu, CoreOS, CentOS | canal, weavenet | [docs](https://github.com/kubermatic/kubeone) | 100% | Commercial, Community
@@ -1,33 +0,0 @@
---
title: Running Kubernetes on Azure
---

## Azure Kubernetes Service (AKS)

[Azure Kubernetes Service](https://azure.microsoft.com/ja-jp/services/kubernetes-service/) offers simple deployments for Kubernetes clusters.

For an example of deploying a Kubernetes cluster onto Azure via Azure Kubernetes Service:

**[Microsoft Azure Kubernetes Service](https://docs.microsoft.com/ja-jp/azure/aks/intro-kubernetes)**

## Custom deployments: AKS-Engine

The core of Azure Kubernetes Service is **open source** and available on GitHub for the community to use and contribute to: **[AKS-Engine](https://github.com/Azure/aks-engine)**. The legacy [ACS-Engine](https://github.com/Azure/acs-engine) codebase has been deprecated in favor of AKS-Engine.

AKS-Engine is a good choice if you need to customize a deployment beyond what Azure Kubernetes Service officially supports,
such as deploying into an existing virtual network or using multiple agent pools.
Some community contributions to AKS-Engine may later be incorporated into Azure Kubernetes Service itself.

The input to AKS-Engine is an apimodel JSON file describing the Kubernetes cluster; it is similar to the Azure Resource Manager (ARM) template syntax used to deploy a cluster directly with Azure Kubernetes Service.
The resulting output is an ARM template that can be checked into source control and used to deploy Kubernetes clusters to Azure.
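As an illustration, here is a minimal apimodel sketch patterned on the AKS-Engine quickstart; every value below is a placeholder, not a validated configuration:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes"
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "<unique-dns-prefix>",
      "vmSize": "Standard_D2_v3"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v3"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [ { "keyData": "<ssh-public-key>" } ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "<client-id>",
      "secret": "<client-secret>"
    }
  }
}
```

Running `aks-engine generate` against such a file produces the ARM template mentioned above.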
You can get started by following the **[AKS-Engine Kubernetes Tutorial](https://github.com/Azure/aks-engine/blob/master/docs/tutorials/README.md)**.

## Running CoreOS Tectonic on Azure

The CoreOS Tectonic Installer for Azure is **open source** and available on GitHub for the community to use and contribute to: **[Tectonic Installer](https://github.com/coreos/tectonic-installer)**.

Tectonic Installer is a good choice when you need to customize a cluster using the Azure Resource Manager (ARM) provider of [Terraform by Hashicorp](https://www.terraform.io/docs/providers/azurerm/).
This lets you customize the cluster or integrate with other tools that work well with Terraform.

You can get started quickly with the [Tectonic Installer for Azure Guide](https://coreos.com/tectonic/docs/latest/install/azure/azure-terraform.html).
@@ -1,217 +0,0 @@
---
title: Running Kubernetes on Google Compute Engine
content_type: task
---

<!-- overview -->

The example below creates a Kubernetes cluster with 3 worker node Virtual Machines and a master Virtual Machine (i.e. 4 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).

## {{% heading "prerequisites" %}}

If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) for hosted cluster installation and management.

For an easy way to experiment with the Kubernetes development environment, click the button below
to open a Google Cloud Shell with an auto-cloned copy of the Kubernetes source repo.

[Open in Cloud Shell](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/kubernetes/kubernetes&page=editor&open_in_editor=README.md)

If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.

### Prerequisites

1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](https://console.cloud.google.com) for more details.
1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
1. Enable the [Compute Engine Instance Group Manager API](https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview) in the [Google Cloud developers console](https://console.developers.google.com/apis/library).
1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
1. Make sure you have credentials for GCloud by running `gcloud auth login`.
1. (Optional) In order to make API calls against GCE, you must also run `gcloud auth application-default login`.
1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart.
1. Make sure you can SSH into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart.

<!-- steps -->

## Starting a cluster

You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):

```shell
curl -sS https://get.k8s.io | bash
```

or

```shell
wget -q -O - https://get.k8s.io | bash
```

Once this command completes, you will have a master VM and three worker VMs, running as a Kubernetes cluster.

By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](https://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md) services.

The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.

Alternately, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:

```shell
cd kubernetes
cluster/kube-up.sh
```

If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
If you run into trouble, please see the section on [troubleshooting](/ja/docs/setup/production-environment/turnkey/gce/#troubleshooting), post to the
[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on the `#gke` Slack channel.

The next few steps will show you:

1. How to set up the command line client on your workstation to manage the cluster
1. Examples of how to use the cluster
1. How to delete the cluster
1. How to start clusters with non-default options (like larger clusters)

## Installing the Kubernetes command line tools on your workstation

The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.

The [kubectl](/docs/reference/kubectl/kubectl/) tool controls the Kubernetes cluster
manager. It lets you inspect your cluster resources, create, delete, and update
components, and much more. You will use it to look at your new cluster and bring
up example apps.

You can use `gcloud` to install the `kubectl` command-line tool on your workstation:

```shell
gcloud components install kubectl
```

{{< note >}}
The kubectl version bundled with `gcloud` may be older than the one
downloaded by the get.k8s.io install script. See the [Installing kubectl](/ja/docs/tasks/tools/install-kubectl/)
document to see how you can set up the latest `kubectl` on your workstation.
{{< /note >}}
## Getting started with your cluster

### Inspect your cluster

Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:

```shell
kubectl get --all-namespaces services
```

should show a set of [services](/docs/concepts/services-networking/service/) that look something like this:

```shell
NAMESPACE    NAME          TYPE             CLUSTER_IP    EXTERNAL_IP   PORT(S)        AGE
default      kubernetes    ClusterIP        10.0.0.1      <none>        443/TCP        1d
kube-system  kube-dns      ClusterIP        10.0.0.2      <none>        53/TCP,53/UDP  1d
kube-system  kube-ui       ClusterIP        10.0.0.3      <none>        80/TCP         1d
...
```

Similarly, you can take a look at the set of [pods](/ja/docs/concepts/workloads/pods/) that were created during cluster startup.
You can do this via the

```shell
kubectl get --all-namespaces pods
```

command.

You'll see a list of pods that looks something like this (the name specifics will be different):

```shell
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-5f4fbb68df-mc8z8                       1/1     Running   0          15m
kube-system   fluentd-cloud-logging-kubernetes-minion-63uo   1/1     Running   0          14m
kube-system   fluentd-cloud-logging-kubernetes-minion-c1n9   1/1     Running   0          14m
kube-system   fluentd-cloud-logging-kubernetes-minion-c4og   1/1     Running   0          14m
kube-system   fluentd-cloud-logging-kubernetes-minion-ngua   1/1     Running   0          14m
kube-system   kube-ui-v1-curt1                               1/1     Running   0          15m
kube-system   monitoring-heapster-v5-ex4u3                   1/1     Running   1          15m
kube-system   monitoring-influx-grafana-v1-piled             2/2     Running   0          15m
```

Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
### Run some examples

Then, see [a simple nginx example](/ja/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.

For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/master/). The [guestbook example](https://github.com/kubernetes/examples/tree/master/guestbook/) is a good "getting started" walkthrough.

## Tearing down the cluster

To remove/delete/teardown the cluster, use the `kube-down.sh` script.

```shell
cd kubernetes
cluster/kube-down.sh
```

Likewise, the `kube-up.sh` in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to setup the Kubernetes cluster is now on your workstation.

## Customizing

The script above relies on Google Storage to stage the Kubernetes release. It
then will start (by default) a single master VM along with 3 worker VMs. You
can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`.
You can view a transcript of a successful cluster creation
[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).

## Troubleshooting

### Project settings

You need to have the Google Cloud Storage API and the Google Cloud Storage
JSON API enabled. They are activated by default for new projects. Otherwise, they
can be enabled in the Google Cloud Console. See the [Google Cloud Storage JSON
API Overview](https://cloud.google.com/storage/docs/json_api/) for more
details.

Also ensure that--as listed in the [Prerequisites section](#prerequisites)--you've enabled the `Compute Engine Instance Group Manager API`, and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions.

### Cluster initialization hang

If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.

**Once you fix the issue, you should run `kube-down.sh` to cleanup** after the partial cluster creation, before running `kube-up.sh` to try again.

### SSH

If you're having trouble SSHing into your instances, ensure the GCE firewall
isn't blocking port 22 to your VMs. By default, this should work but if you
have edited firewall rules or created a new non-default network, you'll need to
expose it: `gcloud compute firewall-rules create default-ssh --network=<network-name>
--description "SSH allowed from anywhere" --allow tcp:22`

Additionally, your GCE SSH key must either have no passcode or you need to be
using `ssh-agent`.

### Networking

The instances must be able to connect to each other using their private IP. The
script uses the "default" network which should have a firewall rule called
"default-allow-internal" which allows traffic on any port on the private IPs.
If this rule is missing from the default network, or if you change the network
being used in `cluster/config-default.sh`, create a new rule with the following
field values:

* Source Ranges: `10.0.0.0/8`
* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
## Support Level

IaaS Provider        | Config. Mgmt | OS     | Networking | Docs                                                       | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | ---------------------------------------------------------- | ---------| ----------------------------
GCE                  | Saltstack    | Debian | GCE        | [docs](/ja/docs/setup/production-environment/turnkey/gce/) |          | Project
@@ -1,63 +0,0 @@
---
title: Running Kubernetes across multiple clouds with IBM Cloud Private
---

IBM® Cloud Private is an on-premises turnkey cloud solution. It delivers pure upstream Kubernetes along with the management components that are typically required to run real enterprise workloads: health management, log management, audit trails, and metering for tracking usage of workloads on the platform.

IBM Cloud Private is available in a community edition and a fully supported enterprise edition. The community edition is available at no charge from [Docker Hub](https://hub.docker.com/r/ibmcom/icp-inception/). The enterprise edition supports high availability topologies and includes commercial support from IBM for Kubernetes and the IBM Cloud Private management platform. If you want to try IBM Cloud Private, you can use either the hosted trial, the tutorial, or the self-guided demo. You can also try the free community edition. For details, see [Get started with IBM Cloud Private](https://www.ibm.com/cloud/private/get-started).

For more information, explore the following resources:

* [IBM Cloud Private](https://www.ibm.com/cloud/private)
* [Reference architecture for IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud)
* [IBM Cloud Private documentation](https://www.ibm.com/support/knowledgecenter/SSBS6K/product_welcome_cloud_private.html)

## IBM Cloud Private and Terraform

The following modules are available for deploying IBM Cloud Private by using Terraform:

* AWS: [Deploy IBM Cloud Private to AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws)
* Azure: [Deploy IBM Cloud Private to Azure](https://github.com/ibm-cloud-architecture/terraform-icp-azure)
* IBM Cloud: [Deploy IBM Cloud Private cluster to IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud)
* OpenStack: [Deploy IBM Cloud Private to OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack)
* Terraform module: [Deploy IBM Cloud Private on any supported infrastructure vendor](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy)
* VMware: [Deploy IBM Cloud Private to VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware)

## IBM Cloud Private on AWS

You can deploy an IBM Cloud Private cluster on Amazon Web Services (AWS) by using Terraform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws).

## IBM Cloud Private on Azure

You can enable Microsoft Azure as a cloud provider for IBM Cloud Private deployment and take advantage of all the IBM Cloud Private features on the Azure public cloud. For more information, see [IBM Cloud Private on Azure](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/azure_overview.html).

## IBM Cloud Private with Red Hat OpenShift

You can deploy IBM certified software containers that are running on IBM Cloud Private onto Red Hat OpenShift.

Integration capabilities:

* Supports the Linux® 64-bit platform in offline-only installation mode
* Single-master configuration
* Integrated IBM Cloud Private cluster management console and catalog
* Integrated core platform services, such as monitoring, metering, and logging
* IBM Cloud Private uses the OpenShift image registry

For more information, see [IBM Cloud Private on OpenShift](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/openshift/overview.html).

## IBM Cloud Private on VirtualBox

To install IBM Cloud Private in a VirtualBox environment, see [Installing IBM Cloud Private on VirtualBox](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox).

## IBM Cloud Private on VMware

You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. For details, see the following projects:

* [Installing IBM Cloud Private with Ubuntu](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md)
* [Installing IBM Cloud Private with Red Hat Enterprise](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel)

The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud.

For more information, see [IBM Cloud Private Hosted service](https://cloud.ibm.com/docs/vmwaresolutions?topic=vmwaresolutions-icp_overview).