These providers have been phased out of the k8s cluster/ directory, so remove the related docs (#6951)
parent b084db85e8
commit fb3f73e6f6

@@ -60,9 +60,7 @@ toc:
- docs/getting-started-guides/coreos/index.md
- docs/getting-started-guides/cloudstack.md
- docs/getting-started-guides/vsphere.md
- docs/getting-started-guides/photon-controller.md
- docs/getting-started-guides/dcos.md
- docs/getting-started-guides/libvirt-coreos.md
- docs/getting-started-guides/ovirt.md

- title: rkt

@@ -1,341 +0,0 @@
---
approvers:
- erictune
- idvoretskyi
title: CoreOS on libvirt
---

* TOC
{:toc}

### Highlights

* Super-fast cluster boot-up (a few seconds instead of the several minutes vagrant takes)
* Reduced disk usage thanks to [COW](https://en.wikibooks.org/wiki/QEMU/Images#Copy_on_write)
* Reduced memory footprint thanks to [KSM](https://www.kernel.org/doc/Documentation/vm/ksm.txt)

### Warnings about `libvirt-coreos` use case

The primary goal of the `libvirt-coreos` cluster provider is to deploy a multi-node Kubernetes cluster on local VMs as fast as possible and to be as light as possible in terms of resources used.

In order to achieve that goal, its deployment is very different from the "standard production deployment" method used on other providers. This was done on purpose in order to implement some optimizations made possible by the fact that we know that all VMs will be running on the same physical machine.

The `libvirt-coreos` cluster provider doesn't aim to look like a production deployment.

Another difference is that no security is enforced on `libvirt-coreos` at all. For example:

* The Kube API server is reachable via a clear-text connection (no SSL);
* The Kube API server requires no credentials;
* etcd access is not protected;
* Kubernetes secrets are not protected as securely as they are in production environments;
* etc.

So, a Kubernetes application developer should not validate their interaction with Kubernetes on `libvirt-coreos`, because they might technically succeed in doing things that are prohibited in a production environment, such as:

* unauthenticated access to the Kube API server;
* access to Kubernetes' private data structures inside etcd;
* etc.

On the other hand, `libvirt-coreos` might be useful for people investigating the low-level implementation of Kubernetes, because debugging techniques like sniffing the network traffic or introspecting the etcd content are easier on `libvirt-coreos` than on a production deployment.

### Prerequisites

1. Install [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html)
1. Install [ebtables](http://ebtables.netfilter.org/)
1. Install [qemu](http://wiki.qemu.org/Main_Page)
1. Install [libvirt](http://libvirt.org/)
1. Install [openssl](http://openssl.org/)
1. Enable and start the libvirt daemon, e.g.:

    - `systemctl enable libvirtd && systemctl start libvirtd` for systemd-based systems
    - `/etc/init.d/libvirt-bin start` for init.d-based systems

1. [Grant libvirt access to your user¹](https://libvirt.org/aclpolkit.html)
1. Check that your `$HOME` is accessible to the qemu user²

#### ¹ Depending on your distribution, libvirt access may be denied by default or may require a password at each access.

You can test it with the following command:

```shell
virsh -c qemu:///system pool-list
```

If you get access error messages, please read [https://libvirt.org/acl.html](https://libvirt.org/acl.html) and [https://libvirt.org/aclpolkit.html](https://libvirt.org/aclpolkit.html).

In short, if your libvirt has been compiled with Polkit support (e.g. Arch, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant full access to libvirt to `$USER`:

```shell
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                // Log before returning: statements after "return" never execute.
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
                return polkit.Result.YES;
        }
});
EOF
```

If your libvirt has not been compiled with Polkit (e.g. Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:

```shell
$ ls -l /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 févr. 12 16:03 /var/run/libvirt/libvirt-sock

$ sudo usermod -a -G libvirtd $USER
# $USER needs to log out and back in for the new group to take effect
```

(Replace `$USER` with your login name.)

#### ² Qemu runs as a specific user. It must have access to the VMs' drives

All the disk drive resources needed by the VMs (CoreOS disk image, Kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`.

As we're using the `qemu:///system` instance of libvirt, qemu runs with a specific `user:group` distinct from your user. It is configured in `/etc/libvirt/qemu.conf`. That qemu user must have access to that libvirt storage pool.

If your `$HOME` is world readable, everything is fine. If your `$HOME` is private, `cluster/kube-up.sh` will fail with an error message like:

```shell
error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied
```

In order to fix that issue, you have several possibilities:

* set `POOL_PATH` inside `cluster/libvirt-coreos/config-default.sh` to a directory:
|
||||
* backed by a filesystem with a lot of free disk space
|
||||
* writable by your user;
|
||||
* accessible by the qemu user.
|
||||
* Grant the qemu user access to the storage pool.
|
||||
|
||||
On Arch:
|
||||
|
||||
```shell
|
||||
setfacl -m g:kvm:--x ~
|
||||
```
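For the first option, a minimal sketch, assuming `POOL_PATH` is assigned at the top level of `config-default.sh` (the target directory is illustrative; pick any path meeting the three criteria above):

```shell
# Point the storage pool at a roomy directory accessible to the qemu user.
# Adjust the path to your environment before running kube-up.
sed -i 's|^POOL_PATH=.*|POOL_PATH=/var/lib/libvirt/kubernetes-pool/|' \
    cluster/libvirt-coreos/config-default.sh
```
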

### Setup

By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use copy-on-write and because of memory ballooning and KSM, resources can be heavily over-allocated.

There is both an automated way and a manual, customizable way of setting up libvirt Kubernetes clusters on CoreOS.

#### Automated setup

There is an automated setup script at [https://get.k8s.io](https://get.k8s.io) that will download the Kubernetes tarball and spawn a Kubernetes cluster on local CoreOS instances that the script creates. To run this script, use wget or curl with the `KUBERNETES_PROVIDER` environment variable set to libvirt-coreos:

```shell
export KUBERNETES_PROVIDER=libvirt-coreos; wget -q -O - https://get.k8s.io | bash
```

Here is the curl version of this command:

```shell
export KUBERNETES_PROVIDER=libvirt-coreos; curl -sS https://get.k8s.io | bash
```

This script downloads and unpacks the tarball, then spawns a Kubernetes cluster on CoreOS instances with the following characteristics:

- Total of 4 KVM/QEMU instances
- 1 instance acting as a Kubernetes master node
- 3 instances acting as minion nodes

If you'd like to run this cluster with customized settings, follow the manual setup instructions.

#### Manual setup

To start your local cluster, open a shell and run:

```shell
cd kubernetes

export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh
```

The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is that you are running on Google Compute Engine.

The `NUM_NODES` environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.

The `KUBE_PUSH` environment variable may be set to specify which Kubernetes binaries must be deployed on the cluster. Its possible values are:

* `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-….tar.gz`. This is built with `make release` or `make release-skip-tests`.
* `local` will deploy the binaries of `_output/local/go/bin`. These are built with `make`.

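A combined sketch of these variables (values are illustrative; this assumes `kube-up.sh` honors `KUBE_PUSH` the same way `kube-push.sh` does):

```shell
# Bring up a 5-node cluster from locally built binaries
export KUBERNETES_PROVIDER=libvirt-coreos
NUM_NODES=5 KUBE_PUSH=local cluster/kube-up.sh
```
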
### Management

You can check that your machines are there and running with:

```shell
$ virsh -c qemu:///system list
 Id    Name                 State
----------------------------------------------------
 15    kubernetes_master    running
 16    kubernetes_node-01   running
 17    kubernetes_node-02   running
 18    kubernetes_node-03   running
```

You can check that the Kubernetes cluster is working with:

```shell
$ kubectl get nodes
NAME           STATUS    AGE       VERSION
192.168.10.2   Ready     4h        v1.6.0+fff5156
192.168.10.3   Ready     4h        v1.6.0+fff5156
192.168.10.4   Ready     4h        v1.6.0+fff5156
```

The VMs are running [CoreOS](https://coreos.com/).
Your ssh keys have already been pushed to the VMs (kube-up looks for `~/.ssh/id_*.pub`).
Connect to the VMs as the `core` user.
The IP of the master is 192.168.10.1.
The IPs of the nodes are 192.168.10.2 and onwards.

Connect to `kubernetes_master`:

```shell
ssh core@192.168.10.1
```

Connect to `kubernetes_node-01`:

```shell
ssh core@192.168.10.2
```

### Interacting with your Kubernetes cluster with the `kube-*` scripts

All of the following commands assume you have set `KUBERNETES_PROVIDER` appropriately:

```shell
export KUBERNETES_PROVIDER=libvirt-coreos
```

Bring up a libvirt-CoreOS cluster of 5 nodes:

```shell
NUM_NODES=5 cluster/kube-up.sh
```

Destroy the libvirt-CoreOS cluster:

```shell
cluster/kube-down.sh
```

Update the libvirt-CoreOS cluster with a new Kubernetes release produced by `make release` or `make release-skip-tests`:

```shell
cluster/kube-push.sh
```

Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`:

```shell
KUBE_PUSH=local cluster/kube-push.sh
```

Interact with the cluster:

```shell
kubectl ...
```

### Troubleshooting

#### !!! Cannot find kubernetes-server-linux-amd64.tar.gz

Build the release tarballs:

```shell
make release
```

#### Can't find virsh in PATH, please fix and retry.

Install libvirt:

On Arch:

```shell
pacman -S qemu libvirt
```

On Ubuntu 14.04:

```shell
aptitude install qemu-system-x86 libvirt-bin
```

On Fedora 21:

```shell
yum install qemu libvirt
```

#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

Start the libvirt daemon:

On Arch:

```shell
systemctl start libvirtd virtlogd.socket
```

`virtlogd.socket` is not started together with the libvirtd daemon. If you enable `libvirtd.service`, it is pulled in and started automatically on the next boot.

On Ubuntu 14.04:

```shell
service libvirt-bin start
```

#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied

Fix the libvirt access permissions (remember to adapt `$USER`):

On Arch and Fedora 21:

```shell
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                // Log before returning: statements after "return" never execute.
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
                return polkit.Result.YES;
        }
});
EOF
```

On Ubuntu:

```shell
sudo usermod -a -G libvirtd $USER
```

#### error: Out of memory initializing network (virsh net-create...)

Ensure libvirtd has been restarted since ebtables was installed.

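A restart sketch for systemd-based systems:

```shell
sudo systemctl restart libvirtd
```
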
## Support Level

IaaS Provider        | Config. Mgmt | OS     | Networking  | Docs                                                 | Conforms | Support Level
-------------------- | ------------ | ------ | ----------- | ---------------------------------------------------- | ---------| ----------------------------
libvirt/KVM          | CoreOS       | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos/) |          | Community ([@lhuard1A](https://github.com/lhuard1A))

For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.


@@ -1,238 +0,0 @@
---
approvers:
- bprashanth
title: VMware Photon Controller
---

The example below creates a Kubernetes cluster using VMware's Photon
Controller. The cluster will have one Kubernetes master and three
Kubernetes nodes.

* TOC
{:toc}

### Prerequisites

1. You need administrator access to a [VMware Photon
   Controller](http://vmware.github.io/photon-controller/)
   deployment. (Administrator access is only required for the initial
   setup: the actual creation of the cluster can be done by anyone.)

2. The [Photon Controller CLI](https://github.com/vmware/photon-controller-cli)
   needs to be installed on the machine on which you'll be running kube-up. If you
   have Go installed, this can easily be installed with:

        go get github.com/vmware/photon-controller-cli/photon

3. `mkisofs` needs to be installed. The installation process creates a
   CD-ROM ISO image to bootstrap the VMs with cloud-init. If you are on a
   Mac, you can install this with [brew](http://brew.sh/):

        brew install cdrtools

4. Several common tools need to be installed: `ssh`, `scp`, `openssl`.

5. You should have an ssh public key installed. This will be used to
   give you access to the VMs' user account, `kube`. (If you need to
   generate one, see the sketch after this list.)

6. Get or build a [binary release](/docs/getting-started-guides/binary_release/).

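If you need to generate a key pair, a standard sketch:

```shell
ssh-keygen -t rsa -b 4096
```
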
### Download VM Image

Download a prebuilt Debian 8.2 VMDK that we'll use as a base image:

```shell
curl --remote-name-all https://s3.amazonaws.com/photon-platform/artifacts/OS/debian/debian-8.2.vmdk
```

This is a base Debian 8.2 image with the addition of:

* openssh-server
* open-vm-tools
* cloud-init

### Configure Photon Controller

In order to deploy Kubernetes, you need to configure Photon Controller
with:

* A tenant, with an associated resource ticket
* A project within that tenant
* VM and disk flavors, to describe the VM characteristics
* An image: we'll use the one above

When you do this, you'll need to configure the
`cluster/photon-controller/config-common.sh` file with the names of
the tenant, project, flavors, and image.

If you prefer, you can use the provided `cluster/photon-controller/setup-prereq.sh`
script to create these. Assuming the IP address of your Photon
Controller is 192.0.2.2 (change as appropriate) and the downloaded image is
saved as kube.vmdk, you can run:

```shell
photon target set https://192.0.2.2
photon target login ...credentials...
cluster/photon-controller/setup-prereq.sh https://192.0.2.2 kube.vmdk
```

The `setup-prereq.sh` script will create the tenant, project, flavors,
and image based on the same configuration file used by kube-up:
`cluster/photon-controller/config-common.sh`. Note that it will create
a resource ticket, which limits how many VMs a tenant can create. You
will want to change the resource ticket configuration in
`config-common.sh` based on your actual Photon Controller deployment.

### Configure kube-up

There are two files used to configure kube-up's interaction with
Photon Controller:

1. `cluster/photon-controller/config-common.sh` has the most common
   parameters, including the names of the tenant, project, and image.

2. `cluster/photon-controller/config-default.sh` has more advanced
   parameters, including the IP subnets to use, the number of nodes to
   create, and which Kubernetes components to configure.

Both files have documentation to explain the different parameters.

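For illustration only, the kind of values these parameters take. The variable names below are hypothetical; use the ones documented in your copy of `config-common.sh`:

```shell
# Hypothetical parameter names -- consult the comments in
# cluster/photon-controller/config-common.sh for the real ones.
PHOTON_TENANT=kube-tenant        # tenant created by setup-prereq.sh
PHOTON_PROJECT=kube-project      # project within that tenant
PHOTON_VM_FLAVOR=kube-vm         # VM flavor describing CPU/memory
PHOTON_DISK_FLAVOR=kube-disk     # disk flavor
PHOTON_IMAGE=kube                # the Debian 8.2 image uploaded earlier
```
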
### Creating your Kubernetes cluster

To create your Kubernetes cluster we will run the standard `kube-up`
command. As described above, the parameters that control kube-up's
interaction with Photon Controller are specified in files, not on the
command line.

The time to deploy varies based on the number of nodes you create as
well as the specifications of your Photon Controller hosts and
network. Times vary from 10 to 30 minutes for a ten-node cluster.

```shell
KUBERNETES_PROVIDER=photon-controller cluster/kube-up.sh
```

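Once kube-up completes, you can sanity-check the cluster the usual way:

```shell
kubectl get nodes
```
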
Once you have successfully reached this point, your Kubernetes cluster
works just like any other.

Note that kube-up created a Kubernetes configuration file for you in
`~/.kube/config`. This file will allow you to use the `kubectl`
command. It contains the IP address of the Kubernetes master as well
as the password for the `admin` user. If you wish to use the
Kubernetes web-based user interface you will need this password. In
the config file you'll see a section that looks like the following; you
use the password there. (Note that the output has been trimmed: the
certificate data is much lengthier.)

```yaml
- name: photon-kubernetes
  user:
    client-certificate-data: Q2Vyd...
    client-key-data: LS0tL...
    password: PASSWORD-HERE
    username: admin
```

### Removing your Kubernetes cluster

The recommended way to remove your Kubernetes cluster is with the
`kube-down` command:

```shell
KUBERNETES_PROVIDER=photon-controller cluster/kube-down.sh
```

Your Kubernetes cluster is just a set of VMs: you can manually remove
them if you need to.

### Making services publicly accessible

There are multiple ways to make services publicly accessible in Kubernetes.
Currently, the photon-controller support does not yet include built-in
support for the LoadBalancer option.

#### Option 1: NodePort

One option is to use the NodePort type with a manually deployed load
balancer. Specifically:

Configure your service with the NodePort type, as in the example
below. All Kubernetes nodes will then listen on a port and forward
network traffic to any pods in the service. In this case, Kubernetes
will choose a random port, but it will be the same port on all nodes.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-service
  labels:
    app: nginx-demo
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: nginx-demo
```

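To see which NodePort Kubernetes assigned, you can inspect the service (the service name matches the manifest above):

```shell
kubectl describe service nginx-demo-service | grep NodePort
```
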
Next, create a new standalone VM (or VMs, for high availability) to act
as a load balancer. For example, if you use haproxy, you could make a
configuration similar to the one below. Note that this example assumes there
are three Kubernetes nodes: you would adjust the configuration to reflect the
actual nodes you have. Also note that port 30144 should be replaced
with whatever NodePort was assigned by Kubernetes.

```conf
frontend nginx-demo
    bind *:30144
    mode http
    default_backend nodes
backend nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web0 192.0.2.2:30144 check
    server web1 192.0.2.3:30144 check
    server web2 192.0.2.4:30144 check
```

#### Option 2: Ingress Controller

Using an [ingress controller](/docs/concepts/services-networking/ingress/#ingress-controllers) may also be an
appropriate solution. Note that in a production environment it will
also require an external load balancer. However, it may be simpler to
manage because it will not require you to manually update the load
balancer configuration, as above.

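A minimal sketch of an Ingress resource routing to the `nginx-demo-service` above, assuming an ingress controller is already deployed (the host name is illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo-ingress
spec:
  rules:
  - host: demo.example.com       # illustrative host name
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-demo-service
          servicePort: 80
```
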
### Details

#### Logging into VMs

When the VMs are created, a `kube` user is created (using
cloud-init). The password for the `kube` user is the same as the
administrator password for your Kubernetes master and can be found in
your Kubernetes configuration file (see above to find it). The `kube` user
will also authorize your ssh public key to log in. This is used during
installation to avoid the need for passwords.

The VMs do have a root user, but ssh to the root user is disabled.

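An illustrative login as the `kube` user, using the key pushed during installation (replace the address with your master's IP):

```shell
ssh kube@192.0.2.10
```
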
### Networking

The Kubernetes cluster uses `kube-proxy` to configure the overlay
network with iptables. Currently we do not support other overlay
networks such as Weave or Calico.

## Support Level

IaaS Provider        | Config. Mgmt | OS     | Networking | Docs                                                    | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | ------------------------------------------------------- | ---------| ----------------------------
Vmware Photon        | Saltstack    | Debian | OVS        | [docs](/docs/getting-started-guides/photon-controller/) |          | Community ([@alainroy](https://github.com/alainroy))

@@ -119,10 +119,8 @@ These solutions are combinations of cloud providers and operating systems not co
* [Vagrant](/docs/getting-started-guides/coreos/) (uses CoreOS and flannel)
* [CloudStack](/docs/getting-started-guides/cloudstack/) (uses Ansible, CoreOS and flannel)
* [Vmware vSphere](/docs/getting-started-guides/vsphere/) (uses Debian)
* [Vmware Photon Controller](/docs/getting-started-guides/photon-controller/) (uses Debian)
* [Vmware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel)
* [Vmware](/docs/getting-started-guides/coreos/) (uses CoreOS and flannel)
* [CoreOS on libvirt](/docs/getting-started-guides/libvirt-coreos/) (uses CoreOS)
* [oVirt](/docs/getting-started-guides/ovirt/)
* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel)

@@ -178,7 +176,6 @@ Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/gettin
Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline/) | Community ([@jeffbean](https://github.com/jeffbean))
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa))
Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere/) | Community ([@imkin](https://github.com/imkin))
Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/photon-controller/) | Community ([@alainroy](https://github.com/alainroy))
Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap))
lxd | Juju | Ubuntu | flannel/canal | [docs](/docs/getting-started-guides/ubuntu/local/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
AWS | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)

@@ -191,7 +188,6 @@ Bare Metal | Juju | Ubuntu | flannel/calico/canal | [docs]
AWS | Saltstack | Debian | AWS | [docs](/docs/getting-started-guides/aws/) | Community ([@justinsb](https://github.com/justinsb))
AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops/) | Community ([@justinsb](https://github.com/justinsb))
Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos/) | Community ([@lhuard1A](https://github.com/lhuard1A))
oVirt | | | | [docs](/docs/getting-started-guides/ovirt/) | Community ([@simon3z](https://github.com/simon3z))
any | any | any | any | [docs](/docs/getting-started-guides/scratch/) | Community ([@erictune](https://github.com/erictune))
any | any | any | any | [docs](http://docs.projectcalico.org/v2.2/getting-started/kubernetes/installation/) | Commercial and Community