Merge pull request #43280 from tengqm/remove-kops-kubespray

Remove dual-hosted info about kops and kubespray
pull/44471/head
Kubernetes Prow Robot 2023-12-22 07:48:36 +01:00 committed by GitHub
commit 56309cd612
4 changed files with 21 additions and 392 deletions


@ -296,9 +296,8 @@ needs of your cluster's workloads:
and the
[API server](/docs/setup/production-environment/tools/kubeadm/ha-topology/).
- Choose from [kubeadm](/docs/setup/production-environment/tools/kubeadm/),
[kops](/docs/setup/production-environment/tools/kops/) or
[Kubespray](/docs/setup/production-environment/tools/kubespray/)
deployment methods.
[kops](https://kops.sigs.k8s.io/) or
[Kubespray](https://kubespray.io/) deployment methods.
- Configure user management by determining your
[Authentication](/docs/reference/access-authn-authz/authentication/) and
[Authorization](/docs/reference/access-authn-authz/authorization/) methods.


@ -1,4 +1,23 @@
---
title: Installing Kubernetes with deployment tools
weight: 30
no_list: true
---
There are many methods and tools for setting up your own production Kubernetes cluster.
For example:
- [kubeadm](/docs/setup/production-environment/tools/kubeadm/)
- [kops](https://kops.sigs.k8s.io/): An automated cluster provisioning tool.
For tutorials, best practices, configuration options and information on
reaching out to the community, please check the
[`kOps` website](https://kops.sigs.k8s.io/) for details.
- [kubespray](https://kubespray.io/):
A composition of [Ansible](https://docs.ansible.com/) playbooks,
[inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#inventory),
provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration
management tasks. You can reach out to the community in the
[#kubespray](https://kubernetes.slack.com/messages/kubespray/) Slack channel.


@ -1,237 +0,0 @@
---
title: Installing Kubernetes with kOps
content_type: task
weight: 20
---
<!-- overview -->
This quickstart shows you how to easily install a Kubernetes cluster on AWS.
It uses a tool called [`kOps`](https://github.com/kubernetes/kops).
`kOps` is an automated provisioning system:
* Fully automated installation
* Uses DNS to identify clusters
* Self-healing: everything runs in Auto-Scaling Groups
* Multiple OS support (Amazon Linux, Debian, Flatcar, RHEL, Rocky and Ubuntu) - see the
[images.md](https://github.com/kubernetes/kops/blob/master/docs/operations/images.md)
* High-Availability support - see the
[high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/operations/high_availability.md)
* Can directly provision, or generate terraform manifests - see the
[terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md)
## {{% heading "prerequisites" %}}
* You must have [kubectl](/docs/tasks/tools/) installed.
* You must [install](https://github.com/kubernetes/kops#installing) `kops` on a 64-bit (AMD64 and Intel 64) device architecture.
* You must have an [AWS account](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html),
generate [IAM keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys)
and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration) them.
The IAM user will need [adequate permissions](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md#setup-iam-user).
<!-- steps -->
## Creating a cluster
### (1/5) Install kops
#### Installation
Download kops from the [releases page](https://github.com/kubernetes/kops/releases)
(it is also convenient to build from source):
{{< tabs name="kops_installation" >}}
{{% tab name="macOS" %}}
Download the latest release with the command:
```shell
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64
```
To download a specific version, replace the following portion of the command with the specific kops version.
```shell
$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)
```
For example, to download kops version v1.20.0 type:
```shell
curl -LO https://github.com/kubernetes/kops/releases/download/v1.20.0/kops-darwin-amd64
```
Make the kops binary executable.
```shell
chmod +x kops-darwin-amd64
```
Move the kops binary into your PATH.
```shell
sudo mv kops-darwin-amd64 /usr/local/bin/kops
```
You can also install kops using [Homebrew](https://brew.sh/).
```shell
brew update && brew install kops
```
{{% /tab %}}
{{% tab name="Linux" %}}
Download the latest release with the command:
```shell
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
```
To download a specific version of kops, replace the following portion of the command with the specific kops version.
```shell
$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)
```
For example, to download kops version v1.20.0 type:
```shell
curl -LO https://github.com/kubernetes/kops/releases/download/v1.20.0/kops-linux-amd64
```
Make the kops binary executable.
```shell
chmod +x kops-linux-amd64
```
Move the kops binary into your PATH.
```shell
sudo mv kops-linux-amd64 /usr/local/bin/kops
```
You can also install kops using [Homebrew](https://docs.brew.sh/Homebrew-on-Linux).
```shell
brew update && brew install kops
```
{{% /tab %}}
{{< /tabs >}}
### (2/5) Create a route53 domain for your cluster
kops uses DNS for discovery, both inside the cluster and outside, so that you can reach the Kubernetes API server
from clients.
kops has a strong opinion on the cluster name: it should be a valid DNS name. This way you won't
get your clusters confused, you can share clusters with your colleagues unambiguously,
and you can reach them without having to remember an IP address.
You can, and probably should, use subdomains to divide your clusters. As our example we will use
`useast1.dev.example.com`. The API server endpoint will then be `api.useast1.dev.example.com`.
A Route53 hosted zone can serve subdomains. Your hosted zone could be `useast1.dev.example.com`,
but also `dev.example.com` or even `example.com`. kops works with any of these, so you typically
choose based on organizational reasons (for example, you are allowed to create records under `dev.example.com`,
but not under `example.com`).
Let's assume you're using `dev.example.com` as your hosted zone. You create that hosted zone using
the [normal process](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or
with a command such as `aws route53 create-hosted-zone --name dev.example.com --caller-reference 1`.
You must then set up your NS records in the parent domain, so that records in the domain will resolve. Here,
you would create NS records in `example.com` for `dev`. If it is a root domain name you would configure the NS
records at your domain registrar (e.g. `example.com` would need to be configured where you bought `example.com`).
Verify your Route53 domain setup (it is the #1 cause of problems!). If you have the `dig` tool,
you can double-check that your cluster is configured correctly by running:
`dig NS dev.example.com`
You should see the 4 NS records that Route53 assigned to your hosted zone.
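For reference, a minimal sketch of these two steps, assuming the example `dev.example.com` hosted zone:
```shell
# Create the hosted zone (the caller reference only needs to be unique per request).
aws route53 create-hosted-zone --name dev.example.com --caller-reference 1
# Verify the delegation; you should see the 4 NS records that Route53 assigned.
dig NS dev.example.com
```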
### (3/5) Create an S3 bucket to store your clusters' state
kops lets you manage your clusters even after installation. To do this, it must keep track of the clusters
that you have created, along with their configuration, the keys they are using, and so on. This information is stored
in an S3 bucket. S3 permissions are used to control access to the bucket.
Multiple clusters can use the same S3 bucket, and you can share an S3 bucket with colleagues who
administer the same clusters - this is much easier than passing around kubecfg files. But anyone with access
to the S3 bucket will have administrative access to all your clusters, so you don't want to share it beyond
the operations team.
So typically you have one S3 bucket for each ops team (and often the name will correspond
to the name of the hosted zone above!).
In our example, we chose `dev.example.com` as our hosted zone, so let's pick `clusters.dev.example.com` as
the S3 bucket name.
* Export `AWS_PROFILE` (if you need to select a profile for the AWS CLI to work)
* Create the S3 bucket using `aws s3 mb s3://clusters.dev.example.com`
* You can `export KOPS_STATE_STORE=s3://clusters.dev.example.com` and then kops will use this location by default.
We suggest putting this in your bash profile or similar.
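Put together, the state-store setup looks roughly like this (the profile name is a placeholder; the bucket name is the example chosen above):
```shell
# Optionally select the AWS CLI profile to use (placeholder name).
export AWS_PROFILE=my-ops-profile
# Create the S3 bucket that will hold the cluster state.
aws s3 mb s3://clusters.dev.example.com
# Let kops find the state store without passing --state on every command.
export KOPS_STATE_STORE=s3://clusters.dev.example.com
```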
### (4/5) Build your cluster configuration
Run `kops create cluster` to create your cluster configuration:
`kops create cluster --zones=us-east-1c useast1.dev.example.com`
kops will create the configuration for your cluster. Note that it _only_ creates the configuration; it does
not actually create the cloud resources - you'll do that in the next step with `kops update cluster`. This
gives you an opportunity to review the configuration or change it.
It prints commands you can use to explore further:
* List your clusters with: `kops get cluster`
* Edit this cluster with: `kops edit cluster useast1.dev.example.com`
* Edit your node instance group: `kops edit ig --name=useast1.dev.example.com nodes`
* Edit your master instance group: `kops edit ig --name=useast1.dev.example.com master-us-east-1c`
If this is your first time using kops, do spend a few minutes to try those out! An instance group is a
set of instances, which will be registered as Kubernetes nodes. On AWS this is implemented via Auto Scaling groups.
You can have several instance groups, for example if you want nodes that are a mix of spot and on-demand instances, or
GPU and non-GPU instances.
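As a quick recap, the configuration step with the example names used above looks like this:
```shell
# Generate the cluster configuration (no cloud resources are created yet).
kops create cluster --zones=us-east-1c useast1.dev.example.com
# Review and, if needed, adjust the configuration before building anything.
kops get cluster
kops edit cluster useast1.dev.example.com
kops edit ig --name=useast1.dev.example.com nodes
```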
### (5/5) Create the cluster in AWS
Run `kops update cluster` to create your cluster in AWS:
`kops update cluster useast1.dev.example.com --yes`
That takes a few seconds to run, but then your cluster will likely take a few minutes to actually be ready.
`kops update cluster` will be the tool you'll use whenever you change the configuration of your cluster; it
applies the changes you have made to the configuration to your cluster - reconfiguring AWS or Kubernetes as needed.
For example, after you run `kops edit ig nodes`, run `kops update cluster --yes` to apply your configuration, and
sometimes you will also have to run `kops rolling-update cluster` to roll out the configuration immediately.
Without `--yes`, `kops update cluster` will show you a preview of what it is going to do. This is handy
for production clusters!
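A typical change-and-apply cycle might therefore look like the following sketch (cluster name as above):
```shell
# Preview the changes kops would make (no --yes).
kops update cluster useast1.dev.example.com
# Apply the changes, reconfiguring AWS and Kubernetes as needed.
kops update cluster useast1.dev.example.com --yes
# If required, roll the changes out to existing instances immediately.
kops rolling-update cluster useast1.dev.example.com --yes
```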
### Explore other add-ons
See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons,
including tools for logging, monitoring, network policy, visualization, and control of your Kubernetes cluster.
## Cleanup
* To delete your cluster: `kops delete cluster useast1.dev.example.com --yes`
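As with `kops update cluster`, running the command without `--yes` first gives you a preview of what would be deleted:
```shell
# Preview the resources that would be removed.
kops delete cluster useast1.dev.example.com
# Actually delete the cluster and its cloud resources.
kops delete cluster useast1.dev.example.com --yes
```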
## {{% heading "whatsnext" %}}
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/).
* Learn more about `kOps` [advanced usage](https://kops.sigs.k8s.io/) for tutorials,
best practices and advanced configuration options.
* Follow `kOps` community discussions on Slack:
[community discussions](https://kops.sigs.k8s.io/contributing/#other-ways-to-communicate-with-the-contributors)
(visit https://slack.k8s.io/ for an invitation to this Slack workspace).
* Contribute to `kOps` by addressing or raising an issue on [GitHub Issues](https://github.com/kubernetes/kops/issues).


@ -1,152 +0,0 @@
---
title: Installing Kubernetes with Kubespray
content_type: concept
weight: 30
---
<!-- overview -->
This quickstart helps you install a Kubernetes cluster hosted on GCE, Azure, OpenStack,
AWS, vSphere, Equinix Metal (formerly Packet), Oracle Cloud Infrastructure (Experimental),
or bare metal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).
Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks,
[inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#inventory),
provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks.
Kubespray provides:
* Highly available cluster.
* Composable (choice of the network plugin, for instance).
* Supports most popular Linux distributions:
- Flatcar Container Linux by Kinvolk
- Debian Bullseye, Buster, Jessie, Stretch
- Ubuntu 16.04, 18.04, 20.04, 22.04
- CentOS/RHEL 7, 8, 9
- Fedora 35, 36
- Fedora CoreOS
- openSUSE Leap 15.x/Tumbleweed
- Oracle Linux 7, 8, 9
- Alma Linux 8, 9
- Rocky Linux 8, 9
- Kylin Linux Advanced Server V10
- Amazon Linux 2
* Continuous integration tests.
To choose a tool which best fits your use case, read
[this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
[kubeadm](/docs/reference/setup-tools/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).
<!-- body -->
## Creating a cluster
### (1/5) Meet the underlay requirements
Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):
* **Minimum required version of Kubernetes is v1.22**
* **Ansible v2.11+, Jinja 2.11+ and python-netaddr are installed on the machine that will run Ansible commands**
* The target servers must have **access to the Internet** in order to pull docker images. Otherwise,
additional configuration is required. See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md).
* The target servers are configured to allow **IPv4 forwarding**.
* If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
* The **firewalls are not managed** by Kubespray; you'll need to implement your own rules the way you used to.
To avoid any issues during deployment, you should disable your firewall.
* If kubespray is run from a non-root user account, a correct privilege escalation method
should be configured on the target servers and the `ansible_become` flag or the command
parameters `--become` or `-b` should be specified.
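As a rough sketch of preparing the Ansible control machine and the target servers, assuming you work from a clone of the Kubespray repository (file names and paths may differ between releases):
```shell
# Get Kubespray and install Ansible plus the other Python dependencies it pins.
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -U -r requirements.txt
# On each target server, make sure IPv4 forwarding is enabled.
sudo sysctl -w net.ipv4.ip_forward=1
```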
Kubespray provides the following utilities to help provision your environment:
* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:
* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)
* [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)
* [Equinix Metal](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/equinix)
### (2/5) Compose an inventory file
After you provision your servers, create an
[inventory file for Ansible](https://docs.ansible.com/ansible/latest/network/getting_started/first_inventory.html).
You can do this manually or via a dynamic inventory script. For more information,
see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".
### (3/5) Plan your cluster deployment
Kubespray provides the ability to customize many aspects of the deployment:
* Choice of deployment mode: kubeadm or non-kubeadm
* CNI (networking) plugins
* DNS configuration
* Choice of control plane: native/binary or containerized
* Component versions
* Calico route reflectors
* Component runtime options
* {{< glossary_tooltip term_id="docker" >}}
* {{< glossary_tooltip term_id="containerd" >}}
* {{< glossary_tooltip term_id="cri-o" >}}
* Certificate generation methods
Kubespray customizations can be made to a
[variable file](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html).
If you are getting started with Kubespray, consider using the Kubespray
defaults to deploy your cluster and explore Kubernetes.
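For instance, in a copied sample inventory the variable files usually live under `group_vars` (the exact layout may vary between Kubespray releases):
```shell
# Cluster-wide settings, such as the network plugin or the Kubernetes version.
vi inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Settings that apply to every host in the inventory.
vi inventory/mycluster/group_vars/all/all.yml
```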
### (4/5) Deploy a Cluster
Next, deploy your cluster using
[ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment):
```shell
ansible-playbook -i your/inventory/inventory.ini cluster.yml -b -v \
--private-key=~/.ssh/private_key
```
Large deployments (100+ nodes) may require
[specific adjustments](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md)
for best results.
### (5/5) Verify the deployment
Kubespray provides a way to verify inter-pod connectivity and DNS resolution with
[Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md).
Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each
other within the default namespace. Those pods mimic the behavior of the rest
of the workloads and serve as cluster health indicators.
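Independently of Netchecker, you can run a basic sanity check with `kubectl` once the playbook finishes, for example:
```shell
# All nodes should report Ready.
kubectl get nodes -o wide
# System components should be Running.
kubectl get pods --namespace kube-system
```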
## Cluster operations
Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_.
### Scale your cluster
You can add worker nodes to your cluster by running the scale playbook. For more information,
see "[Adding nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)".
You can remove worker nodes from your cluster by running the remove-node playbook. For more information,
see "[Remove nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)".
### Upgrade your cluster
You can upgrade your cluster by running the upgrade-cluster playbook. For more information,
see "[Upgrades](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md)".
## Cleanup
You can reset your nodes and wipe out all components installed with Kubespray
via the [reset playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/reset.yml).
{{< caution >}}
When running the reset playbook, be sure not to accidentally target your production cluster!
{{< /caution >}}
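A minimal sketch of invoking the reset playbook against the same example inventory (double-check the inventory path before running it):
```shell
# Wipe everything Kubespray installed from the nodes in this inventory.
ansible-playbook -i inventory/mycluster/hosts.yaml reset.yml -b -v
```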
## Feedback
* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/)
(You can get your invite [here](https://slack.k8s.io/)).
* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues).
## {{% heading "whatsnext" %}}
* Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md).
* Learn more about [Kubespray](https://github.com/kubernetes-sigs/kubespray).