Remove deprecated getting started guides (#16012)
parent c82a912e29
commit 751ae6df0c

@ -1,5 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners

reviewers:
- errordeveloper

@ -1,5 +0,0 @@
---
title: "Independent Solutions"
weight: 50
---

@ -1,10 +0,0 @@
---
reviewers:
- pwittrock
title: Deprecated Alternatives
---

# *Stop. These guides are superseded by [Minikube](../minikube/). They are only listed here for completeness.*

* [Using Vagrant](https://git.k8s.io/community/contributors/devel/vagrant.md)
* *Advanced:* [Directly using Kubernetes raw binaries (Linux Only)](https://git.k8s.io/community/contributors/devel/running-locally.md)

@ -1,7 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners

reviewers:
- aveshagarwal
- eparis
- thockin

@ -1,5 +0,0 @@
---
title: "Bare Metal"
weight: 60
---

@ -1,177 +0,0 @@
---
reviewers:
- aveshagarwal
- eparis
- thockin
title: Fedora (Single Node)
---

{{< toc >}}

## Prerequisites

1. You need 2 or more machines with Fedora installed. These can be either bare metal machines or virtual machines.

## Instructions

This is a getting started guide for Fedora. It walks through a manual configuration so that you understand all the underlying packages, services, and ports involved.

This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](/docs/concepts/cluster-administration/networking/) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.

The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: `/etc/kubernetes`. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host, but this guide assumes that _etcd_ and the Kubernetes master run on the same host). The remaining host, fed-node, will be the node and run the kubelet, proxy, and docker.

**System Information:**

Hosts:

```conf
fed-master = 192.168.121.9
fed-node = 192.168.121.65
```

**Prepare the hosts:**

* Install Kubernetes on all hosts (fed-{master,node}). This will also pull in docker. Also install etcd on fed-master. This guide has been tested with Kubernetes-0.18 and beyond.
* If running on AWS EC2 with RHEL 7.2, you need to enable the "extras" repository for yum by editing `/etc/yum.repos.d/redhat-rhui.repo` and changing `enabled=0` to `enabled=1` for extras.

```shell
dnf -y install kubernetes
```

* Install etcd

```shell
dnf -y install etcd
```

* Add master and node to `/etc/hosts` on all machines (not needed if the hostnames are already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.

```shell
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
```

* Edit `/etc/kubernetes/config` (which should be the same on all hosts) to set the name of the master server:

```shell
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://fed-master:8080"
```

* Disable the firewall on both the master and node, as Docker does not play well with other firewall rule managers. Please note that iptables.service does not exist on the default Fedora Server install.

```shell
systemctl mask firewalld.service
systemctl stop firewalld.service

systemctl disable --now iptables.service
```

**Configure the Kubernetes services on the master.**

* Edit `/etc/kubernetes/apiserver` to appear as such. The service-cluster-ip-range must be an unused block of addresses, not used anywhere else; it does not need to be routed or assigned to anything.

```shell
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""
```

* Edit `/etc/etcd/etcd.conf` to let etcd listen on all available IPs instead of 127.0.0.1. If you have not done this, you might see an error such as "connection refused".

```shell
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
```

* Start the appropriate services on master:

```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl enable --now $SERVICES
    systemctl status $SERVICES
done
```

**Configure the Kubernetes services on the node.**

***We need to configure the kubelet on the node.***

* Edit `/etc/kubernetes/kubelet` to appear as such:

```shell
###
# Kubernetes kubelet (node) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=fed-node"

# Location of the api-server
KUBELET_ARGS="--cgroup-driver=systemd --kubeconfig=/etc/kubernetes/master-kubeconfig.yaml"
```

* Edit `/etc/kubernetes/master-kubeconfig.yaml` to contain the following information:

```yaml
kind: Config
clusters:
- name: local
  cluster:
    server: http://fed-master:8080
users:
- name: kubelet
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
```

* Start the appropriate services on the node (fed-node):

```shell
for SERVICES in kube-proxy kubelet docker; do
    systemctl enable --now $SERVICES
    systemctl status $SERVICES
done
```

* Check that the cluster can now see fed-node on fed-master, and that its status changes to _Ready_.

```shell
kubectl get nodes
NAME            STATUS      AGE      VERSION
fed-node        Ready       4h
```

* Deletion of nodes:

To delete _fed-node_ from your Kubernetes cluster, run the following on fed-master (do not actually run it; it is shown here for information only):

```shell
kubectl delete -f ./node.json
```
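
For reference, a minimal `node.json` for this node might look like the following (a sketch only, assuming the node name fed-node and a fed-node-label label; the exact manifest depends on your cluster):

```json
{
    "kind": "Node",
    "apiVersion": "v1",
    "metadata": {
        "name": "fed-node",
        "labels": { "name": "fed-node-label" }
    }
}
```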

*You should be finished!*

**The cluster should be running! Launch a test pod.**
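
For example, you could launch a test pod and check where it is scheduled (a sketch; the nginx image and pod name are arbitrary choices for illustration, not part of the original guide):

```shell
kubectl run test-nginx --image=nginx
kubectl get pods -o wide
```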

## Support Level

IaaS Provider | Config. Mgmt | OS     | Networking | Docs                                                             | Conforms | Support Level
------------- | ------------ | ------ | ---------- | ---------------------------------------------------------------- | -------- | -------------
Bare-metal    | custom       | Fedora | _none_     | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) |          | Project

@ -1,196 +0,0 @@
---
reviewers:
- dchen1107
- erictune
- thockin
title: Fedora (Multi Node)
---

{{< toc >}}

This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow the Fedora [getting started guide](/docs/getting-started-guides/fedora/fedora_manual_config/) to set up one master (fed-master) and two or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2, and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running the etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and that the nodes are running the docker, kube-proxy, and kubelet services. Now install flannel on the Kubernetes nodes. Flannel runs on each node and configures the overlay network that docker uses, setting up a unique class-C container network per node.

## Prerequisites

You need 2 or more machines with Fedora installed.

## Master Setup

**Perform the following commands on the Kubernetes master:**

* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. Flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose the kernel-based vxlan backend. The contents of the json are:

```json
{
    "Network": "18.16.0.0/16",
    "SubnetLen": 24,
    "Backend": {
        "Type": "vxlan",
        "VNI": 1
    }
}
```

{{< note >}}
Choose an IP range that is *NOT* part of the public IP address range.
{{< /note >}}

Add the configuration to the etcd server on fed-master.

```shell
etcdctl set /coreos.com/network/config < flannel-config.json
```

* Verify that the key exists in the etcd server on fed-master.

```shell
etcdctl get /coreos.com/network/config
```

## Node Setup

**Perform the following commands on all Kubernetes nodes:**

Install the flannel package:

```shell
# dnf -y install flannel
```

Edit the flannel configuration file `/etc/sysconfig/flanneld` as follows:

```shell
# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://fed-master:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS=""
```

{{< note >}}
By default, flannel uses the interface for the default route. If you have multiple interfaces and would like to use an interface other than the default route one, you could add "-iface=" to FLANNEL_OPTIONS, as sketched below. For additional options, run `flanneld --help` on the command line.
{{< /note >}}
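
For example, to pin flannel to a second interface (the interface name eth1 here is an assumption for illustration):

```shell
FLANNEL_OPTIONS="-iface=eth1"
```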

Enable the flannel service:

```shell
systemctl enable flanneld
```

If docker is not running, then starting the flannel service is enough, and you can skip the next step.

```shell
systemctl start flanneld
```

If docker is already running, then stop docker, delete the docker bridge (docker0), start flanneld, and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`).

```shell
systemctl stop docker
ip link delete docker0
systemctl start flanneld
systemctl start docker
```

## Test the cluster and flannel configuration

Now check the interfaces on the nodes. Notice that there is now a flannel.1 interface, and that the IP addresses of the docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:

```shell
# ip -4 a|grep inet
    inet 127.0.0.1/8 scope host lo
    inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0
    inet 18.16.29.0/16 scope global flannel.1
    inet 18.16.29.1/24 scope global docker0
```

From any node in the cluster, check the cluster members by issuing a query to the etcd server via curl (only partial output is shown, filtered with `grep -E "\{|\}|key|value"`). If you set up a one-master, three-node cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) listed in the output.

```shell
curl -s http://fed-master:2379/v2/keys/coreos.com/network/subnets | python -mjson.tool
```

```json
{
    "node": {
        "key": "/coreos.com/network/subnets",
        {
            "key": "/coreos.com/network/subnets/18.16.29.0-24",
            "value": "{\"PublicIP\":\"192.168.122.77\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"46:f1:d0:18:d0:65\"}}"
        },
        {
            "key": "/coreos.com/network/subnets/18.16.83.0-24",
            "value": "{\"PublicIP\":\"192.168.122.36\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"ca:38:78:fc:72:29\"}}"
        },
        {
            "key": "/coreos.com/network/subnets/18.16.90.0-24",
            "value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}"
        }
    }
}
```

From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.

```shell
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=18.16.29.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
```

At this point, we have etcd running on the Kubernetes master, and flannel / docker running on the Kubernetes nodes. The next steps test cross-host container communication, which will confirm that docker and flannel are configured properly.

Issue the following commands on any two nodes:

```shell
# docker run -it fedora:latest bash
bash-4.3#
```

This will place you inside the container. Install the iproute and iputils packages to get the ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), you need to modify the capabilities of the ping binary to work around an "Operation not permitted" error.

```shell
bash-4.3# dnf -y install iproute iputils
bash-4.3# setcap cap_net_raw+ep /usr/bin/ping
```

Now note the IP address on the first node:

```shell
bash-4.3# ip -4 a l eth0 | grep inet
    inet 18.16.29.4/24 scope global eth0
```

And also note the IP address on the other node:

```shell
bash-4.3# ip a l eth0 | grep inet
    inet 18.16.90.4/24 scope global eth0
```

Now ping from the first node to the other node:

```shell
bash-4.3# ping 18.16.90.4
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms
64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
```

The Kubernetes multi-node cluster is now set up with overlay networking provided by flannel.

## Support Level

IaaS Provider | Config. Mgmt | OS     | Networking | Docs                                                                     | Conforms | Support Level
------------- | ------------ | ------ | ---------- | ------------------------------------------------------------------------ | -------- | -------------
Bare-metal    | custom       | Fedora | flannel    | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) |          | Community ([@aveshagarwal](https://github.com/aveshagarwal))
libvirt       | custom       | Fedora | flannel    | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) |          | Community ([@aveshagarwal](https://github.com/aveshagarwal))
KVM           | custom       | Fedora | flannel    | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) |          | Community ([@aveshagarwal](https://github.com/aveshagarwal))

@ -1,41 +0,0 @@
---
title: Kubernetes on Ubuntu
content_template: templates/concept
---

{{% capture overview %}}
There are multiple ways to run a Kubernetes cluster with Ubuntu on public and
private clouds, as well as bare metal.
{{% /capture %}}

{{% capture body %}}
## The Charmed Distribution of Kubernetes (CDK)

[CDK](https://www.ubuntu.com/cloud/kubernetes) is a distribution of Kubernetes
packaged as a bundle of *charms* for Juju, the open source application modeller.

CDK is the latest version of Kubernetes with upstream binaries, packaged in a format
which makes it fast and easy to deploy. It supports various public
and private clouds including AWS, GCE, Azure, Joyent, OpenStack, VMware, bare metal,
and localhost deployments.

See the [official documentation](https://www.ubuntu.com/kubernetes/docs) for
more information.
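
As a sketch, a typical CDK deployment bootstraps a Juju controller on a cloud and then deploys the Kubernetes bundle (the cloud name `aws` and bundle name `charmed-kubernetes` are assumptions; check the official documentation for the current names):

```shell
juju bootstrap aws
juju deploy charmed-kubernetes
```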

## MicroK8s

[MicroK8s](https://microk8s.io) is a minimal install of Kubernetes designed to run locally.
It can be installed on Ubuntu (or any snap-enabled operating system) with the command:

```shell
snap install microk8s --classic
```
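
Once installed, you can check the cluster with the snap's bundled kubectl (a sketch, assuming the dotted `microk8s.` command aliases that the snap provides):

```shell
microk8s.status --wait-ready
microk8s.kubectl get nodes
```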

Full documentation is available on the [MicroK8s website](https://microk8s.io/docs).

{{% /capture %}}

@ -161,51 +161,10 @@
/docs/federation/api-reference/federation/v1beta1/operations.html /docs/reference/generated/federation/extensions/v1beta1/operations/ 301
/docs/federation/api-reference/README/ /docs/reference/generated/federation/ 301

/docs/getting-started-guide/* /docs/setup/ 301
/docs/getting-started-guides/ /docs/setup/pick-right-solution/ 301
/docs/getting-started-guides/binary_release/ /docs/setup/release/building-from-source/ 301
/docs/getting-started-guides/cloudstack/ /docs/setup/on-premises-vm/cloudstack/ 301
/docs/getting-started-guides/dcos/ /docs/setup/on-premises-vm/dcos/ 301
/docs/getting-started-guides/ovirt/ /docs/setup/on-premises-vm/ovirt/ 301
/docs/getting-started-guides/coreos/ /docs/setup/custom-cloud/coreos/ 301
/docs/getting-started-guides/kops/ /docs/setup/custom-cloud/kops/ 301
/docs/getting-started-guides/kubespray/ /docs/setup/custom-cloud/kubespray/ 301
/docs/getting-started-guides/coreos/azure/ /docs/getting-started-guides/coreos/ 301
/docs/getting-started-guides/coreos/bare_metal_calico/ /docs/getting-started-guides/coreos/ 301
/docs/getting-started-guides/docker-multinode/* /docs/setup/independent/create-cluster-kubeadm/ 301
/docs/getting-started-guides/juju/ /docs/getting-started-guides/ubuntu/installation/ 301
/docs/getting-started-guides/kargo/ /docs/getting-started-guides/kubespray/ 301
/docs/getting-started-guides/kubeadm/ /docs/setup/independent/create-cluster-kubeadm/ 301
/docs/getting-started-guides/kubectl/ /docs/reference/kubectl/overview/ 301
/docs/getting-started-guides/logging/ /docs/concepts/cluster-administration/logging/ 301
/docs/getting-started-guides/logging-elasticsearch/ /docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/ 301
/docs/getting-started-guides/meanstack/ https://medium.com/google-cloud/running-a-mean-stack-on-google-cloud-platform-with-kubernetes-149ca81c2b5d/ 301
/docs/getting-started-guides/minikube/ /docs/setup/minikube/ 301
/docs/getting-started-guides/network-policy/calico/ /docs/tasks/administer-cluster/calico-network-policy/ 301
/docs/getting-started-guides/network-policy/romana/ /docs/tasks/administer-cluster/romana-network-policy/ 301
/docs/getting-started-guides/network-policy/walkthrough/ /docs/tasks/administer-cluster/declare-network-policy/ 301
/docs/getting-started-guides/network-policy/weave/ /docs/tasks/administer-cluster/weave-network-policy/ 301
/docs/getting-started-guides/rackspace/ /docs/setup/pick-right-solution/ 301
/docs/getting-started-guides/running-cloud-controller/ /docs/tasks/administer-cluster/running-cloud-controller/ 301
/docs/getting-started-guides/scratch/ /docs/setup/release/building-from-source/ 301
/docs/getting-started-guides/ubuntu-calico/ /docs/getting-started-guides/ubuntu/ 301
/docs/getting-started-guides/ubuntu/automated/ /docs/getting-started-guides/ubuntu/ 301
/docs/getting-started-guides/ubuntu/calico/ /docs/getting-started-guides/ubuntu/ 301
/docs/getting-started-guides/vagrant/ /docs/getting-started-guides/alternatives/ 301
/docs/getting-started-guides/vsphere/ https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/ 301
/docs/getting-started-guides/windows/While/ /docs/setup/windows/ 301
/docs/getting-started-guides/windows/ /docs/setup/windows/ 301
/docs/getting-started-guides/centos/* /docs/setup/independent/create-cluster-kubeadm/ 301
/docs/getting-started-guides/alibaba-cloud/ /docs/setup/turnkey/alibaba-cloud/ 301
/docs/getting-started-guides/aws/ /docs/setup/turnkey/aws/ 301
/docs/getting-started-guides/azure/ /docs/setup/turnkey/azure/ 301
/docs/getting-started-guides/clc/ /docs/setup/turnkey/clc/ 301
/docs/getting-started-guides/gce/ /docs/setup/turnkey/gce/ 301
/docs/getting-started-guides/stackpoint/ /docs/setup/turnkey/stackpoint/ 301
/docs/getting-started-guides/* /docs/setup/ 301

/docs/hellonode/ /docs/tutorials/stateless-application/hello-minikube/ 301
/docs/home/contribute/stage-documentation-changes/ /docs/home/contribute/create-pull-request/ 301
/docs/home/coreos/ /docs/getting-started-guides/coreos/ 301
/docs/home/deprecation-policy/ /docs/reference/using-api/deprecation-policy/ 301

/docs/home/contribute/ /docs/contribute/ 301

@ -344,13 +303,6 @@
/docs/tutorials/federation/set-up-cluster-federation-kubefed.md /docs/tasks/federation/set-up-cluster-federation-kubefed/ 301
/docs/tutorials/federation/set-up-coredns-provider-federation/ /docs/tasks/federation/set-up-coredns-provider-federation/ 301
/docs/tutorials/federation/set-up-placement-policies-federation/ /docs/tasks/federation/set-up-placement-policies-federation/ 301
/docs/tutorials/getting-started/cluster-intro/ /docs/tutorials/kubernetes-basics/cluster-intro/ 301
/docs/tutorials/getting-started/create-cluster/ /docs/tutorials/kubernetes-basics/cluster-intro/ 301
/docs/tutorials/getting-started/expose-intro/ /docs/tutorials/kubernetes-basics/expose-intro/ 301
/docs/tutorials/getting-started/scale-app/ /docs/tutorials/kubernetes-basics/scale-interactive/ 301
/docs/tutorials/getting-started/scale-intro/ /docs/tutorials/kubernetes-basics/scale-intro/ 301
/docs/tutorials/getting-started/update-interactive/ /docs/tutorials/kubernetes-basics/update-interactive/ 301
/docs/tutorials/getting-started/update-intro/ /docs/tutorials/kubernetes-basics/ 301
/docs/tutorials/kubernetes-basics/cluster-interactive/ /docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/ 301
/docs/tutorials/kubernetes-basics/cluster-intro/ /docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/ 301
/docs/tutorials/kubernetes-basics/deploy-interactive/ /docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/ 301

@ -418,7 +370,6 @@
/docs/user-guide/federation/secrets/ /docs/tasks/administer-federation/secret/ 301
/docs/user-guide/garbage-collection/ /docs/concepts/workloads/controllers/garbage-collection/ 301
/docs/user-guide/garbage-collector/* /docs/concepts/workloads/controllers/garbage-collection/ 301
/docs/user-guide/getting-into-containers/ /docs/tasks/debug-application-cluster/get-shell-running-container/ 301
/docs/user-guide/gpus/ /docs/tasks/manage-gpus/scheduling-gpus/ 301
/docs/user-guide/horizontal-pod-autoscaler/* /docs/tasks/run-application/horizontal-pod-autoscale/ 301
/docs/user-guide/horizontal-pod-autoscaling/ /docs/tasks/run-application/horizontal-pod-autoscale/ 301

@ -501,7 +452,6 @@

/docs/whatisk8s/ /docs/concepts/overview/what-is-kubernetes/ 301
/events/ /docs/community 301
/gettingstarted/ /docs/home/ 301
/horizontal-pod-autoscaler/ /docs/tasks/run-application/horizontal-pod-autoscale/ 301
/kubernetes/ /docs/ 301
/kubernetes-bootcamp/* /docs/tutorials/kubernetes-basics/ 301

@ -515,7 +465,6 @@
/swagger-spec/* https://github.com/kubernetes/kubernetes/tree/master/api/swagger-spec/ 301
/third_party/swagger-ui/* /docs/reference/ 301
/v1.1/docs/admin/networking.html /docs/concepts/cluster-administration/networking/ 301
/v1.1/docs/getting-started-guides/ /docs/tutorials/kubernetes-basics/ 301

https://kubernetes-io-v1-7.netlify.com/* https://v1-7.docs.kubernetes.io/:splat 301

@ -587,5 +536,5 @@ https://kubernetes-io-v1-7.netlify.com/* https://v1-7.docs.kubernetes.io/:splat 301
/docs/setup/multiple-zones/ /docs/setup/best-practices/multiple-zones/ 301
/docs/setup/cluster-large/ /docs/setup/best-practices/cluster-large/ 301
/docs/setup/node-conformance/ /docs/setup/best-practices/node-conformance/ 301
/docs/setup/certificates/ /docs/setup/best-practices/certificates/ 301