Replace redirected links with the real targets

Some links are permanently invalid; this PR drops them.
pull/22598/head
Qiming Teng 2020-07-20 16:17:37 +08:00
parent 1943aaafd6
commit c4add100ff
13 changed files with 43 additions and 70 deletions


@@ -20,7 +20,7 @@ At {{< param "version" >}}, Kubernetes supports clusters with up to 5000 nodes.
A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh)).
Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh)).
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run into quota issues and fail to bring the cluster up.
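As an illustration only (a sketch, assuming the GCE scripts honor an environment override of `config-default.sh`; the node count is arbitrary):

```shell
# Sketch: override the default node count before bringing up the cluster.
export NUM_NODES=500
./cluster/kube-up.sh
```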
@@ -80,7 +80,7 @@ On AWS, master node sizes are currently set at cluster startup time and do not c
### Addon Resources
To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](https://pr.k8s.io/10653/files) and [#10778](https://pr.k8s.io/10778/files)).
For example:
@@ -94,28 +94,26 @@ For example:
memory: 200Mi
```
Except for Heapster, these limits are static and are based on data we collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
Except for Heapster, these limits are static and are based on data we collected from addons running on 4-node clusters (see [#10335](https://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
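One quick way to check whether an addon container is being killed at its limits (a sketch; the pod name is a placeholder):

```shell
# Look for restart counts, then for "OOMKilled" in the container's last state.
kubectl --namespace=kube-system get pods
kubectl --namespace=kube-system describe pod <addon-pod-name> | grep -i -A2 "last state"
```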
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
* Scale memory and CPU limits for each of the following addons, if used, as you scale up the size of the cluster (there is one replica of each handling the entire cluster, so memory and CPU usage tends to grow proportionally with cluster size/load; see the sketch after this list):
* [InfluxDB and Grafana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
* [kubedns, dnsmasq, and sidecar](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in)
* [Kibana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml)
* [InfluxDB and Grafana](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
* [kubedns, dnsmasq, and sidecar](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in)
* [Kibana](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml)
* Scale the number of replicas for the following addons, if used, along with the size of the cluster (there are multiple replicas of each, so increasing replicas should help handle increased load; but since load per replica also increases slightly, consider increasing CPU/memory limits as well):
* [elasticsearch](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml)
* [elasticsearch](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml)
* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of the cluster (there is one replica per node, but CPU/memory usage increases slightly with cluster load/size as well):
* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml)
* [FluentD with GCP Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
* [FluentD with ElasticSearch Plugin](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml)
* [FluentD with GCP Plugin](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
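One possible way to apply such an adjustment (a sketch: the addon name and values are illustrative, and the addon manager may revert changes that are not made in the addon manifests themselves):

```shell
# Sketch: raise the limits on a single-replica addon as the cluster grows.
kubectl --namespace=kube-system set resources deployment/kube-dns \
  --limits=cpu=200m,memory=512Mi
```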
Heapster's resource limits are set dynamically based on the initial size of your cluster (see [#16185](http://issue.k8s.io/16185)
and [#22940](http://issue.k8s.io/22940)). If you find that Heapster is running
out of resources, you should adjust the formulas that compute Heapster's memory request (see those PRs for details).
For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting).
In the [future](http://issue.k8s.io/13048), we anticipate setting all cluster addon resource limits based on cluster size, and dynamically adjusting them if you grow or shrink your cluster.
We welcome PRs that implement those features.
For directions on how to detect if addon containers are hitting resource limits, see the
[Troubleshooting section of Compute Resources](/docs/concepts/configuration/manage-resources-containers/#troubleshooting).
### Allowing minor node failure at startup
@@ -126,3 +124,4 @@ running `kube-up.sh` set the environment variable `ALLOWED_NOTREADY_NODES` to wh
with. This will allow `kube-up.sh` to succeed with fewer than `NUM_NODES` nodes coming up. Depending on the
reason for the failure, those additional nodes may join later or the cluster may remain at a size of
`NUM_NODES - ALLOWED_NOTREADY_NODES`.
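For example (a sketch; both values are illustrative):

```shell
# Tolerate up to 3 nodes failing to become Ready while starting a 1000-node cluster.
ALLOWED_NOTREADY_NODES=3 NUM_NODES=1000 ./cluster/kube-up.sh
```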


@@ -78,7 +78,7 @@ federation support).
a single master node by default. While services are highly
available and can tolerate the loss of a zone, the control plane is
located in a single zone. Users that want a highly available control
plane should follow the [high availability](/docs/admin/high-availability) instructions.
plane should follow the [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/) instructions.
### Volume limitations
The following limitations are addressed with [topology-aware volume binding](/docs/concepts/storage/storage-classes/#volume-binding-mode).


@@ -198,7 +198,7 @@ This brief demo guides you on how to start, use, and delete Minikube locally. Fo
The `minikube start` command can be used to start your cluster.
This command creates and configures a Virtual Machine that runs a single-node Kubernetes cluster.
This command also configures your [kubectl](/docs/user-guide/kubectl-overview/) installation to communicate with this cluster.
This command also configures your [kubectl](/docs/reference/kubectl/overview/) installation to communicate with this cluster.
{{< note >}}
If you are behind a web proxy, you need to pass this information to the `minikube start` command:
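The command elided by this hunk presumably resembles the following sketch (the proxy address is a placeholder; `--docker-env` is how Minikube of this era forwarded settings to the VM's Docker daemon):

```shell
# Sketch: export the proxy for minikube itself and forward it into the VM.
https_proxy=http://proxy.example.com:8080 minikube start \
  --docker-env https_proxy=http://proxy.example.com:8080
```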
@@ -514,6 +514,6 @@ For more information about Minikube, see the [proposal](https://git.k8s.io/commu
## Community
Contributions, questions, and comments are all welcomed and encouraged! Minikube developers hang out on [Slack](https://kubernetes.slack.com) in the #minikube channel (get an invitation [here](http://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list please prefix your subject with "minikube: ".
Contributions, questions, and comments are all welcomed and encouraged! Minikube developers hang out on [Slack](https://kubernetes.slack.com) in the `#minikube` channel (get an invitation [here](https://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list please prefix your subject with "minikube: ".


@@ -9,12 +9,10 @@ content_type: concept
[CloudStack](https://cloudstack.apache.org/) is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the cloud being used and what images are made available. CloudStack also has a Vagrant plugin available, so Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt-based recipes.
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
[CoreOS](https://coreos.com) templates for CloudStack are built [nightly](https://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](https://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack-based cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.
<!-- body -->
## Prerequisites
@@ -112,10 +110,7 @@ e9af8293... <node #2 IP> role=node
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/setup/production-environment/on-premises-vm/cloudstack/) | | Community ([@Guiques](https://github.com/ltupin/))


@@ -140,7 +140,7 @@ you choose for organization reasons (e.g. you are allowed to create records unde
but not under `example.com`).
Let's assume you're using `dev.example.com` as your hosted zone. You create that hosted zone using
the [normal process](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or
the [normal process](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or
with a command such as `aws route53 create-hosted-zone --name dev.example.com --caller-reference 1`.
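You can confirm the zone and, later, the delegation with standard tooling (the zone name follows the example above):

```shell
# List the zone's name servers, then check that the parent delegates to them.
aws route53 list-hosted-zones-by-name --dns-name dev.example.com
dig NS dev.example.com +short
```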
You must then set up your NS records in the parent domain, so that records in the domain will resolve. Here,
@@ -231,9 +231,8 @@ See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to expl
## {{% heading "whatsnext" %}}
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/overview/).
* Learn more about `kops` [advanced usage](https://kops.sigs.k8s.io/) for tutorials, best practices and advanced configuration options.
* Follow `kops` community discussions on Slack: [community discussions](https://github.com/kubernetes/kops#other-ways-to-communicate-with-the-contributors)
* Contribute to `kops` by addressing or raising an issue on [GitHub Issues](https://github.com/kubernetes/kops/issues)


@@ -284,7 +284,7 @@ tracker instead of the kubeadm or kubernetes issue trackers.
{{< /note >}}
Several external projects provide Kubernetes Pod networks using CNI, some of which also
support [Network Policy](/docs/concepts/services-networking/networkpolicies/).
support [Network Policy](/docs/concepts/services-networking/network-policies/).
See the list of available
[networking and network policy add-ons](/docs/concepts/cluster-administration/addons/#networking-and-network-policy).
@@ -578,9 +578,9 @@ options.
* <a id="lifecycle" />See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
for details about upgrading your cluster using `kubeadm`.
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm)
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/overview/).
* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
of Pod network add-ons.
of Pod network add-ons.
* <a id="other-addons" />See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
explore other add-ons, including tools for logging, monitoring, network policy, visualization &
control of your Kubernetes cluster.


@@ -22,7 +22,7 @@ and environment. [This comparison topic](/docs/setup/production-environment/tool
If you encounter issues with setting up the HA cluster, please provide us with feedback
in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new).
See also [The upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15).
See also [The upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/).
{{< caution >}}
This page does not address running your cluster on a cloud provider. In a cloud
@@ -30,8 +30,6 @@ environment, neither approach documented here works with Service objects of type
LoadBalancer, or with dynamic PersistentVolumes.
{{< /caution >}}
## {{% heading "prerequisites" %}}
@@ -51,8 +49,6 @@ For the external etcd cluster only, you also need:
- Three additional machines for etcd members
<!-- steps -->
## First steps for both methods


@@ -13,14 +13,12 @@ weight: 100
kubeadm allows you to experimentally create a _self-hosted_ Kubernetes control
plane. This means that key components such as the API server, controller
manager, and scheduler run as [DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/)
configured via the Kubernetes API instead of [static pods](/docs/tasks/administer-cluster/static-pod/)
configured via the Kubernetes API instead of [static pods](/docs/tasks/configure-pod-container/static-pod/)
configured in the kubelet via static files.
To create a self-hosted cluster see the
[kubeadm alpha selfhosting pivot](/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-selfhosting) command.
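For reference, that command is run on an existing kubeadm control-plane node:

```shell
# Experimental: pivot the static-pod control plane to a self-hosted one.
kubeadm alpha selfhosting pivot
```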
<!-- body -->
#### Caveats


@@ -15,11 +15,10 @@ If your problem is not listed below, please follow these steps:
- Go to [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues) and search for existing issues.
- If no issue exists, please [open one](https://github.com/kubernetes/kubeadm/issues/new) and follow the issue template.
- If you are unsure about how kubeadm works, you can ask on [Slack](http://slack.k8s.io/) in #kubeadm, or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include
- If you are unsure about how kubeadm works, you can ask on [Slack](https://slack.k8s.io/) in `#kubeadm`,
or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include
relevant tags like `#kubernetes` and `#kubeadm` so folks can help you.
<!-- body -->
## Not possible to join a v1.18 Node to a v1.17 cluster due to missing RBAC


@@ -8,7 +8,7 @@ weight: 30
This quickstart helps you install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental), or bare metal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).
Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks. Kubespray provides:
Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks. Kubespray provides:
* a highly available cluster
* composable attributes
@@ -21,9 +21,8 @@ Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [in
* openSUSE Leap 15
* continuous integration tests
To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).
To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).
<!-- body -->
@@ -50,7 +49,7 @@ Kubespray provides the following utilities to help provision your environment:
### (2/5) Compose an inventory file
After you provision your servers, create an [inventory file for Ansible](http://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".
After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".
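As a sketch of the inventory-script route (paths and IPs are illustrative; the helper script ships in the Kubespray repository):

```shell
# Copy the sample inventory, then generate hosts.yaml from a list of node IPs.
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```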
### (3/5) Plan your cluster deployment
@@ -68,7 +67,7 @@ Kubespray provides the ability to customize many aspects of the deployment:
* {{< glossary_tooltip term_id="cri-o" >}}
* Certificate generation methods
Kubespray customizations can be made to a [variable file](http://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.
Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.
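For example, a minimal sketch of one override (the file layout follows Kubespray's sample inventory; the setting is illustrative):

```shell
# Append an override to the cluster-wide variable file.
echo "kube_network_plugin: calico" >> inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
```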
### (4/5) Deploy a Cluster
@@ -110,11 +109,9 @@ When running the reset playbook, be sure not to accidentally target your product
## Feedback
* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](http://slack.k8s.io/))
* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/))
* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues)
## {{% heading "whatsnext" %}}


@@ -48,7 +48,7 @@ export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/user-guide/kubectl/)
An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/reference/kubectl/kubectl/)
By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
For more information, please read [kubeconfig files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
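For example, to confirm that `kubectl` can reach the new cluster:

```shell
# Basic sanity checks against the freshly started cluster.
kubectl cluster-info
kubectl get nodes
```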
@@ -63,7 +63,8 @@ For more complete applications, please look in the [examples directory](https://
## Scaling the cluster
Adding and removing nodes through `kubectl` is not supported. You can still scale the number of nodes manually through adjustments of the 'Desired' and 'Max' properties within the [Auto Scaling Group](http://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html), which was created during the installation.
Adding and removing nodes through `kubectl` is not supported. You can still scale the number of nodes manually through adjustments of the 'Desired' and 'Max' properties within the
[Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html), which was created during the installation.
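The same adjustment can be made from the AWS CLI (a sketch; the group name is a placeholder):

```shell
# Sketch: raise the desired and maximum node counts on the cluster's ASG.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name <your-kubernetes-node-asg> \
  --desired-capacity 5 \
  --max-size 10
```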
## Tearing down the cluster
@@ -80,13 +81,8 @@ cluster/kube-down.sh
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ----------------------------
AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community
AWS | Juju | Ubuntu | flannel, calico, canal | [docs](/docs/getting-started-guides/ubuntu) | 100% | Commercial, Community
AWS | CoreOS | CoreOS | flannel | - | | Community
AWS | Juju | Ubuntu | flannel, calico, canal | - | 100% | Commercial, Community
AWS | KubeOne | Ubuntu, CoreOS, CentOS | canal, weavenet | [docs](https://github.com/kubermatic/kubeone) | 100% | Commercial, Community
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.


@@ -72,7 +72,7 @@ cluster/kube-up.sh
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
If you run into trouble, please see the section on [troubleshooting](/docs/setup/production-environment/turnkey/gce/#troubleshooting), post to the
[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on [Slack](/docs/troubleshooting/#slack).
[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on the `#gke` Slack channel.
The next few steps will show you:
@@ -85,7 +85,7 @@ The next few steps will show you:
The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
The [kubectl](/docs/user-guide/kubectl/) tool controls the Kubernetes cluster
The [kubectl](/docs/reference/kubectl/kubectl/) tool controls the Kubernetes cluster
manager. It lets you inspect your cluster resources, create, delete, and update
components, and much more. You will use it to look at your new cluster and bring
up example apps.
@@ -98,7 +98,7 @@ gcloud components install kubectl
{{< note >}}
The kubectl version bundled with `gcloud` may be older than the one
downloaded by the get.k8s.io install script. See [Installing kubectl](/docs/tasks/kubectl/install/)
downloaded by the get.k8s.io install script. See [Installing kubectl](/docs/tasks/tools/install-kubectl/)
document to see how you can set up the latest `kubectl` on your workstation.
{{< /note >}}
@@ -112,7 +112,7 @@ Once `kubectl` is in your path, you can use it to look at your cluster. E.g., ru
kubectl get --all-namespaces services
```
should show a set of [services](/docs/user-guide/services) that look something like this:
should show a set of [services](/docs/concepts/services-networking/service/) that look something like this:
```shell
NAMESPACE NAME TYPE CLUSTER_IP EXTERNAL_IP PORT(S) AGE
@@ -122,7 +122,7 @@ kube-system kube-ui ClusterIP 10.0.0.3 <none>
...
```
Similarly, you can take a look at the set of [pods](/docs/user-guide/pods) that were created during cluster startup.
Similarly, you can take a look at the set of [pods](/docs/concepts/workloads/pods/pod/) that were created during cluster startup.
You can do this via the
```shell
@@ -149,7 +149,7 @@ Some of the pods may take a few seconds to start up (during this time they'll sh
### Run some examples
Then, see [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster.
Then, see [a simple nginx example](/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) is a good "getting started" walkthrough.
@@ -221,9 +221,3 @@ IaaS Provider | Config. Mgmt | OS | Networking | Docs
GCE | Saltstack | Debian | GCE | [docs](/docs/setup/production-environment/turnkey/gce/) | | Project
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.


@@ -21,7 +21,7 @@ Specific cluster deployment tools may place additional restrictions on version s
## Supported versions
Kubernetes versions are expressed as **x.y.z**,
where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](http://semver.org/) terminology.
where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology.
For more information, see [Kubernetes Release Versioning](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning).
The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}).