Restructure the left navigation pane of setup (#14826)

* restructure left nav

* Restructure setup left navigation

* Update _redirects

* Incorporated all the changes suggested

* removed the Thumbs.db file
pull/14826/merge
Rajakavitha1 2019-06-12 17:27:29 +05:30 committed by Kubernetes Prow Robot
parent f60947b370
commit 55ac801bc4
65 changed files with 250 additions and 244 deletions

View File

@ -170,7 +170,7 @@ users in the event of a cluster failure), then you need to have `R * (U + 1)` cl
Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
you may need even more clusters. Kubernetes v1.3 supports clusters up to 1000 nodes in size. Kubernetes v1.8 supports
clusters up to 5000 nodes. See [Building Large Clusters](/docs/setup/cluster-large/) for more guidance.
clusters up to 5000 nodes. See [Building Large Clusters](/docs/setup/best-practices/cluster-large/) for more guidance.
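For example, assuming `R` here denotes the number of regions you serve from and `U` the number of simultaneous cluster failures you want to tolerate (as set out earlier on this page), serving three regions while tolerating one failure works out to `3 * (1 + 1) = 6` clusters.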
{{% /capture %}}

View File

@ -115,7 +115,7 @@ to the behavior when the RuntimeClass feature is disabled.
### CRI Configuration
For more details on setting up CRI runtimes, see [CRI installation](/docs/setup/cri/).
For more details on setting up CRI runtimes, see [CRI installation](/docs/setup/production-environment/container-runtimes/).
#### dockershim

View File

@ -136,7 +136,7 @@ For information about enabling IPVS mode with kubeadm see:
### Passing custom flags to control plane components {#control-plane-flags}
For information about passing flags to control plane components see:
- [control-plane-flags](/docs/setup/independent/control-plane-flags/)
- [control-plane-flags](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)
### Using custom images {#custom-images}
@ -231,7 +231,7 @@ using an external CRI implementation.
### Setting the node name
By default, `kubeadm` assigns a node name based on a machine's host address. You can override this setting with the `--node-name` flag.
The flag passes the appropriate [`--hostname-override`](/docs/reference/command-line-tools-reference/kubelet/#options)
to the kubelet.
Be aware that overriding the hostname can [interfere with cloud providers](https://github.com/kubernetes/website/pull/8873).
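A brief sketch of how this might look in practice (the node name `worker-01` and the join parameters are placeholders, not values from this page):

```shell
# Override the auto-detected node name; kubeadm passes the value on to the
# kubelet as --hostname-override.
kubeadm init --node-name=worker-01

# The same flag is available when joining a node to an existing cluster.
kubeadm join 10.0.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> --node-name=worker-01
```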
@ -304,7 +304,7 @@ don't require an `-${ARCH}` suffix.
### Automating kubeadm
Rather than copying the token you obtained from `kubeadm init` to each node, as
in the [basic kubeadm tutorial](/docs/setup/independent/create-cluster-kubeadm/), you can parallelize the
in the [basic kubeadm tutorial](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), you can parallelize the
token distribution for easier automation. To implement this automation, you must
know the IP address that the control-plane node will have after it is started.
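A minimal sketch of that approach, assuming you pre-generate the token and already know the control plane's future address (`10.0.0.10:6443` below is a placeholder):

```shell
# Generate a bootstrap token up front so it can be distributed to all nodes
# in parallel, before the control plane even exists.
TOKEN=$(kubeadm token generate)

# On the control-plane node, hand the pre-generated token to kubeadm init.
kubeadm init --token "${TOKEN}"

# On each worker, join as soon as the API server is reachable. Skipping CA
# verification weakens kubeadm's security model; it is shown here only to
# keep the sketch self-contained.
kubeadm join 10.0.0.10:6443 --token "${TOKEN}" \
    --discovery-token-unsafe-skip-ca-verification
```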

View File

@ -14,9 +14,9 @@ Kubernetes contains several built-in tools to help you work with the Kubernetes
[`kubectl`](/docs/tasks/tools/install-kubectl/) is the command line tool for Kubernetes. It controls the Kubernetes cluster manager.
## Kubeadm
[`kubeadm`](/docs/setup/independent/install-kubeadm/) is the command line tool for easily provisioning a secure Kubernetes cluster on top of physical or cloud servers or virtual machines (currently in alpha).
[`kubeadm`](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) is the command line tool for easily provisioning a secure Kubernetes cluster on top of physical or cloud servers or virtual machines (currently in alpha).
## Kubefed
@ -29,10 +29,10 @@ to help you administrate your federated clusters.
easy to run a single-node Kubernetes cluster locally on your workstation for
development and testing purposes.
## Dashboard
[`Dashboard`](/docs/tasks/access-application-cluster/web-ui-dashboard/), the web-based user interface of Kubernetes, allows you to deploy containerized applications
to a Kubernetes cluster, troubleshoot them, and manage the cluster and its resources itself.
## Helm

View File

@ -87,13 +87,13 @@ no less than:**
* **Beta: 9 months or 3 releases (whichever is longer)**
* **Alpha: 0 releases**
This covers the [maximum supported version skew of 2 releases](/docs/setup/version-skew-policy/).
This covers the [maximum supported version skew of 2 releases](/docs/setup/release/version-skew-policy/).
{{< note >}}
Until [#52185](https://github.com/kubernetes/kubernetes/issues/52185) is
resolved, no API versions that have been persisted to storage may be removed.
Serving REST endpoints for those versions may be disabled (subject to the
deprecation timelines in this document), but the API server must remain capable
of decoding/converting previously persisted data from storage.
{{< /note >}}
@ -365,54 +365,54 @@ This applies only to significant, user-visible behaviors which impact the
correctness of applications running on Kubernetes or that impact the
administration of Kubernetes clusters, and which are being removed entirely.
An exception to the above rule is _feature gates_. Feature gates are key=value
pairs that allow users to enable/disable experimental features.
Feature gates are intended to cover the development life cycle of a feature - they
are not intended to be long-term APIs. As such, they are expected to be deprecated
and removed after a feature becomes GA or is dropped.
As a feature moves through the stages, the associated feature gate evolves.
The feature life cycle matched to its corresponding feature gate is:
* Alpha: the feature gate is disabled by default and can be enabled by the user.
* Beta: the feature gate is enabled by default and can be disabled by the user.
* GA: the feature gate is deprecated (see ["Deprecation"](#deprecation)) and becomes
non-operational.
* GA, deprecation window complete: the feature gate is removed and calls to it are
no longer accepted.
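As an illustration of the key=value form described above (the gate names are hypothetical, not real Kubernetes feature gates):

```shell
# Enable a made-up alpha gate and explicitly disable a made-up beta gate on a
# component; multiple gates are passed as comma-separated key=value pairs.
# (Other flags the component needs are omitted for brevity.)
kube-apiserver --feature-gates=SomeAlphaFeature=true,SomeBetaFeature=false
```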
### Deprecation
Features can be removed at any point in the life cycle prior to GA. When features are
removed prior to GA, their associated feature gates are also deprecated.
When an invocation tries to disable a non-operational feature gate, the call fails in order
to avoid unsupported scenarios that might otherwise run silently.
In some cases, removing pre-GA features requires considerable time. Feature gates can remain
operational until their associated feature is fully removed, at which point the feature gate
itself can be deprecated.
When removing a feature gate for a GA feature also requires considerable time, calls to
feature gates may remain operational if the feature gate has no effect on the feature,
and if the feature gate causes no errors.
Features intended to be disabled by users should include a mechanism for disabling the
feature in the associated feature gate.
Versioning for feature gates is different from the previously discussed components;
therefore, the rules for deprecation are as follows:
**Rule #8: Feature gates must be deprecated when the corresponding feature they control
transitions a lifecycle stage as follows. Feature gates must function for no less than:**
* **Beta feature to GA: 6 months or 2 releases (whichever is longer)**
* **Beta feature to EOL: 3 months or 1 release (whichever is longer)**
* **Alpha feature to EOL: 0 releases**
**Rule #9: Deprecated feature gates must respond with a warning when used. When a feature gate
is deprecated it must be documented in both the release notes and the corresponding CLI help.
Both warnings and documentation must indicate whether a feature gate is non-operational.**
## Exceptions

View File

@ -4,9 +4,9 @@ reviewers:
- erictune
- mikedanese
no_issue: true
title: Setup
title: Getting started
main_menu: true
weight: 30
weight: 20
content_template: templates/concept
card:
name: setup
@ -40,7 +40,7 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b
|Community |Ecosystem |
| ------------ | -------- |
| [Minikube](/docs/setup/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) |
| [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) |
| [Kubeadm-dind](https://github.com/kubernetes-sigs/kubeadm-dind-cluster) | [Docker Desktop](https://www.docker.com/products/docker-desktop)|
| [Kubernetes IN Docker](https://github.com/kubernetes-sigs/kind) | [Minishift](https://docs.okd.io/latest/minishift/)|
| | [MicroK8s](https://microk8s.io/)|

View File

@ -0,0 +1,4 @@
---
title: Best practices
weight: 40
---

View File

@ -1,8 +1,9 @@
---
title: PKI Certificates and Requirements
title: PKI certificates and requirements
reviewers:
- sig-cluster-lifecycle
content_template: templates/concept
weight: 40
---
{{% capture overview %}}
@ -45,7 +46,7 @@ If you don't want kubeadm to generate the required certificates, you can create
### Single root CA
You can create a single root CA, controlled by an administrator. This root CA can then create multiple intermediate CAs, and delegate all further creation to Kubernetes itself.
Required CAs:
@ -57,7 +58,7 @@ Required CAs:
### All certificates
If you don't wish to copy these private keys to your API servers, you can generate all certificates yourself.
Required certificates:
@ -104,7 +105,7 @@ Certificates should be placed in a recommended path (as used by [kubeadm][kubead
## Configure certificates for user accounts
You must manually configure these administrator accounts and service accounts:
| filename | credential name | Default CN | O (in Subject) |
|-------------------------|----------------------------|--------------------------------|----------------|

View File

@ -2,8 +2,8 @@
reviewers:
- davidopp
- lavalamp
title: Building Large Clusters
weight: 80
title: Building large clusters
weight: 20
---
## Support

View File

@ -3,8 +3,8 @@ reviewers:
- jlowdermilk
- justinsb
- quinton-hoole
title: Running in Multiple Zones
weight: 90
title: Running in multiple zones
weight: 10
content_template: templates/concept
---

View File

@ -1,7 +1,8 @@
---
reviewers:
- Random-Liu
title: Validate Node Setup
title: Validate node setup
weight: 30
---
{{< toc >}}

View File

@ -1,4 +0,0 @@
---
title: Custom Cloud Solutions
weight: 50
---

View File

@ -1,5 +0,0 @@
---
title: "Bootstrapping Clusters with kubeadm"
weight: 30
---

View File

@ -0,0 +1,4 @@
---
title: Learning environment
weight: 20
---

View File

@ -3,7 +3,7 @@ reviewers:
- dlorenc
- balopat
- aaron-prindle
title: Running Kubernetes Locally via Minikube
title: Installing Kubernetes with Minikube
content_template: templates/concept
---
@ -50,7 +50,7 @@ This brief demo guides you on how to start, use, and delete Minikube locally. Fo
For more information on starting your cluster on a specific Kubernetes version, VM, or container runtime, see [Starting a Cluster](#starting-a-cluster).
2. Now, you can interact with your cluster using kubectl. For more information, see [Interacting with Your Cluster](#interacting-with-your-cluster).
Let's create a Kubernetes Deployment using an existing image named `echoserver`, which is a simple HTTP server, and expose it on port 8080 using `--port`.
```shell
kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
@ -64,7 +64,7 @@ This brief demo guides you on how to start, use, and delete Minikube locally. Fo
kubectl expose deployment hello-minikube --type=NodePort
```
The option `--type=NodePort` specifies the type of the Service.
The output is similar to this:
```
service/hello-minikube exposed
@ -90,7 +90,7 @@ This brief demo guides you on how to start, use, and delete Minikube locally. Fo
minikube service hello-minikube --url
```
6. To view the details of your local cluster, copy and paste the URL you got as output into your browser.
The output is similar to this:
```
Hostname: hello-minikube-7c77b68cff-8wdzq
@ -155,7 +155,7 @@ This brief demo guides you on how to start, use, and delete Minikube locally. Fo
The "minikube" cluster has been deleted.
```
For more information, see [Deleting a cluster](#deleting-a-cluster).
## Managing your Cluster
### Starting a Cluster

View File

@ -0,0 +1,4 @@
---
title: Production environment
weight: 30
---

View File

@ -2,9 +2,9 @@
reviewers:
- vincepri
- bart0sh
title: CRI installation
title: Container runtimes
content_template: templates/concept
weight: 100
weight: 10
---
{{% capture overview %}}
{{< feature-state for_k8s_version="v1.6" state="stable" >}}
@ -283,8 +283,7 @@ systemctl restart containerd
To use the `systemd` cgroup driver, set `plugins.cri.systemd_cgroup = true` in `/etc/containerd/config.toml`.
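As a sketch, the relevant excerpt of `/etc/containerd/config.toml` might look like this (the section layout assumes the default configuration generated by `containerd config default` for this containerd release):

```toml
# /etc/containerd/config.toml (excerpt)
[plugins.cri]
  # Create cgroups through systemd instead of cgroupfs.
  systemd_cgroup = true
```

Restart containerd afterwards so the change takes effect.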
When using kubeadm, manually configure the
[cgroup driver for kubelet](/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
as well.
[cgroup driver for kubelet](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
## Other CRI runtimes: frakti

View File

@ -1,4 +1,4 @@
---
title: On-Premises VMs
weight: 60
weight: 40
---

View File

@ -115,7 +115,7 @@ e9af8293... <node #2 IP> role=node
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/setup/on-premises-vm/cloudstack/) | | Community ([@Guiques](https://github.com/ltupin/))
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/setup/production-environment/on-premises-vm/cloudstack/) | | Community ([@Guiques](https://github.com/ltupin/))
{{% /capture %}}

View File

@ -66,7 +66,7 @@ This short screencast demonstrates how the oVirt Cloud Provider can be used to d
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
oVirt | | | | [docs](/docs/setup/on-premises-vm/ovirt/) | | Community ([@simon3z](https://github.com/simon3z))
oVirt | | | | [docs](/docs/setup/production-environment/on-premises-vm/ovirt/) | | Community ([@simon3z](https://github.com/simon3z))
{{% /capture %}}

View File

@ -0,0 +1,4 @@
---
title: Installing Kubernetes with deployment tools
weight: 30
---

View File

@ -1,6 +1,7 @@
---
title: Installing Kubernetes on AWS with kops
title: Installing Kubernetes with kops
content_template: templates/concept
weight: 20
---
{{% capture overview %}}

View File

@ -1,12 +1,13 @@
---
title: Installing Kubernetes with Digital Rebar Provision (DRP) via KRIB
title: Installing Kubernetes with KRIB
krib-version: 2.4
author: Rob Hirschfeld (zehicle)
weight: 20
---
## Overview
This guide helps to install a Kubernetes cluster hosted on bare metal with [Digital Rebar Provision](https://github.com/digitalrebar/provision) using only its Content packages and *kubeadm*.
Digital Rebar Provision (DRP) is an integrated Golang DHCP, bare metal provisioning (PXE/iPXE) and workflow automation platform. While [DRP can be used to invoke](https://provision.readthedocs.io/en/tip/doc/integrations/ansible.html) [kubespray](/docs/setup/custom-cloud/kubespray), it also offers a self-contained Kubernetes installation known as [KRIB (Kubernetes Rebar Integrated Bootstrap)](https://github.com/digitalrebar/provision-content/tree/master/krib).
@ -65,7 +66,7 @@ During the installation, KRIB writes cluster configuration data back into the cl
### (5/5) Access your cluster
The cluster is available for access via *kubectl* once the `krib/cluster-admin-conf` Param has been set. This Param contains the `kubeconfig` information necessary to access the cluster.
For example, if you named the cluster Profile `krib` then the following commands would allow you to connect to the installed cluster from your local terminal.

View File

@ -0,0 +1,4 @@
---
title: "Bootstrapping clusters with kubeadm"
weight: 10
---

View File

@ -23,7 +23,7 @@ The `extraArgs` field consist of `key: value` pairs. To override a flag for a co
2. Add the flags to override to the field.
For more details on each field in the configuration you can navigate to our
[API reference pages](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1).
[API reference pages](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1#ClusterConfiguration).
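A hedged sketch of the two steps above in a kubeadm configuration file (the component, flags, and values are illustrative only, not recommendations):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
controllerManager:
  extraArgs:
    # Each key: value pair becomes a --key=value flag on the component.
    node-cidr-mask-size: "25"
apiServer:
  extraArgs:
    audit-log-path: /var/log/kubernetes/audit.log
```

You would then pass the file to `kubeadm init --config <your-config>.yaml`.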
{{% /capture %}}

View File

@ -8,11 +8,11 @@ weight: 30
{{% capture overview %}}
<img src="https://raw.githubusercontent.com/cncf/artwork/master/projects/kubernetes/certified-kubernetes/versionless/color/certified-kubernetes-color.png" align="right" width="150px">**kubeadm** helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. With kubeadm, your cluster should pass [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification). Kubeadm also supports other cluster
lifecycle functions, such as upgrades, downgrades, and managing [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/).
Because you can install kubeadm on various types of machine (e.g. laptop, server,
Raspberry Pi, etc.), it's well suited for integration with provisioning systems
such as Terraform or Ansible.
kubeadm's simplicity means it can serve a wide range of use cases:
@ -81,14 +81,14 @@ timeframe; which also applies to `kubeadm`.
- 2 CPUs or more on the master
- Full network connectivity among all machines in the cluster. A public or
private network is fine.
{{% /capture %}}
{{% capture steps %}}
## Objectives
* Install a single master Kubernetes cluster or [high availability cluster](/docs/setup/independent/high-availability/)
* Install a single master Kubernetes cluster or [high availability cluster](/docs/setup/production-environment/tools/kubeadm/high-availability/)
* Install a Pod network on the cluster so that your Pods can
talk to each other
@ -96,14 +96,14 @@ timeframe; which also applies to `kubeadm`.
### Installing kubeadm on your hosts
See ["Installing kubeadm"](/docs/setup/independent/install-kubeadm/).
See ["Installing kubeadm"](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
{{< note >}}
If you have already installed kubeadm, run `apt-get update &&
apt-get upgrade` or `yum update` to get the latest version of kubeadm.
When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
kubeadm to tell it what to do. This crashloop is expected and normal.
After you initialize your master, the kubelet runs normally.
{{< /note >}}
@ -113,26 +113,26 @@ The master is the machine where the control plane components run, including
etcd (the cluster database) and the API server (which the kubectl CLI
communicates with).
1. Choose a pod network add-on, and verify whether it requires any arguments to
be passed to kubeadm initialization. Depending on which
third-party provider you choose, you might need to set the `--pod-network-cidr` to
a provider-specific value. See [Installing a pod network add-on](#pod-network).
1. (Optional) Since version 1.14, kubeadm will try to detect the container runtime on Linux
by using a list of well-known domain socket paths. To use a different container runtime, or
if more than one is installed on the provisioned node, specify the `--cri-socket`
argument to `kubeadm init`. See [Installing runtime](/docs/setup/independent/install-kubeadm/#installing-runtime).
argument to `kubeadm init`. See [Installing runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
1. (Optional) Unless otherwise specified, kubeadm uses the network interface associated
with the default gateway to advertise the master's IP. To use a different
network interface, specify the `--apiserver-advertise-address=<ip-address>` argument
to `kubeadm init`. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you
must specify an IPv6 address, for example `--apiserver-advertise-address=fd00::101`
1. (Optional) Run `kubeadm config images pull` prior to `kubeadm init` to verify
connectivity to gcr.io registries.
Now run:
```bash
kubeadm init <args>
```
### More information
@ -151,7 +151,7 @@ components do not currently support multi-architecture.
`kubeadm init` first runs a series of prechecks to ensure that the machine
is ready to run Kubernetes. These prechecks expose warnings and exit on errors. `kubeadm init`
then downloads and installs the cluster control plane components. This may take several minutes.
The output should look like:
```none
@ -259,8 +259,8 @@ each other.
kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).**
Several projects provide Kubernetes pod networks using CNI, some of which also
support [Network Policy](/docs/concepts/services-networking/networkpolicies/). See the [add-ons page](/docs/concepts/cluster-administration/addons/) for a complete list of available network add-ons.
- IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0).
- [CNI bridge](https://github.com/containernetworking/plugins/blob/master/plugins/main/bridge/README.md) and [local-ipam](https://github.com/containernetworking/plugins/blob/master/plugins/ipam/host-local/README.md) are the only supported IPv6 network plugins in Kubernetes version 1.9.
Note that kubeadm sets up a more secure cluster by default and enforces use of [RBAC](/docs/reference/access-authn-authz/rbac/).
@ -426,8 +426,7 @@ Once a pod network has been installed, you can confirm that it is working by
checking that the CoreDNS pod is Running in the output of `kubectl get pods --all-namespaces`.
And once the CoreDNS pod is up and running, you can continue by joining your nodes.
If your network is not working or CoreDNS is not in the Running state, check
out our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).
If your network is not working or CoreDNS is not in the Running state, check out our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
### Control plane node isolation
@ -641,8 +640,8 @@ v1.8.
These resources provide more information on supported version skew between kubelets and the control plane, and other Kubernetes components:
* Kubernetes [version and version-skew policy](/docs/setup/version-skew-policy/)
* Kubeadm-specific [installation guide](/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)
* Kubernetes [version and version-skew policy](/docs/setup/release/version-skew-policy/)
* Kubeadm-specific [installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)
## kubeadm works on multiple platforms {#multi-platform}
@ -673,4 +672,4 @@ addressed in due course.
## Troubleshooting {#troubleshooting}
If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).
If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).

View File

@ -1,7 +1,7 @@
---
reviewers:
- sig-cluster-lifecycle
title: Options for Highly Available Topology
title: Options for Highly Available topology
content_template: templates/concept
weight: 50
---
@ -34,7 +34,7 @@ Each control plane node creates a local etcd member and this etcd member communi
the `kube-apiserver` of this node. The same applies to the local `kube-controller-manager`
and `kube-scheduler` instances.
This topology couples the control planes and etcd members on the same nodes. It is simpler to set up than a cluster
with external etcd nodes, and simpler to manage for replication.
However, a stacked cluster runs the risk of failed coupling. If one node goes down, both an etcd member and a control
@ -66,6 +66,6 @@ A minimum of three hosts for control plane nodes and three hosts for etcd nodes
{{% capture whatsnext %}}
- [Set up a highly available cluster with kubeadm](/docs/setup/independent/high-availability/)
- [Set up a highly available cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/)
{{% /capture %}}

View File

@ -1,7 +1,7 @@
---
reviewers:
- sig-cluster-lifecycle
title: Creating Highly Available Clusters with kubeadm
title: Creating Highly Available clusters with kubeadm
content_template: templates/task
weight: 60
---
@ -17,7 +17,7 @@ and control plane nodes are co-located.
control plane nodes and etcd members are separated.
Before proceeding, you should carefully consider which approach best meets the needs of your applications
and environment. [This comparison topic](/docs/setup/independent/ha-topology/) outlines the advantages and disadvantages of each.
and environment. [This comparison topic](/docs/setup/production-environment/tools/kubeadm/ha-topology/) outlines the advantages and disadvantages of each.
You should also be aware that setting up HA clusters with kubeadm is still experimental and will be further
simplified in future versions. You might encounter issues with upgrading your clusters, for example.
@ -38,12 +38,10 @@ LoadBalancer, or with dynamic PersistentVolumes.
For both methods you need this infrastructure:
- Three machines that meet [kubeadm's minimum
requirements](/docs/setup/independent/install-kubeadm/#before-you-begin) for
- Three machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for
the masters
- Three machines that meet [kubeadm's minimum
requirements](/docs/setup/independent/install-kubeadm/#before-you-begin) for
the workers
requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for the workers
- Full network connectivity between all machines in the cluster (public or
private network)
- sudo privileges on all machines
@ -118,8 +116,7 @@ option. Your cluster requirements may need a different configuration.
{{< note >}}
Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and
some like Weave do not. See the [CNI network
documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network).
some like Weave do not. See the [CNI network documentation](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network).
To add a pod CIDR set the `podSubnet: 192.168.0.0/16` field under
the `networking` object of `ClusterConfiguration`.
{{< /note >}}
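For example, a minimal sketch of such a kubeadm configuration file (the load-balancer endpoint is a placeholder; the CIDR matches the Calico example above):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
networking:
  # Pod CIDR required by some CNI plugins (for example Calico).
  podSubnet: "192.168.0.0/16"
```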
@ -135,12 +132,12 @@ the `networking` object of `ClusterConfiguration`.
certificate distribution](#manual-certs) section below.
After the command completes you should see something like so:
```sh
...
You can now join any number of control-plane node by running the following command on each as a root:
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --experimental-control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.
@ -167,10 +164,7 @@ As stated in the command output, the certificate-key gives access to cluster sen
{{< /caution >}}
1. Apply the CNI plugin of your choice:
[Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install
the CNI provider. Make sure the configuration corresponds to the Pod CIDR specified in the kubeadm
configuration file if applicable.
[Follow these instructions](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network) to install the CNI provider. Make sure the configuration corresponds to the Pod CIDR specified in the kubeadm configuration file if applicable.
In this example we are using Weave Net:
@ -211,8 +205,7 @@ in the kubeadm config file.
### Set up the etcd cluster
1. Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/)
to set up the etcd cluster.
1. Follow [these instructions](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) to set up the etcd cluster.
1. Set up SSH as described [here](#manual-certs).
@ -371,5 +364,4 @@ the creation of additional nodes could fail due to a lack of required SANs.
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
```
{{% /capture %}}

View File

@ -1,7 +1,7 @@
---
title: Installing kubeadm
content_template: templates/task
weight: 20
weight: 10
card:
name: setup
weight: 20
@ -11,8 +11,7 @@ card:
{{% capture overview %}}
<img src="https://raw.githubusercontent.com/cncf/artwork/master/projects/kubernetes/certified-kubernetes/versionless/color/certified-kubernetes-color.png" align="right" width="150px">This page shows how to install the `kubeadm` toolbox.
For information how to create a cluster with kubeadm once you have performed this installation process,
see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/) page.
For information how to create a cluster with kubeadm once you have performed this installation process, see the [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.
{{% /capture %}}
@ -134,6 +133,8 @@ kubelet and the control plane is supported, but the kubelet version may never ex
server version. For example, kubelets running 1.7.0 should be fully compatible with a 1.8.0 API server,
but not vice versa.
For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/install-kubectl/).
{{< warning >}}
These instructions exclude all Kubernetes packages from any system upgrades.
This is because kubeadm and Kubernetes require
@ -142,8 +143,8 @@ This is because kubeadm and Kubernetes require
For more information on version skews, see:
* Kubernetes [version and version-skew policy](/docs/setup/version-skew-policy/)
* Kubeadm-specific [version skew policy](/docs/setup/independent/create-cluster-kubeadm/#version-skew-policy)
* Kubernetes [version and version-skew policy](/docs/setup/release/version-skew-policy/)
* Kubeadm-specific [version skew policy](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#version-skew-policy)
{{< tabs name="k8s_install" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
@ -266,12 +267,13 @@ systemctl daemon-reload
systemctl restart kubelet
```
## Troubleshooting
If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).
If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
{{% capture whatsnext %}}
* [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/)
* [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)
{{% /capture %}}

View File

@ -26,7 +26,7 @@ when using kubeadm to set up a kubernetes cluster.
* Some infrastructure to copy files between hosts. For example `ssh` and `scp`
can satisfy this requirement.
[toolbox]: /docs/setup/independent/install-kubeadm/
[toolbox]: /docs/setup/production-environment/tools/kubeadm/install-kubeadm/
{{% /capture %}}
@ -259,8 +259,6 @@ this example.
Once you have a working 3 member etcd cluster, you can continue setting up a
highly available control plane using the [external etcd method with
kubeadm](/docs/setup/independent/high-availability/).
kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/).
{{% /capture %}}

View File

@ -1,7 +1,7 @@
---
title: Troubleshooting kubeadm
content_template: templates/concept
weight: 90
weight: 20
---
{{% capture overview %}}
@ -58,10 +58,10 @@ This may be caused by a number of problems. The most common are:
There are two common ways to fix the cgroup driver problem:
1. Install Docker again following instructions
[here](/docs/setup/cri/#docker).
[here](/docs/setup/production-environment/container-runtimes/#docker).
1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to
[Configure cgroup driver used by kubelet on Master Node](/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
for detailed instructions.
[Configure cgroup driver used by kubelet on Master Node](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
@ -219,7 +219,8 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6
If you have nodes that are running SELinux with an older version of Docker you might experience a scenario
where the `coredns` pods are not starting. To solve that you can try one of the following options:
- Upgrade to a [newer version of Docker](/docs/setup/cri/#docker).
- Upgrade to a [newer version of Docker](/docs/setup/production-environment/container-runtimes/#docker).
- [Disable SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux).
- Modify the `coredns` deployment to set `allowPrivilegeEscalation` to `true`:
@ -277,8 +278,7 @@ Alternatively, you can try separating the `key=value` pairs like so:
`--apiserver-extra-args "enable-admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists"`
but this will result in the key `enable-admission-plugins` only having the value of `NamespaceExists`.
A known workaround is to use the kubeadm
[configuration file](/docs/setup/independent/control-plane-flags/#apiserver-flags).
A known workaround is to use the kubeadm [configuration file](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#apiserver-flags).
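A sketch of that workaround, reusing the admission plugins from the example above (treat it as illustrative rather than a recommended plugin set):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # The config file keeps both plugins in a single value, so neither is lost.
    enable-admission-plugins: LimitRanger,NamespaceExists
```

Pass the file with `kubeadm init --config <file>.yaml` instead of using `--apiserver-extra-args`.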
## kube-proxy scheduled before node is initialized by cloud-controller-manager

View File

@ -1,6 +1,7 @@
---
title: Installing Kubernetes On-premises/Cloud Providers with Kubespray
title: Installing Kubernetes with Kubespray
content_template: templates/concept
weight: 30
---
{{% capture overview %}}

View File

@ -1,4 +1,4 @@
---
title: Turnkey Cloud Solutions
weight: 40
weight: 30
---

View File

@ -9,7 +9,7 @@ title: Running Kubernetes on Alibaba Cloud
The [Alibaba Cloud Container Service](https://www.alibabacloud.com/product/container-service) lets you run and manage Docker applications on a cluster of Alibaba Cloud ECS instances. It supports the popular open source container orchestrators: Docker Swarm and Kubernetes.
To simplify cluster deployment and management, use [Kubernetes Support for Alibaba Cloud Container Service](https://www.alibabacloud.com/product/kubernetes). You can get started quickly by following the [Kubernetes walk-through](https://www.alibabacloud.com/help/doc-detail/86737.htm), and there are some [tutorials for Kubernetes Support on Alibaba Cloud](https://yq.aliyun.com/teams/11/type_blog-cid_200-page_1) in Chinese.
To use custom binaries or open source Kubernetes, follow the instructions below.

View File

@ -70,7 +70,7 @@ cluster/kube-up.sh
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
If you run into trouble, please see the section on [troubleshooting](/docs/setup/turnkey/gce/#troubleshooting), post to the
If you run into trouble, please see the section on [troubleshooting](/docs/setup/production-environment/turnkey/gce/#troubleshooting), post to the
[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on [Slack](/docs/troubleshooting/#slack).
The next few steps will show you:
@ -217,7 +217,7 @@ field values:
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | | Project
GCE | Saltstack | Debian | GCE | [docs](/docs/setup/production-environment/turnkey/gce/) | | Project
## Further reading

View File

@ -1,4 +1,4 @@
---
title: "Windows in Kubernetes"
weight: 65
weight: 50
---

View File

@ -9,7 +9,7 @@ weight: 65
{{% capture overview %}}
Windows applications constitute a large portion of the services and applications that run in many organizations. [Windows containers](https://aka.ms/windowscontainers) provide a modern way to encapsulate processes and package dependencies, making it easier to use DevOps practices and follow cloud native patterns for Windows applications. Kubernetes has become the de facto standard container orchestrator, and the release of Kubernetes 1.14 includes production support for scheduling Windows containers on Windows nodes in a Kubernetes cluster, enabling a vast ecosystem of Windows applications to leverage the power of Kubernetes. Organizations with investments in Windows-based applications and Linux-based applications don't have to look for separate orchestrators to manage their workloads, leading to increased operational efficiencies across their deployments, regardless of operating system.
{{% /capture %}}
@ -26,7 +26,7 @@ The Kubernetes control plane, including the [master components](/docs/concepts/o
{{< /note >}}
{{< note >}}
In this document, when we talk about Windows containers we mean Windows containers with process isolation. Support for Windows containers with [Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container) is planned for a future release.
{{< /note >}}
## Supported Functionality and Limitations
@ -45,7 +45,7 @@ Let's start with the operating system version. Refer to the following table for
| *Kubernetes v1.14* | Not Supported | Not Supported| Supported for Windows Server containers Builds 17763.* with Docker EE-basic 18.09 |
{{< note >}}
We don't expect all Windows customers to update the operating system for their apps frequently. Upgrading your applications is what dictates and necessitates upgrading or introducing new nodes to the cluster. For the customers that chose to upgrade their operating system for containers running on Kubernetes, we will offer guidance and step-by-step instructions when we add support for a new operating system version. This guidance will include recommended upgrade procedures for upgrading user applications together with cluster nodes. Windows nodes adhere to Kubernetes [version-skew policy](/docs/setup/version-skew-policy/) (node to control plane versioning) the same way as Linux nodes do today.
We don't expect all Windows customers to update the operating system for their apps frequently. Upgrading your applications is what dictates and necessitates upgrading or introducing new nodes to the cluster. For the customers that chose to upgrade their operating system for containers running on Kubernetes, we will offer guidance and step-by-step instructions when we add support for a new operating system version. This guidance will include recommended upgrade procedures for upgrading user applications together with cluster nodes. Windows nodes adhere to Kubernetes [version-skew policy](/docs/setup/release/version-skew-policy/) (node to control plane versioning) the same way as Linux nodes do today.
{{< /note >}}
{{< note >}}
The Windows Server Host Operating System is subject to the [Windows Server ](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing) licensing. The Windows Container images are subject to the [Supplemental License Terms for Windows containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/images-eula).
@ -61,7 +61,7 @@ Key Kubernetes elements work the same way in Windows as they do in Linux. In thi
A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy. The following Pod capabilities, properties and events are supported with Windows containers:
* Single or multiple containers per Pod with process isolation and volume sharing
* Pod status fields
* Readiness and Liveness probes
* postStart & preStop container lifecycle events
* ConfigMap, Secrets: as environment variables or volumes
@ -122,7 +122,7 @@ Networking for Windows containers is exposed through [CNI plugins](/docs/concept
The following service spec types are supported:
* NodePort
* ClusterIP
* LoadBalancer
* ExternalName
@ -142,7 +142,7 @@ As outlined above, the [Flannel](https://github.com/coreos/flannel) CNI [meta pl
For the node, pod, and service objects, the following network flows are supported for TCP/UDP traffic:
* Pod -> Pod (IP)
* Pod -> Pod (Name)
* Pod -> Service (Cluster IP)
* Pod -> Service (PQDN, but only if there are no ".")
* Pod -> Service (FQDN)
@ -233,7 +233,7 @@ The following networking functionality is not supported on Windows nodes
* Host networking mode is not available for Windows pods
* Local NodePort access from the node itself fails (works for other nodes or external clients)
* Accessing service VIPs from nodes will be available with a future release of Windows Server
* Overlay networking support in kube-proxy is an alpha release. In addition, it requires [KB4482887](https://support.microsoft.com/en-us/help/4482887/windows-10-update-kb4482887) to be installed on Windows Server 2019
* `kubectl port-forward`
* Local Traffic Policy and DSR mode
@ -266,13 +266,13 @@ Secrets are written in clear text on the node's volume (as compared to tmpfs/in-
[RunAsUser](/docs/concepts/policy/pod-security-policy/#users-and-groups) is not currently supported on Windows. The workaround is to create local accounts before packaging the container. The RunAsUsername capability may be added in a future release.
Linux-specific pod security context privileges such as SELinux, AppArmor, Seccomp, Capabilities (POSIX Capabilities), and others are not supported.
In addition, as mentioned already, privileged containers are not supported on Windows.
#### API
There are no differences in how most of the Kubernetes APIs work for Windows. The subtleties around what's different come down to differences in the OS and container runtime. In certain situations, some properties on workload APIs such as Pod or Container were designed with an assumption that they are implemented on Linux, failing to run on Windows.
At a high level, these OS concepts are different:
@ -288,7 +288,7 @@ Exit Codes follow the same convention where 0 is success, nonzero is failure. Th
##### V1.Container
* V1.Container.ResourceRequirements.limits.cpu and V1.Container.ResourceRequirements.limits.memory - Windows doesn't use hard limits for CPU allocations. Instead, a share system is used. The existing fields based on millicores are scaled into relative shares that are followed by the Windows scheduler. [see: kuberuntime/helpers_windows.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kuberuntime/helpers_windows.go), [see: resource controls in Microsoft docs](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/resource-controls)
* Huge pages are not implemented in the Windows container runtime, and are not available. They require [asserting a user privilege](https://docs.microsoft.com/en-us/windows/desktop/Memory/large-page-support) that's not configurable for containers.
* V1.Container.ResourceRequirements.requests.cpu and V1.Container.ResourceRequirements.requests.memory - Requests are subtracted from node available resources, so they can be used to avoid overprovisioning a node. However, they cannot be used to guarantee resources in an overprovisioned node. They should be applied to all containers as a best practice if the operator wants to avoid overprovisioning entirely.
* V1.Container.SecurityContext.allowPrivilegeEscalation - not possible on Windows, none of the capabilities are hooked up
@ -298,7 +298,7 @@ Exit Codes follow the same convention where 0 is success, nonzero is failure. Th
* V1.Container.SecurityContext.readOnlyRootFilesystem - not possible on Windows, write access is required for registry & system processes to run inside the container
* V1.Container.SecurityContext.runAsGroup - not possible on Windows, no GID support
* V1.Container.SecurityContext.runAsNonRoot - Windows does not have a root user. The closest equivalent is ContainerAdministrator which is an identity that doesn't exist on the node.
* V1.Container.SecurityContext.runAsUser - not possible on Windows, no UID support as int.
* V1.Container.SecurityContext.seLinuxOptions - not possible on Windows, no SELinux
* V1.Container.terminationMessagePath - this has some limitations in that Windows doesn't support mapping single files. The default value is /dev/termination-log, which does work because it does not exist on Windows by default.
@ -365,26 +365,26 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
1. Using nssm.exe
You can also always use alternative service managers like [nssm.exe](https://nssm.cc/) to run these processes (flanneld, kubelet & kube-proxy) in the background for you. You can use this [sample script](https://github.com/Microsoft/SDN/tree/master/Kubernetes/flannel/register-svc.ps1), leveraging nssm.exe to register kubelet, kube-proxy, and flanneld.exe to run as Windows services in the background.
```powershell
register-svc.ps1 -NetworkMode <Network mode> -ManagementIP <Windows Node IP> -ClusterCIDR <Cluster subnet> -KubeDnsServiceIP <Kube-dns Service IP> -LogDir <Directory to place logs>
# NetworkMode = The network mode l2bridge (flannel host-gw, also the default value) or overlay (flannel vxlan) chosen as a network solution
# ManagementIP = The IP address assigned to the Windows node. You can use ipconfig to find this
# ClusterCIDR = The cluster subnet range. (Default value 10.244.0.0/16)
# KubeDnsServiceIP = The Kubernetes DNS service IP (Default value 10.96.0.10)
# LogDir = The directory where kubelet and kube-proxy logs are redirected into their respective output files (Default value C:\k)
```
If the above referenced script is not suitable, you can manually configure nssm.exe using the following examples.
```powershell
# Register flanneld.exe
nssm install flanneld C:\flannel\flanneld.exe
nssm set flanneld AppParameters --kubeconfig-file=c:\k\config --iface=<ManagementIP> --ip-masq=1 --kube-subnet-mgr=1
nssm set flanneld AppEnvironmentExtra NODE_NAME=<hostname>
nssm set flanneld AppDirectory C:\flannel
nssm start flanneld
# Register kubelet.exe
# Microsoft releases the pause infrastructure container at mcr.microsoft.com/k8s/core/pause:1.0.0
# For more info search for "pause" in the "Guide for adding Windows Nodes in Kubernetes"
@ -392,7 +392,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
nssm set kubelet AppParameters --hostname-override=<hostname> --v=6 --pod-infra-container-image=mcr.microsoft.com/k8s/core/pause:1.0.0 --resolv-conf="" --allow-privileged=true --enable-debugging-handlers --cluster-dns=<DNS-service-IP> --cluster-domain=cluster.local --kubeconfig=c:\k\config --hairpin-mode=promiscuous-bridge --image-pull-progress-deadline=20m --cgroups-per-qos=false --log-dir=<log directory> --logtostderr=false --enforce-node-allocatable="" --network-plugin=cni --cni-bin-dir=c:\k\cni --cni-conf-dir=c:\k\cni\config
nssm set kubelet AppDirectory C:\k
nssm start kubelet
# Register kube-proxy.exe (l2bridge / host-gw)
nssm install kube-proxy C:\k\kube-proxy.exe
nssm set kube-proxy AppDirectory c:\k
@ -400,7 +400,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
nssm.exe set kube-proxy AppEnvironmentExtra KUBE_NETWORK=cbr0
nssm set kube-proxy DependOnService kubelet
nssm start kube-proxy
# Register kube-proxy.exe (overlay / vxlan)
nssm install kube-proxy C:\k\kube-proxy.exe
nssm set kube-proxy AppDirectory c:\k
@ -408,8 +408,8 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
nssm set kube-proxy DependOnService kubelet
nssm start kube-proxy
```
For initial troubleshooting, you can use the following flags in [nssm.exe](https://nssm.cc/) to redirect stdout and stderr to an output file:
```powershell
@ -498,7 +498,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
1. My Pods are stuck at "Container Creating" or restarting over and over
Check that your pause image is compatible with your OS version. The [instructions](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/deploying-resources) assume that both the OS and the containers are version 1803. If you have a later version of Windows, such as an Insider build, you need to adjust the images accordingly. Please refer to the Microsoft's [Docker repository](https://hub.docker.com/u/microsoft/) for images. Regardless, both the pause image Dockerfile and the sample service expect the image to be tagged as :latest.
Starting with Kubernetes v1.14, Microsoft releases the pause infrastructure container at `mcr.microsoft.com/k8s/core/pause:1.0.0`. For more information search for "pause" in the [Guide for adding Windows Nodes in Kubernetes](../user-guide-windows-nodes).
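If you are on a newer Windows build, one workaround is to pull a pause image that matches your OS version and retag it as `:latest`; a sketch, where the target image name is only an illustration:
```powershell
# Pull a pause image compatible with your Windows version, then retag it as :latest
docker pull mcr.microsoft.com/k8s/core/pause:1.0.0
docker tag mcr.microsoft.com/k8s/core/pause:1.0.0 kubeletwin/pause:latest
```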
1. DNS resolution is not properly working

View File

@ -21,7 +21,8 @@ The Kubernetes platform can now be used to run both Linux and Windows containers
## Before you begin
* Obtain a [Windows Server license](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing) in order to configure the Windows node that hosts Windows containers. You can use your organization's licenses for the cluster, or acquire one from Microsoft, a reseller, or via the major cloud providers such as GCP, AWS, and Azure by provisioning a virtual machine running Windows Server through their marketplaces. A [time-limited trial](https://www.microsoft.com/en-us/cloud-platform/windows-server-trial) is also available.
* Build a Linux-based Kubernetes cluster in which you have access to the control plane (some examples include [Getting Started from Scratch](/docs/setup/release/building-from-source/), [kubeadm](/docs/setup/independent/create-cluster-kubeadm/), [AKS Engine](/docs/setup/turnkey/azure/), [GCE](/docs/setup/turnkey/gce/), [AWS](/docs/setup/turnkey/aws/)).
* Build a Linux-based Kubernetes cluster in which you have access to the control plane (some examples include [Getting Started from Scratch](https://github.com/kubernetes/kubernetes/tree/master/build/), [kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), [AKS Engine](/docs/setup/production-environment/turnkey/azure/), [GCE](/docs/setup/production-environment/turnkey/gce/), [AWS](/docs/setup/production-environment/turnkey/aws/)).
## Getting Started: Adding a Windows Node to Your Cluster
@ -96,8 +97,8 @@ Once you have a Linux-based Kubernetes master node you are ready to choose a net
* VNI 4096 is set in the backend
* Port 4789 is set in the backend
2. In the `cni-conf.json` section of your `kube-flannel.yml`, change the network name to `vxlan0`.
Your `cni-conf.json` should look as follows:
```json
@ -190,7 +191,7 @@ All code snippets in Windows sections are to be run in a PowerShell environment
{{< note >}}
The "pause" (infrastructure) image is hosted on Microsoft Container Registry (MCR). You can access it using `docker pull mcr.microsoft.com/k8s/core/pause:1.0.0`. The Dockerfile is available at https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/Dockerfile.
{{< /note >}}
1. Prepare a Windows directory for Kubernetes
Create a "Kubernetes for Windows" directory to store Kubernetes binaries as well as any deployment scripts and config files.
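For example (the path below is only the convention used by these guides; any directory works):
```powershell
# Create the working directory that the scripts and kubeconfig paths in this guide assume
mkdir C:\k
```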

View File

(binary image file: 80 KiB before and after)

View File

@ -1,5 +1,4 @@
---
title: "Downloading Kubernetes"
weight: 20
title: "Release notes and version skew"
weight: 10
---

View File

@ -1,33 +0,0 @@
---
reviewers:
- david-mcmahon
- jbeda
title: Building a release
content_template: templates/concept
card:
name: download
weight: 20
title: Building a release
---
{{% capture overview %}}
You can either build a release from source or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest using a pre-built version of the current release, which can be found in the [Release Notes](/docs/setup/release/notes/).
The Kubernetes source code can be downloaded from the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repo.
{{% /capture %}}
{{% capture body %}}
## Building from source
If you are simply building a release from source, there is no need to set up a full golang environment, as all building happens in a Docker container.
Building a release is simple.
```shell
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release
```
For more details on the release process see the kubernetes/kubernetes [`build`](http://releases.k8s.io/{{< param "githubbranch" >}}/build/) directory.
{{% /capture %}}

View File

@ -1,8 +1,9 @@
---
title: v1.14 Release Notes
weight: 10
card:
name: download
weight: 10
weight: 20
anchors:
- anchor: "#"
title: Current Release Notes
@ -628,7 +629,7 @@ filename | sha512 hash
* Introduce a RuntimeClass v1beta1 API. This new beta API renames `runtimeHandler` to `handler`, makes it a required field, and cuts out the spec (handler is a top-level field).
* Transition CSINodeInfo and CSIDriver alpha CRDs to in-tree CSINode and CSIDriver core storage v1beta1 APIs. ([#74283](https://github.com/kubernetes/kubernetes/pull/74283), [@xing-yang](https://github.com/xing-yang))
* ACTION REQUIRED: the alpha CRDs are no longer used and drivers will need to be updated to use the beta APIs.
* The support for `_` in the CSI driver name will be dropped as the CSI Spec does not allow that.
### Other notable changes
@ -659,7 +660,7 @@ filename | sha512 hash
* kube-apiserver now serves OpenAPI specs for registered CRDs with defined ([#71192](https://github.com/kubernetes/kubernetes/pull/71192), [@roycaihw](https://github.com/roycaihw))
* validation schemata as an alpha feature, to be enabled via the "CustomResourcePublishOpenAPI" feature gate. Kubectl will validate client-side using those. Note that in
* future, client-side validation in 1.14 kubectl against a 1.15 cluster will reject
* unknown fields for CRDs with validation schema defined.
* Fix kubelet start failure issue on Azure Stack due to InstanceMetadata setting ([#74936](https://github.com/kubernetes/kubernetes/pull/74936), [@rjaini](https://github.com/rjaini))
* add subcommand `kubectl create cronjob` ([#71651](https://github.com/kubernetes/kubernetes/pull/71651), [@Pingan2017](https://github.com/Pingan2017))
* The CSIBlockVolume feature gate is now beta, and defaults to enabled. ([#74909](https://github.com/kubernetes/kubernetes/pull/74909), [@bswartz](https://github.com/bswartz))
@ -716,7 +717,7 @@ filename | sha512 hash
* reflector_short_watches_total
* reflector_watch_duration_seconds
* reflector_watches_total
* While this is a backwards-incompatible change, it would have been impossible to setup reliable monitoring around these metrics since the labels were not stable.
* Add a configuration field to shorten the timeout of validating/mutating admission webhook call. The timeout value must be between 1 and 30 seconds. Default to 30 seconds when unspecified. ([#74562](https://github.com/kubernetes/kubernetes/pull/74562), [@roycaihw](https://github.com/roycaihw))
* client-go: PortForwarder.GetPorts() now contain correct local port if no local port was initially specified when setting up the port forwarder ([#73676](https://github.com/kubernetes/kubernetes/pull/73676), [@martin-helmich](https://github.com/martin-helmich))
* # Apply resources from a directory containing kustomization.yaml ([#74140](https://github.com/kubernetes/kubernetes/pull/74140), [@Liujingfang1](https://github.com/Liujingfang1))
@ -733,7 +734,7 @@ filename | sha512 hash
* Image garbage collection no longer fails for images with only one tag but more than one repository associated. ([#70647](https://github.com/kubernetes/kubernetes/pull/70647), [@corvus-ch](https://github.com/corvus-ch))
* - Fix liveness probe in fluentd-gcp cluster addon ([#74522](https://github.com/kubernetes/kubernetes/pull/74522), [@Pluies](https://github.com/Pluies))
* The new test ``[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]`` will validate the host entries set in the ``/etc/hosts`` file (pod's FQDN and hostname), which should be managed by Kubelet. ([#72729](https://github.com/kubernetes/kubernetes/pull/72729), [@bclau](https://github.com/bclau))
* The test has the tag ``[LinuxOnly]`` because individual files cannot be mounted in Windows Containers, which means that it cannot pass using Windows nodes.
@ -903,11 +904,11 @@ filename | sha512 hash
* Custom apiservers built with the latest apiserver library will have the 100MB limit on the body of resource requests as well. The limit can be altered via ServerRunOptions.MaxRequestBodyBytes.
* The body size limit does not apply to subresources like pods/proxy that proxy request content to another server.
* Kustomize is developed in its own repo https://github.com/kubernetes-sigs/kustomize ([#73033](https://github.com/kubernetes/kubernetes/pull/73033), [@Liujingfang1](https://github.com/Liujingfang1))
* This PR added a new subcommand `kustomize` in kubectl.
* kubectl kustomize <somedir> has the same effect as kustomize build <somedir>
* To build API resources from somedir with a kustomization.yaml file
* kubectl kustomize <somedir>
* This command can be piped to apply or delete
* kubectl kustomize <somedir> | kubectl apply -f -
* kubectl kustomize <somedir> | kubectl delete -f -
* kubeadm: all master components are now exclusively relying on the `PriorityClassName` pod spec for annotating them as cluster critical components. Since `scheduler.alpha.kubernetes.io/critical-pod` annotation is no longer supported by Kubernetes 1.14 this annotation is no longer added to master components. ([#73857](https://github.com/kubernetes/kubernetes/pull/73857), [@ereslibre](https://github.com/ereslibre))

View File

@ -6,9 +6,9 @@ reviewers:
- sig-cluster-lifecycle
- sig-node
- sig-release
title: Kubernetes Version and Version Skew Support Policy
title: Kubernetes version and version skew support policy
content_template: templates/concept
weight: 70
weight: 30
---
{{% capture overview %}}
@ -37,7 +37,7 @@ Minor releases occur approximately every 3 months, so each minor release branch
### kube-apiserver
In [highly-available (HA) clusters](/docs/setup/independent/high-availability/), the newest and oldest `kube-apiserver` instances must be within one minor version.
In [highly-available (HA) clusters](/docs/setup/production-environment/tools/kubeadm/high-availability/), the newest and oldest `kube-apiserver` instances must be within one minor version.
Example:
@ -113,14 +113,14 @@ Pre-requisites:
* The `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` instances that communicate with this server are at version **1.n** (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version)
* `kubelet` instances on all nodes are at version **1.n** or **1.(n-1)** (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version)
* Registered admission webhooks are able to handle the data the new `kube-apiserver` instance will send them:
* `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include any new versions of REST resources added in **1.(n+1)**
* The webhooks are able to handle any new versions of REST resources that will be sent to them, and any new fields added to existing versions in **1.(n+1)**
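Before starting the upgrade, it can help to confirm that the running components actually satisfy these version expectations; a quick sketch using standard kubectl commands:
```shell
# Show client/server versions and the kubelet version reported by each node
kubectl version --short
kubectl get nodes -o wide
```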
Upgrade `kube-apiserver` to **1.(n+1)**
{{< note >}}
Project policies for [API deprecation](/docs/reference/using-api/deprecation-policy/) and
[API change guidelines](https://github.com/kubernetes/community/blob/master/contributors/devel/api_changes.md)
require `kube-apiserver` to not skip minor versions when upgrading, even in single-instance clusters.
{{< /note >}}

View File

@ -107,7 +107,7 @@ KUBE_GCE_ZONE=replica-zone KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up
* Try to place master replicas in different zones. During a zone failure, all masters placed inside the zone will fail.
To survive zone failure, also place nodes in multiple zones
(see [multiple-zones](/docs/setup/multiple-zones/) for details).
(see [multiple-zones](/docs/setup/best-practices/multiple-zones/) for details).
* Do not use a cluster with two master replicas. Consensus on a two-replica cluster requires both replicas running when changing persistent state.
As a result, both replicas are needed and a failure of any replica turns the cluster into a majority failure state.
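To place a replica in a different zone, as recommended above, the same replication command can simply be pointed at another zone; the zone name here is illustrative:
```shell
# Sketch: add a master replica in a different GCE zone
KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
```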

View File

@ -15,7 +15,7 @@ This page explains how to manage certificates manually with kubeadm.
These are advanced topics for users who need to integrate their organization's certificate infrastructure into a kubeadm-built cluster. If kubeadm with the default configuration satisfies your needs, you should let kubeadm manage certificates instead.
You should be familiar with [PKI certificates and requirements in Kubernetes](/docs/setup/certificates/).
You should be familiar with [PKI certificates and requirements in Kubernetes](/docs/setup/best-practices/certificates/).
{{% /capture %}}
@ -23,7 +23,7 @@ You should be familiar with [PKI certificates and requirements in Kubernetes](/d
## Renew certificates with the certificates API
The Kubernetes certificates normally reach their expiration date after one year.
Kubeadm can renew certificates with the `kubeadm alpha certs renew` commands; you should run these commands on control-plane nodes only.
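For example, a sketch of renewing a single certificate on a control-plane node (the exact set of renewable certificate names depends on your kubeadm version):
```shell
# Renew the API server serving certificate
kubeadm alpha certs renew apiserver
```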
@ -36,9 +36,9 @@ With kubeadm, you can use this API by running `kubeadm alpha certs renew --use-a
## Set up a signer
The Kubernetes Certificate Authority does not work out of the box.
You can configure an external signer such as [cert-manager][cert-manager-issuer], or you can use the built-in signer.
The built-in signer is part of [`kube-controller-manager`][kcm].
To activate the built-in signer, you pass the `--cluster-signing-cert-file` and `--cluster-signing-key-file` arguments.
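On a kubeadm cluster these flags typically point at the cluster CA; a minimal sketch, assuming kubeadm's default PKI layout:
```shell
# Illustrative kube-controller-manager flags for the built-in signer
# (paths assume /etc/kubernetes/pki; adjust for your CA)
kube-controller-manager \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
```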
You pass these arguments in any of the following ways:
@ -67,7 +67,7 @@ You pass these arguments in any of the following ways:
### Approve requests
If you set up an external signer such as [cert-manager][cert-manager], certificate signing requests (CSRs) are automatically approved.
Otherwise, you must manually approve certificates with the [`kubectl certificate`][certs] command.
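For instance (the CSR name is a placeholder; use the name printed by kubeadm or listed by `kubectl get csr`):
```shell
# List pending CSRs, then approve one by name
kubectl get csr
kubectl certificate approve <csr-name>
```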
The following kubeadm command outputs the name of the certificate to approve, then blocks and waits for approval to occur:
```shell
@ -91,28 +91,28 @@ You can view a list of pending certificates with `kubectl get csr`.
## Certificate requests with kubeadm
To better integrate with external CAs, kubeadm can also produce certificate signing requests (CSRs).
A CSR represents a request to a CA for a signed certificate for a client.
In kubeadm terms, any certificate that would normally be signed by an on-disk CA can be produced as a CSR instead. A CA, however, cannot be produced as a CSR.
You can create an individual CSR with `kubeadm init phase certs apiserver --csr-only`.
The `--csr-only` flag can be applied only to individual phases. After [all certificates are in place][certs], you can run `kubeadm init --external-ca`.
You can pass in a directory with `--csr-dir` to output the CSRs to the specified location.
If `--csr-dir` is not specified, the default certificate directory (`/etc/kubernetes/pki`) is used.
Both the CSR and the accompanying private key are given in the output. After a certificate is signed, the certificate and the private key must be copied to the PKI directory (by default `/etc/kubernetes/pki`).
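For example (the output directory is illustrative):
```shell
# Generate only the API server CSR and key, writing them to a custom directory
kubeadm init phase certs apiserver --csr-only --csr-dir=/home/user/kubeadm-csr
```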
### Renew certificates
Certificates can be renewed with `kubeadm alpha certs renew --csr-only`.
As with `kubeadm init`, an output directory can be specified with the `--csr-dir` flag.
To use the new certificates, copy the signed certificate and private key into the PKI directory (by default `/etc/kubernetes/pki`)
## Cert usage
A CSR contains a certificate's name, domains, and IPs, but it does not specify usages.
It is the responsibility of the CA to specify [the correct cert usages][cert-table] when issuing a certificate.
* In `openssl` this is done with the [`openssl ca` command][openssl-ca].
* In `cfssl` you specify [usages in the config file][cfssl-usages]
@ -122,8 +122,8 @@ Kubeadm sets up [three CAs][cert-cas] by default. Make sure to sign the CSRs wit
[openssl-ca]: https://superuser.com/questions/738612/openssl-ca-keyusage-extension
[cfssl-usages]: https://github.com/cloudflare/cfssl/blob/master/doc/cmd/cfssl.txt#L170
[certs]: /docs/setup/certificates
[cert-cas]: /docs/setup/certificates/#single-root-ca
[cert-table]: /docs/setup/certificates/#all-certificates
[certs]: /docs/setup/best-practices/certificates/
[cert-cas]: /docs/setup/best-practices/certificates/#single-root-ca
[cert-table]: /docs/setup/best-practices/certificates/#all-certificates
{{% /capture %}}

View File

@ -10,7 +10,7 @@ content_template: templates/task
{{% capture overview %}}
This page explains how to upgrade a highly available (HA) Kubernetes cluster created with `kubeadm` from version 1.11.x to version 1.12.x. In addition to upgrading, you must also follow the instructions in [Creating HA clusters with kubeadm](/docs/setup/independent/high-availability/).
This page explains how to upgrade a highly available (HA) Kubernetes cluster created with `kubeadm` from version 1.11.x to version 1.12.x. In addition to upgrading, you must also follow the instructions in [Creating HA clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/).
{{% /capture %}}

View File

@ -9,7 +9,7 @@ content_template: templates/task
{{% capture overview %}}
This page explains how to upgrade a highly available (HA) Kubernetes cluster created with `kubeadm` from version 1.12.x to version 1.13.y. In addition to upgrading, you must also follow the instructions in [Creating HA clusters with kubeadm](/docs/setup/independent/high-availability/).
This page explains how to upgrade a highly available (HA) Kubernetes cluster created with `kubeadm` from version 1.12.x to version 1.13.y. In addition to upgrading, you must also follow the instructions in [Creating HA clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/).
{{% /capture %}}

View File

@ -44,7 +44,7 @@ it to [support other log format](/docs/tasks/debug-application-cluster/monitor-n
## Enable/Disable in GCE cluster
Node problem detector is [running as a cluster addon](/docs/setup/cluster-large/#addon-resources) enabled by default in the
Node problem detector is [running as a cluster addon](/docs/setup/best-practices/cluster-large/#addon-resources) enabled by default in the
GCE cluster.
You can enable/disable it by setting the environment variable

View File

@ -25,8 +25,8 @@ This document walks you through an example of enabling Horizontal Pod Autoscaler
This example requires a running Kubernetes cluster and kubectl, version 1.2 or later.
[metrics-server](https://github.com/kubernetes-incubator/metrics-server/) monitoring needs to be deployed in the cluster
to provide metrics via the resource metrics API, as Horizontal Pod Autoscaler uses this API to collect metrics. The instructions for deploying this are on the GitHub repository of [metrics-server](https://github.com/kubernetes-incubator/metrics-server/), if you followed [getting started on GCE guide](/docs/setup/turnkey/gce/),
metrics-server monitoring will be turned-on by default.
to provide metrics via the resource metrics API, as Horizontal Pod Autoscaler uses this API to collect metrics. The instructions for deploying this are on the GitHub repository of [metrics-server](https://github.com/kubernetes-incubator/metrics-server/). If you followed the [getting started on GCE guide](/docs/setup/production-environment/turnkey/gce/),
metrics-server monitoring will be turned on by default.
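A quick way to confirm that the resource metrics API is serving data is to query it with a standard kubectl command, for example:
```shell
# If metrics-server is running, this returns CPU/memory usage for each node
kubectl top nodes
```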
To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster
and kubectl at version 1.6 or later. Furthermore, in order to make use of custom metrics, your cluster

View File

@ -179,7 +179,7 @@ To install Minikube manually on Windows, download [`minikube-windows-amd64`](htt
{{% capture whatsnext %}}
* [Running Kubernetes Locally via Minikube](/docs/setup/minikube/)
* [Running Kubernetes Locally via Minikube](/docs/setup/learning-environment/minikube/)
{{% /capture %}}

View File

@ -43,7 +43,7 @@ Minikube can be installed locally, and runs a simple, single-node Kubernetes clu
* {{< link text="Install kubectl" url="/docs/tasks/tools/install-kubectl/" >}}. ({{< glossary_tooltip text="What is kubectl?" term_id="kubectl" >}})
* *(Optional)* {{< link text="Install Docker" url="/docs/setup/cri/#docker" >}} if you plan to run your Minikube cluster as part of a local development environment.
* *(Optional)* {{< link text="Install Docker" url="/docs/setup/production-environment/container-runtimes/#docker" >}} if you plan to run your Minikube cluster as part of a local development environment.
Minikube includes a Docker daemon, but if you're developing applications locally, you'll want an independent Docker instance to support your workflow. This allows you to create {{< glossary_tooltip text="containers" term_id="container" >}} and push them to a container registry.

View File

@ -30,8 +30,8 @@ If you have not already done so, start your understanding by reading through [Wh
Kubernetes is quite flexible, and a cluster can be run in a wide variety of places. You can interact with Kubernetes entirely on your own laptop or local development machine with it running within a virtual machine. Kubernetes can also run on virtual machines hosted either locally or in a cloud provider, and you can run a Kubernetes cluster on bare metal.
A cluster is made up of one or more [Nodes](/docs/concepts/architecture/nodes/), where a node is a physical or virtual machine.
If there is more than one node in your cluster, the nodes are connected with a [cluster network](/docs/concepts/cluster-administration/networking/).
Regardless of how many nodes, all Kubernetes clusters generally have the same components, which are described in [Kubernetes Components](/docs/concepts/overview/components).
@ -66,7 +66,7 @@ These resources are covered in a number of articles within the Kubernetes docume
As a cluster operator you may not need to use all these resources, although you should be familiar with them to understand how the cluster is being used.
There are a number of additional resources that you should be aware of, some listed under [Intermediate Resources](/docs/user-journeys/users/cluster-operator/intermediate#section-1).
You should also be familiar with [how to manage Kubernetes resources](/docs/concepts/cluster-administration/manage-deployment/)
and [supported versions and version skew between cluster components](/docs/setup/version-skew-policy/).
and [supported versions and version skew between cluster components](/docs/setup/release/version-skew-policy/).
## Get information about your cluster
@ -94,5 +94,3 @@ Some additional resources for getting information about your cluster and how it
* [Expose an External IP address to access an application](/docs/tutorials/stateless-application/expose-external-ip-address/)
{{% /capture %}}

View File

@ -1061,7 +1061,7 @@
type: 3,
name: 'Weaveworks',
logo: 'weave_works',
link: '/docs/setup/independent/create-cluster-kubeadm/',
link: '/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/',
blurb: 'Weaveworks - kubeadm'
},
{

View File

@ -131,7 +131,7 @@ toc:
path: /docs/admin/cluster-large/
- title: Running in Multiple Zones
path: /docs/setup/multiple-zones/
path: /docs/setup/best-practices/multiple-zones/
- title: Building High-Availability Clusters
path: /docs/admin/high-availability/building/

View File

@ -553,3 +553,37 @@ https://kubernetes-io-v1-7.netlify.com/* https://v1-7.docs.kubernetes.io/:spl
/docs/admin/high-availability/building/ /docs/setup/independent/high-availability/ 301
/code-of-conduct/ /community/code-of-conduct/ 301
/docs/setup/version-skew-policy/ /docs/setup/release/version-skew-policy/ 301
/docs/setup/minikube/ /docs/setup/learning-environment/minikube/ 301
/docs/setup/cri/ /docs/setup/production-environment/container-runtimes/ 301
/docs/setup/independent/install-kubeadm/ /docs/setup/production-environment/tools/kubeadm/install-kubeadm/ 301
/docs/setup/independent/troubleshooting-kubeadm/ /docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/ 301
/docs/setup/independent/create-cluster-kubeadm/ /docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/ 301
/docs/setup/independent/control-plane-flags/ /docs/setup/production-environment/tools/kubeadm/control-plane-flags/ 301
/docs/setup/independent/ha-topology/ /docs/setup/production-environment/tools/kubeadm/ha-topology/ 301
/docs/setup/independent/high-availability/ /docs/setup/production-environment/tools/kubeadm/high-availability/ 301
/docs/setup/independent/setup-ha-etcd-with-kubeadm/ /docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/ 301
/docs/setup/independent/kubelet-integration/ /docs/setup/production-environment/tools/kubeadm/kubelet-integration/ 301
/docs/setup/custom-cloud/kops/ /docs/setup/production-environment/tools/kops/ 301
/docs/setup/custom-cloud/kubespray/ /docs/setup/production-environment/tools/kubespray/ 301
/docs/setup/on-premises-metal/krib/ /docs/setup/production-environment/tools/krib/ 301
/docs/setup/turnkey/aws/ /docs/setup/production-environment/turnkey/aws/ 301
/docs/setup/turnkey/alibaba-cloud/ /docs/setup/production-environment/turnkey/alibaba-cloud/ 301
/docs/setup/turnkey/azure/ /docs/setup/production-environment/turnkey/azure/ 301
/docs/setup/turnkey/clc/ /docs/setup/production-environment/turnkey/clc/ 301
/docs/setup/turnkey/gce/ /docs/setup/production-environment/turnkey/gce/ 301
/docs/setup/turnkey/icp/ /docs/setup/production-environment/turnkey/icp/ 301
/docs/setup/turnkey/stackpoint/ /docs/setup/production-environment/turnkey/stackpoint/ 301
/docs/setup/on-premises-vm/cloudstack/ /docs/setup/production-environment/on-premises-vm/cloudstack/ 301
/docs/setup/on-premises-vm/dcos/ /docs/setup/production-environment/on-premises-vm/dcos/ 301
/docs/setup/on-premises-vm/ovirt/ /docs/setup/production-environment/on-premises-vm/ovirt/ 301
/docs/setup/windows/intro-windows-in-kubernetes/ /docs/setup/production-environment/windows/intro-windows-in-kubernetes/ 301
/docs/setup/windows/user-guide-windows-nodes/ /docs/setup/production-environment/windows/user-guide-windows-nodes/ 301
/docs/setup/windows/user-guide-windows-containers/ /docs/setup/production-environment/windows/user-guide-windows-containers/ 301
/docs/setup/multiple-zones/ /docs/setup/best-practices/multiple-zones/ 301
/docs/setup/cluster-large/ /docs/setup/best-practices/cluster-large/ 301
/docs/setup/node-conformance/ /docs/setup/best-practices/node-conformance/ 301
/docs/setup/certificates/ /docs/setup/best-practices/certificates/ 301