Point old documents to their new Hugo location
parent 294f880106
commit 755995207e

@@ -1,21 +1 @@

# Contributing guidelines

## Filing issues

File issues using the standard GitHub issue tracker for the repo.

## How to become a contributor and submit your own code

### Contributor License Agreements

We'd love to accept your patches! Before we can take them, we have to jump a couple of legal hurdles.

[Please fill out either the individual or corporate Contributor License Agreement (CLA)](http://git.k8s.io/community/CLA.md)

### Contributing A Patch

1. Submit an issue describing your proposed change to the repo in question.
1. The repo owner will respond to your issue promptly.
1. If your proposed change is accepted, and you haven't already done so, sign a Contributor License Agreement (see details above).
1. Fork the desired repo, then develop and test your code changes.
1. Submit a pull request.

This document has moved to https://minikube.sigs.k8s.io/docs/contributing/guide/

@@ -1,11 +1 @@

# Accessing Host Resources From Inside A Pod

## When you have a VirtualBox driver

In order to access host resources from inside a pod, run the following command to determine the host IP you can use:

```shell
ip addr
```

The IP address under `vboxnet1` is the IP that you need to access the host from within a pod.

This document has moved to https://minikube.sigs.k8s.io/docs/tasks/accessing-host-resources/

@@ -1,50 +1 @@

# Add-ons

Minikube has a set of built-in addons that can be enabled, disabled, and opened inside of the local k8s environment. Below is an example of this functionality for the `heapster` addon:

```shell
$ minikube addons list
- registry: disabled
- registry-creds: disabled
- freshpod: disabled
- addon-manager: enabled
- dashboard: enabled
- heapster: disabled
- efk: disabled
- ingress: disabled
- default-storageclass: enabled
- storage-provisioner: enabled
- storage-provisioner-gluster: disabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled

# minikube must be running for these commands to take effect
$ minikube addons enable heapster
heapster was successfully enabled

$ minikube addons open heapster # This will open grafana (interacting w/ heapster) in the browser
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Created new window in existing browser session.
```

The currently supported addons include:

* [Kubernetes Dashboard](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard)
* [Heapster](https://github.com/kubernetes/heapster): [Troubleshooting Guide](https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md) Note: you will need to log in to Grafana as admin/admin in order to access the console
* [EFK](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch)
* [Registry](https://github.com/kubernetes/minikube/tree/master/deploy/addons/registry)
* [Registry Credentials](https://github.com/upmc-enterprises/registry-creds)
* [Ingress](https://github.com/kubernetes/ingress-nginx)
* [Freshpod](https://github.com/GoogleCloudPlatform/freshpod)
* [nvidia-driver-installer](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/nvidia-driver-installer/minikube)
* [nvidia-gpu-device-plugin](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu)
* [logviewer](https://github.com/ivans3/minikube-log-viewer)
* [gvisor](../deploy/addons/gvisor/README.md)
* [storage-provisioner-gluster](../deploy/addons/storage-provisioner-gluster/README.md)

If you would like to have minikube properly start/restart custom addons, place the addon(s) you wish to be launched with minikube in the `.minikube/addons` directory. Addons in this folder will be moved to the minikube VM and launched each time minikube is started/restarted.
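
For example, staging a hypothetical manifest named `my-addon.yaml` might look like this:

```shell
# stage a custom addon so minikube launches it on every start
cp my-addon.yaml ~/.minikube/addons/

# restart so the addon is copied into the VM and applied
minikube stop
minikube start
```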

If you have a request for an addon in minikube, please open an issue with the name and preferably a link to the addon, along with a description of its purpose and why it should be added. You can also attempt to add the addon to minikube yourself by following the guide at [Adding an Addon](contributors/adding_an_addon.md).

**Note:** If you want to have a look at the default configuration for the addons, see [deploy/addons](https://github.com/kubernetes/minikube/tree/master/deploy/addons).

This document has moved to https://minikube.sigs.k8s.io/docs/tasks/addons/

@@ -1,41 +1 @@

# Alternative runtimes

## Using CRI-O

To use [CRI-O](https://github.com/kubernetes-sigs/cri-o) as the container runtime, run:

```shell
$ minikube start --container-runtime=cri-o
```

Or you can use the extended version:

```shell
$ minikube start --container-runtime=cri-o \
    --network-plugin=cni \
    --enable-default-cni \
    --cri-socket=/var/run/crio/crio.sock \
    --extra-config=kubelet.container-runtime=remote \
    --extra-config=kubelet.container-runtime-endpoint=unix:///var/run/crio/crio.sock \
    --extra-config=kubelet.image-service-endpoint=unix:///var/run/crio/crio.sock
```

## Using containerd

To use [containerd](https://github.com/containerd/containerd) as the container runtime, run:

```shell
$ minikube start --container-runtime=containerd
```

Or you can use the extended version:

```shell
$ minikube start --container-runtime=containerd \
    --network-plugin=cni \
    --enable-default-cni \
    --cri-socket=/run/containerd/containerd.sock \
    --extra-config=kubelet.container-runtime=remote \
    --extra-config=kubelet.container-runtime-endpoint=unix:///run/containerd/containerd.sock \
    --extra-config=kubelet.image-service-endpoint=unix:///run/containerd/containerd.sock
```

This document has moved to https://minikube.sigs.k8s.io/docs/reference/runtimes/

@@ -1,31 +1 @@

# Building images within the VM

When using a single VM for Kubernetes, it's really handy to build inside the VM: this means you don't have to build on your host machine and push the image into a docker registry. You can just build inside the same machine as minikube, which speeds up local experiments.

## Docker (containerd)

For Docker, you can set up your host docker client to talk to the VM by [reusing the docker daemon](reusing_the_docker_daemon.md).

Alternatively, you can use `minikube ssh` to connect to the virtual machine and run `docker build` there:

```shell
docker build .
```

For more information on the `docker build` command, read the [Docker documentation](https://docs.docker.com/engine/reference/commandline/build/) (docker.com).

## Podman (cri-o)

For Podman, there is no daemon running. The processes are started by the user and monitored by `conmon`.

So you need to use `minikube ssh`, and also make sure to run the command as the root user:

```shell
sudo -E podman build .
```

For more information on the `podman build` command, read the [Podman documentation](https://github.com/containers/libpod/blob/master/docs/podman-build.1.md) (podman.io).

## Build context

For the build context you can use any directory on the virtual machine, or any address on the network.
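
For instance, from inside `minikube ssh` (the image name, path, and repository URL below are only illustrative):

```shell
# build from a directory that already exists on the VM
docker build -t my-image:dev /data/my-app

# or build from a repository reachable over the network
docker build -t my-image:dev https://github.com/example/my-app.git
```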

This document has moved to https://minikube.sigs.k8s.io/docs/tasks/building_within/

@@ -1,20 +1 @@

# Caching Images

Minikube supports caching non-minikube images using the `minikube cache` command. Images can be added to the cache by running `minikube cache add <img>`, and deleted by running `minikube cache delete <img>`.

Images in the cache will be loaded on `minikube start`. If you want to list all available cached images, you can use the `minikube cache list` command. Below is an example of this functionality:

```shell
# cache an image into $HOME/.minikube/cache/images
$ minikube cache add ubuntu:16.04
$ minikube cache add redis:3

# list cached images
$ minikube cache list
redis:3
ubuntu:16.04

# delete cached images
$ minikube cache delete ubuntu:16.04
$ minikube cache delete $(minikube cache list)
```

This document has moved to https://minikube.sigs.k8s.io/docs/tasks/caching

@@ -1,340 +1 @@

# minikube CLI Commands

This document serves as a reference to all the commands, flags, and their accepted arguments.

## Global Flags

These flags can be used globally with any command on the CLI. The global flags are:

```
    --alsologtostderr                  log to standard error as well as files
-b, --bootstrapper string              The name of the cluster bootstrapper that will set up the kubernetes cluster. (default "kubeadm")
-h, --help                             help for minikube
    --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
    --log_dir string                   If non-empty, write log files in this directory
    --logtostderr                      log to standard error instead of files
-p, --profile string                   The name of the minikube VM being used.
                                       This can be modified to allow for multiple minikube instances to be run independently (default "minikube")
    --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
-v, --v Level                          log level for V logs
    --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging
```

## Commands

In this section, all commands which are accepted by the `minikube` CLI are described. To get help about any command, you can also type in `minikube help <command>`.

---

### addons

**Description -** Modifies minikube addons files using subcommands like `minikube addons enable heapster`

**Usage -**

```
minikube addons SUBCOMMAND [flags]
minikube addons [command]
```

**Available Subcommands -**

```
configure   Configures the addon w/ADDON_NAME within minikube (example: minikube addons configure registry-creds). For a list of available addons use: minikube addons list
disable     Disables the addon w/ADDON_NAME within minikube (example: minikube addons disable dashboard). For a list of available addons use: minikube addons list
enable      Enables the addon w/ADDON_NAME within minikube (example: minikube addons enable dashboard). For a list of available addons use: minikube addons list
list        Lists all available minikube addons as well as their current statuses (enabled/disabled)
open        Opens the addon w/ADDON_NAME within minikube (example: minikube addons open dashboard). For a list of available addons use: minikube addons list
```

---

### cache

**Description -** Add or delete an image from the local cache.

**Usage -** `minikube cache [command]`

**Available Subcommands -**

```
add      Add an image to local cache.
delete   Delete an image from the local cache.
list     List all available images from the local cache.
```

---

### completion

**Description -**

> Outputs minikube shell completion for the given shell (bash or zsh)
>
> This depends on the bash-completion binary. Example installation instructions:
> OS X:
> $ brew install bash-completion
> $ source $(brew --prefix)/etc/bash_completion
> $ minikube completion bash > ~/.minikube-completion # for bash users
> $ minikube completion zsh > ~/.minikube-completion # for zsh users
> $ source ~/.minikube-completion
> Ubuntu:
> $ apt-get install bash-completion
> $ source /etc/bash-completion
> $ source <(minikube completion bash) # for bash users
> $ source <(minikube completion zsh) # for zsh users
>
> Additionally, you may want to output the completion to a file and source in your .bashrc
>
> Note for zsh users: [1] zsh completions are only supported in versions of zsh >= 5.2

**Usage -** `minikube completion SHELL`

---

### config

**Description -** config modifies minikube config files using subcommands like `minikube config set vm-driver kvm`

Configurable fields:

* vm-driver
* feature-gates
* v
* cpus
* disk-size
* host-only-cidr
* memory
* log_dir
* kubernetes-version
* iso-url
* WantUpdateNotification
* ReminderWaitPeriodInHours
* WantReportError
* WantReportErrorPrompt
* WantKubectlDownloadMsg
* WantNoneDriverWarning
* profile
* bootstrapper
* ShowDriverDeprecationNotification
* ShowBootstrapperDeprecationNotification
* dashboard
* addon-manager
* default-storageclass
* heapster
* efk
* ingress
* registry
* registry-creds
* freshpod
* default-storageclass
* storage-provisioner
* storage-provisioner-gluster
* metrics-server
* nvidia-driver-installer
* nvidia-gpu-device-plugin
* logviewer
* gvisor
* hyperv-virtual-switch
* disable-driver-mounts
* cache
* embed-certs

**Usage -**

```
minikube config SUBCOMMAND [flags]
minikube config [command]
```

**Available Subcommands -**

```
get     Gets the value of PROPERTY_NAME from the minikube config file
set     Sets an individual value in a minikube config file
unset   unsets an individual value in a minikube config file
view    Display values currently set in the minikube config file
```
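
For example, a typical get/set/view round-trip might look like this (the values shown are only illustrative):

```shell
# persist a default memory and cpu allocation for future clusters
minikube config set memory 4096
minikube config set cpus 4

# read a single value back, or show everything currently set
minikube config get memory
minikube config view

# remove a value so the built-in default applies again
minikube config unset cpus
```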

---

### dashboard

**Description -** Access the kubernetes dashboard running within the minikube cluster

**Usage -** `minikube dashboard [flags]`

**Available Flags -**

```
-h, --help   help for dashboard
    --url    Display dashboard URL instead of opening a browser
```

---

### delete

**Description -** Deletes a local kubernetes cluster. This command deletes the VM, and removes all associated files.

**Usage -** `minikube delete`

---

### docker-env

**Description -** Sets up docker env variables; similar to '$(docker-machine env)'.

**Usage -** `minikube docker-env [flags]`

**Available Flags -**

```
-h, --help           help for docker-env
    --no-proxy       Add machine IP to NO_PROXY environment variable
    --shell string   Force environment to be configured for a specified shell: [fish, cmd, powershell, tcsh, bash, zsh], default is auto-detect
-u, --unset          Unset variables instead of setting them
```
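
In practice this is usually combined with `eval`, so the exported variables land in the current shell (a sketch):

```shell
# point the local docker client at minikube's docker daemon
eval $(minikube docker-env)

# undo it again when you are done
eval $(minikube docker-env -u)
```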

---

### help

**Description -** Help provides help for any command in the application. Simply type `minikube help [path to command]` for full details.

**Usage -** `minikube help [command] [flags]`

---

### ip

**Description -** Retrieves the IP address of the running cluster, and writes it to STDOUT.

**Usage -** `minikube ip`

---

### kubectl

**Description -** Run the kubernetes client, downloading it if necessary.

**Usage -** `minikube kubectl`

---

### logs

**Description -** Gets the logs of the running instance, used for debugging minikube, not user code.

**Usage -** `minikube logs [flags]`

**Available Flags -**

```
-f, --follow       Show only the most recent journal entries, and continuously print new entries as they are appended to the journal.
-h, --help         help for logs
-n, --length int   Number of lines back to go within the log (default 50)
    --problems     Show only log entries which point to known problems
```
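
For example, using the flags listed above:

```shell
# stream new log entries as they are written
minikube logs -f

# or show only entries that match known problems
minikube logs --problems
```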

---

### mount

**Description -** Mounts the specified directory into minikube.

**Usage -** `minikube mount [flags] <source directory>:<target directory>`

**Available Flags -**

```
    --9p-version string   Specify the 9p version that the mount should use (default "9p2000.L")
    --gid string          Default group id used for the mount (default "docker")
-h, --help                help for mount
    --ip string           Specify the ip that the mount should be setup on
    --kill                Kill the mount process spawned by minikube start
    --mode uint           File permissions used for the mount (default 493)
    --msize int           The number of bytes to use for 9p packet payload (default 262144)
    --options strings     Additional mount options, such as cache=fscache
    --type string         Specify the mount filesystem type (supported types: 9p) (default "9p")
    --uid string          Default user id used for the mount (default "docker")
```
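
For example, to expose a local source tree inside the VM (the paths are only illustrative; the command keeps running while the mount is active):

```shell
# mount ./src from the host to /host-src inside minikube
minikube mount ./src:/host-src
```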

---

### profile

**Description -** Sets the current minikube profile, or gets the current profile if no arguments are provided. This is used to run and manage multiple minikube instances. You can return to the default minikube profile by running `minikube profile default`

**Usage -**

```
minikube profile [MINIKUBE_PROFILE_NAME]. You can return to the default minikube profile by running `minikube profile default` [flags]
```

---

### service

**Description -** Gets the kubernetes URL(s) for the specified service in your local cluster. In the case of multiple URLs they will be printed one at a time.

**Usage -**

```
minikube service [flags] SERVICE
minikube service [command]
```

**Available Commands -**

```
list   Lists the URLs for the services in your local cluster
```

**Available Flags -**

```
    --format string      Format to output service URL in. This format will be applied to each url individually and they will be printed one at a time. (default "http://{{.IP}}:{{.Port}}")
-h, --help               help for service
    --https              Open the service URL with https instead of http
    --interval int       The time interval for each check that wait performs in seconds (default 20)
-n, --namespace string   The service namespace (default "default")
    --url                Display the kubernetes service URL in the CLI instead of opening it in the default browser
    --wait int           Amount of time to wait for a service in seconds (default 20)
```
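
For example, assuming a hypothetical service named `hello-minikube`:

```shell
# print the URL instead of opening it in a browser
minikube service hello-minikube --url

# list the URLs for every service in the cluster
minikube service list
```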

---

### ssh

**Description -** Log into or run a command on a machine with SSH; similar to 'docker-machine ssh'.

**Usage -** `minikube ssh`

---

### ssh-key

**Description -** Retrieve the ssh identity key path of the specified cluster.

**Usage -** `minikube ssh-key`

---

### start

**Description -** Starts a local kubernetes cluster.

**Usage -** `minikube start [flags]`

**Available Flags -**

```
    --apiserver-ips ipSlice             A set of apiserver IP Addresses which are used in the generated certificate for kubernetes. This can be used if you want to make the apiserver available from outside the machine (default [])
    --apiserver-name string             The apiserver name which is used in the generated certificate for kubernetes. This can be used if you want to make the apiserver available from outside the machine (default "minikubeCA")
    --apiserver-names stringArray       A set of apiserver names which are used in the generated certificate for kubernetes. This can be used if you want to make the apiserver available from outside the machine
    --apiserver-port int                The apiserver listening port (default 8443)
    --cache-images                      If true, cache docker images for the current bootstrapper and load them into the machine. Always false with --vm-driver=none. (default true)
    --container-runtime string          The container runtime to be used (docker, crio, containerd) (default "docker")
    --cpus int                          Number of CPUs allocated to the minikube VM (default 2)
    --cri-socket string                 The cri socket path to be used
    --disable-driver-mounts             Disables the filesystem mounts provided by the hypervisors (vboxfs)
    --disk-size string                  Disk size allocated to the minikube VM (format: <number>[<unit>], where unit = b, k, m or g) (default "20000mb")
    --dns-domain string                 The cluster dns domain name used in the kubernetes cluster (default "cluster.local")
    --docker-env stringArray            Environment variables to pass to the Docker daemon. (format: key=value)
    --docker-opt stringArray            Specify arbitrary flags to pass to the Docker daemon. (format: key=value)
    --download-only                     If true, only download and cache files for later use - don't install or start anything.
    --enable-default-cni                Enable the default CNI plugin (/etc/cni/net.d/k8s.conf). Used in conjunction with "--network-plugin=cni"
    --extra-config ExtraOption          A set of key=value pairs that describe configuration that may be passed to different components.
                                        The key should be '.' separated, and the first part before the dot is the component to apply the configuration to.
                                        Valid components are: kubelet, kubeadm, apiserver, controller-manager, etcd, proxy, scheduler
                                        Valid kubeadm parameters: ignore-preflight-errors, dry-run, kubeconfig, kubeconfig-dir, node-name, cri-socket, experimental-upload-certs, certificate-key, rootfs, pod-network-cidr
    --feature-gates string              A set of key=value pairs that describe feature gates for alpha/experimental features.
    --gpu                               Enable experimental NVIDIA GPU support in minikube (works only with kvm2 driver on Linux)
-h, --help                              help for start
    --hidden                            Hide the hypervisor signature from the guest in minikube (works only with kvm2 driver on Linux)
    --host-only-cidr string             The CIDR to be used for the minikube VM (only supported with Virtualbox driver) (default "192.168.99.1/24")
    --hyperkit-vpnkit-sock string       Location of the VPNKit socket used for networking. If empty, disables Hyperkit VPNKitSock, if 'auto' uses Docker for Mac VPNKit connection, otherwise uses the specified VSock.
    --hyperkit-vsock-ports strings      List of guest VSock ports that should be exposed as sockets on the host (Only supported with hyperkit now).
    --hyperv-virtual-switch string      The hyperv virtual switch name. Defaults to first found. (only supported with HyperV driver)
    --image-mirror-country string       Country code of the image mirror to be used. Leave empty to use the global one. For Chinese mainland users, set it to cn
    --image-repository string           Alternative image repository to pull docker images from. This can be used when you have limited access to gcr.io. Set it to "auto" to let minikube decide one for you. For Chinese mainland users, you may use local gcr.io mirrors such as registry.cn-hangzhou.aliyuncs.com/google_containers
    --insecure-registry strings         Insecure Docker registries to pass to the Docker daemon. The default service CIDR range will automatically be added.
    --iso-url string                    Location of the minikube iso (default "https://storage.googleapis.com/minikube/iso/minikube-v1.2.0.iso")
    --keep-context                      This will keep the existing kubectl context and will create a minikube context.
    --kubernetes-version string         The kubernetes version that the minikube VM will use (ex: v1.2.3) (default "v1.15.0")
    --kvm-network string                The KVM network name. (only supported with KVM driver) (default "default")
    --memory string                     Amount of RAM allocated to the minikube VM (format: <number>[<unit>], where unit = b, k, m or g) (default "2000mb")
    --mount                             This will start the mount daemon and automatically mount files into minikube
    --mount-string string               The argument to pass the minikube mount command on start (default "C:\\Users\\Pranav.Jituri:/minikube-host")
    --network-plugin string             The name of the network plugin
    --nfs-share strings                 Local folders to share with Guest via NFS mounts (Only supported with hyperkit now)
    --nfs-shares-root string            Where to root the NFS Shares (defaults to /nfsshares, only supported with hyperkit now) (default "/nfsshares")
    --no-vtx-check                      Disable checking for the availability of hardware virtualization before the vm is started (virtualbox)
    --registry-mirror strings           Registry mirrors to pass to the Docker daemon
    --service-cluster-ip-range string   The CIDR to be used for service cluster IPs. (default "10.96.0.0/12")
    --uuid string                       Provide VM UUID to restore MAC address (only supported with Hyperkit driver).
    --vm-driver string                  VM driver is one of: [virtualbox parallels vmwarefusion kvm hyperv hyperkit kvm2 vmware none] (default "virtualbox")
```

---

### status

**Description -** Gets the status of a local kubernetes cluster. The exit status contains the status of minikube's VM, cluster and kubernetes encoded in its bits in this order from right to left.

E.g. 7 means: 1 (for minikube NOK) + 2 (for cluster NOK) + 4 (for kubernetes NOK)

**Usage -** `minikube status [flags]`

**Available Flags -**

```
    --format string   Go template format string for the status output. The format for Go templates can be found here: https://golang.org/pkg/text/template/
                      For the list of accessible variables for the template, see the struct values here: https://godoc.org/k8s.io/minikube/cmd/minikube/cmd#Status (default "host: {{.Host}}\nkubelet: {{.Kubelet}}\napiserver: {{.APIServer}}\nkubectl: {{.Kubeconfig}}\n")
```
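
For example, you can print a single field with a custom template, or inspect the encoded exit status described above (a sketch):

```shell
# show only the host state using a Go template
minikube status --format '{{.Host}}'

# the exit code encodes which components are not running
minikube status
echo $?
```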

---

### stop

**Description -** Stops a local kubernetes cluster running in VirtualBox. This command stops the VM itself, leaving all files intact. The cluster can be started again with the `start` command.

**Usage -** `minikube stop`

---

### tunnel

**Description -** Creates a route to services deployed with type LoadBalancer and sets their Ingress to their ClusterIP

**Usage -** `minikube tunnel [flags]`

**Available Flags -**

```
-c, --cleanup   call with cleanup=true to remove old tunnels
```
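
For example (a sketch; the tunnel keeps running in the foreground while the routes exist):

```shell
# create routes to LoadBalancer services
minikube tunnel

# remove routes left behind by previous tunnels
minikube tunnel --cleanup
```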

---

### update-check

**Description -** Print current and latest version number.

**Usage -** `minikube update-check`

---

### update-context

**Description -** Retrieves the IP address of the running cluster, checks it against the IP in kubeconfig, and corrects kubeconfig if incorrect.

**Usage -** `minikube update-context`

---

### version

**Description -** Print the version of minikube.

**Usage -** `minikube version`

This document has moved to https://minikube.sigs.k8s.io/docs/reference/commands/

@@ -1,43 +1 @@

# Configuring Kubernetes

Minikube has a "configurator" feature that allows users to configure the Kubernetes components with arbitrary values.
To use this feature, you can use the `--extra-config` flag on the `minikube start` command.

This flag is repeatable, so you can pass it several times with several different values to set multiple options.

## Selecting a Kubernetes version

minikube defaults to the latest stable version of Kubernetes. You may select a different Kubernetes release by using the `--kubernetes-version` flag, for example:

`minikube start --kubernetes-version=v1.10.13`

minikube follows the [Kubernetes Version and Version Skew Support Policy](https://kubernetes.io/docs/setup/version-skew-policy/), so we guarantee support for the latest build for the last 3 minor Kubernetes releases. When practical, minikube extends this policy by three additional minor releases so that users can emulate legacy environments.

As of August 2019, this means that minikube supports and actively tests against the latest builds of:

* v1.15.x (default)
* v1.14.x
* v1.13.x
* v1.12.x
* v1.11.x (best effort)
* v1.10.x (best effort)

For more up-to-date information, see `OldestKubernetesVersion` and `NewestKubernetesVersion` in [constants.go](https://github.com/kubernetes/minikube/blob/master/pkg/minikube/constants/constants.go)

## kubeadm

The kubeadm bootstrapper can be configured by the `--extra-config` flag on the `minikube start` command. It takes a string of the form `component.key=value` where `component` is one of the strings:

* kubeadm
* kubelet
* apiserver
* controller-manager
* scheduler

and `key=value` is a flag=value pair for the component being configured. For example,

```shell
minikube start --extra-config=apiserver.v=10 --extra-config=kubelet.max-pods=100

minikube start --extra-config=kubeadm.ignore-preflight-errors=SystemVerification # allows any version of docker
```

This document has moved to https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/

@@ -1,21 +1 @@

# Contributing

* **New contributors** ([contributors.md](https://github.com/kubernetes/minikube/blob/master/CONTRIBUTING.md)): Process for new contributors, CLA instructions

* **Roadmap** ([roadmap.md](roadmap.md)): The roadmap for future minikube development

## New Features and Dependencies

* **Adding a new addon** ([adding_an_addon.md](adding_an_addon.md)): How to add a new addon to minikube for `minikube addons`

* **Adding a new driver** ([adding_driver.md](adding_driver.md)): How to add a new driver to minikube for `minikube start --vm-driver=<driver>`

## Building and Releasing

* **Build Guide** ([build_guide.md](build_guide.md)): How to build minikube from source

* **ISO Build Guide** ([minikube_iso.md](minikube_iso.md)): How to build and hack on the ISO image that minikube uses

* **CI Builds** ([ci_builds.md](./ci_builds.md)): Accessing CI build artifacts from Jenkins

* **Releasing minikube** ([releasing_minikube.md](releasing_minikube.md)): Steps to release a new version of minikube

This document has moved to https://minikube.sigs.k8s.io/docs/contributing/

@@ -1,56 +1 @@

# Adding a New Addon

To add a new addon to minikube, the following steps are required:

* For the new addon's .yaml file(s):
  * Put the required .yaml files for the addon in the `minikube/deploy/addons` directory.
  * Add the `kubernetes.io/minikube-addons: <NEW_ADDON_NAME>` label to each piece of the addon (ReplicationController, Service, etc.)
  * Also, the `addonmanager.kubernetes.io/mode` annotation is needed so that your resources are picked up by the `addon-manager` minikube addon.
  * In order to have `minikube addons open <NEW_ADDON_NAME>` work properly, the `kubernetes.io/minikube-addons-endpoint: <NEW_ADDON_NAME>` label must be added to the appropriate endpoint service (what the user would want to open/interact with). This service must be of type NodePort.

* To add the addon into minikube commands/VM:
  * Add the addon with appropriate fields filled into the `Addon` dictionary, see this [commit](https://github.com/kubernetes/minikube/commit/41998bdad0a5543d6b15b86b0862233e3204fab6#diff-e2da306d559e3f019987acc38431a3e8R133) and example.

    ```go
    // cmd/minikube/cmd/config/config.go
    var settings = []Setting{
        ...,
        // add other addon setting
        {
            name:        "efk",
            set:         SetBool,
            validations: []setFn{IsValidAddon},
            callbacks:   []setFn{EnableOrDisableAddon},
        },
    }
    ```

  * Add the addon to the settings list, see this [commit](https://github.com/kubernetes/minikube/commit/41998bdad0a5543d6b15b86b0862233e3204fab6#diff-07ad0c54f98b231e68537d908a214659R89) and example.

    ```go
    // pkg/minikube/assets/addons.go
    var Addons = map[string]*Addon{
        ...,
        // add other addon asset
        "efk": NewAddon([]*BinAsset{
            MustBinAsset(
                "deploy/addons/efk/efk-configmap.yaml",
                constants.AddonsPath,
                "efk-configmap.yaml",
                "0640"),
            MustBinAsset(
                "deploy/addons/efk/efk-rc.yaml",
                constants.AddonsPath,
                "efk-rc.yaml",
                "0640"),
            MustBinAsset(
                "deploy/addons/efk/efk-svc.yaml",
                constants.AddonsPath,
                "efk-svc.yaml",
                "0640"),
        }, false, "efk"),
    }
    ```

* Rebuild minikube using `make out/minikube`. This will put the addon's .yaml files into the minikube binary using go-bindata.
* Test the addon using the `minikube addons enable <NEW_ADDON_NAME>` command to start the service (see the sketch below).
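
A minimal rebuild-and-verify loop might look like this (a sketch; `<NEW_ADDON_NAME>` is a placeholder):

```shell
# rebuild minikube with the new addon assets embedded
make out/minikube

# enable the addon and confirm it shows up in the list
./out/minikube addons enable <NEW_ADDON_NAME>
./out/minikube addons list
```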

This document has moved to https://minikube.sigs.k8s.io/docs/contributing/addons/

@@ -1,100 +1 @@

# Adding new driver (Deprecated)

New drivers should be added into <https://github.com/machine-drivers>

Minikube relies on docker machine drivers to manage machines. This document talks about how to add an existing docker machine driver into the minikube registry, so that minikube can use the driver by `minikube start --vm-driver=<new_driver>`. This document is not going to talk about how to create a new docker machine driver.

## Understand your driver

First of all, before getting started, you need to understand your driver in terms of:

- Which operating system is your driver running on?
- Is your driver built into the minikube binary or triggered through RPC?
- How to translate minikube config to driver config?
- If builtin, how to instantiate the driver instance?

Builtin basically means whether or not you need a separate driver binary in your `$PATH` for minikube to work. For instance, `hyperkit` is not builtin, because you need `docker-machine-driver-hyperkit` in your `$PATH`. `vmwarefusion` is builtin, because you don't need anything extra.

## Understand registry

The registry is what minikube uses to register all the supported drivers. The driver author registers their driver in the registry, and the minikube runtime looks at the registry to find a driver and uses the driver metadata to determine what workflow to apply while those drivers are being used.

The godoc of registry is available here: <https://godoc.org/k8s.io/minikube/pkg/minikube/registry>

[DriverDef](https://godoc.org/k8s.io/minikube/pkg/minikube/registry#DriverDef) is the main struct used to define a driver's metadata. Essentially, you need to define 4 things at most, which is pretty simple once you understand your driver well:

- Name: unique name of the driver, it will be used as the unique ID in the registry and as the `--vm-driver` option in minikube commands

- Builtin: `true` if the driver is built into the minikube binary, `false` otherwise.

- ConfigCreator: how to translate a minikube config to driver config. The driver config will be persisted in your `$USER/.minikube` directory. Most likely the driver config is the driver itself.

- DriverCreator: Only needed when the driver is builtin, to instantiate the driver instance.

## An example

All drivers are located in `k8s.io/minikube/pkg/minikube/drivers`. Take `vmwarefusion` as an example:

```golang
// +build darwin

package vmwarefusion

import (
	"github.com/docker/machine/drivers/vmwarefusion"
	"github.com/docker/machine/libmachine/drivers"
	cfg "k8s.io/minikube/pkg/minikube/config"
	"k8s.io/minikube/pkg/minikube/constants"
	"k8s.io/minikube/pkg/minikube/registry"
)

func init() {
	registry.Register(registry.DriverDef{
		Name:          "vmwarefusion",
		Builtin:       true,
		ConfigCreator: createVMwareFusionHost,
		DriverCreator: func() drivers.Driver {
			return vmwarefusion.NewDriver("", "")
		},
	})
}

func createVMwareFusionHost(config cfg.MachineConfig) interface{} {
	d := vmwarefusion.NewDriver(cfg.GetMachineName(), constants.GetMinipath()).(*vmwarefusion.Driver)
	d.Boot2DockerURL = config.Downloader.GetISOFileURI(config.MinikubeISO)
	d.Memory = config.Memory
	d.CPU = config.CPUs
	d.DiskSize = config.DiskSize
	d.SSHPort = 22
	d.ISO = d.ResolveStorePath("boot2docker.iso")
	return d
}
```

- In the init function, register a `DriverDef` in the registry, specifying the metadata in the `DriverDef`. As mentioned earlier, this driver is builtin, so you also need to specify `DriverCreator` to tell minikube how to create a `drivers.Driver`.
- Another important thing is that `vmwarefusion` only runs on macOS. You need to add a build tag on top so it only builds on macOS, so that the releases on Windows and Linux won't have this driver in the registry.
- Last but not least, import the driver in `pkg/minikube/cluster/default_drivers.go` to include it in the build.

## Summary

In summary, the process includes the following steps:

1. Add the driver under `k8s.io/minikube/pkg/minikube/drivers`
   - Add a build tag for the supported operating system
   - Define the driver metadata in `DriverDef`
2. Add the import in `pkg/minikube/cluster/default_drivers.go`

Any questions: please ping your friend [@anfernee](https://github.com/anfernee)

This document has moved to https://minikube.sigs.k8s.io/docs/contributing/drivers/

@@ -1,114 +1 @@

# Build Guide

## Build Requirements

* A recent Go distribution (>=1.12)
* If you're not on Linux, you'll need a Docker installation
* minikube requires at least 4GB of RAM to compile, which can be problematic when using docker-machine

### Prerequisites for different GNU/Linux distributions

#### Fedora

On Fedora you need to install _glibc-static_:

```shell
$ sudo dnf install -y glibc-static
```

### Building from Source

Clone and build minikube:

```shell
$ git clone https://github.com/kubernetes/minikube.git
$ cd minikube
$ make
```

Note: Make sure that you uninstall any previous versions of minikube before building from source.

### Building from Source in Docker (using Debian stretch image with golang)

Clone minikube:

```shell
$ git clone https://github.com/kubernetes/minikube.git
```

Build (cross compile for Linux / OS X and Windows) using make:

```shell
$ cd minikube
$ MINIKUBE_BUILD_IN_DOCKER=y make cross
```

Check the "out" directory:

```shell
$ ls out/
minikube-darwin-amd64 minikube-linux-amd64 minikube-windows-amd64.exe
```

You can also build platform-specific executables as below:

1. `make windows` will build the binary for the Windows platform
2. `make linux` will build the binary for the Linux platform
3. `make darwin` will build the binary for the Darwin/Mac platform

### Run Instructions

Start the cluster using your built minikube with:

```shell
$ ./out/minikube start
```

## Running Tests

### Unit Tests

Unit tests are run on Travis before code is merged. To run them as part of a development cycle:

```shell
make test
```

### Integration Tests

Integration tests are currently run manually.
To run them, build the binary and run the tests:

```shell
make integration
```

You may find it useful to set various options to test only a particular test against a non-default driver. For instance:

```shell
env TEST_ARGS="-minikube-start-args=--vm-driver=hyperkit -test.run TestStartStop" make integration
```

### Conformance Tests

These are Kubernetes tests that run against an arbitrary cluster and exercise a wide range of Kubernetes features.
You can run these against minikube by following these steps:

* Clone the Kubernetes repo somewhere on your system.
* Run `make quick-release` in the k8s repo.
* Start up a minikube cluster with: `minikube start`.
* Set the following two environment variables:

```shell
export KUBECONFIG=$HOME/.kube/config
export KUBERNETES_CONFORMANCE_TEST=y
```

* Run the tests (from the k8s repo):

```shell
go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\]" --check-version-skew=false
```

To run a specific conformance test, you can use the `ginkgo.focus` flag to filter the set using a regular expression.
The `hack/e2e.go` wrapper and the `e2e.sh` wrappers have a little trouble with quoting spaces though, so use the `\s` regular expression character instead.
For example, to run the test `should update annotations on modification [Conformance]`, use the following command:

```shell
go run hack/e2e.go -v --test --test_args="--ginkgo.focus=should\supdate\sannotations\son\smodification" --check-version-skew=false
```

This document has moved to https://minikube.sigs.k8s.io/docs/contributing/building/

@@ -1,11 +1 @@

# CI Builds

We publish CI builds of minikube, built at every Pull Request. Builds are available at (substitute in the relevant PR number):

- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-darwin-amd64>
- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-linux-amd64>
- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-windows-amd64.exe>

We also publish CI builds of minikube-iso, built at every Pull Request that touches deploy/iso/minikube-iso. Builds are available at:

- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-testing.iso>

This document has moved to https://minikube.sigs.k8s.io/docs/contributing/building/

@@ -1,78 +1 @@

# minikube ISO image

This includes the configuration for an alternative bootable ISO image meant to be used in conjunction with minikube.

It includes:

- systemd as the init system
- docker
- CRI-O

## Hacking

### Requirements

* Linux

```shell
sudo apt-get install build-essential gnupg2 p7zip-full git wget cpio python \
    unzip bc gcc-multilib automake libtool locales
```

Either import your private key or generate a sign-only key using `gpg2 --gen-key`.
Also be sure to have a UTF-8 locale set up in order to build the ISO.

### Build instructions

```shell
$ git clone https://github.com/kubernetes/minikube.git
$ cd minikube
$ make buildroot-image
$ make out/minikube.iso
```

The build will occur inside a docker container. If you want to do this on baremetal, replace `make out/minikube.iso` with `IN_DOCKER=1 make out/minikube.iso`.
The bootable ISO image will be available in `out/minikube.iso`.

### Testing local minikube-iso changes

```shell
$ ./out/minikube start --iso-url=file://$(pwd)/out/minikube.iso
```

### Buildroot configuration

To change the buildroot configuration, execute:

```shell
$ cd out/buildroot
$ make menuconfig
$ make
```

To save any buildroot configuration changes made with `make menuconfig`, execute:

```shell
$ cd out/buildroot
$ make savedefconfig
```

The changes will be reflected in the `minikube-iso/configs/minikube_defconfig` file.

```shell
$ git status
## master
 M deploy/iso/minikube-iso/configs/minikube_defconfig
```

### Saving buildroot/kernel configuration changes

To make any kernel configuration changes and save them, execute:

```shell
$ make linux-menuconfig
```

This will open the kernel configuration menu, and then save your changes to our iso directory after they've been selected.

This document has moved to https://minikube.sigs.k8s.io/docs/contributing/iso/

@@ -1,25 +1 @@

# Principles of Minikube

The primary goal of minikube is to make it simple to run Kubernetes locally, for day-to-day development workflows and learning purposes. Here are the guiding principles for minikube, in rough priority order:

1. User-friendly and accessible
2. Inclusive and community-driven
3. Cross-platform
4. Support all Kubernetes features
5. High-fidelity
6. Compatible with all supported Kubernetes releases
7. Support for all Kubernetes-friendly container runtimes
8. Stable and easy to debug

Here are some specific minikube features that align with our goal:

* Single command setup and teardown UX
* Support for local storage, networking, auto-scaling, load balancing, etc.
* Unified UX across operating systems
* Minimal dependencies on third party software
* Minimal resource overhead

## Non-Goals

* Simplifying Kubernetes production deployment experience
* Supporting all possible deployment configurations of Kubernetes, such as storage, networking, etc.

This document has moved to https://minikube.sigs.k8s.io/docs/concepts/principles/

@@ -1,109 +1 @@

# Steps to Release Minikube

## Preparation

* Announce release intent on #minikube
* Pause merge requests so that they are not accidentally left out of the ISO or release notes

## Build a new ISO

Major releases always get a new ISO. Minor bugfixes may or may not require it: check for changes in the `deploy/iso` folder.
To check, run `git log -- deploy/iso` from the root directory and see if there has been a commit since the most recent release.

Note: you can build the ISO using the `hack/jenkins/build_iso.sh` script locally.

* Navigate to the minikube ISO jenkins job
* Ensure that you are logged in (top right)
* Click "▶️ Build with Parameters" (left)
* For `ISO_VERSION`, type in the intended release version (same as the minikube binary's version)
* For `ISO_BUCKET`, type in `minikube/iso`
* Click *Build*

The build will take roughly 50 minutes.

## Update Makefile

Edit the minikube `Makefile`, updating the version number values at the top:

* `VERSION_MAJOR`, `VERSION_MINOR`, `VERSION_BUILD` as necessary
* `ISO_VERSION` - defaults to MAJOR.MINOR.0 - update if the point release requires a new ISO to be built.

Make sure the integration tests run against this PR, once the new ISO is built.

## Ad-Hoc testing of other platforms

If there are supported platforms which do not have functioning Jenkins workers (Windows), you may use the following to build a sanity check:

```shell
env BUILD_IN_DOCKER=y make cross checksum
```

## Send out Makefile PR

Once submitted, HEAD will use the new ISO. Please pay attention to test failures, as this is our integration test across platforms. If there are known acceptable failures, please add a PR comment linking to the appropriate issue.

## Update Release Notes

Run the following script to update the release notes:

```shell
hack/release_notes.sh
```

Merge the output into CHANGELOG.md. See [PR#3175](https://github.com/kubernetes/minikube/pull/3175) as an example. Then get the PR submitted.

## Tag the Release

```shell
sh hack/tag_release.sh 1.<minor>.<patch>
```

## Build the Release

This step uses the git tag to publish new binaries to GCS and create a github release:

* Navigate to the minikube "Release" jenkins job
* Ensure that you are logged in (top right)
* Click "▶️ Build with Parameters" (left)
* `VERSION_MAJOR`, `VERSION_MINOR`, and `VERSION_BUILD` should reflect the values in your Makefile
* For `ISO_SHA256`, run: `gsutil cat gs://minikube/iso/minikube-v<version>.iso.sha256`
* Click *Build*

## Check the release logs

After job completion, click "Console Output" to verify that the release completed without errors. This is typically where one will see brew automation fail, for instance.

## Check releases.json

This file is used for auto-update notifications, but is not active until releases.json is copied to GCS.

minikube-bot will send out a PR to update the release checksums at the top of `deploy/minikube/releases.json`. You should merge this PR.

## Package managers which include minikube

These are downstream packages maintained by others, along with how to upgrade them so that they have the latest version:

| Package Manager | URL | TODO |
| --- | --- | --- |
| Arch Linux AUR | <https://aur.archlinux.org/packages/minikube-bin/> | "Flag as package out-of-date" |
| Brew Cask | <https://github.com/Homebrew/homebrew-cask/blob/master/Casks/minikube.rb> | The release job creates a new PR in [Homebrew/homebrew-cask](https://github.com/Homebrew/homebrew-cask) with an updated version and SHA256, double check that it's created. |

WARNING: The Brew Cask automation is error-prone. Please ensure that a PR was created.

## Verification

Verify release checksums by running `make check-release`

## Update minikube frontpage

We document the last 3 releases on our frontpage. Please add it to the list: <https://github.com/kubernetes/minikube/blob/master/README.md>

## Update official Kubernetes docs

If there are major changes, please send a PR to update <https://kubernetes.io/docs/setup/minikube/>

## Announce!

- #minikube on Slack
- minikube-dev, minikube-users mailing list
- Twitter (optional!)

This document has moved to https://minikube.sigs.k8s.io/docs/contributing/releasing/

@@ -1,50 +1 @@

# minikube roadmap (2019)

This roadmap is a living document outlining the major technical improvements which we would like to see in minikube during 2019, divided by how they apply to our [guiding principles](principles.md).

Please send a PR to suggest any improvements to it.

## (#1) User-friendly and accessible

- [ ] Creation of a user-centric minikube website for installation & documentation [#4388](https://github.com/kubernetes/minikube/issues/4388)
- [ ] Localized output to 5+ written languages [#4186](https://github.com/kubernetes/minikube/issues/4186) [#4185](https://github.com/kubernetes/minikube/issues/4185)
- [x] Make minikube usable in environments with challenging connectivity requirements
- [ ] Support lightweight deployment methods for environments where VMs are impractical [#4389](https://github.com/kubernetes/minikube/issues/4389) [#4390](https://github.com/kubernetes/minikube/issues/4390)
- [x] Add offline support

## (#2) Inclusive and community-driven

- [x] Increase community involvement in planning and decision making
- [ ] Make the continuous integration and release infrastructure publicly available [#3256](https://github.com/kubernetes/minikube/issues/4390)
- [x] Double the number of active maintainers

## (#3) Cross-platform

- [ ] Users should never need to separately install supporting binaries [#3975](https://github.com/kubernetes/minikube/issues/3975) [#4391](https://github.com/kubernetes/minikube/issues/4391)
- [ ] Simplified installation process across all supported platforms

## (#4) Support all Kubernetes features

- [ ] Add multi-node support [#94](https://github.com/kubernetes/minikube/issues/94)

## (#5) High-fidelity

- [ ] Reduce guest VM overhead by 50% [#3207](https://github.com/kubernetes/minikube/issues/3207)
- [x] Disable swap in the guest VM

## (#6) Compatible with all supported Kubernetes releases

- [x] Continuous Integration testing across all supported Kubernetes releases
- [ ] Automatic PR generation for updating the default Kubernetes release minikube uses [#4392](https://github.com/kubernetes/minikube/issues/4392)

## (#7) Support for all Kubernetes-friendly container runtimes

- [x] Run all integration tests across all supported container runtimes
- [ ] Support for Kata Containers [#4347](https://github.com/kubernetes/minikube/issues/4347)

## (#8) Stable and easy to debug

- [x] Pre-flight error checks for common connectivity and configuration errors
- [ ] Improve the `minikube status` command so that it can diagnose common issues
- [ ] Mark all features not covered by continuous integration as `experimental`
- [x] Stabilize and improve profiles support (AKA multi-cluster)

This document has moved to https://minikube.sigs.k8s.io/docs/contributing/roadmap/

@@ -1,37 +1 @@

# Dashboard

Minikube supports the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard) out of the box.

## Accessing the UI

To access the dashboard:

```shell
minikube dashboard
```

This will enable the dashboard add-on, and open the proxy in the default web browser.

To stop the proxy (leaves the dashboard running), abort the started process (`Ctrl+C`).

## Individual steps

If the automatic command doesn't work for you for some reason, here are the steps:

```console
$ minikube addons enable dashboard
✅  dashboard was successfully enabled
```

If you have your kubernetes client configured for minikube, you can start the proxy:

```console
$ kubectl --context minikube proxy
Starting to serve on 127.0.0.1:8001
```

Access the dashboard at:

<http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/>

For additional information, see [this page](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/).

This document has moved to https://minikube.sigs.k8s.io/docs/tasks/dashboard/
@ -1,17 +1 @@
|
|||
# Debugging Issues With Minikube
|
||||
|
||||
To debug issues with minikube (not *Kubernetes* but **minikube** itself), you can use the `-v` flag to see debug level info. The specified values for `-v` will do the following (the values are all encompassing in that higher values will give you all lower value outputs as well):
|
||||
|
||||
* `--v=0` will output **INFO** level logs
|
||||
* `--v=1` will output **WARNING** level logs
|
||||
* `--v=2` will output **ERROR** level logs
|
||||
* `--v=3` will output *libmachine* logging
|
||||
* `--v=7` will output *libmachine --debug* level logging
|
||||
|
||||
Example:
|
||||
`minikube start --v=1` will start minikube and output all warnings to stdout.
|
||||
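The verbosity levels above can also be combined with `--alsologtostderr` (as used in the driver troubleshooting section) to print the logs directly to the terminal; a small sketch:

```shell
# Start minikube with libmachine --debug level logging sent to stderr
minikube start --v=7 --alsologtostderr
```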
|
||||
If you need to access additional tools for debugging, minikube also includes the [CoreOS toolbox](https://github.com/coreos/toolbox)
|
||||
|
||||
You can ssh into the toolbox and access these additional commands using:
|
||||
`minikube ssh toolbox`
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/tasks/debug/
|
||||
|
|
264
docs/drivers.md
|
@ -1,263 +1 @@
|
|||
# VM Driver plugin installation
|
||||
|
||||
Minikube uses Docker Machine to manage the Kubernetes VM so it benefits from the
|
||||
driver plugin architecture that Docker Machine uses to provide a consistent way to
|
||||
manage various VM providers. Minikube embeds VirtualBox and VMware Fusion drivers
|
||||
so there are no additional steps to use them. However, other drivers require an
|
||||
extra binary to be present in the host PATH.
|
||||
|
||||
The following drivers currently require driver plugin binaries to be present in
|
||||
the host PATH:
|
||||
|
||||
* [KVM2](#kvm2-driver)
|
||||
* [Hyperkit](#hyperkit-driver)
|
||||
* [Hyper-V](#hyper-v-driver)
|
||||
* [VMware](#vmware-unified-driver)
|
||||
* [Parallels](#parallels-driver)
|
||||
|
||||
## KVM2 driver
|
||||
|
||||
### KVM2 install
|
||||
|
||||
To install the KVM2 driver, first install and configure the prerequisites, namely libvirt 1.3.1 or higher, and qemu-kvm:
|
||||
|
||||
* Debian or Ubuntu 18.x: `sudo apt install libvirt-clients libvirt-daemon-system qemu-kvm`
|
||||
* Ubuntu 16.x or older: `sudo apt install libvirt-bin libvirt-daemon-system qemu-kvm`
|
||||
* Fedora/CentOS/RHEL: `sudo yum install libvirt libvirt-daemon-kvm qemu-kvm`
|
||||
* openSUSE/SLES: `sudo zypper install libvirt qemu-kvm`
|
||||
|
||||
Check your installed virsh version:
|
||||
|
||||
`virsh --version`
|
||||
|
||||
If your version of virsh is 1.3.1 (January 2016) or newer, you may download our pre-built driver:
|
||||
|
||||
```shell
|
||||
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
|
||||
&& sudo install docker-machine-driver-kvm2 /usr/local/bin/
|
||||
```
|
||||
|
||||
If your version of virsh is older than 1.3.1 (January 2016), you may build your own driver binary if you have go 1.12+ installed.
|
||||
|
||||
```console
|
||||
$ sudo apt install libvirt-dev
|
||||
$ git clone https://github.com/kubernetes/minikube.git
|
||||
$ cd minikube
|
||||
$ make out/docker-machine-driver-kvm2
|
||||
$ sudo install out/docker-machine-driver-kvm2 /usr/local/bin
|
||||
$
|
||||
```
|
||||
|
||||
To finish the kvm installation, start and verify the `libvirtd` service:
|
||||
|
||||
```shell
|
||||
sudo systemctl enable libvirtd.service
|
||||
sudo systemctl start libvirtd.service
|
||||
sudo systemctl status libvirtd.service
|
||||
```
|
||||
|
||||
Add your user to the `libvirt` group (older distributions may use `libvirtd` instead):
|
||||
|
||||
```shell
|
||||
sudo usermod -a -G libvirt $(whoami)
|
||||
```
|
||||
|
||||
Join the `libvirt` group with your current shell session:
|
||||
|
||||
```shell
|
||||
newgrp libvirt
|
||||
```
|
||||
|
||||
To use the kvm2 driver:
|
||||
|
||||
```shell
|
||||
minikube start --vm-driver kvm2
|
||||
```
|
||||
|
||||
or, to use kvm2 as a default driver for `minikube start`:
|
||||
|
||||
```shell
|
||||
minikube config set vm-driver kvm2
|
||||
```
|
||||
|
||||
### KVM2 upgrade
|
||||
|
||||
```shell
|
||||
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
|
||||
&& sudo install docker-machine-driver-kvm2 /usr/local/bin/
|
||||
```
|
||||
|
||||
### KVM2 troubleshoot
|
||||
|
||||
If minikube can't start, check if the kvm default network exists.
|
||||
|
||||
```shell
|
||||
virsh net-list
|
||||
Name State Autostart Persistent
|
||||
----------------------------------------------------------
|
||||
default active yes yes
|
||||
```
|
||||
|
||||
In case the default network doesn't exist, you can define it:
|
||||
|
||||
```shell
|
||||
curl https://raw.githubusercontent.com/libvirt/libvirt/master/src/network/default.xml > kvm-default.xml
|
||||
virsh net-define kvm-default.xml
|
||||
virsh net-start default
|
||||
```
|
||||
|
||||
Make sure you are running the latest version of your driver.
|
||||
|
||||
```shell
|
||||
docker-machine-driver-kvm2 version
|
||||
```
|
||||
|
||||
## Hyperkit driver
|
||||
|
||||
Install the [hyperkit](http://github.com/moby/hyperkit) VM manager using [brew](https://brew.sh):
|
||||
|
||||
```shell
|
||||
brew install hyperkit
|
||||
```
|
||||
|
||||
Then install the most recent version of minikube's fork of the hyperkit driver:
|
||||
|
||||
```shell
|
||||
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-hyperkit \
|
||||
&& sudo install -o root -g wheel -m 4755 docker-machine-driver-hyperkit /usr/local/bin/
|
||||
```
|
||||
|
||||
If you are using [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html) in your setup and cluster creation fails (stuck at kube-dns initialization) you might need to add `listen-address=192.168.64.1` to `dnsmasq.conf`.
|
||||
|
||||
*Note: If `dnsmasq.conf` contains `listen-address=127.0.0.1`, Kubernetes discovers DNS at 127.0.0.1:53 and tries to reach it via the bridge IP address, but dnsmasq replies only to requests from 127.0.0.1.*
|
||||
|
||||
To use the driver:
|
||||
|
||||
```shell
|
||||
minikube start --vm-driver hyperkit
|
||||
```
|
||||
|
||||
or, to use hyperkit as a default driver for minikube:
|
||||
|
||||
```shell
|
||||
minikube config set vm-driver hyperkit
|
||||
```
|
||||
|
||||
### Hyperkit upgrade
|
||||
|
||||
```shell
|
||||
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-hyperkit \
|
||||
&& sudo install -o root -g wheel -m 4755 docker-machine-driver-hyperkit /usr/local/bin/
|
||||
```
|
||||
|
||||
### Hyperkit troubleshoot
|
||||
|
||||
Make sure you are running the latest version of your driver.
|
||||
|
||||
```shell
|
||||
docker-machine-driver-hyperkit version
|
||||
```
|
||||
|
||||
## Hyper-V driver
|
||||
|
||||
Hyper-V users will need to create a new external network switch as described [here](https://docs.docker.com/machine/drivers/hyper-v/). This step may prevent a problem in which `minikube start` hangs indefinitely, unable to ssh into the minikube virtual machine. In this case, add the `--hyperv-virtual-switch=switch-name` argument to the `minikube start` command.
|
||||
|
||||
Older Hyper-V VMs may have **dynamic memory management** enabled, which can cause unexpected and random restarts. This manifests as simply losing the connection to the cluster, after which `minikube status` reports `stopped`. **Solution**: run `minikube delete` to delete the old VM.
|
||||
|
||||
To use the driver:
|
||||
|
||||
```shell
|
||||
minikube start --vm-driver hyperv --hyperv-virtual-switch=switch-name
|
||||
```
|
||||
|
||||
or, to use hyperv as a default driver:
|
||||
|
||||
```shell
|
||||
minikube config set vm-driver hyperv && minikube config set hyperv-virtual-switch switch-name
|
||||
```
|
||||
|
||||
and run minikube as usual:
|
||||
|
||||
```shell
|
||||
minikube start
|
||||
```
|
||||
|
||||
## VMware unified driver
|
||||
|
||||
The VMware unified driver will eventually replace the existing vmwarefusion driver.
|
||||
The new unified driver supports both VMware Fusion (on macOS) and VMware Workstation (on Linux and Windows)
|
||||
|
||||
To install the vmware unified driver, head over to <https://github.com/machine-drivers/docker-machine-driver-vmware/releases> and download the release for your operating system.
|
||||
|
||||
The driver must be:
|
||||
|
||||
1. Stored in `$PATH`
|
||||
2. Named `docker-machine-driver-vmware`
|
||||
3. Executable (`chmod +x` on UNIX based platforms)
|
||||
|
||||
If you're running on macOS with Fusion, this is an easy way to install the driver:
|
||||
|
||||
```shell
|
||||
export LATEST_VERSION=$(curl -L -s -H 'Accept: application/json' https://github.com/machine-drivers/docker-machine-driver-vmware/releases/latest | sed -e 's/.*"tag_name":"\([^"]*\)".*/\1/') \
|
||||
&& curl -L -o docker-machine-driver-vmware https://github.com/machine-drivers/docker-machine-driver-vmware/releases/download/$LATEST_VERSION/docker-machine-driver-vmware_darwin_amd64 \
|
||||
&& chmod +x docker-machine-driver-vmware \
|
||||
&& mv docker-machine-driver-vmware /usr/local/bin/
|
||||
```
|
||||
|
||||
To use the driver:
|
||||
|
||||
```shell
|
||||
minikube start --vm-driver vmware
|
||||
```
|
||||
|
||||
or, to use vmware unified driver as a default driver:
|
||||
|
||||
```shell
|
||||
minikube config set vm-driver vmware
|
||||
```
|
||||
|
||||
and run minikube as usual:
|
||||
|
||||
```shell
|
||||
minikube start
|
||||
```
|
||||
|
||||
## Parallels driver
|
||||
|
||||
This driver is useful for users who own Parallels Desktop for Mac and whose machines do not have the VT-x hardware support required by the hyperkit driver.
|
||||
|
||||
Prerequisites: Parallels Desktop for Mac
|
||||
|
||||
Install the [Parallels docker-machine driver](https://github.com/Parallels/docker-machine-parallels) using [brew](https://brew.sh):
|
||||
|
||||
```shell
|
||||
brew install docker-machine-parallels
|
||||
```
|
||||
|
||||
To use the driver:
|
||||
|
||||
```shell
|
||||
minikube start --vm-driver parallels
|
||||
```
|
||||
|
||||
or, to use parallels as a default driver for minikube:
|
||||
|
||||
```shell
|
||||
minikube config set vm-driver parallels
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
minikube is currently unable to display the error message received back from the VM driver. Users can however reveal the error by passing `--alsologtostderr -v=8` to `minikube start`. For instance:
|
||||
|
||||
```shell
|
||||
minikube start --vm-driver=kvm2 --alsologtostderr -v=8
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```text
|
||||
Found binary path at /usr/local/bin/docker-machine-driver-kvm2
|
||||
Launching plugin server for driver kvm2
|
||||
Error starting plugin binary: fork/exec /usr/local/bin/docker-machine-driver-kvm2: exec format error
|
||||
```
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/reference/drivers/
|
||||
|
|
|
@ -1,59 +1 @@
|
|||
|
||||
# minikube Environment Variables
|
||||
|
||||
## Config option variables
|
||||
|
||||
minikube supports passing environment variables instead of flags for every value listed in `minikube config list`. This is done by passing an environment variable with the prefix `MINIKUBE_`.
|
||||
|
||||
For example the `minikube start --iso-url="$ISO_URL"` flag can also be set by setting the `MINIKUBE_ISO_URL="$ISO_URL"` environment variable.
|
||||
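As a concrete sketch of this equivalence (assuming `$ISO_URL` is already defined in your shell):

```shell
# These two invocations are equivalent:
minikube start --iso-url="$ISO_URL"

export MINIKUBE_ISO_URL="$ISO_URL"
minikube start
```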
|
||||
## Other variables
|
||||
|
||||
Some features can only be accessed via environment variables; here is a list of these features:
|
||||
|
||||
* **MINIKUBE_HOME** - (string) sets the path for the .minikube directory that minikube uses for state/configuration
|
||||
|
||||
* **MINIKUBE_IN_STYLE** - (bool) manually sets whether or not emoji and colors should appear in minikube. Set to false or 0 to disable this feature, true or 1 to force it to be turned on.
|
||||
|
||||
* **MINIKUBE_WANTUPDATENOTIFICATION** - (bool) sets whether the user wants an update notification for new minikube versions
|
||||
|
||||
* **MINIKUBE_REMINDERWAITPERIODINHOURS** - (int) sets the number of hours to check for an update notification
|
||||
|
||||
* **CHANGE_MINIKUBE_NONE_USER** - (bool) automatically change ownership of ~/.minikube to the value of $SUDO_USER
|
||||
|
||||
* **MINIKUBE_ENABLE_PROFILING** - (int, `1` enables it) enables trace profiling to be generated for minikube
|
||||
|
||||
## Making these values permanent
|
||||
|
||||
To make the exported variables permanent:
|
||||
|
||||
* Linux and macOS: Add these declarations to `~/.bashrc` or wherever your shell's environment variables are stored (see the sketch after this list).
|
||||
* Windows: Add these declarations via [system settings](https://support.microsoft.com/en-au/help/310519/how-to-manage-environment-variables-in-windows-xp) or using [setx](https://stackoverflow.com/questions/5898131/set-a-persistent-environment-variable-from-cmd-exe)
|
||||
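A minimal sketch for Linux/macOS, using the emoji toggle from the example below:

```shell
# Persist MINIKUBE_IN_STYLE=false for future shell sessions
echo 'export MINIKUBE_IN_STYLE=false' >> ~/.bashrc
source ~/.bashrc
```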
|
||||
### Example: Disabling emoji
|
||||
|
||||
```shell
|
||||
export MINIKUBE_IN_STYLE=false
|
||||
minikube start
|
||||
```
|
||||
|
||||
### Example: Profiling
|
||||
|
||||
```shell
|
||||
MINIKUBE_ENABLE_PROFILING=1 minikube start
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
``` text
|
||||
2017/01/09 13:18:00 profile: cpu profiling enabled, /tmp/profile933201292/cpu.pprof
|
||||
Starting local Kubernetes cluster...
|
||||
Kubectl is now configured to use the cluster.
|
||||
2017/01/09 13:19:06 profile: cpu profiling disabled, /tmp/profile933201292/cpu.pprof
|
||||
```
|
||||
|
||||
Examine the cpu profiling results:
|
||||
|
||||
```shell
|
||||
go tool pprof /tmp/profile933201292/cpu.pprof
|
||||
```
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/reference/environment_variables
|
||||
|
|
122
docs/gpu.md
|
@ -1,121 +1 @@
|
|||
# (Experimental) NVIDIA GPU support in minikube
|
||||
|
||||
minikube has experimental support for using NVIDIA GPUs on Linux.
|
||||
|
||||
## Using NVIDIA GPUs on minikube on Linux with `--vm-driver=kvm2`
|
||||
|
||||
When using NVIDIA GPUs with the kvm2 vm-driver, we pass through spare GPUs on the
|
||||
host to the minikube VM. Doing so has a few prerequisites:
|
||||
|
||||
- You must install the [kvm2 driver](drivers.md#kvm2-driver). If you already had
|
||||
this installed make sure that you fetch the latest
|
||||
`docker-machine-driver-kvm2` binary that has GPU support.
|
||||
|
||||
- Your CPU must support IOMMU. Different vendors have different names for this
|
||||
technology. Intel calls it Intel VT-d. AMD calls it AMD-Vi. Your motherboard
|
||||
must also support IOMMU.
|
||||
|
||||
- You must enable IOMMU in the kernel: add `intel_iommu=on` or `amd_iommu=on`
|
||||
(depending on your CPU vendor) to the kernel command line. Also add `iommu=pt`
|
||||
to the kernel command line.
|
||||
|
||||
- You must have spare GPUs that are not used on the host and can be passed through
|
||||
to the VM. These GPUs must not be controlled by the nvidia/nouveau driver. You
|
||||
can ensure this by either not loading the nvidia/nouveau driver on the host at
|
||||
all or assigning the spare GPU devices to stub kernel modules like `vfio-pci`
|
||||
or `pci-stub` at boot time. You can do that by adding the
|
||||
[vendorId:deviceId](https://pci-ids.ucw.cz/read/PC/10de) of your spare GPU to
|
||||
the kernel command line. For example, for a Quadro M4000, add `pci-stub.ids=10de:13f1`
|
||||
to the kernel command line. Note that you will have to do this for all GPUs
|
||||
you want to passthrough to the VM and all other devices that are in the IOMMU
|
||||
group of these GPUs.
|
||||
|
||||
- Once you reboot the system after doing the above, you should be ready to use
|
||||
GPUs with kvm2. Run the following command to start minikube:
|
||||
```shell
|
||||
minikube start --vm-driver kvm2 --gpu
|
||||
```
|
||||
|
||||
This command will check if all the above conditions are satisfied and
|
||||
pass through spare GPUs found on the host to the VM.
|
||||
|
||||
If this succeeded, run the following commands:
|
||||
```shell
|
||||
minikube addons enable nvidia-gpu-device-plugin
|
||||
minikube addons enable nvidia-driver-installer
|
||||
```
|
||||
|
||||
This will install the NVIDIA driver (that works for GeForce/Quadro cards)
|
||||
on the VM.
|
||||
|
||||
- If everything succeeded, you should be able to see `nvidia.com/gpu` in the
|
||||
capacity:
|
||||
```shell
|
||||
kubectl get nodes -ojson | jq .items[].status.capacity
|
||||
```
|
||||
|
||||
### Where can I learn more about GPU passthrough?
|
||||
|
||||
See the excellent documentation at
|
||||
<https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF>
|
||||
|
||||
### Why are so many manual steps required to use GPUs with kvm2 on minikube?
|
||||
|
||||
These steps require elevated privileges which minikube doesn't run with and they
|
||||
are disruptive to the host, so we decided to not do them automatically.
|
||||
|
||||
## Using NVIDIA GPU on minikube on Linux with `--vm-driver=none`
|
||||
|
||||
NOTE: The approach used to expose GPUs here is different from the approach used
|
||||
to expose GPUs with `--vm-driver=kvm2`. Please don't mix these instructions.
|
||||
|
||||
- Install minikube.
|
||||
|
||||
- Install the nvidia driver, nvidia-docker and configure docker with nvidia as
|
||||
the default runtime. See instructions at
|
||||
<https://github.com/NVIDIA/nvidia-docker>
|
||||
|
||||
- Start minikube:
|
||||
```shell
|
||||
minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost
|
||||
```
|
||||
|
||||
- Install NVIDIA's device plugin:
|
||||
```shell
|
||||
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.10/nvidia-device-plugin.yml
|
||||
```
|
||||
|
||||
## Why does minikube not support NVIDIA GPUs on macOS?
|
||||
|
||||
The VM drivers supported by minikube for macOS don't support GPU passthrough:
|
||||
|
||||
- [mist64/xhyve#108](https://github.com/mist64/xhyve/issues/108)
|
||||
- [moby/hyperkit#159](https://github.com/moby/hyperkit/issues/159)
|
||||
- [VirtualBox docs](http://www.virtualbox.org/manual/ch09.html#pcipassthrough)
|
||||
|
||||
Also:
|
||||
|
||||
- For quite a while, all Mac hardware (both laptops and desktops) has come with
|
||||
Intel or AMD GPUs (and not with NVIDIA GPUs). Recently, Apple added [support
|
||||
for eGPUs](https://support.apple.com/en-us/HT208544), but even then all the
|
||||
supported GPUs listed are AMD’s.
|
||||
|
||||
- nvidia-docker [doesn't support
|
||||
macOS](https://github.com/NVIDIA/nvidia-docker/issues/101) either.
|
||||
|
||||
## Why does minikube not support NVIDIA GPUs on Windows?
|
||||
|
||||
minikube supports Windows hosts through Hyper-V or VirtualBox.
|
||||
|
||||
- VirtualBox doesn't support PCI passthrough for [Windows
|
||||
host](http://www.virtualbox.org/manual/ch09.html#pcipassthrough).
|
||||
|
||||
- Hyper-V supports DDA (discrete device assignment) but [only for Windows Server
|
||||
2016](https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment)
|
||||
|
||||
Since the only possibility of supporting GPUs on minikube on Windows is on a
|
||||
server OS where users don't usually run minikube, we haven't invested time in
|
||||
trying to support NVIDIA GPUs on minikube on Windows.
|
||||
|
||||
Also, nvidia-docker [doesn't support
|
||||
Windows](https://github.com/NVIDIA/nvidia-docker/issues/197) either.
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/tutorials/nvidia_gpu/
|
||||
|
|
|
@ -1,76 +1 @@
|
|||
# Mounting Host Folders
|
||||
|
||||
`minikube mount /path/to/dir/to/mount:/vm-mount-path` is the recommended way to mount directories into minikube so that they can be used in your local Kubernetes cluster. The command works on all supported platforms. Below is an example workflow for using `minikube mount`:
|
||||
|
||||
```shell
|
||||
# terminal 1
|
||||
$ mkdir ~/mount-dir
|
||||
$ minikube mount ~/mount-dir:/mount-9p
|
||||
Mounting /home/user/mount-dir/ into /mount-9p on the minikubeVM
|
||||
This daemon process needs to stay alive for the mount to still be accessible...
|
||||
ufs starting
|
||||
# This process has to stay open, so in another terminal...
|
||||
```
|
||||
|
||||
```shell
|
||||
# terminal 2
|
||||
$ echo "hello from host" > ~/mount-dir/hello-from-host
|
||||
$ kubectl run -i --rm --tty ubuntu --overrides='
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Pod",
|
||||
"metadata": {
|
||||
"name": "ubuntu"
|
||||
},
|
||||
"spec": {
|
||||
"containers": [
|
||||
{
|
||||
"name": "ubuntu",
|
||||
"image": "ubuntu:14.04",
|
||||
"args": [
|
||||
"bash"
|
||||
],
|
||||
"stdin": true,
|
||||
"stdinOnce": true,
|
||||
"tty": true,
|
||||
"workingDir": "/mount-9p",
|
||||
"volumeMounts": [{
|
||||
"mountPath": "/mount-9p",
|
||||
"name": "host-mount"
|
||||
}]
|
||||
}
|
||||
],
|
||||
"volumes": [
|
||||
{
|
||||
"name": "host-mount",
|
||||
"hostPath": {
|
||||
"path": "/mount-9p"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
' --image=ubuntu:14.04 --restart=Never -- bash
|
||||
|
||||
Waiting for pod default/ubuntu to be running, status is Pending, pod ready: false
|
||||
Waiting for pod default/ubuntu to be running, status is Running, pod ready: false
|
||||
# ======================================================================================
|
||||
# We are now in the pod
|
||||
#=======================================================================================
|
||||
root@ubuntu:/mount-9p# cat hello-from-host
|
||||
hello from host
|
||||
root@ubuntu:/mount-9p# echo "hello from pod" > /mount-9p/hello-from-pod
|
||||
root@ubuntu:/mount-9p# ls
|
||||
hello-from-host hello-from-pod
|
||||
root@ubuntu:/mount-9p# exit
|
||||
exit
|
||||
Waiting for pod default/ubuntu to terminate, status is Running
|
||||
pod "ubuntu" deleted
|
||||
# ======================================================================================
|
||||
# We are back on the host
|
||||
#=======================================================================================
|
||||
$ cat ~/mount-dir/hello-from-pod
|
||||
hello from pod
|
||||
```
|
||||
|
||||
Some drivers themselves provide host-folder sharing options, but we plan to deprecate these in the future as they are all implemented differently and they are not configurable through minikube.
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/tasks/mount/
|
||||
|
|
|
@ -1,102 +1 @@
|
|||
# minikube: Using HTTP/HTTPS proxies
|
||||
|
||||
minikube requires access to the internet via HTTP, HTTPS, and DNS protocols. If an HTTP proxy is required to access the internet, you may need to pass the proxy connection information to both minikube and Docker using environment variables:
|
||||
|
||||
* `HTTP_PROXY` - The URL to your HTTP proxy
|
||||
* `HTTPS_PROXY` - The URL to your HTTPS proxy
|
||||
* `NO_PROXY` - A comma-separated list of hosts which should not go through the proxy.
|
||||
|
||||
The NO_PROXY variable here is important: Without setting it, minikube may not be able to access resources within the VM. minikube uses the following IP ranges, which should not go through the proxy:
|
||||
|
||||
* **192.168.99.0/24**: Used by the minikube VM. Configurable for some hypervisors via `--host-only-cidr`
|
||||
* **192.168.39.0/24**: Used by the minikube kvm2 driver.
|
||||
* **10.96.0.0/12**: Used by service cluster IP's. Configurable via `--service-cluster-ip-range`
|
||||
|
||||
One important note: If NO_PROXY is required by non-Kubernetes applications, such as Firefox or Chrome, you may want to specifically add the minikube IP to the comma-separated list, as they may not understand IP ranges ([#3827](https://github.com/kubernetes/minikube/issues/3827)).
|
||||
|
||||
## Example Usage
|
||||
|
||||
### macOS and Linux
|
||||
|
||||
```shell
|
||||
export HTTP_PROXY=http://<proxy hostname:port>
|
||||
export HTTPS_PROXY=https://<proxy hostname:port>
|
||||
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
|
||||
|
||||
minikube start
|
||||
```
|
||||
|
||||
To make the exported variables permanent, consider adding the declarations to ~/.bashrc or wherever your user-set environment variables are stored.
|
||||
|
||||
### Windows
|
||||
|
||||
```shell
|
||||
set HTTP_PROXY=http://<proxy hostname:port>
|
||||
set HTTPS_PROXY=https://<proxy hostname:port>
|
||||
set NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
|
||||
|
||||
minikube start
|
||||
```
|
||||
|
||||
To set these environment variables permanently, consider adding these to your [system settings](https://support.microsoft.com/en-au/help/310519/how-to-manage-environment-variables-in-windows-xp) or using [setx](https://stackoverflow.com/questions/5898131/set-a-persistent-environment-variable-from-cmd-exe)
|
||||
|
||||
## Configuring Docker to use a proxy
|
||||
|
||||
As of v1.0, minikube automatically configures the Docker instance inside of the VM to use the proxy environment variables, unless you have specified a `--docker-env` override. If you need to manually configure Docker for a set of proxies, use:
|
||||
|
||||
```shell
|
||||
minikube start \
|
||||
--docker-env=HTTP_PROXY=$HTTP_PROXY \
|
||||
--docker-env HTTPS_PROXY=$HTTPS_PROXY \
|
||||
--docker-env NO_PROXY=$NO_PROXY
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### unable to cache ISO... connection refused
|
||||
|
||||
```text
|
||||
Unable to start VM: unable to cache ISO: https://storage.googleapis.com/minikube/iso/minikube.iso:
|
||||
failed to download: failed to download to temp file: download failed: 5 error(s) occurred:
|
||||
|
||||
* Temporary download error: Get https://storage.googleapis.com/minikube/iso/minikube.iso:
|
||||
proxyconnect tcp: dial tcp <host>:<port>: connect: connection refused
|
||||
```
|
||||
|
||||
This error indicates that the host:port combination defined by HTTPS_PROXY or HTTP_PROXY is incorrect, or that the proxy is unavailable.
|
||||
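A quick, hedged sanity check from the host (assumes `HTTPS_PROXY` is exported as shown above):

```shell
# Ask curl to fetch a known-good URL through the proxy; a failure here points at the proxy itself
curl -x "$HTTPS_PROXY" -I https://storage.googleapis.com/
```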
|
||||
### Unable to pull images..Client.Timeout exceeded while awaiting headers
|
||||
|
||||
```text
|
||||
Unable to pull images, which may be OK:
|
||||
|
||||
failed to pull image "k8s.gcr.io/kube-apiserver:v1.13.3": output: Error response from daemon:
|
||||
Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection
|
||||
(Client.Timeout exceeded while awaiting headers)
|
||||
```
|
||||
|
||||
This error indicates that the container runtime running within the VM does not have access to the internet. Verify that you are passing the appropriate value to `--docker-env HTTPS_PROXY`.
|
||||
|
||||
### x509: certificate signed by unknown authority
|
||||
|
||||
```text
|
||||
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.3:
|
||||
output: Error response from daemon:
|
||||
Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
|
||||
```
|
||||
|
||||
This is because the minikube VM is stuck behind a proxy that rewrites HTTPS responses to contain its own TLS certificate. The [solution](https://github.com/kubernetes/minikube/issues/3613#issuecomment-461034222) is to install the proxy certificate into a location that is copied to the VM at startup, so that it can be validated.
|
||||
|
||||
Ask your IT department for the appropriate PEM file, and add it to:
|
||||
|
||||
`~/.minikube/files/etc/ssl/certs`
|
||||
|
||||
Then run `minikube delete` and `minikube start`.
|
||||
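A minimal sketch of these steps (the PEM filename is a placeholder for whatever your IT department provides):

```shell
mkdir -p ~/.minikube/files/etc/ssl/certs
cp proxy-ca.pem ~/.minikube/files/etc/ssl/certs/
minikube delete
minikube start
```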
|
||||
### downloading binaries: proxyconnect tcp: tls: oversized record received with length 20527
|
||||
|
||||
You need to set a correct `HTTPS_PROXY` value.
|
||||
|
||||
## Additional Information
|
||||
|
||||
* [Configure Docker to use a proxy server](https://docs.docker.com/network/proxy/)
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
|
||||
|
|
|
@ -1,28 +1 @@
|
|||
# Enabling Docker Insecure Registry
|
||||
|
||||
Minikube allows users to configure the docker engine's `--insecure-registry` flag. You can use the `--insecure-registry` flag on the
|
||||
`minikube start` command to enable insecure communication between the docker engine and registries listening to requests from the CIDR range.
|
||||
|
||||
One nifty hack is to allow the kubelet running in minikube to talk to registries deployed inside a pod in the cluster without backing them
|
||||
with TLS certificates. Because the default service cluster IP is known to be available at 10.0.0.1, users can pull images from registries
|
||||
deployed inside the cluster by creating the cluster with `minikube start --insecure-registry "10.0.0.0/24"`.
|
||||
|
||||
## Private Container Registries
|
||||
|
||||
**GCR/ECR/Docker**: Minikube has an addon, `registry-creds` which maps credentials into Minikube to support pulling from Google Container Registry (GCR), Amazon's EC2 Container Registry (ECR), and Private Docker registries. You will need to run `minikube addons configure registry-creds` and `minikube addons enable registry-creds` to get up and running. An example of this is below:
|
||||
|
||||
```shell
|
||||
$ minikube addons configure registry-creds
|
||||
Do you want to enable AWS Elastic Container Registry? [y/n]: n
|
||||
|
||||
Do you want to enable Google Container Registry? [y/n]: y
|
||||
-- Enter path to credentials (e.g. /home/user/.config/gcloud/application_default_credentials.json):/home/user/.config/gcloud/application_default_credentials.json
|
||||
|
||||
Do you want to enable Docker Registry? [y/n]: n
|
||||
registry-creds was successfully configured
|
||||
$ minikube addons enable registry-creds
|
||||
```
|
||||
|
||||
For additional information on private container registries, see [this page](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).
|
||||
|
||||
We recommend you use _ImagePullSecrets_, but if you would like to configure access on the minikube VM you can place the `.dockercfg` in the `/home/docker` directory or the `config.json` in the `/var/lib/kubelet` directory. Make sure to restart your kubelet (for kubeadm) process with `sudo systemctl restart kubelet`.
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/tasks/registry/
|
||||
|
|
|
@ -1,78 +1,2 @@
|
|||
# Networking
|
||||
|
||||
## Firewalls, VPN's, and proxies
|
||||
|
||||
minikube may require access from the host to the following IP ranges: 192.168.99.0/24, 192.168.39.0/24, and 10.96.0.0/12. These networks can be changed in minikube using `--host-only-cidr` and `--service-cluster-ip-range`.
|
||||
|
||||
* To use minikube with a proxy, see [Using HTTP/HTTPS proxies](http_proxy.md).
|
||||
|
||||
* If you are using minikube with a VPN, you may need to configure the VPN to allow local routing for traffic to the aforementioned IP ranges.
|
||||
|
||||
* If you are using minikube with a local firewall, you will need to allow access from the host to the aforementioned IP ranges on TCP ports 22 and 8443. You will also need to add access from these IPs to TCP ports 443 and 53 externally to pull images.
|
||||
|
||||
## Access to NodePort services
|
||||
|
||||
The minikube VM is exposed to the host system via a host-only IP address, which can be obtained with the `minikube ip` command. Any services of type `NodePort` can be accessed over that IP address, on the NodePort.
|
||||
|
||||
To determine the NodePort for your service, you can use a `kubectl` command like this (note that `nodePort` begins with lowercase `n` in JSON output):
|
||||
|
||||
`kubectl get service $SERVICE --output='jsonpath="{.spec.ports[0].nodePort}"'`
|
||||
|
||||
We also have a shortcut for fetching the minikube IP and a service's `NodePort`:
|
||||
|
||||
`minikube service --url $SERVICE`
|
||||
|
||||
### Increasing the NodePort range
|
||||
|
||||
By default, minikube only exposes ports 30000-32767. If this is not enough, you can configure the apiserver to allow all ports using:
|
||||
|
||||
`minikube start --extra-config=apiserver.service-node-port-range=1-65535`
|
||||
|
||||
This flag also accepts a comma separated list of ports and port ranges.
|
||||
|
||||
## Access to LoadBalancer services using `minikube tunnel`
|
||||
|
||||
Services of type `LoadBalancer` can be exposed via the `minikube tunnel` command.
|
||||
|
||||
````shell
|
||||
minikube tunnel
|
||||
````
|
||||
|
||||
This will output something like:
|
||||
|
||||
```text
|
||||
out/minikube tunnel
|
||||
Password: *****
|
||||
Status:
|
||||
machine: minikube
|
||||
pid: 59088
|
||||
route: 10.96.0.0/12 -> 192.168.99.101
|
||||
minikube: Running
|
||||
services: []
|
||||
errors:
|
||||
minikube: no errors
|
||||
router: no errors
|
||||
loadbalancer emulator: no errors
|
||||
|
||||
|
||||
````
|
||||
|
||||
The tunnel might ask you for a password when creating and deleting network routes.
|
||||
|
||||
## Cleaning up orphaned routes
|
||||
|
||||
If the `minikube tunnel` shuts down in an unclean way, it might leave a network route around.
|
||||
In this case, the ~/.minikube/tunnels.json file will contain an entry for that tunnel.
|
||||
To clean up orphaned routes, run:
|
||||
|
||||
````shell
|
||||
minikube tunnel --cleanup
|
||||
````
|
||||
|
||||
## Tunnel: Avoid entering password multiple times
|
||||
|
||||
`minikube tunnel` runs as a separate daemon and creates a network route on the host to the service CIDR of the cluster, using the cluster's IP address as a gateway. Adding a route requires root privileges for the user, and thus there are differences in how to run `minikube tunnel` depending on the OS.
|
||||
|
||||
If you want to avoid entering the root password, consider setting NOPASSWD for "ip" and "route" commands:
|
||||
|
||||
<https://superuser.com/questions/1328452/sudoers-nopasswd-for-single-executable-but-allowing-others>
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/reference/networking/
|
||||
|
|
@ -1,40 +1 @@
|
|||
# Offline support in minikube
|
||||
|
||||
As of v1.0, `minikube start` is offline compatible out of the box. Here are some implementation details to help systems integrators:
|
||||
|
||||
## Requirements
|
||||
|
||||
* On the initial run for a given Kubernetes release, `minikube start` must have access to the internet, or a configured `--image-repository` to pull from (see the sketch below).
|
||||
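For air-gapped environments with a local registry mirror, that flag can be used roughly like this (the repository value is a placeholder for your own mirror):

```shell
minikube start --image-repository=<your-registry-mirror>
```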
|
||||
## Cache location
|
||||
|
||||
* `~/.minikube/cache` - Top-level folder
|
||||
* `~/.minikube/cache/iso` - VM ISO image. Typically updated once per major minikube release.
|
||||
* `~/.minikube/cache/images` - Docker images used by Kubernetes.
|
||||
* `~/.minikube/cache/<version>` - Kubernetes binaries, such as `kubeadm` and `kubelet`
|
||||
|
||||
## Sharing the minikube cache
|
||||
|
||||
For offline use on other hosts, one can copy the contents of `~/.minikube/cache`. As of the v1.0 release, this directory
|
||||
contains 685MB of data:
|
||||
|
||||
```text
|
||||
cache/iso/minikube-v1.0.0.iso
|
||||
cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
|
||||
cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13
|
||||
cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13
|
||||
cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
|
||||
cache/images/k8s.gcr.io/kube-scheduler_v1.14.0
|
||||
cache/images/k8s.gcr.io/coredns_1.3.1
|
||||
cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0
|
||||
cache/images/k8s.gcr.io/kube-apiserver_v1.14.0
|
||||
cache/images/k8s.gcr.io/pause_3.1
|
||||
cache/images/k8s.gcr.io/etcd_3.3.10
|
||||
cache/images/k8s.gcr.io/kube-addon-manager_v9.0
|
||||
cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
|
||||
cache/images/k8s.gcr.io/kube-proxy_v1.14.0
|
||||
cache/v1.14.0/kubeadm
|
||||
cache/v1.14.0/kubelet
|
||||
```
|
||||
|
||||
If any of these files exist, minikube will copy them into the VM directly rather than pulling them from the internet.
|
||||
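A hedged sketch of sharing the cache with another host (hostname and user are placeholders; any file-copy tool works):

```shell
rsync -av ~/.minikube/cache/ user@other-host:~/.minikube/cache/
```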
This document has moved to https://minikube.sigs.k8s.io/docs/reference/cache/
|
||||
|
|
|
@ -1,33 +1 @@
|
|||
# OpenID Connect Authentication
|
||||
|
||||
Minikube `kube-apiserver` can be configured to support OpenID Connect Authentication.
|
||||
|
||||
Read more about OpenID Connect Authentication for Kubernetes here: <https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens>
|
||||
|
||||
## Configuring the API Server
|
||||
|
||||
Configuration values can be passed to the API server using the `--extra-config` flag on the `minikube start` command. See [configuring_kubernetes.md](https://github.com/kubernetes/minikube/blob/master/docs/configuring_kubernetes.md) for more details.
|
||||
|
||||
The following example configures your Minikube cluster to support RBAC and OIDC:
|
||||
|
||||
```shell
|
||||
minikube start \
|
||||
--extra-config=apiserver.authorization-mode=RBAC \
|
||||
--extra-config=apiserver.oidc-issuer-url=https://example.com \
|
||||
--extra-config=apiserver.oidc-username-claim=email \
|
||||
--extra-config=apiserver.oidc-client-id=kubernetes-local
|
||||
```
|
||||
|
||||
## Configuring kubectl
|
||||
|
||||
You can use the kubectl `oidc` authenticator to create a kubeconfig as shown in the Kubernetes docs: <https://kubernetes.io/docs/reference/access-authn-authz/authentication/#option-1-oidc-authenticator>
|
||||
|
||||
`minikube start` already creates a kubeconfig that includes a `cluster`; in order to use it with your `oidc` authenticator kubeconfig, you can run:
|
||||
|
||||
```shell
|
||||
kubectl config set-context kubernetes-local-oidc --cluster=minikube --user username@example.com
|
||||
Context "kubernetes-local-oidc" created.
|
||||
kubectl config use-context kubernetes-local-oidc
|
||||
```
|
||||
|
||||
For the new context to work you will need to create, at the very minimum, a `Role` and a `RoleBinding` in your cluster to grant permissions to the `subjects` included in your `oidc-username-claim`.
|
||||
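As a minimal sketch (not part of the original guide), the following grants the example user read access to pods in the `default` namespace; adjust the subject name to match your `oidc-username-claim` value:

```shell
kubectl --context minikube apply -f - <<EOF
# Role/RoleBinding sketch: pod read-only access for the OIDC user
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: username@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```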
This document has moved to https://minikube.sigs.k8s.io/docs/tutorials/openid_connect_auth/
|
||||
|
|
|
@ -1,39 +1 @@
|
|||
# Persistent Volumes
|
||||
|
||||
Minikube supports [PersistentVolumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) of type `hostPath` out of the box. These PersistentVolumes are mapped to a directory inside the running Minikube instance (usually a VM, unless you use `--vm-driver=none`). For more information on how this works, read the Dynamic Provisioning section below.
|
||||
|
||||
## A note on mounts, persistence, and Minikube hosts
|
||||
|
||||
Minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
|
||||
|
||||
* `/data`
|
||||
* `/var/lib/minikube`
|
||||
* `/var/lib/docker`
|
||||
* `/tmp/hostpath_pv`
|
||||
* `/tmp/hostpath-provisioner`
|
||||
|
||||
Here is an example PersistentVolume config to persist data in the '/data' directory:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolume
|
||||
metadata:
|
||||
name: pv0001
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
capacity:
|
||||
storage: 5Gi
|
||||
hostPath:
|
||||
path: /data/pv0001/
|
||||
```
|
||||
|
||||
You can also achieve persistence by creating a PV in a mounted host folder.
|
||||
|
||||
## Dynamic provisioning and CSI
|
||||
|
||||
In addition, minikube implements a very simple, canonical implementation of a dynamic storage controller that runs alongside its deployment. This manages provisioning of *hostPath* volumes (rather than via the previous, in-tree hostPath provider).
|
||||
|
||||
The default [Storage Provisioner Controller](https://github.com/kubernetes/minikube/blob/master/pkg/storage/storage_provisioner.go) is managed internally, in the minikube codebase, demonstrating how easy it is to plug a custom storage controller into Kubernetes as a storage component of the system, and provides pods with dynamically provisioned storage so you can test your pod's behaviour when persistent storage is mapped to it.
|
||||
|
||||
NOTE: this is not a CSI based storage provider. It simply declares an appropriate PersistentVolume in response to an incoming storage request.
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/reference/persistent_volumes/
|
||||
|
|
|
@ -1,77 +1 @@
|
|||
# Reusing the Docker daemon
|
||||
|
||||
## Method 1: Without minikube registry addon
|
||||
|
||||
When using a single VM for Kubernetes, it's really handy to reuse the Docker daemon inside the VM, as this means you don't have to build on your host machine and push the image into a docker registry; you can just build inside the same docker daemon as minikube, which speeds up local experiments.
|
||||
|
||||
To be able to work with the docker daemon on your Mac/Linux host, use the `docker-env` command in your shell:
|
||||
|
||||
```shell
|
||||
eval $(minikube docker-env)
|
||||
```
|
||||
|
||||
You should now be able to use docker on the command line on your host mac/linux machine talking to the docker daemon inside the minikube VM:
|
||||
|
||||
```shell
|
||||
docker ps
|
||||
```
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
On CentOS 7, Docker may report the following error:
|
||||
|
||||
```shell
|
||||
Could not read CA certificate "/etc/docker/ca.pem": open /etc/docker/ca.pem: no such file or directory
|
||||
```
|
||||
|
||||
The fix is to update /etc/sysconfig/docker to ensure that minikube's environment changes are respected:
|
||||
|
||||
```diff
|
||||
< DOCKER_CERT_PATH=/etc/docker
|
||||
---
|
||||
> if [ -z "${DOCKER_CERT_PATH}" ]; then
|
||||
> DOCKER_CERT_PATH=/etc/docker
|
||||
> fi
|
||||
```
|
||||
|
||||
Remember to turn off the _imagePullPolicy: Always_ setting, as otherwise Kubernetes won't use images you built locally (see the sketch below).
|
||||
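A hedged end-to-end sketch (the image name and Dockerfile location are placeholders):

```shell
eval $(minikube docker-env)
# Build directly against the Docker daemon inside the minikube VM
docker build -t my-image:dev .
# Use a pull policy other than Always so the locally built image is used
kubectl run my-app --image=my-image:dev --image-pull-policy=IfNotPresent
```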
|
||||
## Method 2: With minikube registry addon
|
||||
|
||||
Enable the minikube registry addon and then push images directly into the registry. The steps are as follows:
|
||||
|
||||
For illustration purposes, we will assume that the minikube VM has an IP from the `192.168.39.0/24` subnet. If you have not overridden these subnets as per the [networking guide](https://github.com/kubernetes/minikube/blob/master/docs/networking.md), you can find the default subnet used by minikube for a specific OS and driver combination [here](https://github.com/kubernetes/minikube/blob/dfd9b6b83d0ca2eeab55588a16032688bc26c348/pkg/minikube/cluster/cluster.go#L408), which is subject to change. Replace `192.168.39.0/24` with appropriate values for your environment wherever applicable.
|
||||
|
||||
Ensure that docker is configured to use `192.168.39.0/24` as an insecure registry. Refer [here](https://docs.docker.com/registry/insecure/) for instructions.
|
||||
|
||||
Ensure that `192.168.39.0/24` is enabled as an insecure registry in minikube. Refer [here](https://github.com/kubernetes/minikube/blob/master/docs/insecure_registry.md) for instructions.
|
||||
|
||||
Enable minikube registry addon:
|
||||
|
||||
```shell
|
||||
minikube addons enable registry
|
||||
```
|
||||
|
||||
Build docker image and tag it appropriately:
|
||||
|
||||
```shell
|
||||
docker build --tag $(minikube ip):5000/test-img .
|
||||
```
|
||||
|
||||
Push docker image to minikube registry:
|
||||
|
||||
```shell
|
||||
docker push $(minikube ip):5000/test-img
|
||||
```
|
||||
|
||||
Now run it in minikube:
|
||||
|
||||
```shell
|
||||
kubectl run test-img --image=$(minikube ip):5000/test-img
|
||||
```
|
||||
|
||||
Or if `192.168.39.0/24` is not enabled as insecure registry in minikube, then:
|
||||
|
||||
```shell
|
||||
kubectl run test-img --image=localhost:5000/test-img
|
||||
```
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/tasks/docker_daemon/
|
||||
|
|
|
@ -1,15 +1 @@
|
|||
# minikube: Syncing files into the VM
|
||||
|
||||
## Syncing files during start up
|
||||
|
||||
As soon as a VM is created, minikube will populate the root filesystem with any files stored in the `files` directory under `$MINIKUBE_HOME` (`~/.minikube/files`).
|
||||
|
||||
For example, running the following commands will result in `/etc/OMG` being added with the contents of `hello` into the minikube VM:
|
||||
|
||||
```shell
|
||||
mkdir -p ~/.minikube/files/etc
|
||||
echo hello > ~/.minikube/files/etc/OMG
|
||||
minikube start
|
||||
```
|
||||
|
||||
This method of file synchronization can be useful for adding configuration files for apiserver, or adding HTTPS certificates.
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/tasks/sync/
|
||||
|
|
145
docs/tunnel.md
|
@ -1,144 +1 @@
|
|||
# Minikube Tunnel Design Doc
|
||||
|
||||
## Background
|
||||
|
||||
Minikube today only exposes a single IP address for all cluster and VM communication.
|
||||
This effectively requires users to connect to any running Pods, Services or LoadBalancers over ClusterIPs, which can require modifications to workflows when compared to developing against a production cluster.
|
||||
|
||||
A main goal of Minikube is to minimize the differences required in code and configuration between development and production, so this is not ideal.
|
||||
If all cluster IP addresses and Load Balancers were made available on the minikube host machine, these modifications would not be necessary and users would get the "magic" experience of developing from inside a cluster.
|
||||
|
||||
Tools like telepresence.io, sshuttle, and the OpenVPN chart provide similar capabilities already.
|
||||
|
||||
Also, Steve Sloka has provided a very detailed guide on how to setup a similar configuration [manually](https://stevesloka.com/access-minikube-service-from-linux-host/).
|
||||
|
||||
Elson Rodriguez has provided a similar guide, including a Minikube [external LB controller](https://github.com/elsonrodriguez/minikube-lb-patch).
|
||||
|
||||
## Example usage
|
||||
|
||||
```shell
|
||||
$ minikube tunnel
|
||||
Starting minikube tunnel process. Press Ctrl+C to exit.
|
||||
All cluster IPs and load balancers are now available from your host machine.
|
||||
```
|
||||
|
||||
## Overview
|
||||
|
||||
We will introduce a new command, `minikube tunnel`, that must be run with root permissions.
|
||||
This command will:
|
||||
|
||||
* Establish networking routes from the host into the VM for all IP ranges used by Kubernetes.
|
||||
* Enable a cluster controller that allocates IPs to services external `LoadBalancer` IPs.
|
||||
* Clean up routes and IPs when stopped, or when `minikube` stops.
|
||||
|
||||
Additionally, we will introduce a Minikube LoadBalancer controller that manages a CIDR of IPs and assigns them to services of type `LoadBalancer`.
|
||||
These IPs will also be made available on the host machine.
|
||||
|
||||
## Network Routes
|
||||
|
||||
Minikube drivers usually establish "host-only" IP addresses (192.168.1.1, for example) that route into the running VM
|
||||
from the host.
|
||||
|
||||
The new `minikube tunnel` command will create a static routing table entry that maps the CIDRs used by Pods, Services and LoadBalancers to the host-only IP, obtainable via the `minikube ip` command.
|
||||
|
||||
The commands below detail adding routes for the entire `/8` block; we should probably add individual entries for each CIDR we manage instead.
|
||||
|
||||
### Linux
|
||||
|
||||
Route entries for the entire 10.* block can be added via:
|
||||
|
||||
```shell
|
||||
sudo ip route add 10.0.0.0/8 via $(minikube ip)
|
||||
```
|
||||
|
||||
and deleted via:
|
||||
|
||||
```shell
|
||||
sudo ip route delete 10.0.0.0/8
|
||||
```
|
||||
|
||||
The routing table can be queried with `netstat -nr -f inet`
|
||||
|
||||
### OSX
|
||||
|
||||
Route entries can be added via:
|
||||
|
||||
```shell
|
||||
sudo route -n add 10.0.0.0/8 $(minikube ip)
|
||||
```
|
||||
|
||||
and deleted via:
|
||||
|
||||
```shell
|
||||
sudo route -n delete 10.0.0.0/8
|
||||
|
||||
```
|
||||
|
||||
The routing table can be queried with `netstat -nr -f inet`
|
||||
|
||||
### Windows
|
||||
|
||||
Route entries can be added via:
|
||||
|
||||
```shell
|
||||
route ADD 10.0.0.0 MASK 255.0.0.0 <minikube ip>
|
||||
```
|
||||
|
||||
and deleted via:
|
||||
|
||||
```shell
|
||||
route DELETE 10.0.0.0
|
||||
```
|
||||
|
||||
The routing table can be queried with `route print -4`
|
||||
|
||||
### Handling unclean shutdowns
|
||||
|
||||
Unclean shutdowns of the tunnel process can result in a partially executed cleanup process, leaving network routes in the routing table.
|
||||
We will keep track of the routes created by each tunnel in a centralized location in the main minikube config directory.
|
||||
This list serves as a registry for tunnels, containing information about:
|
||||
|
||||
- machine profile
|
||||
- process ID
|
||||
- and the route that was created
|
||||
|
||||
The cleanup command cleans the routes from both the routing table and the registry for tunnels that are not running:
|
||||
|
||||
```shell
|
||||
minikube tunnel --cleanup
|
||||
```
|
||||
|
||||
Updating the tunnel registry and the routing table is an atomic transaction:
|
||||
|
||||
- create route in the routing table + create registry entry if both are successful, otherwise rollback
|
||||
- delete route in the routing table + remove registry entry if both are successful, otherwise rollback
|
||||
|
||||
*Note*: because we don't currently support a real multi-cluster setup (due to overlapping CIDRs), the handling of running/not-running processes is not strictly required; however, it is forward looking.
|
||||
|
||||
### Handling routing table conflicts
|
||||
|
||||
A routing table conflict happens when a destination CIDR of the route required by the tunnel overlaps with an existing route.
|
||||
Minikube tunnel will warn the user if this happens and should not create the rule.
|
||||
There should not be any automated removal of conflicting routes.
|
||||
|
||||
*Note*: If the user removes the minikube config directory, this might leave conflicting rules in the network routing table that will have to be cleaned up manually.
|
||||
|
||||
## Load Balancer Controller
|
||||
|
||||
In addition to making IPs routable, minikube tunnel will assign an external IP (the ClusterIP) to all services of type `LoadBalancer`.
|
||||
|
||||
The logic of this controller will be, roughly:
|
||||
|
||||
```python
|
||||
while True:
    for service in services:
        if service.type == "LoadBalancer" and len(service.ingress) == 0:
            add_ip_to_service(ClusterIP, service)
    sleep(poll_interval)
|
||||
```
|
||||
|
||||
Note that the Minikube ClusterIP can change over time (during system reboots) and this loop should also handle reconciliation of those changes.
|
||||
|
||||
## Handling multiple clusters
|
||||
|
||||
Multiple clusters are currently not supported due to our inability to specify ServiceCIDR.
|
||||
This causes conflicting routes having the same destination CIDR.
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/tasks/loadbalancer/
|
||||
|
|
|
@ -1,119 +1 @@
|
|||
# vm-driver=none
|
||||
|
||||
## Overview
|
||||
|
||||
This document is written for system integrators who are familiar with minikube, and wish to run it within a customized VM environment.
|
||||
|
||||
The `none` driver allows advanced minikube users to skip VM creation, allowing minikube to be run on a user-supplied VM.
|
||||
|
||||
## What operating systems are supported?
|
||||
|
||||
The `none` driver supports releases of Debian, Ubuntu, and Fedora that are less than 2 years old. In practice, any systemd-based modern distribution is likely to work, and we will accept pull requests which improve compatibility with other systems.
|
||||
|
||||
## Example: basic usage
|
||||
|
||||
`sudo minikube start --vm-driver=none`
|
||||
|
||||
NOTE: The none driver requires minikube to be run as root, until [#3760](https://github.com/kubernetes/minikube/issues/3760) can be addressed.
|
||||
|
||||
## Example: Using minikube for continuous integration testing
|
||||
|
||||
Most continuous integration environments are already running inside a VM, and may not support nested virtualization. The `none` driver was designed for this use case. Here is an example that runs minikube as a non-root user and ensures that the latest stable kubectl is installed:
|
||||
|
||||
```shell
|
||||
curl -Lo minikube \
|
||||
https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
|
||||
&& sudo install minikube /usr/local/bin/
|
||||
|
||||
kv=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
|
||||
curl -Lo kubectl \
|
||||
https://storage.googleapis.com/kubernetes-release/release/$kv/bin/linux/amd64/kubectl \
|
||||
&& sudo install kubectl /usr/local/bin/
|
||||
|
||||
export MINIKUBE_WANTUPDATENOTIFICATION=false
|
||||
export MINIKUBE_WANTREPORTERRORPROMPT=false
|
||||
export MINIKUBE_HOME=$HOME
|
||||
export CHANGE_MINIKUBE_NONE_USER=true
|
||||
export KUBECONFIG=$HOME/.kube/config
|
||||
|
||||
mkdir -p $HOME/.kube $HOME/.minikube
|
||||
touch $KUBECONFIG
|
||||
|
||||
sudo -E minikube start --vm-driver=none
|
||||
```
|
||||
|
||||
At this point, kubectl should be able to interact with the minikube cluster.
|
||||
|
||||
## Can the none driver be used outside of a VM?
|
||||
|
||||
Yes, *but please avoid doing so if at all possible.*
|
||||
|
||||
minikube was designed to run Kubernetes within a dedicated VM, and assumes that it has complete control over the machine it is executing on. With the `none` driver, minikube and Kubernetes run in an environment with very limited isolation, which could result in:
|
||||
|
||||
* Decreased security
|
||||
* Decreased reliability
|
||||
* Data loss
|
||||
|
||||
We'll cover these in detail below:
|
||||
|
||||
### Decreased security
|
||||
|
||||
* minikube starts services that may be available on the Internet. Please ensure that you have a firewall to protect your host from unexpected access. For instance:
|
||||
* apiserver listens on TCP *:8443
|
||||
* kubelet listens on TCP *:10250 and *:10255
|
||||
* kube-scheduler listens on TCP *:10259
|
||||
* kube-controller listens on TCP *:10257
|
||||
* Containers may have full access to your filesystem.
|
||||
* Containers may be able to execute arbitrary code on your host, by using container escape vulnerabilities such as [CVE-2019-5736](https://access.redhat.com/security/vulnerabilities/runcescape). Please keep your release of minikube up to date.
|
||||
|
||||
### Decreased reliability
|
||||
|
||||
* minikube with the none driver may be tricky to configure correctly at first, because there are many more chances for interference with other locally run services, such as dnsmasq.
|
||||
|
||||
* When run in `none` mode, minikube has no built-in resource limit mechanism, which means you could deploy pods which would consume all of the host's resources.
|
||||
|
||||
* minikube and the Kubernetes services it starts may interfere with other running software on the system. For instance, minikube will start and stop container runtimes via systemd, such as docker, containerd, cri-o.
|
||||
|
||||
### Data loss
|
||||
|
||||
With the `none` driver, minikube will overwrite the following system paths:
|
||||
|
||||
* /usr/bin/kubeadm - Updated to match the exact version of Kubernetes selected
|
||||
* /usr/bin/kubelet - Updated to match the exact version of Kubernetes selected
|
||||
* /etc/kubernetes - configuration files
|
||||
|
||||
These paths will be erased when running `minikube delete`:
|
||||
|
||||
* /data/minikube
|
||||
* /etc/kubernetes/manifests
|
||||
* /var/lib/minikube
|
||||
|
||||
As Kubernetes has full access to both your filesystem as well as your docker images, it is possible that other unexpected data loss issues may arise.
|
||||
|
||||
## Environment variables
|
||||
|
||||
Some environment variables may be useful for using the `none` driver:
|
||||
|
||||
* **CHANGE_MINIKUBE_NONE_USER**: Sets file ownership to the user running sudo ($SUDO_USER)
|
||||
* **MINIKUBE_HOME**: Saves all files to this directory instead of $HOME
|
||||
* **MINIKUBE_WANTUPDATENOTIFICATION**: Toggles the notification that your version of minikube is obsolete
|
||||
* **MINIKUBE_WANTREPORTERRORPROMPT**: Toggles the error reporting prompt
|
||||
* **MINIKUBE_IN_STYLE**: Toggles color output and emoji usage
|
||||
|
||||
## Known Issues
|
||||
|
||||
* `systemctl` is required. [#2704](https://github.com/kubernetes/minikube/issues/2704)
|
||||
* `-p` (profiles) are unsupported: It is not possible to run more than one `--vm-driver=none` instance
|
||||
* Many `minikube` commands are not supported, such as: `dashboard`, `mount`, `ssh`
|
||||
* minikube with the `none` driver has a confusing permissions model, as some commands need to be run as root ("start"), and others by a regular user ("dashboard")
|
||||
* CoreDNS detects resolver loop, goes into CrashloopBackoff - [#3511](https://github.com/kubernetes/minikube/issues/3511)
|
||||
* Some versions of Linux have a version of docker that is newer than what Kubernetes expects. To override this, run minikube with the following parameters: `sudo -E minikube start --vm-driver=none --kubernetes-version v1.11.8 --extra-config kubeadm.ignore-preflight-errors=SystemVerification`
|
||||
* On Ubuntu 18.04 (and probably others), because of how `systemd-resolve` is configured by default, one needs to bypass the default `resolv.conf` file and use a different one instead.
|
||||
- In this case, you should use this file: `/run/systemd/resolve/resolv.conf`
|
||||
- `sudo -E minikube --vm-driver=none start --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf`
|
||||
- Apperently, though, if `resolve.conf` is too big (about 10 lines!!!), one gets the following error: `Waiting for pods: apiserver proxy! Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition`
|
||||
- This error happens in Kubernetes 0.11.x, 0.12.x and 0.13.x, but *not* in 0.14.x
|
||||
- If that's your case, try this:
|
||||
- `grep -E "^nameserver" /run/systemd/resolve/resolv.conf |head -n 3 > /tmp/resolv.conf && sudo -E minikube --vm-driver=none start --extra-config=kubelet.resolv-conf=/tmp/resolv.conf`
|
||||
|
||||
* [Full list of open 'none' driver issues](https://github.com/kubernetes/minikube/labels/co%2Fnone-driver)
|
||||
This document has moved to https://minikube.sigs.k8s.io/docs/reference/drivers/none/
@ -1,10 +1,10 @@
---
title: "Proxy Support"
linkTitle: "Proxy Support"
title: "HTTP Proxies"
linkTitle: "HTTP Proxies"
weight: 6
date: 2017-01-05
description: >
  How to use an HTTP proxy with minikube
  How to use an HTTP/HTTPS proxy with minikube
---

minikube requires access to the internet via HTTP, HTTPS, and DNS protocols. If an HTTP proxy is required to access the internet, you may need to pass the proxy connection information to both minikube and Docker using environment variables:
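A minimal sketch of what those variables might look like (the proxy address and the bypassed subnet below are placeholders, not values from this document):

```shell
# Export the proxy settings before starting minikube so that both
# minikube and the Docker daemon inherit them.
export HTTP_PROXY=http://proxy.example.com:8080
export HTTPS_PROXY=https://proxy.example.com:8080
export NO_PROXY=localhost,127.0.0.1,192.168.99.0/24
minikube start
```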
@ -1,10 +1,10 @@
---
title: "VPN Support"
linkTitle: "VPN Support"
title: "Host VPN"
linkTitle: "Host VPN"
weight: 6
date: 2019-08-01
description: >
  How to use a VPN with minikube
  Using minikube on a host with a VPN installed
---

minikube requires access from the host to the following IP ranges:
@ -1,6 +1,6 @@
---
title: "Caching"
linkTitle: "Caching"
title: "Disk cache"
linkTitle: "Disk cache"
weight: 6
date: 2019-08-01
description: >
@ -14,32 +14,14 @@ minikube has built-in support for caching downloaded resources into `$MINIKUBE_H
* `~/.minikube/cache/images` - Docker images used by Kubernetes.
* `~/.minikube/cache/<version>` - Kubernetes binaries, such as `kubeadm` and `kubelet`

## Caching arbitrary Docker images

minikube supports caching arbitrary images using the `minikube cache` command. Cached images are stored in `$MINIKUBE_HOME/cache/images`, and loaded into the VM's container runtime on `minikube start`.

### Adding an image

```shell
minikube cache add ubuntu:16.04
```

### Listing images

```shell
minikube cache list
```

### Deleting an image

```shell
minikube cache delete <image name>
```

## Built-in Kubernetes image caching
## Kubernetes image cache

`minikube start` caches all required Kubernetes images by default. This default may be changed by setting `--cache-images=false`. These images are not displayed by the `minikube cache` command.
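For instance, a minimal sketch of disabling the built-in image caching via the flag named above:

```shell
minikube start --cache-images=false
```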

## Arbitrary docker image cache

See [Tasks: Caching images]({{< ref "/docs/tasks/caching.md" >}})

## Sharing the minikube cache

For offline use on other hosts, one can copy the contents of `~/.minikube/cache`. As of the v1.0 release, this directory contains 685MB of data:
@ -0,0 +1,44 @@
---
title: "Environment Variables"
linkTitle: "Environment Variables"
weight: 6
date: 2019-08-01
---

## Config option variables

minikube supports passing environment variables instead of flags for every value listed in `minikube config list`. This is done by passing an environment variable with the prefix `MINIKUBE_`.

For example, the `minikube start --iso-url="$ISO_URL"` flag can also be set by setting the `MINIKUBE_ISO_URL="$ISO_URL"` environment variable.
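A minimal sketch of that equivalence (the `$ISO_URL` value is a placeholder):

```shell
# These two invocations are equivalent:
minikube start --iso-url="$ISO_URL"
MINIKUBE_ISO_URL="$ISO_URL" minikube start
```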

## Other variables

Some features can only be accessed via environment variables. Here is a list of these features:

* **MINIKUBE_HOME** - (string) sets the path for the .minikube directory that minikube uses for state/configuration

* **MINIKUBE_IN_STYLE** - (bool) manually sets whether or not emoji and colors should appear in minikube. Set to false or 0 to disable this feature, true or 1 to force it to be turned on.

* **MINIKUBE_WANTUPDATENOTIFICATION** - (bool) sets whether the user wants an update notification for new minikube versions

* **MINIKUBE_REMINDERWAITPERIODINHOURS** - (int) sets the number of hours to wait between update notification checks

* **CHANGE_MINIKUBE_NONE_USER** - (bool) automatically change ownership of ~/.minikube to the value of $SUDO_USER

* **MINIKUBE_ENABLE_PROFILING** - (int, `1` enables it) enables trace profiling to be generated for minikube
|
||||
|
||||
|
||||
## Example: Disabling emoji
|
||||
|
||||
```shell
|
||||
export MINIKUBE_IN_STYLE=false
|
||||
minikube start
|
||||
```

## Making values persistent

To make the exported variables persistent across reboots:

* Linux and macOS: Add these declarations to `~/.bashrc` or wherever your shell's environment variables are stored (see the sketch below).
* Windows: Add these declarations via [system settings](https://support.microsoft.com/en-au/help/310519/how-to-manage-environment-variables-in-windows-xp) or using [setx](https://stackoverflow.com/questions/5898131/set-a-persistent-environment-variable-from-cmd-exe)
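A minimal sketch for the Linux/macOS case, reusing the emoji example above:

```shell
# Append the setting to ~/.bashrc so new shells pick it up automatically.
echo 'export MINIKUBE_IN_STYLE=false' >> ~/.bashrc
```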
@ -67,4 +67,4 @@ If you are already running minikube from inside a VM, it is possible to skip the
{{% /tab %}}
{{% /tabs %}}

{{% readfile file="/docs/Getting started/includes/post_install.inc" %}}
{{% readfile file="/docs/Start/includes/post_install.inc" %}}
@ -50,4 +50,4 @@ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin

{{% /tabs %}}

{{% readfile file="/docs/Getting started/includes/post_install.inc" %}}
{{% readfile file="/docs/Start/includes/post_install.inc" %}}
@ -61,4 +61,4 @@ Hyper-V Requirements: A hypervisor has been detected.
{{% /tab %}}
{{% /tabs %}}

{{% readfile file="/docs/Getting started/includes/post_install.inc" %}}
{{% readfile file="/docs/Start/includes/post_install.inc" %}}
@ -1,9 +1,9 @@
---
title: "Building within"
title: "Building images within minikube"
date: 2019-08-05
weight: 1
description: >
  Building images from within minikube
  Building images within minikube
---

When using a single VM for Kubernetes, it's really handy to build inside the VM, as this means you don't have to build on your host machine and push the image into a Docker registry; you can just build inside the same machine as minikube, which speeds up local experiments.
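A minimal sketch of that workflow, assuming the Docker runtime is in use (the image name is a placeholder):

```shell
# Point the local docker client at the Docker daemon inside minikube,
# then build the image directly where Kubernetes will run it.
eval $(minikube docker-env)
docker build -t my-image:dev .
```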
@ -0,0 +1,42 @@
---
title: "Caching images"
date: 2019-08-05
weight: 1
description: >
  How to cache arbitrary Docker images
---

## Overview

For offline use and performance reasons, minikube caches required Docker images onto the local file system. Developers may find it useful to add their own images to this cache for local development.

## Adding an image

To add the ubuntu 16.04 image to minikube's image cache:

```shell
minikube cache add ubuntu:16.04
```

The add command will store the requested image in `$MINIKUBE_HOME/cache/images`, and load it into the VM's container runtime environment the next time `minikube start` is called.

## Listing images

To display images you have added to the cache:

```shell
minikube cache list
```

This listing will not include the images that are built into minikube.

## Deleting an image

```shell
minikube cache delete <image name>
```
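For example, to remove the image that was added earlier in this document:

```shell
minikube cache delete ubuntu:16.04
```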

### Additional Information

* [Reference: Disk Cache]({{< ref "/docs/reference/disk_cache.md" >}})
* [Reference: cache command]({{< ref "/docs/reference/commands/cache.md" >}})