* fix documentation for gcp-auth addon
* make sure kube-system pods are up before enabling gcp-auth
* fix lint
* add failurePolicy for webhook
* only install addons if asked
* better comment
* slightly less hacky code
* defer addons properly
* simplify code for performance
Always use the architecture when building the image.
When developing locally, tag the image with the version
(this tag now refers to the local host architecture).
When making a release, make a manifest with the version
(this manifest now lists all supported architectures).
Don't use the architecture when specifying "image:".
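A rough sketch of the scheme with docker; the image name and tags here are hypothetical:
```
# build is always tagged with the architecture
docker build -t storage-provisioner:v1.8.1-amd64 .

# local development: the version tag points at the host architecture
docker tag storage-provisioner:v1.8.1-amd64 storage-provisioner:v1.8.1

# release: the version tag is a manifest listing all architectures
docker manifest create storage-provisioner:v1.8.1 \
    storage-provisioner:v1.8.1-amd64 storage-provisioner:v1.8.1-arm64
docker manifest push storage-provisioner:v1.8.1
```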
Create a new addon, `podsecuritypolicies` that applies the
PodSecurityPolicy and related RBAC configuration from the
https://minikube.sigs.k8s.io/docs/tutorials/using_psp/ tutorial.
Apparently, recent work on the addons system has invalidated the
procedure shown in that tutorial, as the configuration is no longer
automatically applied. The last known working version is `1.6.2`.
This allows clusters started with
`--extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy`
to succeed, so long as they also include `--addons=podsecuritypolicies`.
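For example:
```
minikube start \
    --extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy \
    --addons=podsecuritypolicies
```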
Problem: The current version of OLM shipped with Minikube as an addon
places memory limits on OLM and the pods created for CatalogSources.
If these limits exist, the operatorhubio catalog will almost certainly
hit an OOM issue.
Solution: Update OLM to a more recent version, specifically v0.15.1.
The most recent Fedora image doesn't include a `diff` binary.
This changes the container image to use Ubuntu Bionic instead,
which _does_ provide a diff binary.
- Service Account and binding to run the job
- Registry aliases ConfigMap
- Registry aliases daemonset to update the node's /etc/hosts
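A minimal sketch of the aliases ConfigMap; the names and alias entries below are hypothetical, not the addon's actual manifest:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: registry-aliases
  namespace: kube-system
data:
  # hypothetical alias hostnames that the daemonset writes into /etc/hosts
  registryAliases: |
    example.org
    example.com
```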
fixes: 4604
Signed-off-by: Kamesh Sampath <ksampath@redhat.com>
This fix contains a few things:
* Use k8s.gcr.io/debian-base-amd64:v2.0.0 as the base image to build storage-provisioner
* Change the RBAC permissions used to cluster-admin
* Build storage-provisioner as a static binary
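A static build along these lines; the exact flags and path are illustrative, not the Makefile's actual invocation:
```
# CGO_ENABLED=0 yields a statically linked binary
CGO_ENABLED=0 GOOS=linux go build \
    -o storage-provisioner cmd/storage-provisioner/main.go
```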
The link is broken because the config.toml is not using module.mounts.
To fix this, we need to add a [module] section pointing to the deploy/
folder, as the README.md files are inside that folder.
Put the different directories in separate module.mounts entries and upgrade
the hugo version to 0.59.0.
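Roughly, in config.toml (the paths here are illustrative):
```
[module]
  [[module.mounts]]
    source = "content"
    target = "content"
  [[module.mounts]]
    source = "../deploy/addons"
    target = "content/docs/addons"
```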
extensions/v1beta1 is deprecated and will no longer be served as of
Kubernetes 1.16.
For Deployment, DaemonSet and StatefulSet, the apps/v1 API has been available
since Kubernetes 1.9.
See following blog post for details:
https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
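For each manifest the change is the apiVersion bump (note that apps/v1 also requires an explicit spec.selector):
```
# before
apiVersion: extensions/v1beta1
kind: Deployment
# after
apiVersion: apps/v1
kind: Deployment
```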
This PR addresses #4604 by adding a new selector to the concerned svc/rc only.
This also reverts `kubernetes.io/minikube-addons` to `registry` for registry-proxy
so that the addon manager can deploy registry-proxy when the registry addon is
enabled, which can be picked up during integration testing.
I opted to do it this way because the locally built gvisor image wasn't
being picked up by minikube: the docker daemon wasn't configured, since
minikube isn't up and running yet. Even if the docker daemon were
configured to point to minikube, we wouldn't be able to build the gvisor
image from the test itself.
We should rebuild the gvisor image for integration tests, so that if
changes are made to the gvisor image they are tested. I added an
environment variable that, when set, will change the expected gvisor
image repo.
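Usage would look something like this; the variable name and repo are hypothetical, check the test code for the actual ones:
```
# hypothetical env var pointing the tests at a locally built image repo
GVISOR_IMAGE_REPO=gcr.io/my-project go test ./test/integration/...
```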
Squashed commits:
* Fix version.gitCommitID doc comment
* Add hyperkit doc
* Add commit id to docker-machine-driver-kvm2 version
* Removed label selector for registry-proxy daemonset
* Add support for a custom qemu uri on the kvm2 driver
* Improve hyperkit vm stop
* Make virtualbox DNS settings configurable
* Added integration tests for registry addon
As per [this blog](https://blog.hasura.io/sharing-a-local-registry-for-minikube-37c7240d0615) and [this gist](https://gist.github.com/coco98/b750b3debc6d517308596c248daf3bb1), we need to deploy a registry-proxy
that exposes the docker registry on the minikube host.
Once this daemonset is deployed on minikube, one can access the registry at `$(minikube ip):5000`.
This has been tested with minikube v1.0.1 and the none driver. With this, one will not have to use
`kubectl port-forward`. I was able to push a container image to the registry using
```
docker push $(minikube ip):5000/test-img
```
And then ran it in minikube using
```
kubectl run -i -t test-img --image=$(minikube ip):5000/test-img --restart=Never
```
Running `minikube addons enable registry` yields `registry was successfully enabled`, but no `registry` Pod ends up running.
I've narrowed it down to this `env` entry not being quoted.
Logs from `kube-addon-manager-minikube` Pod show this error:
```
Error from server (BadRequest): error when creating "/etc/kubernetes/addons/registry-rc.yaml": ReplicationController in version "v1" cannot be handled as a ReplicationController: v1.ReplicationController.Spec: v1.ReplicationControllerSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true}],"ima|..., bigger context ...|"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":true}],"image":"registry.hub.docker.com/library/reg|...
```
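The fix is to quote the value so it is parsed as a string rather than a bool:
```
- name: REGISTRY_STORAGE_DELETE_ENABLED
  value: "true"
```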
In the default installation I get the same error:
```
$ minikube addons open heapster
💣 This addon does not have an endpoint defined for the 'addons open' command.
You can add one by annotating a service with the label kubernetes.io/minikube-addons-endpoint:heapster
```
This PR simply implements the suggested fix by adding the aforementioned label to the heapster service.
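A sketch of the change; the actual service manifest carries more fields:
```
apiVersion: v1
kind: Service
metadata:
  name: heapster
  namespace: kube-system
  labels:
    kubernetes.io/minikube-addons-endpoint: heapster
```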
Some users (especially those in mainland China) may have issues
accessing the default image repository. This patchset allows users
to override the default image repository, gcr.io, with a different
repository by specifying the --image-repository option on the command
line as a simple workaround. Images will be pulled from the
specified image repository instead of the default ones.
Example (using the mirror by Aliyun):
```
minikube start ... \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
```
When the storage provider addon (storage-provisioner-gluster) is
enabled, mark its StorageClass "is-default" and set "is-default" to
"false" in all other StorageClasses.
Only one StorageClass can be marked as default. When the
storage-provisioner-gluster addon is enabled, users expect it to be the
default StorageClass.
Instead of removing the "is-default" annotation from the other
StorageClasses, set it to "false". This leaves only the "glusterfile"
StorageClass as "is-default".
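Assuming the standard default-class annotation key, the per-StorageClass change looks like:
```
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
```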
With this addon, dynamic provisioning based on Gluster can be enabled:
$ minikube addons enable storage-provisioner-gluster
This will deploy several pods in a new 'storage-gluster' namespace:
- glusterfs, a storage service with a 10GB sparse /srv/fake-disk.img
- heketi, a smart Gluster volume manager
- glusterfile-provisioner, an external-storage provisioner
In addition, the StorageClass 'glusterfile' will be created. It is
currently not configured as the default StorageClass, so PVCs need to
refer to the new StorageClass.
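For example, a PVC that uses the new StorageClass; the name and size are illustrative:
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc
spec:
  storageClassName: glusterfile
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```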
Previously, minikube shipped with a default CNI config
(/etc/cni/net.d/k8s.conf) in its rootfs. This complicated things a lot
when using a custom CNI plugin, as the default config was picked up
by the kubelet before the custom CNI plugin had installed its own CNI
config. The end result was that some Pods were attached to the
network defined in the default config, while others got managed by
the custom plugin.
This commit introduces the flag "--enable-default-cni" to
"minikube start" to trigger the provisioning of the default CNI
config.
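For example:
```
minikube start --enable-default-cni
```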
Signed-off-by: Martynas Pumputis <m@lambda.lt>
Change the policy for the minikube-hostpath storage class addon from
Reconcile to EnsureExists. When it is set to Reconcile, it is impossible
to change the default storage class in minikube, because the addon
manager will keep re-marking the minikube-hostpath StorageClass as default.
Ported from kubernetes/kubernetes#66235
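The policy corresponds to the addon-manager label on the manifest; with EnsureExists, the object is created once and then left alone:
```
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
```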
* Change restart policy on gvisor pod
Change the restart policy on the gvisor pod to Always. This way, if a
user runs
minikube addons enable gvisor
minikube stop
minikube start
when the addon manager tries to restart the gvisor pod, it will be
restarted and gvisor will start running automatically (see the sketch
after this list). This PR also adds an integration test for this
functionality.
* Test stop and start
* Revert test to delete
Revert test to delete for now; for some reason "stop" and then "start"
is failing both locally and in Jenkins for VirtualBox with a "panic:
test timed out after 30 min" error.
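The pod spec change itself is small:
```
spec:
  restartPolicy: Always
```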
This PR adds the code for enabling gvisor in minikube. It adds the pod
that will run when the addon is enabled, and the code for the image
which will run when this happens.
When gvisor is enabled, the pod will download runsc and the
gvisor-containerd-shim. It will replace the containerd config.toml and
restart containerd.
When gvisor is disabled, the pod will be deleted by the addon manager.
This triggers a pre-stop hook which reverts the config.toml to
its original state and restarts containerd.
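The hook hangs off the pod lifecycle; a sketch, with a hypothetical command in place of the addon's actual cleanup logic:
```
lifecycle:
  preStop:
    exec:
      # hypothetical script that restores config.toml and restarts containerd
      command: ["/gvisor-cleanup.sh"]
```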
- Updates Ingress-Controller Version to 0.19.0
- Adds Service Account for Ingress-Controller
- Adds Support for Prometheus
- Fixes bug with TCP/UDP ConfigMaps not Loading
- Adds more resource limits to default-backend
- Use new ingress class name
- Use app.kubernetes.io/xxxxxxxxxxx labels
This provides an additional level of security by enforcing host checking, applying port randomization, and requiring explicit user intent to expose the service to the host.
The default configuration here for ES_JAVA_OPTS will almost always fail as-is, making this addon useless and broken unless modified. Since this is deployed automatically when the addon is enabled, providing a value that works seems like the best option; otherwise, the pod(s) deployed in a minikube will continually fail to start.
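The value is set on the container env; the heap size below is an assumption sized for a typical minikube VM, not necessarily the value the addon ships:
```
- name: ES_JAVA_OPTS
  value: "-Xms512m -Xmx512m"
```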
HSTS has been explicitly deactivated by default for the ingress controller in minikube, because it causes trouble for local development. Minikube is intended for local development, so this feature should not be turned on by default.
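In the nginx ingress controller this is driven by the controller's ConfigMap; assuming the standard `hsts` key, the setting looks like:
```
data:
  hsts: "false"
```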