Merge pull request #7997 from aledbf/ingress

Update ingress-nginx addon
Thomas Strömberg 2020-05-07 10:08:48 -07:00 committed by GitHub
commit 9d322942f8
6 changed files with 485 additions and 183 deletions

@ -6,25 +6,25 @@ DNS service for ingress controllers running on your minikube server
## Overview
### Problem
When running minikube locally, you are highly likely to want to run your services behind an ingress controller, so
that you don't have to use `minikube tunnel` or NodePorts to access them. While NodePorts might be fine in many
circumstances, an ingress is necessary to test some features. Ingress controllers are great because you can define
your entire architecture in something like a Helm chart and all of your services will be available. There is only one
problem: an ingress controller works basically off of DNS, and while running minikube that means local DNS names like
`myservice.test` have to resolve to `$(minikube ip)`. That is not a big deal, except that the only real way to do it
is to add an entry for every service to your `/etc/hosts` file. This gets messy for obvious reasons: if you have a lot
of services, each with its own DNS entry, you have to set them all up manually. Even if you automate it, you then rely
on the host operating system to store the configuration instead of storing it in your cluster. To make it worse, the
entries have to be constantly maintained and updated as services are added, removed, and renamed. I call it the
`/etc/hosts` pollution problem.
### Solution
What if you could just access your local services magically, without having to edit your `/etc/hosts` file? Well, now
you can. This addon acts as a DNS service that runs inside your Kubernetes cluster. All you have to do is install the
service and add `$(minikube ip)` as a DNS server on your host machine. Each time the DNS service is queried, an API
call is made to the Kubernetes API server for a list of all the ingresses. If a match is found for the name, a
response is given with `$(minikube ip)` as the IP address. So, for example, let's say my minikube IP address is
`192.168.99.106` and I have an ingress with the host `myservice.test`; then I would get a result like so:
```text
#bash:~$ nslookup myservice.test $(minikube ip)
@ -58,7 +58,7 @@ nameserver 192.168.99.169
search_order 1
timeout 5
```
Replace `192.168.99.169` with your minikube IP; `profilename` is the name of the minikube profile for the
corresponding IP address.
If you have multiple minikube IPs, you must configure a separate file for each one.
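
For example, a minimal sketch of creating one resolver file per profile on macOS (the
`/etc/resolver/minikube-<profile>-test` path and the `.test` domain are assumptions; adjust them to your own profiles
and test domain):

```bash
# Hypothetical helper: write one /etc/resolver file per minikube profile.
# The file name and the .test domain are assumptions, not fixed by the addon.
PROFILE=profilename
sudo mkdir -p /etc/resolver
sudo tee "/etc/resolver/minikube-${PROFILE}-test" > /dev/null <<EOF
nameserver $(minikube ip -p "${PROFILE}")
search_order 1
timeout 5
EOF
```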
@ -78,7 +78,7 @@ Replace `192.168.99.169` with your minikube ip
If your Linux OS uses `systemctl`, run the following commands
```bash
sudo resolvconf -u
systemctl disable --now resolvconf.service
```
If your Linux does not use `systemctl`, run the following commands
@ -161,7 +161,7 @@ sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.mDNSResponder.pli
## TODO
- Add a service that runs on the host OS which will update the files in `/etc/resolver` automatically
- Start this service when running `minikube addons enable ingress-dns` and stop the service when running
`minikube addons disable ingress-dns` (see the manual sketch below)
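
Until such a host-side service exists, the manual equivalent is just the addon commands plus a DNS query against the
minikube IP (`myservice.test` is the example host from earlier on this page):

```bash
# Enable the addon, then query the in-cluster DNS service directly to confirm
# it answers for an example ingress host.
minikube addons enable ingress-dns
nslookup myservice.test $(minikube ip)

# Disable the addon again when finished.
minikube addons disable ingress-dns
```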
## Contributors
@ -171,5 +171,5 @@ sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.mDNSResponder.pli
| Image | Source | Owner |
| :--- | :--- | :--- |
| [nginx-ingress](https://hub.docker.com/r/nginx/nginx-ingress) | [ingress-nginx](https://github.com/kubernetes/ingress-nginx) | Nginx
| [ingress-nginx](https://quay.io/repository/kubernetes-ingress-controller/nginx-ingress-controller) | [ingress-nginx](https://github.com/kubernetes/ingress-nginx) | Kubernetes ingress-nginx
| [minikube-ingress-dns](https://hub.docker.com/r/cryptexlabs/minikube-ingress-dns) | [minikube-ingress-dns](https://gitlab.com/cryptexlabs/public/development/minikube-ingress-dns) | Cryptex Labs

@ -38,21 +38,19 @@ kind: Ingress
metadata:
name: example-ingress
namespace: kube-system
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: hello-john.test
http:
paths:
- path: /|/(.+)
- path: /
backend:
serviceName: hello-world-app
servicePort: 80
- host: hello-jane.test
http:
paths:
- path: /|/(.+)
- path: /
backend:
serviceName: hello-world-app
servicePort: 80
@ -79,4 +77,4 @@ spec:
protocol: TCP
type: NodePort
selector:
app: hello-world-app
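
Once the updated example Ingress above is applied, a quick way to exercise it without touching `/etc/hosts` is
`curl --resolve`, which pins a host name to the minikube IP for a single request (a sketch; the host names come from
the manifest above):

```bash
# Pin the example hosts to the minikube IP for these requests only,
# instead of editing /etc/hosts.
MINIKUBE_IP=$(minikube ip)
curl --resolve "hello-john.test:80:${MINIKUBE_IP}" http://hello-john.test
curl --resolve "hello-jane.test:80:${MINIKUBE_IP}" http://hello-jane.test
```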

@ -15,7 +15,6 @@
apiVersion: v1
data:
# see https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md for all possible options and their description
map-hash-bucket-size: "128"
hsts: "false"
kind: ConfigMap
metadata:
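
To see which options the controller actually picks up after this change, you can dump the ConfigMap; the
`nginx-load-balancer-conf` name is taken from the `--configmap` argument in the controller deployment below, so adjust
it if your addon names the ConfigMap differently:

```bash
# Inspect the controller ConfigMap referenced by the deployment's --configmap flag.
kubectl -n kube-system get configmap nginx-load-balancer-conf -o yaml
```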

@ -16,11 +16,13 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
name: ingress-nginx-controller
namespace: kube-system
labels:
app.kubernetes.io/name: nginx-ingress-controller
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/part-of: kube-system
app.kubernetes.io/component: controller
addonmanager.kubernetes.io/mode: Reconcile
spec:
replicas: 1
@ -32,67 +34,251 @@ spec:
maxSurge: 1
selector:
matchLabels:
app.kubernetes.io/name: nginx-ingress-controller
app.kubernetes.io/part-of: kube-system
addonmanager.kubernetes.io/mode: Reconcile
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
template:
metadata:
labels:
app.kubernetes.io/name: nginx-ingress-controller
app.kubernetes.io/part-of: kube-system
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
addonmanager.kubernetes.io/mode: Reconcile
annotations:
prometheus.io/port: '10254'
prometheus.io/scrape: 'true'
spec:
serviceAccountName: nginx-ingress
terminationGracePeriodSeconds: 60
serviceAccountName: ingress-nginx
containers:
- image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller{{.ExoticArch}}:0.26.1
name: nginx-ingress-controller
imagePullPolicy: IfNotPresent
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 80
hostPort: 80
- containerPort: 443
hostPort: 443
# (Optional) we expose 18080 to access nginx stats in url /nginx-status
- containerPort: 18080
hostPort: 18080
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --annotations-prefix=nginx.ingress.kubernetes.io
# use minikube IP address in ingress status field
- --report-node-internal-ip-address
securityContext:
capabilities:
- name: controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
- --report-node-internal-ip-address
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
hostPort: 80
- name: https
containerPort: 443
protocol: TCP
hostPort: 443
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: admission-webhook
name: ingress-nginx-admission
namespace: kube-system
webhooks:
- name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- extensions
- networking.k8s.io
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- ingresses
failurePolicy: Fail
clientConfig:
service:
namespace: kube-system
name: ingress-nginx-controller-admission
path: /extensions/v1beta1/ingresses
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ingress-nginx-admission
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: admission-webhook
namespace: kube-system
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress-nginx-admission
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: admission-webhook
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: kube-system
---
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-create
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: admission-webhook
namespace: kube-system
spec:
template:
metadata:
name: ingress-nginx-admission-create
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: create
image: jettech/kube-webhook-certgen:v1.2.0
imagePullPolicy: IfNotPresent
args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.kube-system.svc
- --namespace=kube-system
- --secret-name=ingress-nginx-admission
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-patch
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: admission-webhook
namespace: kube-system
spec:
template:
metadata:
name: ingress-nginx-admission-patch
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: patch
image: jettech/kube-webhook-certgen:v1.2.0
imagePullPolicy: IfNotPresent
args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=kube-system
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
addonmanager.kubernetes.io/mode: Reconcile
name: ingress-nginx-controller-admission
namespace: kube-system
spec:
ports:
- name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
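
A rough way to verify the new layout after enabling the addon (a sketch; the object names are taken from the
manifests above):

```bash
# Enable the ingress addon, wait for the renamed controller Deployment, then
# confirm the admission-webhook objects introduced by this change exist.
minikube addons enable ingress
kubectl -n kube-system rollout status deployment/ingress-nginx-controller
kubectl -n kube-system get jobs -l app.kubernetes.io/component=admission-webhook
kubectl get validatingwebhookconfiguration ingress-nginx-admission
```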

@ -3,122 +3,186 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress
name: ingress-nginx
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx-admission
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: admission-webhook
addonmanager.kubernetes.io/mode: Reconcile
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:nginx-ingress
name: system::ingress-nginx
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- ''
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io # k8s 1.18+
resources:
- ingressclasses
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: system::nginx-ingress-role
name: system::ingress-nginx
namespace: kube-system
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- ingress-controller-leader-nginx
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- apiGroups:
- ''
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io # k8s 1.18+
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- configmaps
resourceNames:
- ingress-controller-leader-nginx
verbs:
- get
- update
- apiGroups:
- ''
resources:
- configmaps
verbs:
- create
- apiGroups:
- ''
resources:
- endpoints
verbs:
- get
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: system::nginx-ingress-role-binding
name: system::ingress-nginx
namespace: kube-system
labels:
kubernetes.io/bootstrapping: rbac-defaults
@ -126,10 +190,10 @@ metadata:
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: system::nginx-ingress-role
name: system::ingress-nginx
subjects:
- kind: ServiceAccount
name: nginx-ingress
name: ingress-nginx
namespace: kube-system
---
@ -137,15 +201,59 @@ subjects:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: system:nginx-ingress
name: system::ingress-nginx
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:nginx-ingress
name: system::ingress-nginx
subjects:
- kind: ServiceAccount
name: nginx-ingress
name: ingress-nginx
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ingress-nginx-admission
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: admission-webhook
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
namespace: kube-system
rules:
- apiGroups:
- ''
resources:
- secrets
verbs:
- get
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: system::ingress-nginx-admission
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: admission-webhook
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: kube-system
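
A quick sanity check of the renamed roles and the new rules is to impersonate the controller's ServiceAccount
(names taken from the manifests above):

```bash
# Both checks should print "yes" once the ClusterRole and bindings are in place.
kubectl auth can-i list ingresses.networking.k8s.io \
  --as=system:serviceaccount:kube-system:ingress-nginx
kubectl auth can-i get ingressclasses.networking.k8s.io \
  --as=system:serviceaccount:kube-system:ingress-nginx
```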

@ -91,18 +91,29 @@ func validateIngressAddon(ctx context.Context, t *testing.T, profile string) {
t.Fatalf("failed to get kubernetes client: %v", client)
}
if err := kapi.WaitForDeploymentToStabilize(client, "kube-system", "nginx-ingress-controller", Minutes(6)); err != nil {
if err := kapi.WaitForDeploymentToStabilize(client, "kube-system", "ingress-nginx-controller", Minutes(6)); err != nil {
t.Errorf("failed waiting for ingress-controller deployment to stabilize: %v", err)
}
if _, err := PodWait(ctx, t, profile, "kube-system", "app.kubernetes.io/name=nginx-ingress-controller", Minutes(12)); err != nil {
if _, err := PodWait(ctx, t, profile, "kube-system", "app.kubernetes.io/name=ingress-nginx", Minutes(12)); err != nil {
t.Fatalf("failed waititing for nginx-ingress-controller : %v", err)
}
rr, err := Run(t, exec.CommandContext(ctx, "kubectl", "--context", profile, "replace", "--force", "-f", filepath.Join(*testdataDir, "nginx-ing.yaml")))
if err != nil {
t.Errorf("failed to kubectl replace nginx-ing. args %q. %v", rr.Command(), err)
createIngress := func() error {
rr, err := Run(t, exec.CommandContext(ctx, "kubectl", "--context", profile, "replace", "--force", "-f", filepath.Join(*testdataDir, "nginx-ing.yaml")))
if err != nil {
return err
}
if rr.Stderr.String() != "" {
t.Logf("%v: unexpected stderr: %s (may be temporary)", rr.Command(), rr.Stderr)
}
return nil
}
rr, err = Run(t, exec.CommandContext(ctx, "kubectl", "--context", profile, "replace", "--force", "-f", filepath.Join(*testdataDir, "nginx-pod-svc.yaml")))
if err := retry.Expo(createIngress, 1*time.Second, Seconds(90)); err != nil {
t.Errorf("failed to create ingress: %v", err)
}
rr, err := Run(t, exec.CommandContext(ctx, "kubectl", "--context", profile, "replace", "--force", "-f", filepath.Join(*testdataDir, "nginx-pod-svc.yaml")))
if err != nil {
t.Errorf("failed to kubectl replace nginx-pod-svc. args %q. %v", rr.Command(), err)
}
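
To run just this test locally, something like the following should work, assuming a built minikube binary where the
test harness expects it; the test name pattern and timeout are assumptions, so check the minikube contributor docs for
the exact invocation:

```bash
# Run only the ingress addon integration test (the name pattern is an assumption);
# requires a prebuilt minikube binary and a working driver for the test harness.
go test ./test/integration -run "TestAddons/parallel/Ingress" -timeout 30m -v
```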