feat: argo-cd can deploy Redis HA (#270)

* feat: argo-cd can deploy Redis HA

Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

* fix: add unarchived subchart redis-ha

Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

* fix: Redis HA upgraded since 4.3.4 contains a bug on the chart

Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

* docs: how to configure Redis and Redis HA

* fix: add missing chart folder

Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

* fix: Helm bug with subcharts and alias

* fix: Chart version

* fix: Remove archived subcharts

Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

* fix: lint script

Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

* Revert "fix: lint script"

This reverts commit f4b81cbb6f.

* fix: lint and publish scripts

Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

* fix: align test-image versions

Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

* fix: remove sudo from scripts
Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

* fix: add required repositories to helm

Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

* fix: simplify expression

Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

* fix: bump up chart version
Signed-off-by: Carlos Juan Gómez Peñalver <carlosjuangp@gmail.com>

Co-authored-by: Spencer Gilbert <Spencer.Gilbert@gmail.com>
main
Carlos Juan Gómez Peñalver 2020-04-09 17:31:13 +01:00 committed by GitHub
parent 567e7ce91f
commit d7da8e863f
42 changed files with 1951 additions and 33 deletions


@ -2,7 +2,7 @@ version: 2.1
jobs:
lint:
docker:
- image: gcr.io/kubernetes-charts-ci/test-image:v3.0.1
- image: gcr.io/kubernetes-charts-ci/test-image:v3.1.0
steps:
- checkout
- run: ct lint --config .circleci/chart-testing.yaml --lint-conf .circleci/lintconf.yaml
@ -11,7 +11,7 @@ jobs:
publish:
docker:
# We just need an image with `helm` on it. Handily we know of one already.
- image: gcr.io/kubernetes-charts-ci/test-image:v3.0.1
- image: gcr.io/kubernetes-charts-ci/test-image:v3.1.0
steps:
# install the additional keys needed to push to Github. Alex Collins owns these keys.
- add_ssh_keys

.gitignore vendored

@ -1,4 +1,4 @@
output
.vscode
.DS_Store
*.tgz
**/*.tgz


@ -2,7 +2,7 @@ apiVersion: v1
appVersion: "1.5.1"
description: A Helm chart for ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes.
name: argo-cd
version: 2.1.2
version: 2.2.0
home: https://github.com/argoproj/argo-helm
icon: https://raw.githubusercontent.com/argoproj/argo/master/docs/assets/argo.png
keywords:


@ -13,7 +13,7 @@ This chart currently installs the non-HA version of ArgoCD.
## Upgrading
### 1.8.7 to 2.0.0
### 1.8.7 to 2.x.x
`controller.extraArgs`, `repoServer.extraArgs` and `server.extraArgs` are now arrays of strings instead of a map
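For example, each flag is now a plain list entry rather than a map key (a minimal sketch; `--insecure` is only an illustrative server flag, not a required setting):

```yml
server:
  extraArgs:
    - --insecure
```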
@ -75,8 +75,8 @@ Helm v3 has removed the `install-crds` hook so CRDs are now populated by files i
## ArgoCD Controller
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| Key | Description | Default |
|-----|-------------|---------|
| controller.affinity | Assign custom affinity rules to the deployment https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | `{}` |
| controller.args.operationProcessors | define the controller `--operation-processors` | `"10"` |
| controller.args.statusProcessors | define the controller `--status-processors` | `"20"` |
@ -121,8 +121,8 @@ Helm v3 has removed the `install-crds` hook so CRDs are now populated by files i
## Argo Repo Server
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| Key | Description | Default |
|-----|-------------|---------|
| repoServer.affinity | Assign custom affinity rules to the deployment https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | `{}` |
| repoServer.autoscaling.enabled | Enable Horizontal Pod Autoscaler (HPA) for the repo server | `false` |
| repoServer.autoscaling.minReplicas | Minimum number of replicas for the repo server HPA | `1` |
@ -168,8 +168,8 @@ Helm v3 has removed the `install-crds` hook so CRDs are now populated by files i
## Argo Server
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| Key | Description | Default |
|-----|-------------|---------|
| server.affinity | Assign custom affinity rules to the deployment https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | `{}` |
| server.autoscaling.enabled | Enable Horizontal Pod Autoscaler (HPA) for the server | `false` |
| server.autoscaling.minReplicas | Minimum number of replicas for the server HPA | `1` |
@ -234,8 +234,8 @@ Helm v3 has removed the `install-crds` hook so CRDs are now populated by files i
## Dex
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| Key | Description | Default |
|-----|-------------|---------|
| dex.affinity | Assign custom affinity rules to the deployment https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | `{}` |
| dex.containerPortGrpc | GRPC container port | `5557` |
| dex.containerPortHttp | HTTP container port | `5556` |
@ -263,8 +263,14 @@ Helm v3 has removed the `install-crds` hook so CRDs are now populated by files i
## Redis
| Key | Type | Default | Description |
|-----|------|---------|-------------|
When Redis is completely disabled in the chart (`redis.enabled=false`) and an external Redis instance is to be used, or
when the Redis HA subchart is enabled (`redis.enabled=true` and `redis-ha.enabled=true`)
but HAProxy is disabled (`redis-ha.haproxy.enabled=false`), the Redis connection flags need to be specified
through `xxx.extraArgs` (see the example values after the table below).
| Key | Description | Default |
|-----|-------------|---------|
| redis.affinity | Assign custom affinity rules to the deployment https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | `{}` |
| redis.containerPort | Redis container port | `6379` |
| redis.enabled | Enable redis | `true` |
@ -280,3 +286,12 @@ Helm v3 has removed the `install-crds` hook so CRDs are now populated by files i
| redis.resources | Resource limits and requests for redis | `{}` |
| redis.servicePort | Redis service port | `6379` |
| redis.tolerations | Tolerations for use with node taints https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ | `[]` |
| redis-ha | Configures the Redis HA subchart https://github.com/helm/charts/tree/master/stable/redis-ha | |
| redis-ha.enabled | Enables the Redis HA subchart and disables the custom Redis single node deployment | `false` |
| redis-ha.exporter.enabled | If `true`, the prometheus exporter sidecar is enabled | `true` |
| redis-ha.persistentVolume.enabled | Configures persistence on Redis nodes | `false` |
| redis-ha.redis.masterGroupName | Redis convention for naming the cluster group: must match `^[\\w-\\.]+$` and can be templated | `argocd` |
| redis-ha.redis.config | Any valid redis config options in this section will be applied to each server (see the `redis-ha` chart) | `` |
| redis-ha.redis.config.save | Will save the DB if both the given number of seconds and the given number of write operations against the DB occurred. `""` is disabled | `""` |
| redis-ha.haproxy.enabled | Enables HAProxy LoadBalancing/Proxy | `true` |
| redis-ha.haproxy.metrics.enabled | Enables prometheus metric scraping for HAProxy | `true` |
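For illustration, a minimal sketch of pointing the Argo CD components at an external Redis when the bundled one is disabled. The service address is a placeholder and the `--redis` flag usage is an assumption to verify against your Argo CD version; when `redis-ha.enabled=true` with HAProxy left enabled, no extra flags are needed.

```yml
redis:
  enabled: false
controller:
  extraArgs:
    - --redis
    - my-external-redis.example.svc:6379   # hypothetical external Redis address
repoServer:
  extraArgs:
    - --redis
    - my-external-redis.example.svc:6379
server:
  extraArgs:
    - --redis
    - my-external-redis.example.svc:6379
```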


@ -0,0 +1,21 @@
apiVersion: v1
appVersion: 5.0.6
description: Highly available Kubernetes implementation of Redis
engine: gotpl
home: http://redis.io/
icon: https://upload.wikimedia.org/wikipedia/en/thumb/6/6b/Redis_Logo.svg/1200px-Redis_Logo.svg.png
keywords:
- redis
- keyvalue
- database
maintainers:
- email: salimsalaues@gmail.com
name: ssalaues
- email: aaron.layfield@gmail.com
name: dandydeveloper
name: redis-ha
sources:
- https://redis.io/download
- https://github.com/scality/Zenko/tree/development/1.0/kubernetes/zenko/charts/redis-ha
- https://github.com/oliver006/redis_exporter
version: 4.4.2


@ -0,0 +1,6 @@
approvers:
- ssalaues
- dandydeveloper
reviewers:
- ssalaues
- dandydeveloper


@ -0,0 +1,230 @@
# Redis
[Redis](http://redis.io/) is an advanced key-value cache and store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets, sorted sets, bitmaps and hyperloglogs.
## TL;DR;
```bash
$ helm install stable/redis-ha
```
By default this chart installs 3 pods total:
* one pod containing a redis master and sentinel container (optional prometheus metrics exporter sidecar available)
* two pods each containing a redis slave and sentinel containers (optional prometheus metrics exporter sidecars available)
## Introduction
This chart bootstraps a [Redis](https://redis.io) highly available master/slave statefulset in a [Kubernetes](http://kubernetes.io) cluster using the Helm package manager.
## Prerequisites
- Kubernetes 1.8+ with Beta APIs enabled
- PV provisioner support in the underlying infrastructure
## Upgrading the Chart
Please note that there have been a number of changes simplifying the redis management strategy (for better failover and elections) in the 3.x version of this chart. These changes allow the use of official [redis](https://hub.docker.com/_/redis/) images that do not require special RBAC or ServiceAccount roles. As a result when upgrading from version >=2.0.1 to >=3.0.0 of this chart, `Role`, `RoleBinding`, and `ServiceAccount` resources should be deleted manually.
### Upgrading the chart from 3.x to 4.x
Starting from version `4.x`, the HAProxy sidecar prometheus-exporter has been removed and replaced by the embedded [HAProxy metrics endpoint](https://github.com/haproxy/haproxy/tree/master/contrib/prometheus-exporter). As a result, when upgrading from version 3.x to 4.x, the `haproxy.exporter` section should be removed and `haproxy.metrics` configured to fit your needs.
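For example (a sketch based on the defaults in the configuration table below), drop any `haproxy.exporter` block and enable the embedded endpoint instead:

```yml
haproxy:
  metrics:
    enabled: true
    port: 9101          # default metrics port from the table below
    scrapePath: /metrics
```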
## Installing the Chart
To install the chart
```bash
$ helm install stable/redis-ha
```
The command deploys Redis on the Kubernetes cluster in the default configuration. By default this chart installs one master pod containing a redis master container and a sentinel container, along with 2 redis slave pods each containing their own sentinel sidecars. The [configuration](#configuration) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the deployment:
```bash
$ helm delete <chart-name>
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the Redis chart and their default values.
| Parameter | Description | Default |
|:--------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------|
| `image` | Redis image | `redis` |
| `imagePullSecrets` | Reference to one or more secrets to be used when pulling redis images | [] |
| `tag` | Redis tag | `5.0.6-alpine` |
| `replicas` | Number of redis master/slave pods | `3` |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | The name of the ServiceAccount to create | Generated using the redis-ha.fullname template |
| `rbac.create` | Create and use RBAC resources | `true` |
| `redis.port` | Port to access the redis service | `6379` |
| `redis.masterGroupName` | Redis convention for naming the cluster group: must match `^[\\w-\\.]+$` and can be templated | `mymaster` |
| `redis.config` | Any valid redis config options in this section will be applied to each server (see below) | see values.yaml |
| `redis.customConfig` | Allows for custom redis.conf files to be applied. If this is used then `redis.config` is ignored | `` |
| `redis.resources` | CPU/Memory for master/slave nodes resource requests/limits | `{}` |
| `sentinel.port` | Port to access the sentinel service | `26379` |
| `sentinel.quorum` | Minimum number of servers necessary to maintain quorum | `2` |
| `sentinel.config` | Valid sentinel config options in this section will be applied as config options to each sentinel (see below) | see values.yaml |
| `sentinel.customConfig` | Allows for custom sentinel.conf files to be applied. If this is used then `sentinel.config` is ignored | `` |
| `sentinel.resources` | CPU/Memory for sentinel node resource requests/limits | `{}` |
| `init.resources` | CPU/Memory for init Container node resource requests/limits | `{}` |
| `auth` | Enables or disables redis AUTH (Requires `redisPassword` to be set) | `false` |
| `redisPassword` | A password that configures a `requirepass` and `masterauth` in the conf parameters (Requires `auth: enabled`) | `` |
| `authKey` | The key holding the redis password in an existing secret. | `auth` |
| `existingSecret` | An existing secret containing a key defined by `authKey` that configures `requirepass` and `masterauth` in the conf parameters (Requires `auth: enabled`, cannot be used in conjunction with `.Values.redisPassword`) | `` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `hardAntiAffinity` | Whether the Redis server pods should be forced to run on separate nodes. | `true` |
| `additionalAffinities` | Additional affinities to add to the Redis server pods. | `{}` |
| `securityContext` | Security context to be added to the Redis server pods. | `{runAsUser: 1000, fsGroup: 1000, runAsNonRoot: true}` |
| `affinity` | Override all other affinity settings with a string. | `""` |
| `persistentVolume.size` | Size for the volume | 10Gi |
| `persistentVolume.annotations` | Annotations for the volume | `{}` |
| `persistentVolume.reclaimPolicy` | Method used to reclaim an obsoleted volume. `Delete` or `Retain` | `""` |
| `emptyDir` | Configuration of `emptyDir`, used only if persistentVolume is disabled and no hostPath specified | `{}` |
| `exporter.enabled` | If `true`, the prometheus exporter sidecar is enabled | `false` |
| `exporter.image` | Exporter image | `oliver006/redis_exporter` |
| `exporter.tag` | Exporter tag | `v0.31.0` |
| `exporter.port` | Exporter port | `9121` |
| `exporter.annotations` | Prometheus scrape annotations | `{prometheus.io/path: /metrics, prometheus.io/port: "9121", prometheus.io/scrape: "true"}` |
| `exporter.extraArgs` | Additional args for the exporter | `{}` |
| `exporter.script` | A custom Lua script that will be mounted to exporter for collection of custom metrics. Creates a ConfigMap and sets env var `REDIS_EXPORTER_SCRIPT`. | |
| `exporter.serviceMonitor.enabled` | Use servicemonitor from prometheus operator | `false` |
| `exporter.serviceMonitor.namespace` | Namespace the service monitor is created in | `default` |
| `exporter.serviceMonitor.interval` | Scrape interval, If not set, the Prometheus default scrape interval is used | `nil` |
| `exporter.serviceMonitor.telemetryPath` | Path to redis-exporter telemetry-path | `/metrics` |
| `exporter.serviceMonitor.labels` | Labels for the servicemonitor passed to Prometheus Operator | `{}` |
| `exporter.serviceMonitor.timeout` | How long until a scrape request times out. If not set, the Prometheus default scrape timeout is used | `nil` |
| `haproxy.enabled` | Enabled HAProxy LoadBalancing/Proxy | `false` |
| `haproxy.replicas` | Number of HAProxy instances | `3` |
| `haproxy.image.repository`| HAProxy Image Repository | `haproxy` |
| `haproxy.image.tag` | HAProxy Image Tag | `2.0.1` |
| `haproxy.image.pullPolicy`| HAProxy Image PullPolicy | `IfNotPresent` |
| `haproxy.imagePullSecrets`| Reference to one or more secrets to be used when pulling haproxy images | [] |
| `haproxy.annotations` | HAProxy template annotations | `{}` |
| `haproxy.customConfig` | Allows for a custom config-haproxy.cfg file to be applied. If this is used then the default config will be overwritten | `` |
| `haproxy.extraConfig` | Allows placing any additional configuration section to add to the default config-haproxy.cfg | `` |
| `haproxy.resources` | HAProxy resources | `{}` |
| `haproxy.emptyDir` | Configuration of `emptyDir` | `{}` |
| `haproxy.service.type` | HAProxy service type "ClusterIP", "LoadBalancer" or "NodePort" | `ClusterIP` |
| `haproxy.service.nodePort` | HAProxy service nodePort value (haproxy.service.type must be NodePort) | not set |
| `haproxy.service.annotations` | HAProxy service annotations | `{}` |
| `haproxy.stickyBalancing` | HAProxy sticky load balancing to Redis nodes. Helps with connections shutdown. | `false` |
| `haproxy.readOnly.enabled` | Enable a read only port for redis slaves | `false` |
| `haproxy.readOnly.port` | HAProxy port for read only redis slaves | `6380` |
| `haproxy.metrics.enabled` | Enables prometheus metric scraping for HAProxy | `false` |
| `haproxy.metrics.port` | HAProxy prometheus metrics scraping port | `9101` |
| `haproxy.metrics.portName` | HAProxy metrics scraping port name | `exporter-port` |
| `haproxy.metrics.scrapePath` | HAProxy prometheus metrics scraping path | `/metrics` |
| `haproxy.metrics.serviceMonitor.enabled` | Use servicemonitor from prometheus operator for HAProxy metrics | `false` |
| `haproxy.metrics.serviceMonitor.namespace` | Namespace the service monitor for HAProxy metrics is created in | `default` |
| `haproxy.metrics.serviceMonitor.interval` | Scrape interval, If not set, the Prometheus default scrape interval is used | `nil` |
| `haproxy.metrics.serviceMonitor.telemetryPath` | Path to HAProxy metrics telemetry-path | `/metrics` |
| `haproxy.metrics.serviceMonitor.labels` | Labels for the HAProxy metrics servicemonitor passed to Prometheus Operator | `{}` |
| `haproxy.metrics.serviceMonitor.timeout` | How long until a scrape request times out. If not set, the Prometheus default scrape timeout is used | `nil` |
| `haproxy.init.resources` | Extra init resources | `{}` |
| `haproxy.timeout.connect` | haproxy.cfg `timeout connect` setting | `4s` |
| `haproxy.timeout.server` | haproxy.cfg `timeout server` setting | `30s` |
| `haproxy.timeout.client` | haproxy.cfg `timeout client` setting | `30s` |
| `haproxy.timeout.check` | haproxy.cfg `timeout check` setting | `2s` |
| `haproxy.priorityClassName` | priorityClassName for `haproxy` deployment | not set |
| `haproxy.securityContext` | Security context to be added to the HAProxy deployment. | `{runAsUser: 1000, fsGroup: 1000, runAsNonRoot: true}` |
| `haproxy.hardAntiAffinity` | Whether the haproxy pods should be forced to run on separate nodes. | `true` |
| `haproxy.affinity` | Override all other haproxy affinity settings with a string. | `""` |
| `haproxy.additionalAffinities` | Additional affinities to add to the haproxy server pods. | `{}` |
| `podDisruptionBudget` | Pod Disruption Budget rules | `{}` |
| `priorityClassName` | priorityClassName for `redis-ha-statefulset` | not set |
| `hostPath.path` | Use this path on the host for data storage | not set |
| `hostPath.chown` | Run an init-container as root to set ownership on the hostPath | `true` |
| `sysctlImage.enabled` | Enable an init container to modify Kernel settings | `false` |
| `sysctlImage.command` | sysctlImage command to execute | [] |
| `sysctlImage.registry` | sysctlImage Init container registry | `docker.io` |
| `sysctlImage.repository` | sysctlImage Init container name | `busybox` |
| `sysctlImage.tag` | sysctlImage Init container tag | `1.31.1` |
| `sysctlImage.pullPolicy` | sysctlImage Init container pull policy | `Always` |
| `sysctlImage.mountHostSys`| Mount the host `/sys` folder to `/host-sys` | `false` |
| `sysctlImage.resources` | sysctlImage resources | `{}` |
| `schedulerName` | Alternate scheduler name | `nil` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```bash
$ helm install \
--set image=redis \
--set tag=5.0.5-alpine \
stable/redis-ha
```
The above command deploys the Redis server in the `default` namespace.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```bash
$ helm install -f values.yaml stable/redis-ha
```
> **Tip**: You can use the default [values.yaml](values.yaml)
## Custom Redis and Sentinel config options
This chart allows for most redis or sentinel config options to be passed as a key value pair through the `values.yaml` under `redis.config` and `sentinel.config`. See links below for all available options.
[Example redis.conf](http://download.redis.io/redis-stable/redis.conf)
[Example sentinel.conf](http://download.redis.io/redis-stable/sentinel.conf)
For example `repl-timeout 60` would be added to the `redis.config` section of the `values.yaml` as:
```yml
repl-timeout: "60"
```
Note:
1. Some config options are renamed depending on the Redis version, e.g.:
```
# In redis 5.x, see https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf
min-replicas-to-write: 1
min-replicas-max-lag: 5
# In redis 4.x and redis 3.x, see https://raw.githubusercontent.com/antirez/redis/4.0/redis.conf and https://raw.githubusercontent.com/antirez/redis/3.0/redis.conf
min-slaves-to-write 1
min-slaves-max-lag 5
```
Sentinel options supported must be in the `sentinel <option> <master-group-name> <value>` format. For example, `sentinel down-after-milliseconds 30000` would be added to the `sentinel.config` section of the `values.yaml` as:
```yml
down-after-milliseconds: 30000
```
If more control is needed from either the redis or sentinel config then an entire config can be defined under `redis.customConfig` or `sentinel.customConfig`. Please note that these values will override any configuration options under their respective section. For example, if you define `sentinel.customConfig` then the `sentinel.config` is ignored.
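For instance, a complete `redis.conf` can be supplied as a block (a minimal sketch; the directives shown are ordinary Redis options, not chart defaults — include `dir` and `port` yourself, since the chart's generated defaults are skipped when `redis.customConfig` is set):

```yml
redis:
  customConfig: |
    # full redis.conf contents; redis.config is ignored when this is set
    dir "/data"
    port 6379
    maxmemory 200mb
    maxmemory-policy allkeys-lru
```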
## Host Kernel Settings
Redis may require some changes in the kernel of the host machine to work as expected, in particular increasing the `somaxconn` value and disabling transparent huge pages.
To do so, you can set up a privileged initContainer with the `sysctlImage` config values, for example:
```
sysctlImage:
enabled: true
mountHostSys: true
command:
- /bin/sh
- -xc
- |-
sysctl -w net.core.somaxconn=10000
echo never > /host-sys/kernel/mm/transparent_hugepage/enabled
```
## HAProxy startup
When HAProxy is enabled, it will attempt to connect to each announce-service of each redis replica instance in its init container before starting.
It will fail if an announce-service IP is not available fast enough (10 seconds max per announce-service).
Such a case can happen if the orchestrator is still scheduling the redis pods.
The risk is limited because the announce-services use `publishNotReadyAddresses: true`; even so, in such a case the HAProxy pod will be rescheduled afterward by the orchestrator.


@ -0,0 +1,10 @@
---
## Enable HAProxy to manage Load Balancing
haproxy:
enabled: true
annotations:
any.domain/key: "value"
serviceAccount:
create: true
metrics:
enabled: true


@ -0,0 +1,25 @@
Redis can be accessed via port {{ .Values.redis.port }} and Sentinel can be accessed via port {{ .Values.sentinel.port }} on the following DNS name from within your cluster:
{{ template "redis-ha.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
To connect to your Redis server:
{{- if .Values.auth }}
1. To retrieve the redis password:
echo $(kubectl get secret {{ template "redis-ha.fullname" . }} -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the {{ template "redis-ha.fullname" . }}-server-0 pod is configured as the master:
kubectl exec -it {{ template "redis-ha.fullname" . }}-server-0 sh -n {{ .Release.Namespace }}
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
{{- else }}
1. Run a Redis pod that you can use as a client:
kubectl exec -it {{ template "redis-ha.fullname" . }}-server-0 sh -n {{ .Release.Namespace }}
2. Connect using the Redis CLI:
redis-cli -h {{ template "redis-ha.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
{{- end }}


@ -0,0 +1,275 @@
{{/* vim: set filetype=mustache: */}}
{{- define "config-redis.conf" }}
{{- if .Values.redis.customConfig }}
{{ tpl .Values.redis.customConfig . | indent 4 }}
{{- else }}
dir "/data"
port {{ .Values.redis.port }}
{{- range $key, $value := .Values.redis.config }}
{{ $key }} {{ $value }}
{{- end }}
{{- if .Values.auth }}
requirepass replace-default-auth
masterauth replace-default-auth
{{- end }}
{{- end }}
{{- end }}
{{- define "config-sentinel.conf" }}
{{- if .Values.sentinel.customConfig }}
{{ tpl .Values.sentinel.customConfig . | indent 4 }}
{{- else }}
dir "/data"
{{- range $key, $value := .Values.sentinel.config }}
{{- if eq "maxclients" $key }}
{{ $key }} {{ $value }}
{{- else }}
sentinel {{ $key }} {{ template "redis-ha.masterGroupName" $ }} {{ $value }}
{{- end }}
{{- end }}
{{- if .Values.auth }}
sentinel auth-pass {{ template "redis-ha.masterGroupName" . }} replace-default-auth
{{- end }}
{{- end }}
{{- end }}
{{- define "config-init.sh" }}
HOSTNAME="$(hostname)"
INDEX="${HOSTNAME##*-}"
MASTER="$(redis-cli -h {{ template "redis-ha.fullname" . }} -p {{ .Values.sentinel.port }} sentinel get-master-addr-by-name {{ template "redis-ha.masterGroupName" . }} | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
MASTER_GROUP="{{ template "redis-ha.masterGroupName" . }}"
QUORUM="{{ .Values.sentinel.quorum }}"
REDIS_CONF=/data/conf/redis.conf
REDIS_PORT={{ .Values.redis.port }}
SENTINEL_CONF=/data/conf/sentinel.conf
SENTINEL_PORT={{ .Values.sentinel.port }}
SERVICE={{ template "redis-ha.fullname" . }}
set -eu
sentinel_update() {
echo "Updating sentinel config with master $MASTER"
eval MY_SENTINEL_ID="\${SENTINEL_ID_$INDEX}"
sed -i "1s/^/sentinel myid $MY_SENTINEL_ID\\n/" "$SENTINEL_CONF"
sed -i "2s/^/sentinel monitor $MASTER_GROUP $1 $REDIS_PORT $QUORUM \\n/" "$SENTINEL_CONF"
echo "sentinel announce-ip $ANNOUNCE_IP" >> $SENTINEL_CONF
echo "sentinel announce-port $SENTINEL_PORT" >> $SENTINEL_CONF
}
redis_update() {
echo "Updating redis config"
echo "slaveof $1 $REDIS_PORT" >> "$REDIS_CONF"
echo "slave-announce-ip $ANNOUNCE_IP" >> $REDIS_CONF
echo "slave-announce-port $REDIS_PORT" >> $REDIS_CONF
}
copy_config() {
cp /readonly-config/redis.conf "$REDIS_CONF"
cp /readonly-config/sentinel.conf "$SENTINEL_CONF"
}
setup_defaults() {
echo "Setting up defaults"
if [ "$INDEX" = "0" ]; then
echo "Setting this pod as the default master"
redis_update "$ANNOUNCE_IP"
sentinel_update "$ANNOUNCE_IP"
sed -i "s/^.*slaveof.*//" "$REDIS_CONF"
else
DEFAULT_MASTER="$(getent hosts "$SERVICE-announce-0" | awk '{ print $1 }')"
if [ -z "$DEFAULT_MASTER" ]; then
echo "Unable to resolve host"
exit 1
fi
echo "Setting default slave config.."
redis_update "$DEFAULT_MASTER"
sentinel_update "$DEFAULT_MASTER"
fi
}
find_master() {
echo "Attempting to find master"
if [ "$(redis-cli -h "$MASTER"{{ if .Values.auth }} -a "$AUTH"{{ end }} ping)" != "PONG" ]; then
echo "Can't ping master, attempting to force failover"
if redis-cli -h "$SERVICE" -p "$SENTINEL_PORT" sentinel failover "$MASTER_GROUP" | grep -q 'NOGOODSLAVE' ; then
setup_defaults
return 0
fi
sleep 10
MASTER="$(redis-cli -h $SERVICE -p $SENTINEL_PORT sentinel get-master-addr-by-name $MASTER_GROUP | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
if [ "$MASTER" ]; then
sentinel_update "$MASTER"
redis_update "$MASTER"
else
echo "Could not failover, exiting..."
exit 1
fi
else
echo "Found reachable master, updating config"
sentinel_update "$MASTER"
redis_update "$MASTER"
fi
}
mkdir -p /data/conf/
echo "Initializing config.."
copy_config
ANNOUNCE_IP=$(getent hosts "$SERVICE-announce-$INDEX" | awk '{ print $1 }')
if [ -z "$ANNOUNCE_IP" ]; then
"Could not resolve the announce ip for this pod"
exit 1
elif [ "$MASTER" ]; then
find_master
else
setup_defaults
fi
if [ "${AUTH:-}" ]; then
echo "Setting auth values"
ESCAPED_AUTH=$(echo "$AUTH" | sed -e 's/[\/&]/\\&/g');
sed -i "s/replace-default-auth/${ESCAPED_AUTH}/" "$REDIS_CONF" "$SENTINEL_CONF"
fi
echo "Ready..."
{{- end }}
{{- define "config-haproxy.cfg" }}
{{- if .Values.haproxy.customConfig }}
{{ .Values.haproxy.customConfig | indent 4}}
{{- else }}
defaults REDIS
mode tcp
timeout connect {{ .Values.haproxy.timeout.connect }}
timeout server {{ .Values.haproxy.timeout.server }}
timeout client {{ .Values.haproxy.timeout.client }}
timeout check {{ .Values.haproxy.timeout.check }}
listen health_check_http_url
bind :8888
mode http
monitor-uri /healthz
option dontlognull
{{- $root := . }}
{{- $fullName := include "redis-ha.fullname" . }}
{{- $replicas := int (toString .Values.replicas) }}
{{- $masterGroupName := include "redis-ha.masterGroupName" . }}
{{- range $i := until $replicas }}
# Check Sentinel and whether they are nominated master
backend check_if_redis_is_master_{{ $i }}
mode tcp
option tcp-check
tcp-check connect
{{- if $root.Values.auth }}
tcp-check send AUTH\ {{ $root.Values.redisPassword }}\r\n
tcp-check expect string +OK
{{- end }}
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check send SENTINEL\ get-master-addr-by-name\ {{ $masterGroupName }}\r\n
tcp-check expect string REPLACE_ANNOUNCE{{ $i }}
tcp-check send QUIT\r\n
tcp-check expect string +OK
{{- range $i := until $replicas }}
server R{{ $i }} {{ $fullName }}-announce-{{ $i }}:26379 check inter 1s
{{- end }}
{{- end }}
# decide redis backend to use
#master
frontend ft_redis_master
bind *:{{ $root.Values.redis.port }}
use_backend bk_redis_master
{{- if .Values.haproxy.readOnly.enabled }}
#slave
frontend ft_redis_slave
bind *:{{ .Values.haproxy.readOnly.port }}
use_backend bk_redis_slave
{{- end }}
# Check all redis servers to see if they think they are master
backend bk_redis_master
{{- if .Values.haproxy.stickyBalancing }}
balance source
hash-type consistent
{{- end }}
mode tcp
option tcp-check
tcp-check connect
{{- if .Values.auth }}
tcp-check send AUTH\ REPLACE_AUTH_SECRET\r\n
tcp-check expect string +OK
{{- end }}
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check send info\ replication\r\n
tcp-check expect string role:master
tcp-check send QUIT\r\n
tcp-check expect string +OK
{{- range $i := until $replicas }}
use-server R{{ $i }} if { srv_is_up(R{{ $i }}) } { nbsrv(check_if_redis_is_master_{{ $i }}) ge 2 }
server R{{ $i }} {{ $fullName }}-announce-{{ $i }}:{{ $root.Values.redis.port }} check inter 1s fall 1 rise 1
{{- end }}
{{- if .Values.haproxy.readOnly.enabled }}
backend bk_redis_slave
{{- if .Values.haproxy.stickyBalancing }}
balance source
hash-type consistent
{{- end }}
mode tcp
option tcp-check
tcp-check connect
{{- if .Values.auth }}
tcp-check send AUTH\ REPLACE_AUTH_SECRET\r\n
tcp-check expect string +OK
{{- end }}
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check send info\ replication\r\n
tcp-check expect string role:slave
tcp-check send QUIT\r\n
tcp-check expect string +OK
{{- range $i := until $replicas }}
server R{{ $i }} {{ $fullName }}-announce-{{ $i }}:{{ $root.Values.redis.port }} check inter 1s fall 1 rise 1
{{- end }}
{{- end }}
{{- if .Values.haproxy.metrics.enabled }}
frontend metrics
mode http
bind *:{{ .Values.haproxy.metrics.port }}
option http-use-htx
http-request use-service prometheus-exporter if { path {{ .Values.haproxy.metrics.scrapePath }} }
{{- end }}
{{- if .Values.haproxy.extraConfig }}
# Additional configuration
{{ .Values.haproxy.extraConfig | indent 4 }}
{{- end }}
{{- end }}
{{- end }}
{{- define "config-haproxy_init.sh" }}
HAPROXY_CONF=/data/haproxy.cfg
cp /readonly/haproxy.cfg "$HAPROXY_CONF"
{{- $fullName := include "redis-ha.fullname" . }}
{{- $replicas := int (toString .Values.replicas) }}
{{- range $i := until $replicas }}
for loop in $(seq 1 10); do
getent hosts {{ $fullName }}-announce-{{ $i }} && break
echo "Waiting for service {{ $fullName }}-announce-{{ $i }} to be ready ($loop) ..." && sleep 1
done
ANNOUNCE_IP{{ $i }}=$(getent hosts "{{ $fullName }}-announce-{{ $i }}" | awk '{ print $1 }')
if [ -z "$ANNOUNCE_IP{{ $i }}" ]; then
echo "Could not resolve the announce ip for {{ $fullName }}-announce-{{ $i }}"
exit 1
fi
sed -i "s/REPLACE_ANNOUNCE{{ $i }}/$ANNOUNCE_IP{{ $i }}/" "$HAPROXY_CONF"
if [ "${AUTH:-}" ]; then
echo "Setting auth values"
ESCAPED_AUTH=$(echo "$AUTH" | sed -e 's/[\/&]/\\&/g');
sed -i "s/REPLACE_AUTH_SECRET/${ESCAPED_AUTH}/" "$HAPROXY_CONF"
fi
{{- end }}
{{- end }}


@ -0,0 +1,83 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "redis-ha.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "redis-ha.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Return sysctl image
*/}}
{{- define "redis.sysctl.image" -}}
{{- $registryName := default "docker.io" .Values.sysctlImage.registry -}}
{{- $tag := default "latest" .Values.sysctlImage.tag | toString -}}
{{- printf "%s/%s:%s" $registryName .Values.sysctlImage.repository $tag -}}
{{- end -}}
{{- /*
Credit: @technosophos
https://github.com/technosophos/common-chart/
labels.standard prints the standard Helm labels.
The standard labels are frequently used in metadata.
*/ -}}
{{- define "labels.standard" -}}
app: {{ template "redis-ha.name" . }}
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: {{ template "chartref" . }}
{{- end -}}
{{- /*
Credit: @technosophos
https://github.com/technosophos/common-chart/
chartref prints a chart name and version.
It does minimal escaping for use in Kubernetes labels.
Example output:
zookeeper-1.2.3
wordpress-3.2.1_20170219
*/ -}}
{{- define "chartref" -}}
{{- replace "+" "_" .Chart.Version | printf "%s-%s" .Chart.Name -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "redis-ha.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "redis-ha.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- define "redis-ha.masterGroupName" -}}
{{- $masterGroupName := tpl ( .Values.redis.masterGroupName | default "") . -}}
{{- $validMasterGroupName := regexMatch "^[\\w-\\.]+$" $masterGroupName -}}
{{- if $validMasterGroupName -}}
{{ $masterGroupName }}
{{- else -}}
{{ required "A valid .Values.redis.masterGroupName entry is required (matching ^[\\w-\\.]+$)" ""}}
{{- end -}}
{{- end -}}


@ -0,0 +1,12 @@
{{- if and .Values.auth (not .Values.existingSecret) -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "redis-ha.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
type: Opaque
data:
{{ .Values.authKey }}: {{ .Values.redisPassword | b64enc | quote }}
{{- end -}}


@ -0,0 +1,41 @@
{{- $fullName := include "redis-ha.fullname" . }}
{{- $namespace := .Release.Namespace -}}
{{- $replicas := int (toString .Values.replicas) }}
{{- $root := . }}
{{- range $i := until $replicas }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ $fullName }}-announce-{{ $i }}
namespace: {{ $namespace }}
labels:
{{ include "labels.standard" $root | indent 4 }}
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
{{- if $root.Values.serviceAnnotations }}
{{ toYaml $root.Values.serviceAnnotations | indent 4 }}
{{- end }}
spec:
publishNotReadyAddresses: true
type: ClusterIP
ports:
- name: server
port: {{ $root.Values.redis.port }}
protocol: TCP
targetPort: redis
- name: sentinel
port: {{ $root.Values.sentinel.port }}
protocol: TCP
targetPort: sentinel
{{- if $root.Values.exporter.enabled }}
- name: exporter
port: {{ $root.Values.exporter.port }}
protocol: TCP
targetPort: exporter-port
{{- end }}
selector:
release: {{ $root.Release.Name }}
app: {{ include "redis-ha.name" $root }}
"statefulset.kubernetes.io/pod-name": {{ $fullName }}-server-{{ $i }}
{{- end }}


@ -0,0 +1,25 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "redis-ha.fullname" . }}-configmap
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "redis-ha.fullname" . }}
data:
redis.conf: |
{{- include "config-redis.conf" . }}
sentinel.conf: |
{{- include "config-sentinel.conf" . }}
init.sh: |
{{- include "config-init.sh" . }}
{{ if .Values.haproxy.enabled }}
haproxy.cfg: |-
{{- include "config-haproxy.cfg" . }}
{{- end }}
haproxy_init.sh: |
{{- include "config-haproxy_init.sh" . }}


@ -0,0 +1,11 @@
{{- if .Values.exporter.script }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "redis-ha.fullname" . }}-exporter-script-configmap
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
data:
script: {{ toYaml .Values.exporter.script | indent 2 }}
{{- end }}


@ -0,0 +1,15 @@
{{- if .Values.podDisruptionBudget -}}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "redis-ha.fullname" . }}-pdb
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
spec:
selector:
matchLabels:
release: {{ .Release.Name }}
app: {{ template "redis-ha.name" . }}
{{ toYaml .Values.podDisruptionBudget | indent 2 }}
{{- end -}}


@ -0,0 +1,19 @@
{{- if and .Values.serviceAccount.create .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "redis-ha.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "redis-ha.fullname" . }}
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
{{- end }}


@ -0,0 +1,19 @@
{{- if and .Values.serviceAccount.create .Values.rbac.create }}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "redis-ha.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "redis-ha.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "redis-ha.serviceAccountName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "redis-ha.fullname" . }}
{{- end }}


@ -0,0 +1,35 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "redis-ha.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
{{- if and ( .Values.exporter.enabled ) ( .Values.exporter.serviceMonitor.enabled ) }}
servicemonitor: enabled
{{- end }}
annotations:
{{- if .Values.serviceAnnotations }}
{{ toYaml .Values.serviceAnnotations | indent 4 }}
{{- end }}
spec:
type: ClusterIP
clusterIP: None
ports:
- name: server
port: {{ .Values.redis.port }}
protocol: TCP
targetPort: redis
- name: sentinel
port: {{ .Values.sentinel.port }}
protocol: TCP
targetPort: sentinel
{{- if .Values.exporter.enabled }}
- name: exporter-port
port: {{ .Values.exporter.port }}
protocol: TCP
targetPort: exporter-port
{{- end }}
selector:
release: {{ .Release.Name }}
app: {{ template "redis-ha.name" . }}


@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "redis-ha.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "redis-ha.fullname" . }}
{{- end }}


@ -0,0 +1,35 @@
{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) ( .Values.exporter.serviceMonitor.enabled ) ( .Values.exporter.enabled ) }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
{{- if .Values.exporter.serviceMonitor.labels }}
labels:
{{ toYaml .Values.exporter.serviceMonitor.labels | indent 4}}
{{- end }}
name: {{ template "redis-ha.fullname" . }}
namespace: {{ .Release.Namespace }}
{{- if .Values.exporter.serviceMonitor.namespace }}
namespace: {{ .Values.exporter.serviceMonitor.namespace }}
{{- end }}
spec:
endpoints:
- targetPort: {{ .Values.exporter.port }}
{{- if .Values.exporter.serviceMonitor.interval }}
interval: {{ .Values.exporter.serviceMonitor.interval }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.telemetryPath }}
path: {{ .Values.exporter.serviceMonitor.telemetryPath }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.timeout }}
scrapeTimeout: {{ .Values.exporter.serviceMonitor.timeout }}
{{- end }}
jobLabel: {{ template "redis-ha.fullname" . }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
servicemonitor: enabled
{{- end }}


@ -0,0 +1,319 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "redis-ha.fullname" . }}-server
namespace: {{ .Release.Namespace }}
labels:
{{ template "redis-ha.fullname" . }}: replica
{{ include "labels.standard" . | indent 4 }}
spec:
selector:
matchLabels:
release: {{ .Release.Name }}
app: {{ template "redis-ha.name" . }}
serviceName: {{ template "redis-ha.fullname" . }}
replicas: {{ .Values.replicas }}
podManagementPolicy: OrderedReady
updateStrategy:
type: RollingUpdate
template:
metadata:
annotations:
checksum/init-config: {{ print (include "config-redis.conf" .) (include "config-init.sh" .) | sha256sum }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
{{- if .Values.exporter.enabled }}
prometheus.io/port: "{{ .Values.exporter.port }}"
prometheus.io/scrape: "true"
prometheus.io/path: {{ .Values.exporter.scrapePath }}
{{- end }}
labels:
release: {{ .Release.Name }}
app: {{ template "redis-ha.name" . }}
{{ template "redis-ha.fullname" . }}: replica
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
affinity:
{{- if .Values.affinity }}
{{- with .Values.affinity }}
{{ tpl . $ | indent 8 }}
{{- end }}
{{- else }}
{{- if .Values.additionalAffinities }}
{{ toYaml .Values.additionalAffinities | indent 8 }}
{{- end }}
podAntiAffinity:
{{- if .Values.hardAntiAffinity }}
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
{{ template "redis-ha.fullname" . }}: replica
topologyKey: kubernetes.io/hostname
{{- else }}
preferredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
{{ template "redis-ha.fullname" . }}: replica
topologyKey: kubernetes.io/hostname
{{- end }}
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
{{ template "redis-ha.fullname" . }}: replica
topologyKey: failure-domain.beta.kubernetes.io/zone
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.imagePullSecrets | nindent 8 }}
{{- end }}
securityContext:
{{ toYaml .Values.securityContext | indent 8 }}
serviceAccountName: {{ template "redis-ha.serviceAccountName" . }}
initContainers:
{{- if .Values.sysctlImage.enabled }}
- name: init-sysctl
image: {{ template "redis.sysctl.image" . }}
imagePullPolicy: {{ .Values.sysctlImage.pullPolicy }}
resources:
{{ toYaml .Values.sysctlImage.resources | indent 10 }}
{{- if .Values.sysctlImage.mountHostSys }}
volumeMounts:
- name: host-sys
mountPath: /host-sys
{{- end }}
command:
{{ toYaml .Values.sysctlImage.command | indent 10 }}
securityContext:
runAsNonRoot: false
privileged: true
runAsUser: 0
{{- end }}
{{- if and .Values.hostPath.path .Values.hostPath.chown }}
- name: hostpath-chown
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
securityContext:
runAsNonRoot: false
runAsUser: 0
command:
- chown
- "{{ .Values.securityContext.runAsUser }}"
- /data
volumeMounts:
- name: data
mountPath: /data
{{- end }}
- name: config-init
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
resources:
{{ toYaml .Values.init.resources | indent 10 }}
command:
- sh
args:
- /readonly-config/init.sh
env:
{{- $replicas := int (toString .Values.replicas) -}}
{{- range $i := until $replicas }}
- name: SENTINEL_ID_{{ $i }}
value: {{ printf "%s\n%s\nindex: %d" (include "redis-ha.name" $) ($.Release.Name) $i | sha1sum }}
{{ end -}}
{{- if .Values.auth }}
- name: AUTH
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "redis-ha.fullname" . }}
{{- end }}
key: {{ .Values.authKey }}
{{- end }}
volumeMounts:
- name: config
mountPath: /readonly-config
readOnly: true
- name: data
mountPath: /data
containers:
- name: redis
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- redis-server
args:
- /data/conf/redis.conf
env:
{{- if .Values.auth }}
- name: AUTH
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "redis-ha.fullname" . }}
{{- end }}
key: {{ .Values.authKey }}
{{- end }}
livenessProbe:
tcpSocket:
port: {{ .Values.redis.port }}
initialDelaySeconds: 15
resources:
{{ toYaml .Values.redis.resources | indent 10 }}
ports:
- name: redis
containerPort: {{ .Values.redis.port }}
volumeMounts:
- mountPath: /data
name: data
- name: sentinel
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- redis-sentinel
args:
- /data/conf/sentinel.conf
{{- if .Values.auth }}
env:
- name: AUTH
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "redis-ha.fullname" . }}
{{- end }}
key: {{ .Values.authKey }}
{{- end }}
livenessProbe:
tcpSocket:
port: {{ .Values.sentinel.port }}
initialDelaySeconds: 15
resources:
{{ toYaml .Values.sentinel.resources | indent 10 }}
ports:
- name: sentinel
containerPort: {{ .Values.sentinel.port }}
volumeMounts:
- mountPath: /data
name: data
{{- if .Values.exporter.enabled }}
- name: redis-exporter
image: "{{ .Values.exporter.image }}:{{ .Values.exporter.tag }}"
imagePullPolicy: {{ .Values.exporter.pullPolicy }}
args:
{{- range $key, $value := .Values.exporter.extraArgs }}
- --{{ $key }}={{ $value }}
{{- end }}
env:
- name: REDIS_ADDR
value: redis://localhost:{{ .Values.redis.port }}
{{- if .Values.auth }}
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "redis-ha.fullname" . }}
{{- end }}
key: {{ .Values.authKey }}
{{- end }}
{{- if .Values.exporter.script }}
- name: REDIS_EXPORTER_SCRIPT
value: /script/script.lua
{{- end }}
livenessProbe:
httpGet:
path: {{ .Values.exporter.scrapePath }}
port: {{ .Values.exporter.port }}
initialDelaySeconds: 15
timeoutSeconds: 1
periodSeconds: 15
resources:
{{ toYaml .Values.exporter.resources | indent 10 }}
ports:
- name: exporter-port
containerPort: {{ .Values.exporter.port }}
{{- if .Values.exporter.script }}
volumeMounts:
- mountPath: /script
name: script-mount
{{- end }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "redis-ha.fullname" . }}-configmap
{{- if .Values.sysctlImage.mountHostSys }}
- name: host-sys
hostPath:
path: /sys
{{- end }}
{{- if .Values.exporter.script }}
- name: script-mount
configMap:
name: {{ template "redis-ha.fullname" . }}-exporter-script-configmap
items:
- key: script
path: script.lua
{{- end }}
{{- if .Values.persistentVolume.enabled }}
volumeClaimTemplates:
- metadata:
name: data
annotations:
{{- range $key, $value := .Values.persistentVolume.annotations }}
{{ $key }}: {{ $value }}
{{- end }}
spec:
accessModes:
{{- range .Values.persistentVolume.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.persistentVolume.size | quote }}
{{- if .Values.persistentVolume.storageClass }}
{{- if (eq "-" .Values.persistentVolume.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistentVolume.storageClass }}"
{{- end }}
{{- end }}
{{- if .Values.persistentVolume.reclaimPolicy }}
persistentVolumeReclaimPolicy: "{{ .Values.persistentVolume.reclaimPolicy }}"
{{- end }}
{{- else if .Values.hostPath.path }}
- name: data
hostPath:
path: {{ tpl .Values.hostPath.path .}}
{{- else }}
- name: data
emptyDir:
{{ toYaml .Values.emptyDir | indent 10 }}
{{- end }}


@ -0,0 +1,151 @@
{{- if .Values.haproxy.enabled }}
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ template "redis-ha.fullname" . }}-haproxy
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
spec:
strategy:
type: RollingUpdate
revisionHistoryLimit: 1
replicas: {{ .Values.haproxy.replicas }}
selector:
matchLabels:
app: {{ template "redis-ha.name" . }}-haproxy
release: {{ .Release.Name }}
template:
metadata:
name: {{ template "redis-ha.fullname" . }}-haproxy
labels:
app: {{ template "redis-ha.name" . }}-haproxy
release: {{ .Release.Name }}
annotations:
{{- if .Values.haproxy.metrics.enabled }}
prometheus.io/port: "{{ .Values.haproxy.metrics.port }}"
prometheus.io/scrape: "true"
prometheus.io/path: "{{ .Values.haproxy.metrics.scrapePath }}"
{{- end }}
checksum/config: {{ print (include "config-haproxy.cfg" .) (include "config-haproxy_init.sh" .) | sha256sum }}
{{- if .Values.haproxy.annotations }}
{{ toYaml .Values.haproxy.annotations | indent 8 }}
{{- end }}
spec:
# Needed when using unmodified rbac-setup.yml
{{ if .Values.haproxy.serviceAccount.create }}
serviceAccountName: {{ template "redis-ha.serviceAccountName" . }}-haproxy
{{ end }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
affinity:
{{- if .Values.haproxy.affinity }}
{{- with .Values.haproxy.affinity }}
{{ tpl . $ | indent 8 }}
{{- end }}
{{- else }}
{{- if .Values.haproxy.additionalAffinities }}
{{ toYaml .Values.haproxy.additionalAffinities | indent 8 }}
{{- end }}
podAntiAffinity:
{{- if .Values.haproxy.hardAntiAffinity }}
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}-haproxy
release: {{ .Release.Name }}
topologyKey: kubernetes.io/hostname
{{- else }}
preferredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}-haproxy
release: {{ .Release.Name }}
topologyKey: kubernetes.io/hostname
{{- end }}
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}-haproxy
release: {{ .Release.Name }}
topologyKey: failure-domain.beta.kubernetes.io/zone
{{- end }}
initContainers:
- name: config-init
image: {{ .Values.haproxy.image.repository }}:{{ .Values.haproxy.image.tag }}
imagePullPolicy: {{ .Values.haproxy.image.pullPolicy }}
resources:
{{ toYaml .Values.haproxy.init.resources | indent 10 }}
command:
- sh
args:
- /readonly/haproxy_init.sh
{{- if .Values.auth }}
env:
- name: AUTH
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "redis-ha.fullname" . }}
{{- end }}
key: {{ .Values.authKey }}
{{- end }}
volumeMounts:
- name: config-volume
mountPath: /readonly
readOnly: true
- name: data
mountPath: /data
{{- if .Values.haproxy.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.haproxy.imagePullSecrets | nindent 8 }}
{{- end }}
securityContext:
{{ toYaml .Values.haproxy.securityContext | indent 8 }}
containers:
- name: haproxy
image: {{ .Values.haproxy.image.repository }}:{{ .Values.haproxy.image.tag }}
imagePullPolicy: {{ .Values.haproxy.image.pullPolicy }}
livenessProbe:
httpGet:
path: /healthz
port: 8888
initialDelaySeconds: 5
periodSeconds: 3
ports:
- name: redis
containerPort: {{ default "6379" .Values.redis.port }}
{{- if .Values.haproxy.readOnly.enabled }}
- name: readonlyport
containerPort: {{ default "6380" .Values.haproxy.readOnly.port }}
{{- end }}
{{- if .Values.haproxy.metrics.enabled }}
- name: metrics-port
containerPort: {{ default "9101" .Values.haproxy.metrics.port }}
{{- end }}
resources:
{{ toYaml .Values.haproxy.resources | indent 10 }}
volumeMounts:
- name: data
mountPath: /usr/local/etc/haproxy
- name: shared-socket
mountPath: /run/haproxy
{{- if .Values.haproxy.priorityClassName }}
priorityClassName: {{ .Values.haproxy.priorityClassName }}
{{- end }}
volumes:
- name: config-volume
configMap:
name: {{ template "redis-ha.fullname" . }}-configmap
- name: shared-socket
emptyDir:
{{ toYaml .Values.haproxy.emptyDir | indent 10 }}
- name: data
emptyDir:
{{ toYaml .Values.haproxy.emptyDir | indent 10 }}
{{- end }}


@ -0,0 +1,42 @@
{{- if .Values.haproxy.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "redis-ha.fullname" . }}-haproxy
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
component: {{ template "redis-ha.fullname" . }}-haproxy
annotations:
{{- if .Values.haproxy.service.annotations }}
{{ toYaml .Values.haproxy.service.annotations | indent 4 }}
{{- end }}
spec:
type: {{ default "ClusterIP" .Values.haproxy.service.type }}
{{- if and (eq .Values.haproxy.service.type "LoadBalancer") .Values.haproxy.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.haproxy.service.loadBalancerIP }}
{{- end }}
ports:
- name: haproxy
port: {{ .Values.redis.port }}
protocol: TCP
targetPort: redis
{{- if and (eq .Values.haproxy.service.type "NodePort") .Values.haproxy.service.nodePort }}
nodePort: {{ .Values.haproxy.service.nodePort }}
{{- end }}
{{- if .Values.haproxy.readOnly.enabled }}
- name: haproxyreadonly
port: {{ .Values.haproxy.readOnly.port }}
protocol: TCP
targetPort: {{ .Values.haproxy.readOnly.port }}
{{- end }}
{{- if .Values.haproxy.metrics.enabled }}
- name: {{ .Values.haproxy.metrics.portName }}
port: {{ .Values.haproxy.metrics.port }}
protocol: TCP
targetPort: metrics-port
{{- end }}
selector:
release: {{ .Release.Name }}
app: {{ template "redis-ha.name" . }}-haproxy
{{- end }}


@ -0,0 +1,12 @@
{{- if and .Values.haproxy.serviceAccount.create .Values.haproxy.enabled }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "redis-ha.serviceAccountName" . }}-haproxy
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "redis-ha.fullname" . }}
{{- end }}


@ -0,0 +1,34 @@
{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) ( .Values.haproxy.metrics.serviceMonitor.enabled ) ( .Values.haproxy.metrics.enabled ) }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
{{- with .Values.haproxy.metrics.serviceMonitor.labels }}
labels: {{ toYaml . | nindent 4}}
{{- end }}
name: {{ template "redis-ha.fullname" . }}-haproxy
namespace: {{ .Release.Namespace }}
{{- if .Values.haproxy.metrics.serviceMonitor.namespace }}
namespace: {{ .Values.haproxy.metrics.serviceMonitor.namespace }}
{{- end }}
spec:
endpoints:
- targetPort: {{ .Values.haproxy.metrics.port }}
{{- if .Values.haproxy.metrics.serviceMonitor.interval }}
interval: {{ .Values.haproxy.metrics.serviceMonitor.interval }}
{{- end }}
{{- if .Values.haproxy.metrics.serviceMonitor.telemetryPath }}
path: {{ .Values.haproxy.metrics.serviceMonitor.telemetryPath }}
{{- end }}
{{- if .Values.haproxy.metrics.serviceMonitor.timeout }}
scrapeTimeout: {{ .Values.haproxy.metrics.serviceMonitor.timeout }}
{{- end }}
jobLabel: {{ template "redis-ha.fullname" . }}-haproxy
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
component: {{ template "redis-ha.fullname" . }}-haproxy
{{- end }}

View File

@ -0,0 +1,27 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ template "redis-ha.fullname" . }}-configmap-test
labels:
{{ include "labels.standard" . | indent 4 }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: check-init
image: koalaman/shellcheck:v0.5.0
args:
- --shell=sh
- /readonly-config/init.sh
volumeMounts:
- name: config
mountPath: /readonly-config
readOnly: true
{{- if .Values.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.imagePullSecrets | nindent 4 }}
{{- end }}
restartPolicy: Never
volumes:
- name: config
configMap:
name: {{ template "redis-ha.fullname" . }}-configmap

View File

@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ template "redis-ha.fullname" . }}-service-test
labels:
{{ include "labels.standard" . | indent 4 }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: "{{ .Release.Name }}-service-test"
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
command:
- sh
- -c
- redis-cli -h {{ template "redis-ha.fullname" . }} -p {{ .Values.redis.port }} info server
{{- if .Values.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.imagePullSecrets | nindent 4 }}
{{- end }}
restartPolicy: Never

View File

@ -0,0 +1,362 @@
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
repository: redis
tag: 5.0.6-alpine
pullPolicy: IfNotPresent
## Reference to one or more secrets to be used when pulling images
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## This imagePullSecrets is only for redis images
##
imagePullSecrets: []
# - name: "image-pull-secret"
## replicas number for each component
replicas: 3
## Kubernetes priorityClass name for the redis-ha-server pod
# priorityClassName: ""
## Custom labels for the redis pod
labels: {}
## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: true
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the redis-ha.fullname template
# name:
## Enables an HAProxy frontend for improved load balancing / Sentinel master support. It automatically proxies to the Redis master.
## Recommended for externally exposed Redis clusters.
## ref: https://cbonte.github.io/haproxy-dconv/1.9/intro.html
haproxy:
enabled: false
# Enable if you want a dedicated port in haproxy for redis-slaves
readOnly:
enabled: false
port: 6380
replicas: 3
image:
repository: haproxy
tag: 2.0.4
pullPolicy: IfNotPresent
## Reference to one or more secrets to be used when pulling images
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
imagePullSecrets: []
# - name: "image-pull-secret"
annotations: {}
resources: {}
emptyDir: {}
## Enable sticky sessions to Redis nodes via HAProxy
  ## Very useful for long-lived connections, as in the case of Sentry, for example.
stickyBalancing: false
## Kubernetes priorityClass name for the haproxy pod
# priorityClassName: ""
## Service type for HAProxy
##
service:
type: ClusterIP
loadBalancerIP:
annotations: {}
serviceAccount:
create: true
## Official HAProxy embedded prometheus metrics settings.
## Ref: https://github.com/haproxy/haproxy/tree/master/contrib/prometheus-exporter
##
metrics:
enabled: false
# prometheus port & scrape path
port: 9101
portName: exporter-port
scrapePath: /metrics
serviceMonitor:
      # When set to true, a ServiceMonitor is used to configure scraping
      enabled: false
      # Set the namespace in which the ServiceMonitor should be deployed
      # namespace: monitoring
      # Set how frequently Prometheus should scrape
      # interval: 30s
      # Set path to the exporter telemetry-path
# telemetryPath: /metrics
# Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator
# labels: {}
# Set timeout for scrape
# timeout: 10s
init:
resources: {}
timeout:
connect: 4s
server: 30s
client: 30s
check: 2s
securityContext:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
## Whether the haproxy pods should be forced to run on separate nodes.
hardAntiAffinity: true
## Additional affinities to add to the haproxy pods.
additionalAffinities: {}
## Override all other affinity settings for the haproxy pods with a string.
affinity: |
## Custom config-haproxy.cfg files used to override default settings. If this file is
## specified then the config-haproxy.cfg above will be ignored.
# customConfig: |-
# Define configuration here
## Place any additional configuration section to add to the default config-haproxy.cfg
# extraConfig: |-
# Define configuration here
## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
create: true
sysctlImage:
enabled: false
command: []
registry: docker.io
repository: busybox
tag: 1.31.1
pullPolicy: Always
mountHostSys: false
resources: {}
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Redis specific configuration options
redis:
port: 6379
  masterGroupName: "mymaster" # must match ^[\\w-\\.]+$ and can be templated
config:
## Additional redis conf options can be added below
## For all available options see http://download.redis.io/redis-stable/redis.conf
min-replicas-to-write: 1
min-replicas-max-lag: 5 # Value in seconds
maxmemory: "0" # Max memory to use for each redis instance. Default is unlimited.
maxmemory-policy: "volatile-lru" # Max memory policy to use for each redis instance. Default is volatile-lru.
# Determines if scheduled RDB backups are created. Default is false.
# Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
save: "900 1"
# When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
repl-diskless-sync: "yes"
rdbcompression: "yes"
rdbchecksum: "yes"
## Custom redis.conf files used to override default settings. If this file is
## specified then the redis.config above will be ignored.
# customConfig: |-
# Define configuration here
resources: {}
# requests:
# memory: 200Mi
# cpu: 100m
# limits:
# memory: 700Mi
## Sentinel specific configuration options
sentinel:
port: 26379
quorum: 2
config:
## Additional sentinel conf options can be added below. Only options that
    ## are expressed in a format similar to 'sentinel xxx mymaster xxx' will
    ## be properly templated, except for the maxclients option.
## For available options see http://download.redis.io/redis-stable/sentinel.conf
down-after-milliseconds: 10000
## Failover timeout value in milliseconds
failover-timeout: 180000
parallel-syncs: 5
maxclients: 10000
## Custom sentinel.conf files used to override default settings. If this file is
## specified then the sentinel.config above will be ignored.
# customConfig: |-
# Define configuration here
resources: {}
# requests:
# memory: 200Mi
# cpu: 100m
# limits:
# memory: 200Mi
securityContext:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
nodeSelector: {}
## Whether the Redis server pods should be forced to run on separate nodes.
## This is accomplished by setting their AntiAffinity with requiredDuringSchedulingIgnoredDuringExecution as opposed to preferred.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature
##
hardAntiAffinity: true
## Additional affinities to add to the Redis server pods.
##
## Example:
## nodeAffinity:
## preferredDuringSchedulingIgnoredDuringExecution:
## - weight: 50
## preference:
## matchExpressions:
## - key: spot
## operator: NotIn
## values:
## - "true"
##
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
additionalAffinities: {}
## Override all other affinity settings for the Redis server pods with a string.
##
## Example:
## affinity: |
## podAntiAffinity:
## requiredDuringSchedulingIgnoredDuringExecution:
## - labelSelector:
## matchLabels:
## app: {{ template "redis-ha.name" . }}
## release: {{ .Release.Name }}
## topologyKey: kubernetes.io/hostname
## preferredDuringSchedulingIgnoredDuringExecution:
## - weight: 100
## podAffinityTerm:
## labelSelector:
## matchLabels:
## app: {{ template "redis-ha.name" . }}
## release: {{ .Release.Name }}
## topologyKey: failure-domain.beta.kubernetes.io/zone
##
affinity: |
# Prometheus exporter specific configuration options
exporter:
enabled: false
image: oliver006/redis_exporter
tag: v1.3.2
pullPolicy: IfNotPresent
# prometheus port & scrape path
port: 9121
scrapePath: /metrics
# cpu/memory resource limits/requests
resources: {}
# Additional args for redis exporter
extraArgs: {}
  # Used to mount a Lua script via a ConfigMap and use it for metrics collection
# script: |
# -- Example script copied from: https://github.com/oliver006/redis_exporter/blob/master/contrib/sample_collect_script.lua
# -- Example collect script for -script option
# -- This returns a Lua table with alternating keys and values.
# -- Both keys and values must be strings, similar to a HGETALL result.
# -- More info about Redis Lua scripting: https://redis.io/commands/eval
#
# local result = {}
#
# -- Add all keys and values from some hash in db 5
# redis.call("SELECT", 5)
# local r = redis.call("HGETALL", "some-hash-with-stats")
# if r ~= nil then
# for _,v in ipairs(r) do
# table.insert(result, v) -- alternating keys and values
# end
# end
#
# -- Set foo to 42
# table.insert(result, "foo")
# table.insert(result, "42") -- note the string, use tostring() if needed
#
# return result
serviceMonitor:
    # When set to true, a ServiceMonitor is used to configure scraping
    enabled: false
    # Set the namespace in which the ServiceMonitor should be deployed
    # namespace: monitoring
    # Set how frequently Prometheus should scrape
    # interval: 30s
    # Set path to the redis-exporter telemetry-path
# telemetryPath: /metrics
# Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator
# labels: {}
# Set timeout for scrape
# timeout: 10s
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 1
## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:
## Use existing secret containing key `authKey` (ignores redisPassword)
# existingSecret:
## Defines the key holding the redis password in existing secret.
authKey: auth
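# An illustrative flow using an existing secret (the secret name below is hypothetical):
#   kubectl create secret generic redis-auth --from-literal=auth=<password>
#   then set auth: true and existingSecret: redis-auth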
persistentVolume:
enabled: true
## redis-ha data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessModes:
- ReadWriteOnce
size: 10Gi
annotations: {}
# reclaimPolicy per https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
reclaimPolicy: ""
init:
resources: {}
# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
## path is evaluated as template so placeholders are replaced
# path: "/data/{{ .Release.Name }}"
# if chown is true, an init-container with root permissions is launched to
# change the owner of the hostPath folder to the user defined in the
# security context
chown: true
emptyDir: {}

View File

@ -0,0 +1,6 @@
dependencies:
- name: redis-ha
repository: https://kubernetes-charts.storage.googleapis.com
version: 4.4.2
digest: sha256:70fdd035c3aa3b7185882f12a73143c58ab32f04262dda2cf34a2b1a52116d96
generated: "2020-03-29T14:37:59.349371452+01:00"

View File

@ -0,0 +1,5 @@
dependencies:
- name: redis-ha
version: 4.4.2
repository: https://kubernetes-charts.storage.googleapis.com
condition: redis-ha.enabled
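# To re-vendor the subchart locally one could run (illustrative): helm dependency update charts/argo-cd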

View File

@ -42,8 +42,15 @@ Create dex name and version as used by the chart label.
Create redis name and version as used by the chart label.
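When the redis-ha subchart is enabled together with its HAProxy frontend, this resolves to the
"<release>-redis-ha-haproxy" Service created by the subchart; otherwise it falls back to the
chart's own single-instance Redis service name.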
*/}}
{{- define "argo-cd.redis.fullname" -}}
{{ $redisHa := (index .Values "redis-ha") }}
{{- if $redisHa.enabled -}}
{{- if $redisHa.haproxy.enabled -}}
{{- printf "%s-redis-ha-haproxy" .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- else -}}
{{- printf "%s-%s" (include "argo-cd.fullname" .) .Values.redis.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Create argocd server name and version as used by the chart label.

View File

@ -1,4 +1,4 @@
{{- $redisHa := (index .Values "redis-ha") -}}
apiVersion: apps/v1
kind: Deployment
metadata:
@ -56,7 +56,7 @@ spec:
- {{ template "argo-cd.repoServer.fullname" . }}:{{ .Values.repoServer.service.port }}
- --loglevel
- {{ .Values.controller.logLevel }}
{{- if .Values.redis.enabled }}
{{- if or (and .Values.redis.enabled (not $redisHa.enabled)) (and $redisHa.enabled $redisHa.haproxy.enabled) }}
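{{- /* --redis is only passed when the bundled Redis is active (and HA is off) or when the redis-ha subchart exposes HAProxy */}}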
- --redis
- {{ template "argo-cd.redis.fullname" . }}:{{ .Values.redis.servicePort }}
{{- end }}

View File

@ -1,3 +1,4 @@
{{- $redisHa := (index .Values "redis-ha") -}}
apiVersion: apps/v1
kind: Deployment
metadata:
@ -52,7 +53,7 @@ spec:
imagePullPolicy: {{ default .Values.global.image.imagePullPolicy .Values.repoServer.image.imagePullPolicy }}
command:
- argocd-repo-server
{{- if .Values.redis.enabled }}
{{- if or (and .Values.redis.enabled (not $redisHa.enabled)) (and $redisHa.enabled $redisHa.haproxy.enabled) }}
- --redis
- {{ template "argo-cd.redis.fullname" . }}:{{ .Values.redis.servicePort }}
{{- end }}

View File

@ -1,3 +1,4 @@
{{- $redisHa := (index .Values "redis-ha") -}}
apiVersion: apps/v1
kind: Deployment
metadata:
@ -62,7 +63,7 @@ spec:
{{- end }}
- --loglevel
- {{ .Values.server.logLevel }}
{{- if .Values.redis.enabled }}
{{- if or (and .Values.redis.enabled (not $redisHa.enabled)) (and $redisHa.enabled $redisHa.haproxy.enabled) }}
- --redis
- {{ template "argo-cd.redis.fullname" . }}:{{ .Values.redis.servicePort }}
{{- end }}

View File

@ -1,4 +1,5 @@
{{- if .Values.redis.enabled }}
{{- $redisHa := (index .Values "redis-ha") -}}
{{- if and .Values.redis.enabled (not $redisHa.enabled) -}}
apiVersion: apps/v1
kind: Deployment
metadata:

View File

@ -1,4 +1,5 @@
{{- if .Values.redis.enabled }}
{{- $redisHa := (index .Values "redis-ha") -}}
{{- if and .Values.redis.enabled (not $redisHa.enabled) -}}
apiVersion: v1
kind: Service
metadata:

View File

@ -286,6 +286,24 @@ redis:
volumeMounts: []
volumes: []
# This key configures the Redis-HA subchart; when it is enabled (redis-ha.enabled=true)
# the chart's own single-instance Redis deployment is omitted. See the example below.
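# A hypothetical enablement example (release and repository names are illustrative):
#   helm upgrade --install argocd argo/argo-cd --set redis-ha.enabled=true
# With the defaults below this also deploys HAProxy, which the Argo CD components then use
# as their Redis endpoint instead of the bundled single-instance Redis.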
redis-ha:
enabled: false
# Check the redis-ha chart for more properties
exporter:
enabled: true
persistentVolume:
enabled: false
redis:
masterGroupName: argocd
config:
save: "\"\""
haproxy:
enabled: true
metrics:
enabled: true
## Server
server:
name: server

Binary file not shown.

Binary file not shown.

View File

@ -5,14 +5,15 @@ SRCROOT="$(cd "$(dirname "$0")/.." && pwd)"
for dir in $(find $SRCROOT/charts -mindepth 1 -maxdepth 1 -type d);
do
name=$(basename $dir)
echo "Running Helm linting for $name"
docker run \
-v "$SRCROOT:/workdir" \
gcr.io/kubernetes-charts-ci/test-image:v3.0.1 \
ct \
lint \
--config .circleci/chart-testing.yaml \
--lint-conf .circleci/lintconf.yaml \
--charts "/workdir/charts/${name}"
rm -rf $dir/charts
name=$(basename $dir)
echo "Running Helm linting for $name"
docker run \
-v "$SRCROOT:/workdir" \
gcr.io/kubernetes-charts-ci/test-image:v3.1.0 \
ct \
lint \
--config .circleci/chart-testing.yaml \
--lint-conf .circleci/lintconf.yaml \
--charts "/workdir/charts/${name}"
done

View File

@ -6,11 +6,32 @@ GIT_PUSH=${GIT_PUSH:-false}
rm -rf $SRCROOT/output && git clone -b gh-pages git@github.com:argoproj/argo-helm.git $SRCROOT/output
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add argoproj https://argoproj.github.io/argo-helm
for dir in $(find $SRCROOT/charts -mindepth 1 -maxdepth 1 -type d);
do
echo "Processing $dir"
helm package $dir
rm -rf $dir/charts
name=$(basename $dir)
if [ $(helm dep list $dir 2>/dev/null| wc -l) -gt 1 ]
then
# Bug with Helm subcharts with hyphen on them
# https://github.com/argoproj/argo-helm/pull/270#issuecomment-608695684
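# Workaround: restore the vendored charts/ directory (removed above) from git before "helm dep build" runs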
if [ "$name" == "argo-cd" ]
then
echo "Restore ArgoCD RedisHA subchart"
git checkout $dir
fi
echo "Processing chart dependencies"
helm --debug dep build $dir
fi
echo "Processing $dir"
helm --debug package $dir
done
cp $SRCROOT/*.tgz output/
cd $SRCROOT/output && helm repo index .