Compare commits


116 Commits

Author SHA1 Message Date
Steven Kang 47cac8b428
chore: version bump 2.33.1 (#188) 2025-08-27 17:39:38 +12:00
samdulam 3fa625a897
Release 2.33.0 (#187) 2025-08-20 10:50:41 +05:30
Yajith Dayarathna 7e6f805da9
Merge pull request #181 from portainer/chore/workflow-permissions-pull-requests
adjusting workflow permissions - pull-requests: write
2025-07-03 08:23:29 +12:00
Yajith Dayarathna afb12adcb6
adjusting workflow permissions 2025-07-03 08:21:35 +12:00
Yajith Dayarathna 00322e4686
Merge pull request #180 from portainer/chore/workflow-permissions
adjusting workflow permissions - 2
2025-07-03 08:15:44 +12:00
Yajith Dayarathna 66ca5a437b
Merge branch 'master' into chore/workflow-permissions 2025-07-03 08:13:42 +12:00
Yajith Dayarathna c481b0a01a
adjusting workflow permissions 2025-07-03 08:12:58 +12:00
Yajith Dayarathna fd81adc8ec
Merge pull request #179 from portainer/chore/workflow-permissions
adjusting workflow permissions
2025-07-03 08:09:41 +12:00
Yajith Dayarathna 46e3aa61ed
workflow update 2025-07-03 08:01:44 +12:00
samdulam 55aea28d1c
2.27.9 with trusted-origins added (#178)
* 2.27.9 with trusted-origins added

* typo fix
2025-07-02 20:43:19 +12:00
samdulam aa64a6225b
Release 2.27.8 (#177) 2025-06-25 14:29:03 +12:00
samdulam b36c9d2e86
release 2.27.7 (#175) 2025-06-17 19:36:21 +12:00
samdulam c3af48aa52
updates for release 2.27.6 (#171) 2025-05-09 07:15:43 +05:30
samdulam 6f039e99d8
2.27.5 Release Update (#170) 2025-05-02 10:42:14 +05:30
samdulam bc906681d9
release update (#167) 2025-04-15 12:22:03 +05:30
samdulam 7d76af4fc5
release 2.27.3 (#166) 2025-03-25 08:33:39 +05:30
samdulam c425562ecb
release 2.27.2 (#165) 2025-03-19 09:53:27 +05:30
samdulam 0e14d24fe8
2.27.1 Release + Ci Workflow Updates (#164)
* release 2.27.0

* 2.27.1 Release + ci workflow updates

* add helm version

* update actions

* update workflow

* change helm version to be compatible
2025-02-27 11:44:45 +05:30
samdulam 70eac4ed30
release 2.27.0 (#163) 2025-02-20 10:46:38 +05:30
samdulam c935295160
Release 2.21.5 (#158)
* Release 2.21.5

* python version fix
2024-12-20 14:25:59 +13:00
mwoudstra 962188051e
Add option to set tolerations (#155)
* Add option to set tolerations

* Bump chart version
2024-11-07 10:24:41 +05:30
samdulam aba1aa8f56
2.21.4 release (#156) 2024-10-25 08:33:01 +05:30
samdulam e0206df0d2
bump chart version (#153) 2024-10-18 15:09:14 +05:30
eMagiz a0248dda9f
Optional field for rbac resources (#151)
* Optional field for rbac resources

* Feedback changes

* Feedback II

* rbac to localMgmt in the values file

---------

Co-authored-by: Omar Gadelmawla <o.gadelmawla@emagiz.com>
2024-10-18 19:25:33 +13:00
James Carppe 5087dd9170
Update Helm chart for 2.21.3 (#152) 2024-10-08 17:21:35 +13:00
James Carppe 4272809504
Update for Release 2.21.2 and kind cluster version upgrade for test (#149)
Co-authored-by: samdulam <sam.dulam@portainer.io>
2024-09-24 14:02:48 +12:00
James Carppe c3a4bb19c5
Update for 2.21.1 (#148) 2024-09-10 08:02:05 +05:30
samdulam b9723d814d
Update for 2.21.0 (#147) 2024-08-27 07:53:38 +05:30
James Carppe 70d24842cb
Version bump for 2.19.5 (#143) 2024-04-22 10:04:47 +05:30
samdulam 6f34933803
add periodSeconds:30 to liveness and readiness probes (#142)
* add periodSeconds:30 to liveness and readiness probes to stop pod from restarting before its ready.

* change feature flags to an array and adjust the template

* change feature.flags to list

* fix template

* use range instead of toyaml so we can use squote

* increase probe times to 5
2024-04-08 15:28:19 +05:30
James Carppe 4923718d73
Version bump for 2.19.4 (#140) 2023-12-06 12:05:47 +13:00
James Carppe c90ad06472
Version bump for 2.19.3 (#139) 2023-11-22 12:00:37 +13:00
James Carppe 5c2ea01097
Version bump for 2.19.2 (#138) 2023-11-13 07:13:12 +05:30
samdulam 5f6fc03ec0
Version bump for 2.19.1 (#136) 2023-09-20 07:51:14 +05:30
samdulam e92f0da498
2.19.0 Release update (#135)
* 2.19.0 Release

* remove deprecated ways of specifying storage class @pchang388

* storageclassname to be only used when a storageclass is specified
2023-08-31 10:43:38 +05:30
Nicholas Malcolm 582a6f376f
Fix some whitepsace and formatting issues (#123)
Co-authored-by: samdulam <sam.dulam@portainer.io>
2023-08-31 09:57:53 +05:30
samdulam 50b1ba3f55
2.18.4 Release (#132) 2023-07-07 12:14:46 +12:00
samdulam 310d8f757e
2.18.3 (#130) 2023-05-22 16:50:16 +05:30
Steven Kang 9b1309fd80
feat(helm): update to `2.18.2` (#129) 2023-05-01 12:13:28 +12:00
samdulam edf9ad7fbe
2.18.1 with mtls and updated test workflow (#128)
* 2.18.1 with mtls

* update tests
2023-04-18 22:59:30 +12:00
James Carppe 6555aa9ac3
Update version to 2.17.1 (#126) 2023-02-22 15:27:48 +13:00
samdulam 8adaa4d4d0
update github workflow, add k8s 1.23/4/5 (#125)
* update and bump for 2.17

* Update testing, add 1.23/4/5
2023-02-07 07:43:44 +05:30
samdulam 8f9fd0bf4f
update and bump for 2.17 (#124) 2023-02-07 14:21:38 +13:00
samdulam 9dfed2a871
2.16.2 (#121) 2022-11-21 08:57:49 +05:30
James Carppe dd5a8e0ae0
Update for 2.16.1 (#120) 2022-11-09 18:06:35 +13:00
samdulam 1f9740d078
Update for 2.16 (#119) 2022-10-31 08:00:12 +05:30
James Carppe 6dcd5bd9e9
Update version to 2.15.1 (#118) 2022-09-16 12:55:58 +12:00
samdulam 136d8d7ec3
release 2.15 (#117) 2022-09-06 11:45:46 +12:00
samdulam 389ee1a163
Updates for 2.14.2 release (#116) 2022-07-26 22:09:12 +05:30
James Carppe e783f7b498
Update to 2.14.1 and chart bump (#115) 2022-07-12 14:15:06 +12:00
samdulam a45f047a03
Update to 2.14.0 and chart bump (#114) 2022-06-28 09:39:56 +05:30
samdulam f96411134a
2.13.1 Update (#112) 2022-05-12 15:43:22 +12:00
Steven Kang d230f5ddb2
Make Persistency Optional (#110) 2022-05-12 09:16:20 +12:00
samdulam 1d0aa74dcd
Chart and Manifest Update for 2.13 ce/ee (#111) 2022-05-09 21:51:46 +12:00
samdulam 046a02d6c2
Chart upgrade for EE Release 2.12.2 (#109)
* EE 2.12.0 Release Updates

* manifest updates

* Updates for 2.12.1

* update for 2.12.2
2022-04-04 11:09:45 +12:00
Oscar Zhou 1d3bd8b979
fix(k8s/helm): add semantic version string check (#108) 2022-03-29 14:58:11 +13:00
Oscar Zhou e7aa7b564b
fix(k8s/helm): change to https only causing service crash with helm install (#101) 2022-03-29 10:27:37 +13:00
samdulam 645923289e
Rel2.12.0 (#102)
* EE 2.12.0 Release Updates

* manifest updates

* Updates for 2.12.1
2022-03-15 10:40:10 +13:00
samdulam eb469ca85a
Update Chart and manifests for release EE-2.12.0 (#100)
* EE 2.12.0 Release Updates

* manifest updates
2022-03-09 02:24:55 +13:00
Steven Kang 607750b973
Removing CODEOWNERS 2022-02-19 10:53:59 +13:00
James Carppe f32ffbbf68
Bump version to 2.11.1 (#96) 2022-02-08 11:44:59 +13:00
samdulam 33666e00e8
Fix ingress 1.2x (#91)
* add condition for 1.21>=

* fix ingress object for v1

* fix

* https by default

* chart version bump

* 9443 only if tls.force is true

* typo

* typo

* fix indent

* lint

* tlsforced var

* fix version conditions

* typo

* typo

* typo

* typo

* remove extra end statement

* feat(helm): improved `ingress` to support a various of Kubernetes versions and added `ingressClass` support

* Update values.yaml

Co-authored-by: Steven Kang <stevenk@Stevens-MacBook-Pro.local>
2022-01-24 15:35:14 +13:00
Jevon Tane fb6da8e019
Merge pull request #85 from portainer/feat/ptd272/add-feature-flag
Introduce Portainer Feature Flag
2021-12-16 17:18:52 +13:00
Steven Kang b2ccdc7b4d Merge branch 'feat/ptd272/add-feature-flag' of https://github.com/portainer/k8s into feat/ptd272/add-feature-flag 2021-12-13 16:24:02 +13:00
Steven Kang 6e3cbc55c9 feat(helm): introduced `feature.flags` - `PTD-272` 2021-12-13 16:23:59 +13:00
Steven Kang 1e1a3693cd feat(helm): introduced `feature.flags` - `PTD-272` 2021-12-13 16:23:59 +13:00
samdulam 621b722666
Rel2.11 - Manifest Updates (#82)
* Release CE 2.11 Manifest Updates

* Manifest Updates for CE2.11

Co-authored-by: yi-portainer <yi.chen@portainer.io>
2021-12-09 13:42:42 +13:00
Steven Kang 6eecb3f387 feat(helm): introduced `feature.flags` - `PTD-272` 2021-12-01 17:26:36 +13:00
Steven Kang 834e513ef7 feat(helm): introduced `feature.flags` - `PTD-272` 2021-12-01 17:25:26 +13:00
Steven Kang cba266202b
feat(helm): introduce TLS only flag (#81)
Co-authored-by: samdulam <sam.dulam@portainer.io>
Co-authored-by: ssbkang <steven.kang@portainer.io>
2021-12-01 14:01:05 +13:00
samdulam 6e01446bc0
Update portainer-agent-edge-k8s.yaml (#80)
* Update portainer-agent-edge-k8s.yaml

* Update portainer-agent-k8s-lb.yaml

* Update portainer-agent-k8s-nodeport.yaml

* Release 2.9.3 Update
2021-11-22 11:33:19 +13:00
samdulam 56ee20b679
Manifest and Helm Updates for EE-2.10.0 (#78)
* Manifest and Helm Updates for EE-2.10.0

* Create portainer-agent-ee210-k8s-nodeport.yaml

* updates for ee2.10

* manifest files

Co-authored-by: Ubuntu <ubuntu@ip-172-31-17-39.ec2.internal>
2021-11-16 09:02:38 +13:00
Anthony Lapenna f6ca6c01b4
Fix EE manifests and Helm deployment template (#76)
* Fix EE manifests and Helm deployment template

* bump chart version

* fix invalid condition for the exposed ports

* add condition to only publish 9443 if ce

Co-authored-by: Sam Dulam <Sam.Dulam@portainer.io>
2021-10-12 10:57:06 +13:00
samdulam 78294f83dc
fix ee manifests and update for ce 2.9.1 (#75) 2021-10-12 08:48:55 +13:00
samdulam 0190fa934f
Merge pull request #73 from portainer/add-ssl
Update chart to support BYO SSL certificates
2021-09-27 15:23:50 +13:00
samdulam a158f5557a
Update on-push-lint-charts.yml
change uses: helm/kind-action@v1.1.0 to 1.2.0
2021-09-27 15:16:28 +13:00
David Young 41f944d116
Bump chart version for ssl changes
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-09-16 17:08:53 +12:00
David Young d62f43b5a1
Update httpsNodePort to 30779
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-09-16 17:08:24 +12:00
David Young 026f1c3dea
Only override ssl cert/key path if using existing cert
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-09-13 10:29:51 +12:00
David Young ce1dfc6b23
Switch probes to HTTPS scheme
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-09-13 09:47:19 +12:00
David Young facec87b81
First cut at chart supporting SSL
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-08-19 13:24:52 +12:00
David Young 143789a0bd
Add basic CODEOWNERS
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-08-12 10:36:31 +12:00
samdulam 85dba3221b
change versions in manifest files (#70)
* bump chart ver, agent tag in manifests for ce 2.6

* EE 2.7.0 and chart version bump

* Change Versions in manifest files for 2.7
2021-08-02 12:15:25 +12:00
samdulam 3ee87c0b99
Prep Chart for EE2.7 (#68)
* bump chart ver, agent tag in manifests for ce 2.6

* EE 2.7.0 and chart version bump
2021-07-29 18:29:38 +12:00
Stéphane Busso 4158132537
Merge pull request #60 from portainer/feat/EE-332/EE-562/edge-insecure-poll
feat(agent): support insecure poll flag
2021-07-15 17:17:24 +12:00
Chaim Lev-Ari 5f6b237169 feat: update to use the same filenames 2021-07-14 14:10:16 +03:00
Chaim Lev-Ari f3f7c426aa refactor: rename edge agent scripts 2021-07-14 13:31:42 +03:00
Chaim Lev-Ari 5dc1b40490 fix(agent): change version of agent 2021-07-14 13:31:42 +03:00
Chaim Lev-Ari 05ff2ecf3a feat(agent): add version number to files 2021-07-14 13:31:42 +03:00
Chaim Lev-Ari 0d72ea6b65 feat(agent): version files 2021-07-14 13:31:42 +03:00
Chaim Lev-Ari 810891be98 feat(agent): support insecure poll flag 2021-07-14 13:31:42 +03:00
samdulam 33e1410f50
bump chart ver, agent tag in manifests for ce 2.6 (#65) 2021-06-25 14:17:24 +12:00
Neil Cresswell 52d67fec4e
Update portainer-agent-edge-k8s.yaml 2021-05-24 17:52:44 +12:00
Yi Chen f59a62efda
Update ee version to 2.4.0 (#61)
* * update ee version to 2.4.0
* use file versioning for ee agent
* update ee agent version to 2.4.0

* * fix stack ee agent versions
2021-04-30 19:47:20 +12:00
samdulam 52a05b429f
Update Notes and Bump Chart Ver (#57) 2021-03-19 15:00:06 +13:00
Yi Chen 75ae994f57
* update chart to use ee 2.0.2 (#55) 2021-03-12 11:02:18 +13:00
David Young cdeaec80b2
Update EE in chart to 2.0.1 - Fixes #52 (#54)
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-02-23 20:48:36 +13:00
David Young aebd5bb518
Include namespace in manifests (#50) 2021-02-09 11:21:36 +13:00
David Young b2e9d8757e
Update appVersion to ce-latest-ee-2.0.0, add storageClass to values.yaml (#45) 2021-02-09 10:24:33 +13:00
David Young 7a63c28282
Improve CI testing (#47) 2021-02-03 20:53:05 +13:00
David Young bc786dfcdd
Bump chart appversion to 2.0.1 (#38)
Fixes #37

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-01-19 08:50:30 +13:00
David Young 28d8254051
Add option for nodeSelector to chart/README (#35) 2020-12-09 09:44:30 +13:00
David Young 6f6a011d4c
Merge branch 'gh-pages' into master 2020-12-07 14:25:18 +13:00
David Young 2d2a8d015e
Add default GH token to auto-pr workflow (#33) 2020-12-07 14:21:11 +13:00
David Young 4a739a9d0e
Update GH action to auto-PR deployment manifest changes to gh-pages branch (#32) 2020-12-07 14:18:09 +13:00
David Young c5c90f06fa
Fix misspelling of "Enterprise" (#31) 2020-12-02 17:32:39 +13:00
funkypenguin d94ebbb12b Update index.yaml
Signed-off-by: funkypenguin <funkypenguin@users.noreply.github.com>
2020-12-01 21:51:37 +00:00
funkypenguin f15dd4cafa Update index.yaml
Signed-off-by: funkypenguin <funkypenguin@users.noreply.github.com>
2020-12-01 21:44:52 +00:00
funkypenguin 673134dc4c Update index.yaml
Signed-off-by: funkypenguin <funkypenguin@users.noreply.github.com>
2020-11-17 22:22:39 +00:00
funkypenguin f8198415db Update index.yaml
Signed-off-by: funkypenguin <funkypenguin@users.noreply.github.com>
2020-09-29 23:54:51 +00:00
funkypenguin 527a7355fa Update index.yaml
Signed-off-by: funkypenguin <funkypenguin@users.noreply.github.com>
2020-08-31 03:07:15 +00:00
David Young f24d74c2da
Merge branch 'master' into gh-pages 2020-08-28 16:33:35 +12:00
funkypenguin 4d6e51b5dc Update index.yaml
Signed-off-by: funkypenguin <funkypenguin@users.noreply.github.com>
2020-08-28 03:09:57 +00:00
funkypenguin 9f47de4cac Update index.yaml
Signed-off-by: funkypenguin <funkypenguin@users.noreply.github.com>
2020-08-28 02:16:26 +00:00
David Young dc33e42cbd
Merge manifests and friends to the gh-pages branch (#4)
Co-authored-by: Anthony Lapenna <anthony.lapenna@portainer.io>
Co-authored-by: Anthony Lapenna <lapenna.anthony@gmail.com>
2020-08-28 10:13:06 +12:00
funkypenguin 904ecce9a3 Update index.yaml
Signed-off-by: funkypenguin <funkypenguin@users.noreply.github.com>
2020-08-19 00:00:46 +00:00
62 changed files with 2339 additions and 240 deletions


@@ -1,2 +1,2 @@
# This file defines the config for "ct" (chart tester) used by the helm linting GitHub workflow
lint-conf: .ci/lint-config.yaml


@@ -16,7 +16,7 @@
# 3. Remove references to helm in rendered manifests (no point attaching a label like "app.kubernetes.io/managed-by: Helm" if we are not!)
# Create nodeport manifest for ce
helm install --no-hooks --namespace zorgburger --set service.type=NodePort --set disableTest=true --dry-run zorgburger charts/portainer \
helm install --no-hooks --namespace zorgburger --set service.type=NodePort --set disableTest=true --set createNamespace=true --dry-run zorgburger charts/portainer \
| sed -n '1,/NOTES/p' | sed \$d \
| grep -vE 'NAME|LAST DEPLOYED|NAMESPACE|STATUS|REVISION|HOOKS|MANIFEST|TEST SUITE' \
| grep -iv helm \
@@ -26,7 +26,7 @@ helm install --no-hooks --namespace zorgburger --set service.type=NodePort --set
# Create lb manifest for ce
helm install --no-hooks --namespace zorgburger --set service.type=LoadBalancer --set disableTest=true --dry-run zorgburger charts/portainer \
helm install --no-hooks --namespace zorgburger --set service.type=LoadBalancer --set disableTest=true --set createNamespace=true --dry-run zorgburger charts/portainer \
| sed -n '1,/NOTES/p' | sed \$d \
| grep -vE 'NAME|LAST DEPLOYED|NAMESPACE|STATUS|REVISION|HOOKS|MANIFEST|TEST SUITE' \
| grep -iv helm \
@@ -35,7 +35,7 @@ helm install --no-hooks --namespace zorgburger --set service.type=LoadBalancer -
> deploy/manifests/portainer/portainer-lb.yaml
# Create nodeport manifest for ee
helm install --no-hooks --namespace zorgburger --set enterpriseEdition.enabled=true --set service.type=NodePort --set disableTest=true --dry-run zorgburger charts/portainer \
helm install --no-hooks --namespace zorgburger --set enterpriseEdition.enabled=true --set service.type=NodePort --set disableTest=true --set createNamespace=true --dry-run zorgburger charts/portainer \
| sed -n '1,/NOTES/p' | sed \$d \
| grep -vE 'NAME|LAST DEPLOYED|NAMESPACE|STATUS|REVISION|HOOKS|MANIFEST|TEST SUITE' \
| grep -iv helm \
@@ -44,10 +44,10 @@ helm install --no-hooks --namespace zorgburger --set enterpriseEdition.enabled=t
> deploy/manifests/portainer/portainer-ee.yaml
# Create lb manifest for ee
helm install --no-hooks --namespace zorgburger --set enterpriseEdition.enabled=true --set service.type=LoadBalancer --set disableTest=true --dry-run zorgburger charts/portainer \
helm install --no-hooks --namespace zorgburger --set enterpriseEdition.enabled=true --set service.type=LoadBalancer --set disableTest=true --set createNamespace=true --dry-run zorgburger charts/portainer \
| sed -n '1,/NOTES/p' | sed \$d \
| grep -vE 'NAME|LAST DEPLOYED|NAMESPACE|STATUS|REVISION|HOOKS|MANIFEST|TEST SUITE' \
| grep -iv helm \
| sed 's/zorgburger/portainer/' \
| sed 's/portainer-portainer/portainer/' \
> deploy/manifests/portainer/portainer-lb-ee.yaml
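The four `helm install --dry-run` invocations above all feed the same sed/grep pipeline, which strips helm's release metadata and rewrites the release name. A minimal sketch of that pipeline, run against a heredoc standing in for the dry-run output (this sample manifest content is hypothetical):

```shell
# Minimal sketch of the filtering stage; the heredoc stands in for real
# `helm install --dry-run` output (sample content is hypothetical).
cat <<'EOF' > /tmp/helm-dry-run.txt
NAME: zorgburger
LAST DEPLOYED: today
MANIFEST:
apiVersion: v1
kind: Service
metadata:
  name: zorgburger-portainer
NOTES:
thanks for installing
EOF

# Keep everything up to NOTES, drop the NOTES line itself, strip release
# metadata and helm references, then rename the release to "portainer".
sed -n '1,/NOTES/p' /tmp/helm-dry-run.txt | sed '$d' \
  | grep -vE 'NAME|LAST DEPLOYED|NAMESPACE|STATUS|REVISION|HOOKS|MANIFEST|TEST SUITE' \
  | grep -iv helm \
  | sed 's/zorgburger/portainer/' \
  | sed 's/portainer-portainer/portainer/' \
  > /tmp/portainer.yaml
cat /tmp/portainer.yaml
```

In the real script the input comes from `helm install --no-hooks … --dry-run` and the output lands under `deploy/manifests/portainer/`.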

.ci/scripts/local-ct-lint.sh Executable file → Normal file

.ci/scripts/local-kube-score.sh Executable file → Normal file


@@ -4,31 +4,104 @@ on:
push:
paths:
- 'charts/**'
- '.github/**'
pull_request:
branches:
- master
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.10.0
HELM_VERSION: v3.10.1
jobs:
lint-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v1
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v4.2.0
with:
version: ${{ env.HELM_VERSION }}
- name: Set up kube-score
run: |
wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
chmod 755 kube-score
- name: Kube-score generated manifests
run: helm template charts/* | ./kube-score score -
--ignore-test pod-networkpolicy
--ignore-test deployment-has-poddisruptionbudget
--ignore-test deployment-has-host-podantiaffinity
--ignore-test container-security-context
--ignore-test container-resources
--ignore-test pod-probes
--ignore-test container-image-tag
--enable-optional-test container-security-context-privileged
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v5.3.0
with:
python-version: 3.13.1
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.6.1
with:
version: v3.10.1
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --config .ci/ct-config.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
fi
- name: Run chart-testing (lint)
id: lint
uses: helm/chart-testing-action@v1.0.0
with:
config: .ci/ct-config.yaml
command: lint
run: ct lint --config .ci/ct-config.yaml
- name: Create kind cluster
uses: helm/kind-action@v1.0.0
# Refer to https://github.com/kubernetes-sigs/kind/releases when updating the node_images
- name: Create 1.29 kind cluster
uses: helm/kind-action@v1.12.0
with:
install_local_path_provisioner: true
# Only build a kind cluster if there are chart changes to test.
if: steps.lint.outputs.changed == 'true'
node_image: kindest/node:v1.29.14@sha256:8703bd94ee24e51b778d5556ae310c6c0fa67d761fae6379c8e0bb480e6fea29
cluster_name: kubernetes-1.29
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (install)
uses: helm/chart-testing-action@v1.0.0
- name: Run chart-testing (install) against 1.29
run: ct install --config .ci/ct-config.yaml
- name: Create 1.30 kind cluster
uses: helm/kind-action@v1.12.0
with:
command: install
config: .ci/ct-config.yaml
node_image: kindest/node:v1.30.10@sha256:4de75d0e82481ea846c0ed1de86328d821c1e6a6a91ac37bf804e5313670e507
cluster_name: kubernetes-1.30
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (install) against 1.30
run: ct install --config .ci/ct-config.yaml
- name: Create 1.31 kind cluster
uses: helm/kind-action@v1.12.0
with:
node_image: kindest/node:v1.31.6@sha256:28b7cbb993dfe093c76641a0c95807637213c9109b761f1d422c2400e22b8e87
cluster_name: kubernetes-1.31
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (install) against 1.31
run: ct install --config .ci/ct-config.yaml
- name: Create 1.32 kind cluster
uses: helm/kind-action@v1.12.0
with:
node_image: kindest/node:v1.32.2@sha256:f226345927d7e348497136874b6d207e0b32cc52154ad8323129352923a3142f
cluster_name: kubernetes-1.32
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (install) against 1.32
run: ct install --config .ci/ct-config.yaml
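The list-changed step above writes its flag with the deprecated `::set-output` workflow command. A sketch of the same logic using the newer `$GITHUB_OUTPUT` file mechanism; the `changed` value here is a stand-in for the real `ct list-changed --config .ci/ct-config.yaml` result:

```shell
# Sketch of the list-changed step using $GITHUB_OUTPUT instead of the
# deprecated ::set-output command. In CI, GITHUB_OUTPUT is provided by the
# runner; here we point it at a temp file so the logic can be exercised.
GITHUB_OUTPUT=/tmp/gh_output.txt
: > "$GITHUB_OUTPUT"
changed="charts/portainer"   # stand-in for: ct list-changed --config .ci/ct-config.yaml
if [ -n "$changed" ]; then
  echo "changed=true" >> "$GITHUB_OUTPUT"
fi
cat "$GITHUB_OUTPUT"
```

Downstream steps would then gate on `steps.list-changed.outputs.changed == 'true'` exactly as the kind-cluster steps above do.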


@@ -6,13 +6,16 @@ on:
- master
paths:
- 'charts/**'
- '.github/**'
- 'deploy/manifests/**'
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: write
pages: write
pull-requests: write
steps:
- uses: actions/checkout@v2
@@ -25,6 +28,13 @@ jobs:
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.0.0
uses: helm/chart-releaser-action@v1.1.0
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: sync gh-pages branch
uses: repo-sync/pull-request@v2
with:
destination_branch: "gh-pages"
github_token: "${{ secrets.GITHUB_TOKEN }}"
pr_allow_empty: false


@@ -17,12 +17,15 @@ This repo contains helm and YAML for deploying Portainer into a Kubernetes envir
- [Enterprise Edition](#enterprise-edition-1)
- [Using NodePort on a local/remote cluster](#using-nodeport-on-a-localremote-cluster-3)
- [Using a cloud provider's loadbalancer](#using-a-cloud-providers-loadbalancer-3)
- [Note re persisting data](#note-re-persisting-data)
# Deploying with Helm
Ensure you're using at least helm v3.2, which [includes support](https://github.com/helm/helm/pull/7648) for the `--create-namespace` argument.
Install the repository:
```
@@ -30,12 +33,6 @@ helm repo add portainer https://portainer.github.io/k8s/
helm repo update
```
Create the portainer namespace:
```
kubectl create namespace portainer
```
## Community Edition
Install the helm chart:
@@ -43,20 +40,22 @@ Install the helm chart:
### Using NodePort on a local/remote cluster
```
helm install -n portainer portainer portainer/portainer
helm install --create-namespace -n portainer portainer portainer/portainer
```
### Using a cloud provider's loadbalancer
```
helm install -n portainer portainer portainer/portainer --set service.type=LoadBalancer
helm install --create-namespace -n portainer portainer portainer/portainer \
--set service.type=LoadBalancer
```
### Using ClusterIP with an ingress
```
helm install -n portainer portainer portainer/portainer --set service.type=ClusterIP
helm install --create-namespace -n portainer portainer portainer/portainer \
--set service.type=ClusterIP
```
For advanced helm customization, see the [chart README](/charts/portainer/README.md)
@@ -66,48 +65,45 @@ For advanced helm customization, see the [chart README](/charts/portainer/README
### Using NodePort on a local/remote cluster
```
helm install --set entepriseEdition.enabled=true -n portainer portainer portainer/portainer
helm install --create-namespace -n portainer portainer portainer/portainer \
--set enterpriseEdition.enabled=true
```
### Using a cloud provider's loadbalancer
```
helm install --set entepriseEdition.enabled=true -n portainer portainer portainer/portainer --set service.type=LoadBalancer
helm install --create-namespace -n portainer portainer portainer/portainer \
--set enterpriseEdition.enabled=true \
--set service.type=LoadBalancer
```
### Using ClusterIP with an ingress
```
helm install --set entepriseEdition.enabled=true -n portainer portainer portainer/portainer --set service.type=ClusterIP
helm install --create-namespace -n portainer portainer portainer/portainer \
--set enterpriseEdition.enabled=true \
--set service.type=ClusterIP
```
For advanced helm customization, see the [chart README](/charts/portainer/README.md)
# Deploying with manifests
If you're not into helm, you can install Portainer using manifests, by first creating the portainer namespace:
```
kubectl create namespace portainer
```
And then...
If you're not using helm, you can install Portainer using manifests directly, as follows:
## Community Edition
### Using NodePort on a local/remote cluster
```
kubectl create namespace portainer
kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
```
### Using a cloud provider's loadbalancer
```
kubectl create namespace portainer
kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-lb.yaml
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-lb.yaml
```
## Enterprise Edition
@@ -115,13 +111,36 @@ kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/ma
### Using NodePort on a local/remote cluster
```
kubectl create namespace portainer
kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-ee.yaml
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-ee.yaml
```
### Using a cloud provider's loadbalancer
```
kubectl create namespace portainer
kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-lb-ee.yaml
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-lb-ee.yaml
```
# Note re persisting data
The charts/manifests will create a persistent volume for storing Portainer data, using the default StorageClass.
In some Kubernetes clusters (microk8s), the default Storage Class simply creates hostPath volumes, which are not explicitly tied to a particular node. In a multi-node cluster, this can create an issue when the pod is terminated and rescheduled on a different node, "leaving" all the persistent data behind and starting the pod with an "empty" volume.
While this behaviour is inherently a limitation of using hostPath volumes, a suitable workaround is to add a nodeSelector to the deployment, which effectively "pins" the portainer pod to a particular node.
The nodeSelector can be added in the following ways:
1. Edit your own values.yaml and set the value of nodeSelector like this:
```
nodeSelector:
kubernetes.io/hostname: <YOUR NODE NAME>
```
2. Explicitly set the target node when deploying/updating the helm chart on the CLI, by including `--set nodeSelector.kubernetes.io/hostname=<YOUR NODE NAME>`
3. If you've deployed Portainer via manifests, without Helm, run the following one-liner to "patch" the deployment, forcing the pod to always be scheduled on the node it's currently running on:
```
kubectl patch deployments -n portainer portainer -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "'$(kubectl get pods -n portainer -o jsonpath='{ ..nodeName }')'"}}}}}' || (echo Failed to identify current node of portainer pod; exit 1)
```
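The one-liner above packs the nodeSelector patch JSON inline with a command substitution. A sketch of how that patch body is assembled, with a hypothetical node name (`worker-1`) standing in for the live `kubectl get pods` lookup:

```shell
# Build the nodeSelector patch body without a live cluster.
# "worker-1" is a stand-in for:
#   $(kubectl get pods -n portainer -o jsonpath='{ ..nodeName }')
node="worker-1"
patch='{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "'$node'"}}}}}'
printf '%s\n' "$patch" | tee /tmp/nodeselector-patch.json
```

With a cluster available, the saved body could be applied via `kubectl patch deployments -n portainer portainer -p "$(cat /tmp/nodeselector-patch.json)"`.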


@@ -16,16 +16,16 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 1.0.6
version: 2.33.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 2.0.0
appVersion: ce-latest-ee-2.33.1
sources:
- https://github.com/portainer/k8s
maintainers:
- name: funkypenguin
email: davidy@funkypenguin.co.nz
url: https://www.funkypenguin.co.nz
- name: Portainer
email: platform-team@portainer.io
url: https://www.portainer.io
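The bump above changes `version` and `appVersion` in lockstep. A quick local sanity check (sketch) that the two fields stay in step, using a copy of the bumped values from the diff:

```shell
# Sketch: read back the bumped Chart.yaml fields and confirm the appVersion
# suffix matches the chart version (values copied from the diff above).
cat <<'EOF' > /tmp/Chart.yaml
apiVersion: v2
name: portainer
type: application
version: 2.33.1
appVersion: ce-latest-ee-2.33.1
EOF
chart_version=$(sed -n 's/^version: //p' /tmp/Chart.yaml)
app_version=$(sed -n 's/^appVersion: //p' /tmp/Chart.yaml)
case "$app_version" in
  *"$chart_version") echo "versions agree: $chart_version" ;;
  *) echo "version mismatch" >&2 ;;
esac
```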


@@ -61,27 +61,33 @@ The following table lists the configurable parameters of the Portainer chart and
| `image.tag` | Tag for the Portainer image | `latest` |
| `image.pullPolicy` | Portainer image pulling policy | `IfNotPresent` |
| `imagePullSecrets` | If Portainer image requires to be in a private repository | `nil` |
| `nodeSelector` | Used to apply a nodeSelector to the deployment | `{}` |
| `serviceAccount.annotations` | Annotations to add to the service account | `null` |
| `serviceAccount.name` | The name of the service account to use | `portainer-sa-clusteradmin` |
| `localMgmt` | Enables or disables the creation of the ServiceAccount and Roles in the local cluster where Portainer runs; only change this when you don't need to manage the local cluster through this Portainer instance | `true` |
| `service.type` | Service Type for the main Portainer Service; ClusterIP, NodePort and LoadBalancer | `LoadBalancer` |
| `service.httpPort` | HTTP port for accessing Portainer Web | `9000` |
| `service.httpNodePort` | Static NodePort for accessing Portainer Web. Specify only if the type is NodePort | `30777` |
| `service.edgePort` | TCP port for accessing Portainer Edge | `8000` |
| `service.edgeNodePort` | Static NodePort for accessing Portainer Edge. Specify only if the type is NodePort | `30776` |
| `service.annotations` | Annotations to add to the service | `{}` |
| `feature.flags` | Enable one or more features separated by spaces. For instance, `--feat=open-amt` | `nil` |
| `ingress.enabled` | Create an ingress for Portainer | `false` |
| `ingress.ingressClassName` | For Kubernetes >= 1.18 you should specify the ingress-controller via the field `ingressClassName`. For instance, `nginx` | `nil` |
| `ingress.annotations` | Annotations to add to the ingress. For instance, `kubernetes.io/ingress.class: nginx` | `{}` |
| `ingress.hosts.host` | URL for Portainer Web. For instance, `portainer.example.io` | `nil` |
| `ingress.hosts.paths.path` | Path for the Portainer Web. | `/` |
| `ingress.hosts.paths.port` | Port for the Portainer Web. | `9000` |
| `ingress.tls` | TLS support on ingress. Must create a secret with TLS certificates in advance | `[]` |
| `resources` | Portainer resource requests and limits | `{}` |
| `tls.force` | Force Portainer to be configured to use TLS only | `false` |
| `tls.existingSecret` | Mount the existing TLS secret into the pod | `""` |
| `mtls.enable` | Option to specify mTLS certs to be used by Portainer | `false` |
| `mtls.existingSecret` | Mount the existing mtls secret into the pod | `""` |
| `persistence.enabled` | Whether to enable data persistence | `true` |
| `persistence.existingClaim` | Name of an existing PVC to use for data persistence | `nil` |
| `persistence.size` | Size of the PVC used for persistence | `10Gi` |
| `persistence.annotations` | Annotations to apply to PVC used for persistence | `{}` |
| `persistence.storageClass` | StorageClass to apply to PVC used for persistence | `default` |
| `persistence.accessMode` | AccessMode for persistence | `ReadWriteOnce` |
| `persistence.selector` | Selector for persistence | `nil` |
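Several of the parameters documented above can be collected into a values file rather than passed as repeated `--set` flags. A hedged sketch: the parameter names come from the table, the values are illustrative, not recommendations:

```shell
# Hypothetical values override file using parameters from the table above.
cat <<'EOF' > /tmp/portainer-values.yaml
service:
  type: NodePort
  httpNodePort: 30777
  edgeNodePort: 30776
persistence:
  size: 10Gi
  accessMode: ReadWriteOnce
nodeSelector: {}
EOF
# It would then be applied with something like:
#   helm install --create-namespace -n portainer portainer portainer/portainer \
#     -f /tmp/portainer-values.yaml
cat /tmp/portainer-values.yaml
```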


@@ -1,21 +1,27 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ if .port }}:{{ .port }}{{ else }}:9000{{ end }}{{.path}}
Use the URL below to access the application
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ if .port }}:{{ .port }}{{ else }}{{ end }}{{.path}}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "portainer.fullname" . }})
Get the application URL by running these commands:
{{- if .Values.tls.force }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "portainer.fullname" . }})
{{- else }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[1].nodePort}" services {{ include "portainer.fullname" . }})
{{- end}}
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
echo https://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
Get the application URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "portainer.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "portainer.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.httpPort }}
echo https://$SERVICE_IP:{{ .Values.service.httpsPort }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "portainer.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:9000 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9000:9000
Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "portainer.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit https://127.0.0.1:9443 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9443:9443
{{- end }}

View File

@ -71,4 +71,17 @@ Provide a pre-defined claim or a claim based on the Release
{{- else -}}
{{- template "portainer.fullname" . }}
{{- end -}}
{{- end -}}
{{/*
Generate a right Ingress apiVersion
*/}}
{{- define "ingress.apiVersion" -}}
{{- if semverCompare ">=1.20-0" .Capabilities.KubeVersion.GitVersion -}}
networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
networking.k8s.io/v1beta1
{{- else -}}
extensions/v1beta1
{{- end }}
{{- end -}}

View File

@ -18,15 +18,31 @@ spec:
labels:
{{- include "portainer.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
nodeSelector: {{- toYaml .Values.nodeSelector | nindent 8 }}
tolerations: {{- toYaml .Values.tolerations | nindent 8 -}}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
{{- if .Values.localMgmt }}
serviceAccountName: {{ include "portainer.serviceAccountName" . }}
{{- end }}
volumes:
- name: "data"
persistentVolumeClaim:
claimName: {{ template "portainer.pvcName" . }}
{{- if .Values.persistence.enabled }}
- name: "data"
persistentVolumeClaim:
claimName: {{ template "portainer.pvcName" . }}
{{- end }}
{{- if .Values.tls.existingSecret }}
- name: certs
secret:
secretName: {{ .Values.tls.existingSecret }}
{{- end }}
{{- if .Values.mtls.existingSecret }}
- name: mtlscerts
secret:
secretName: {{ .Values.mtls.existingSecret }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
{{- if .Values.enterpriseEdition.enabled }}
@ -36,26 +52,150 @@ spec:
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- end }}
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.edgeNodePort))) }}
args: [ '--tunnel-port','{{ .Values.service.edgeNodePort }}' ]
{{- end }}
args:
{{- if .Values.tls.force }}
- --http-disabled
{{- end }}
{{- if .Values.tls.existingSecret }}
- --sslcert=/certs/tls.crt
- --sslkey=/certs/tls.key
{{- end }}
{{- if .Values.mtls.existingSecret }}
- --mtlscacert=/certs/mtls/mtlsca.crt
- --mtlscert=/certs/mtls/mtlscert.crt
- --mtlskey=/certs/mtls/mtlskey.key
{{- end }}
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.edgeNodePort))) }}
- '--tunnel-port={{ .Values.service.edgeNodePort }}'
{{- end }}
{{- if and .Values.trusted_origins.enabled (not (empty .Values.trusted_origins.domains)) }}
- '--trusted-origins={{ .Values.trusted_origins.domains | trim | quote }}'
{{- end }}
{{- range .Values.feature.flags }}
- {{ . | squote }}
{{- end }}
volumeMounts:
{{- if .Values.persistence.enabled }}
- name: data
mountPath: /data
{{- end }}
{{- if .Values.tls.existingSecret }}
- name: certs
mountPath: /certs
readOnly: true
{{- end }}
{{- if .Values.mtls.existingSecret }}
- name: mtlscerts
mountPath: /certs/mtls
readOnly: true
{{- end }}
ports:
{{- if not .Values.tls.force }}
- name: http
containerPort: 9000
protocol: TCP
{{- end }}
- name: https
containerPort: 9443
protocol: TCP
- name: tcp-edge
containerPort: 8000
protocol: TCP
livenessProbe:
failureThreshold: 5
initialDelaySeconds: 45
periodSeconds: 30
httpGet:
path: /
{{- if .Values.tls.force }}
port: 9443
scheme: HTTPS
{{- else }}
{{- if .Values.enterpriseEdition.enabled }}
{{- if regexMatch "^[0-9]+\\.[0-9]+\\.[0-9]+$" .Values.enterpriseEdition.image.tag }}
{{- if eq (semver .Values.enterpriseEdition.image.tag | (semver "2.7.0").Compare) -1 }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- else }}
{{- if eq .Values.enterpriseEdition.image.tag "latest" }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- end}}
{{- else }}
{{- if regexMatch "^[0-9]+\\.[0-9]+\\.[0-9]+$" .Values.image.tag }}
{{- if eq (semver .Values.image.tag | (semver "2.6.0").Compare) -1 }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end}}
{{- else }}
{{- if eq .Values.image.tag "latest" }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- end }}
{{- end }}
{{- end }}
readinessProbe:
failureThreshold: 5
initialDelaySeconds: 45
periodSeconds: 30
httpGet:
path: /
{{- if .Values.tls.force }}
port: 9443
scheme: HTTPS
{{- else }}
{{- if .Values.enterpriseEdition.enabled }}
{{- if regexMatch "^[0-9]+\\.[0-9]+\\.[0-9]+$" .Values.enterpriseEdition.image.tag }}
{{- if eq (semver .Values.enterpriseEdition.image.tag | (semver "2.7.0").Compare) -1 }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- else }}
{{- if eq .Values.enterpriseEdition.image.tag "latest" }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- end}}
{{- else }}
{{- if regexMatch "^[0-9]+\\.[0-9]+\\.[0-9]+$" .Values.image.tag }}
{{- if eq (semver .Values.image.tag | (semver "2.6.0").Compare) -1 }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end}}
{{- else }}
{{- if eq .Values.image.tag "latest" }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- end }}
{{- end }}
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}

View File

@ -1,10 +1,8 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "portainer.fullname" . -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
{{- $tlsforced := .Values.tls.force -}}
{{- $apiVersion := include "ingress.apiVersion" . -}}
apiVersion: {{ $apiVersion }}
kind: Ingress
metadata:
name: {{ $fullName }}
@ -16,6 +14,9 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- with .Values.ingress.ingressClassName }}
ingressClassName: {{ . }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
@ -33,9 +34,27 @@ spec:
paths:
{{- range .paths }}
- path: {{ .path | default "/" }}
{{- if eq $apiVersion "networking.k8s.io/v1" }}
pathType: Prefix
{{- end }}
backend:
{{- if eq $apiVersion "networking.k8s.io/v1" }}
service:
name: {{ $fullName }}
port:
{{- if $tlsforced }}
number: {{ .port | default 9443 }}
{{- else }}
number: {{ .port | default 9000 }}
{{- end }}
{{- else }}
serviceName: {{ $fullName }}
{{- if $tlsforced }}
servicePort: {{ .port | default 9443 }}
{{- else }}
servicePort: {{ .port | default 9000 }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,8 @@
{{ if .Values.createNamespace }}
apiVersion: v1
kind: Namespace
metadata:
name: portainer
labels:
pod-security.kubernetes.io/enforce: privileged
{{ end }}

View File

@ -1,30 +1,30 @@
{{- if .Values.persistence.enabled -}}
{{- if not .Values.persistence.existingClaim -}}
---
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
name: {{ template "portainer.fullname" . }}
namespace: {{ .Release.Namespace }}
annotations:
{{- if .Values.persistence.storageClass }}
volume.beta.kubernetes.io/storage-class: {{ .Values.persistence.storageClass | quote }}
{{- else }}
volume.alpha.kubernetes.io/storage-class: "generic"
{{- end }}
{{- if .Values.persistence.annotations }}
{{ toYaml .Values.persistence.annotations | indent 2 }}
{{ end }}
labels:
io.portainer.kubernetes.application.stack: portainer
{{- include "portainer.labels" . | nindent 4 }}
spec:
accessModes:
- {{ default "ReadWriteOnce" .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.storageClass }}
storageClassName: {{ .Values.persistence.storageClass | quote }}
{{ end }}
{{- if .Values.persistence.selector }}
selector:
{{ toYaml .Values.persistence.selector | indent 4 }}
{{ end }}
{{- end }}
{{- end }}

View File

@ -1,3 +1,4 @@
{{- if .Values.localMgmt }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
@ -11,4 +12,5 @@ roleRef:
subjects:
- kind: ServiceAccount
namespace: {{ .Release.Namespace }}
name: {{ include "portainer.serviceAccountName" . }}
{{- end }}

View File

@ -15,6 +15,7 @@ metadata:
spec:
type: {{ .Values.service.type }}
ports:
{{- if not .Values.tls.force }}
- port: {{ .Values.service.httpPort }}
targetPort: 9000
protocol: TCP
@ -22,7 +23,15 @@ spec:
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.httpNodePort))) }}
nodePort: {{ .Values.service.httpNodePort}}
{{- end }}
{{- if (eq .Values.service.type "NodePort") }}
{{- end }}
- port: {{ .Values.service.httpsPort }}
targetPort: 9443
protocol: TCP
name: https
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.httpsNodePort))) }}
nodePort: {{ .Values.service.httpsNodePort}}
{{- end }}
{{- if (eq .Values.service.type "NodePort") }}
- port: {{ .Values.service.edgeNodePort }}
targetPort: {{ .Values.service.edgeNodePort }}
{{- else }}
@ -33,6 +42,6 @@ spec:
name: edge
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.edgeNodePort))) }}
nodePort: {{ .Values.service.edgeNodePort }}
{{- end }}
{{- end }}
selector:
{{- include "portainer.selectorLabels" . | nindent 4 }}

View File

@ -1,3 +1,4 @@
{{- if .Values.localMgmt }}
apiVersion: v1
kind: ServiceAccount
metadata:
@ -9,3 +10,4 @@ metadata:
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

View File

@ -9,41 +9,81 @@ enterpriseEdition:
enabled: false
image:
repository: portainer/portainer-ee
tag: 2.0.0
pullPolicy: IfNotPresent
tag: 2.33.1
pullPolicy: Always
image:
repository: portainer/portainer-ce
tag: latest
pullPolicy: IfNotPresent
tag: 2.33.1
pullPolicy: Always
imagePullSecrets: []
nodeSelector: {}
tolerations: []
serviceAccount:
annotations: {}
name: portainer-sa-clusteradmin
# This flag provides the ability to enable or disable RBAC-related resources during the deployment of the Portainer application
# If you are using Portainer to manage the K8s cluster it is deployed to, this flag must be set to true
localMgmt: true
service:
# Set the httpNodePort and edgeNodePort only if the type is NodePort
# For Ingress, set the type to be ClusterIP and set ingress.enabled to true
# For Cloud Providers, set the type to be LoadBalancer
type: NodePort
httpPort: 9000
httpsPort: 9443
httpNodePort: 30777
httpsNodePort: 30779
edgePort: 8000
edgeNodePort: 30776
annotations: {}
tls:
# If set, Portainer will be configured to use TLS only
force: false
# If set, will mount the existing secret into the pod
existingSecret: ""
trusted_origins:
# If set, Portainer will be configured to trust the domains specified in domains
enabled: false
# specify (in a comma-separated list) the domain(s) used to access Portainer when it is behind a reverse proxy
# example: portainer.mydomain.com,portainer.example.com
domains: ""
mtls:
# If set, Portainer will be configured to use mTLS only
enable: false
# If set, will mount the existing secret into the pod
existingSecret: ""
feature:
flags: []
ingress:
enabled: false
ingressClassName: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# Only use below if tls.force=true
# nginx.ingress.kubernetes.io/backend-protocol: HTTPS
# Note: Hosts and paths are of type array
hosts:
- host:
paths: []
# - path: "/"
tls: []
resources: {}
persistence:
enabled: true
size: "10Gi"
annotations: {}
storageClass:
existingClaim:
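Putting the new options in this values file together, an override enabling forced TLS, trusted origins, and mTLS might look like the following sketch. The domain and secret names are placeholders; the mTLS secret key names follow the mount paths used by the deployment template (`/certs/mtls/mtlsca.crt` etc.):

```yaml
# Illustrative override -- domain and secret names are examples only
tls:
  force: true                      # serve HTTPS only
  existingSecret: "portainer-tls"  # secret containing tls.crt / tls.key
trusted_origins:
  enabled: true
  domains: "portainer.mydomain.com,portainer.example.com"
mtls:
  enable: true
  existingSecret: "portainer-mtls" # secret containing mtlsca.crt, mtlscert.crt, mtlskey.key
```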

View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -0,0 +1,31 @@
apiVersion: v2
name: portainer
description: Helm chart used to deploy Portainer for Kubernetes
home: https://www.portainer.io
icon: https://github.com/portainer/portainer/raw/develop/app/assets/ico/apple-touch-icon.png
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 1.0.0-pre1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 2.0.0
sources:
- https://github.com/portainer/k8s
maintainers:
- name: funkypenguin
email: davidy@funkypenguin.co.nz
url: https://www.funkypenguin.co.nz

View File

@ -0,0 +1,78 @@
# Deploy Portainer using Helm Chart
Before proceeding, create a namespace in advance.
For instance:
```bash
kubectl create namespace portainer
```
# Testing the Chart
Execute the following to test the chart:
```bash
helm install --dry-run --debug portainer -n portainer deploy/helm/portainer
```
# Installing the Chart
Execute the following to install the chart:
```bash
helm upgrade -i -n portainer portainer deploy/helm/portainer
## Refer to the NOTES output for how to access the Portainer web UI
## An example is shown below
NOTES:
1. Get the application URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get --namespace portainer svc -w portainer'
export SERVICE_IP=$(kubectl get svc --namespace portainer portainer --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo http://$SERVICE_IP:9000
http://20.40.176.8:9000
```
# Deleting the Chart
Execute the following to delete the chart:
```bash
## Delete the Helm Chart
helm delete -n portainer portainer
## Delete the Namespace
kubectl delete namespace portainer
```
# Chart Configuration
The following table lists the configurable parameters of the Portainer chart and their default values. The values file can be found under `deploy/helm/portainer/values.yaml`.
*This parameter list is still being updated.*
| Parameter | Description | Default |
| - | - | - |
| `replicaCount` | Number of Portainer service replicas (ALWAYS set to 1) | `1` |
| `image.repository` | Portainer Docker Hub repository | `portainer/portainer-k8s-beta` |
| `image.tag` | Tag for the Portainer image; `linux-amd64` for Linux and `linux-arm` for ARM | `linux-amd64` |
| `image.pullPolicy` | Portainer image pulling policy | `IfNotPresent` |
| `imagePullSecrets` | If Portainer image requires to be in a private repository | `nil` |
| `serviceAccount.annotations` | Annotations to add to the service account | `null` |
| `serviceAccount.name` | The name of the service account to use | `portainer-sa-clusteradmin` |
| `service.type` | Service Type for the main Portainer Service; ClusterIP, NodePort and LoadBalancer | `LoadBalancer` |
| `service.httpPort` | HTTP port for accessing Portainer Web | `9000` |
| `service.httpNodePort` | Static NodePort for accessing Portainer Web. Specify only if the type is NodePort | `nil` |
| `service.edgePort` | TCP port for accessing Portainer Edge | `8000` |
| `service.edgeNodePort` | Static NodePort for accessing Portainer Edge. Specify only if the type is NodePort | `nil` |
| `ingress.enabled` | Create an ingress for Portainer | `false` |
| `ingress.annotations` | Annotations to add to the ingress. For instance, `kubernetes.io/ingress.class: nginx` | `{}` |
| `ingress.hosts.host` | URL for Portainer Web. For instance, `portainer.example.io` | `nil` |
| `ingress.hosts.paths.path` | Path for the Portainer Web. | `/` |
| `ingress.hosts.paths.port` | Port for the Portainer Web. | `9000` |
| `ingress.tls` | TLS support on ingress. Must create a secret with TLS certificates in advance | `[]` |
| `resources` | Portainer resource requests and limits | `{}` |
| `persistence.enabled` | Whether to enable data persistence | `true` |
| `persistence.existingClaim` | Name of an existing PVC to use for data persistence | `nil` |
| `persistence.size` | Size of the PVC used for persistence | `1Gi` |
| `persistence.annotations` | Annotations to apply to PVC used for persistence | `{}` |
| `persistence.storageClass` | StorageClass to apply to PVC used for persistence | `default` |
| `persistence.accessMode` | AccessMode for persistence | `ReadWriteOnce` |
| `persistence.selector` | Selector for persistence | `nil` |

View File

@ -0,0 +1,21 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ if .port }}:{{ .port }}{{ else }}:9000{{ end }}{{.path}}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "portainer.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "portainer.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "portainer.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.httpPort }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "portainer.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:9000 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9000:9000
{{- end }}

View File

@ -0,0 +1,74 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "portainer.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "portainer.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "portainer.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "portainer.labels" -}}
helm.sh/chart: {{ include "portainer.chart" . }}
{{ include "portainer.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "portainer.selectorLabels" -}}
app.kubernetes.io/name: {{ include "portainer.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "portainer.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "portainer.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Provide a pre-defined claim or a claim based on the Release
*/}}
{{- define "portainer.pvcName" -}}
{{- if .Values.persistence.existingClaim }}
{{- .Values.persistence.existingClaim }}
{{- else -}}
{{- template "portainer.fullname" . }}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,51 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "portainer.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
io.portainer.kubernetes.application.stack: portainer
{{- include "portainer.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "portainer.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "portainer.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "portainer.serviceAccountName" . }}
volumes:
- name: "data"
persistentVolumeClaim:
claimName: {{ template "portainer.pvcName" . }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
volumeMounts:
- name: data
mountPath: /data
ports:
- name: http
containerPort: 9000
protocol: TCP
- name: tcp-edge
containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
path: /
port: 9000
readinessProbe:
httpGet:
path: /
port: 9000
resources:
{{- toYaml .Values.resources | nindent 12 }}

View File

@ -0,0 +1,41 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "portainer.fullname" . -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "portainer.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path | default "/" }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ .port | default 9000 }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,30 @@
{{- if not .Values.persistence.existingClaim -}}
---
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
name: {{ template "portainer.fullname" . }}
namespace: {{ .Release.Namespace }}
annotations:
{{- if .Values.persistence.storageClass }}
volume.beta.kubernetes.io/storage-class: {{ .Values.persistence.storageClass | quote }}
{{- else }}
volume.alpha.kubernetes.io/storage-class: "generic"
{{- end }}
{{- if .Values.persistence.annotations }}
{{ toYaml .Values.persistence.annotations | indent 2 }}
{{ end }}
labels:
io.portainer.kubernetes.application.stack: portainer
{{- include "portainer.labels" . | nindent 4 }}
spec:
accessModes:
- {{ default "ReadWriteOnce" .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.selector }}
selector:
{{ toYaml .Values.persistence.selector | indent 4 }}
{{ end }}
{{- end }}

View File

@ -0,0 +1,14 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "portainer.fullname" . }}
labels:
{{- include "portainer.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
namespace: {{ .Release.Namespace }}
name: {{ include "portainer.serviceAccountName" . }}

View File

@ -0,0 +1,27 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "portainer.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
io.portainer.kubernetes.application.stack: portainer
{{- include "portainer.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.httpPort }}
targetPort: 9000
protocol: TCP
name: http
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.httpNodePort))) }}
nodePort: {{ .Values.service.httpNodePort}}
{{- end }}
- port: {{ .Values.service.edgePort }}
targetPort: 8000
protocol: TCP
name: edge
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.edgeNodePort))) }}
nodePort: {{ .Values.service.edgeNodePort }}
{{- end }}
selector:
{{- include "portainer.selectorLabels" . | nindent 4 }}

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "portainer.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "portainer.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}

View File

@ -0,0 +1,18 @@
{{- if not .Values.disableTest -}}
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "portainer.fullname" . }}-test-connection"
namespace: {{ .Release.Namespace }}
labels:
{{- include "portainer.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "portainer.fullname" . }}:{{ .Values.service.httpPort }}']
restartPolicy: Never
{{ end }}
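The template above renders a Helm test pod that wgets the Portainer service, guarded by the `disableTest` value. As a sketch, rendering of the test pod can be skipped with a one-line override:

```yaml
# Illustrative override: skip rendering the test-connection pod
disableTest: true
```

With the test enabled, `helm test -n portainer portainer` runs the wget check against `service.httpPort`.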

View File

@ -0,0 +1,40 @@
# Default values for portainer.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: portainer/portainer-ce
tag: latest
pullPolicy: IfNotPresent
imagePullSecrets: []
serviceAccount:
annotations: {}
name: portainer-sa-clusteradmin
service:
# Set the httpNodePort and edgeNodePort only if the type is NodePort
# For Ingress, set the type to be ClusterIP and set ingress.enabled to true
# For Cloud Providers, set the type to be LoadBalancer
type: ClusterIP
httpPort: 9000
httpNodePort:
edgePort: 8000
edgeNodePort:
ingress:
enabled: false
annotations: {}
hosts:
- host:
paths: []
tls: []
resources: {}
persistence:
size: "1Gi"
annotations: {}

View File

@ -0,0 +1,25 @@
version: '3.3'
services:
agent:
image: portainer/agent:2.0.0
ports:
- target: 9001
published: 9001
protocol: tcp
volumes:
- type: npipe
source: \\.\pipe\docker_engine
target: \\.\pipe\docker_engine
- type: bind
source: C:\ProgramData\docker\volumes
target: C:\ProgramData\docker\volumes
networks:
- agent_network
deploy:
mode: global
placement:
constraints: [node.platform.os == windows]
networks:
agent_network:
driver: overlay

View File

@ -0,0 +1,24 @@
version: '3.2'
services:
agent:
image: portainer/agent:2.0.0
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker/volumes:/var/lib/docker/volumes
ports:
- target: 9001
published: 9001
protocol: tcp
mode: host
networks:
- portainer_agent
deploy:
mode: global
placement:
constraints: [node.platform.os == linux]
networks:
portainer_agent:
driver: overlay
attachable: true

View File

@ -0,0 +1,25 @@
version: '3.3'
services:
agent:
image: portainer/agent:2.4.0
ports:
- target: 9001
published: 9001
protocol: tcp
volumes:
- type: npipe
source: \\.\pipe\docker_engine
target: \\.\pipe\docker_engine
- type: bind
source: C:\ProgramData\docker\volumes
target: C:\ProgramData\docker\volumes
networks:
- agent_network
deploy:
mode: global
placement:
constraints: [node.platform.os == windows]
networks:
agent_network:
driver: overlay

View File

@ -0,0 +1,24 @@
version: '3.2'
services:
agent:
image: portainer/agent:2.4.0
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker/volumes:/var/lib/docker/volumes
ports:
- target: 9001
published: 9001
protocol: tcp
mode: host
networks:
- portainer_agent
deploy:
mode: global
placement:
constraints: [node.platform.os == linux]
networks:
portainer_agent:
driver: overlay
attachable: true

View File

@ -0,0 +1,25 @@
version: '3.3'
services:
agent:
image: portainer/agent:2.33.1
ports:
- target: 9001
published: 9001
protocol: tcp
volumes:
- type: npipe
source: \\.\pipe\docker_engine
target: \\.\pipe\docker_engine
- type: bind
source: C:\ProgramData\docker\volumes
target: C:\ProgramData\docker\volumes
networks:
- agent_network
deploy:
mode: global
placement:
constraints: [node.platform.os == windows]
networks:
agent_network:
driver: overlay

View File

@ -0,0 +1,24 @@
version: '3.2'
services:
agent:
image: portainer/agent:2.33.1
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker/volumes:/var/lib/docker/volumes
ports:
- target: 9001
published: 9001
protocol: tcp
mode: host
networks:
- portainer_agent
deploy:
mode: global
placement:
constraints: [node.platform.os == linux]
networks:
portainer_agent:
driver: overlay
attachable: true

View File

@ -65,7 +65,7 @@ spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.0.0
image: portainer/agent:2.33.1
imagePullPolicy: Always
env:
- name: LOG_LEVEL

View File

@@ -0,0 +1,95 @@
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: portainer-sa-clusteradmin
    namespace: portainer
# Optional: can be added to expose the agent port 80 to associate an Edge key.
# ---
# apiVersion: v1
# kind: Service
# metadata:
#   name: portainer-agent
#   namespace: portainer
# spec:
#   type: LoadBalancer
#   selector:
#     app: portainer-agent
#   ports:
#     - name: http
#       protocol: TCP
#       port: 80
#       targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  clusterIP: None
  selector:
    app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: portainer-agent
  template:
    metadata:
      labels:
        app: portainer-agent
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
        - name: portainer-agent
          image: portainer/agent:2.0.0
          imagePullPolicy: Always
          env:
            - name: LOG_LEVEL
              value: INFO
            - name: KUBERNETES_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: EDGE
              value: "1"
            - name: AGENT_CLUSTER_ADDR
              value: "portainer-agent"
            - name: EDGE_ID
              valueFrom:
                configMapKeyRef:
                  name: portainer-agent-edge-id
                  key: edge.id
            - name: EDGE_KEY
              valueFrom:
                secretKeyRef:
                  name: portainer-agent-edge-key
                  key: edge.key
          ports:
            - containerPort: 9001
              protocol: TCP
            - containerPort: 80
              protocol: TCP
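
The Edge manifest above does not carry its own credentials: `EDGE_ID` is read from a ConfigMap named `portainer-agent-edge-id` and `EDGE_KEY` from a Secret named `portainer-agent-edge-key`, so both must exist before the manifest is applied. The deploy scripts later in this changeset create them like this (fragment; `$EDGE_ID` and `$EDGE_KEY` stand for values issued by the Portainer server):

```shell
kubectl create namespace portainer
kubectl create configmap portainer-agent-edge-id --from-literal="edge.id=$EDGE_ID" -n portainer
kubectl create secret generic portainer-agent-edge-key --from-literal="edge.key=$EDGE_KEY" -n portainer
```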


@@ -0,0 +1,80 @@
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: portainer-sa-clusteradmin
    namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  type: LoadBalancer
  selector:
    app: portainer-agent
  ports:
    - name: http
      protocol: TCP
      port: 9001
      targetPort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent-headless
  namespace: portainer
spec:
  clusterIP: None
  selector:
    app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: portainer-agent
  template:
    metadata:
      labels:
        app: portainer-agent
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
        - name: portainer-agent
          image: portainer/agent:2.0.0
          imagePullPolicy: Always
          env:
            - name: LOG_LEVEL
              value: DEBUG
            - name: AGENT_CLUSTER_ADDR
              value: "portainer-agent-headless"
            - name: KUBERNETES_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 9001
              protocol: TCP


@@ -0,0 +1,81 @@
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: portainer-sa-clusteradmin
    namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  type: NodePort
  selector:
    app: portainer-agent
  ports:
    - name: http
      protocol: TCP
      port: 9001
      targetPort: 9001
      nodePort: 30778
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent-headless
  namespace: portainer
spec:
  clusterIP: None
  selector:
    app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: portainer-agent
  template:
    metadata:
      labels:
        app: portainer-agent
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
        - name: portainer-agent
          image: portainer/agent:2.0.0
          imagePullPolicy: Always
          env:
            - name: LOG_LEVEL
              value: DEBUG
            - name: AGENT_CLUSTER_ADDR
              value: "portainer-agent-headless"
            - name: KUBERNETES_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 9001
              protocol: TCP


@@ -0,0 +1,100 @@
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: portainer-sa-clusteradmin
    namespace: portainer
# Optional: can be added to expose the agent port 80 to associate an Edge key.
# ---
# apiVersion: v1
# kind: Service
# metadata:
#   name: portainer-agent
#   namespace: portainer
# spec:
#   type: LoadBalancer
#   selector:
#     app: portainer-agent
#   ports:
#     - name: http
#       protocol: TCP
#       port: 80
#       targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  clusterIP: None
  selector:
    app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: portainer-agent
  template:
    metadata:
      labels:
        app: portainer-agent
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
        - name: portainer-agent
          image: portainer/agent:2.10.0
          imagePullPolicy: Always
          env:
            - name: LOG_LEVEL
              value: INFO
            - name: KUBERNETES_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: EDGE
              value: "1"
            - name: AGENT_CLUSTER_ADDR
              value: "portainer-agent"
            - name: EDGE_ID
              valueFrom:
                configMapKeyRef:
                  name: portainer-agent-edge
                  key: edge.id
            - name: EDGE_INSECURE_POLL
              valueFrom:
                configMapKeyRef:
                  name: portainer-agent-edge
                  key: edge.insecure_poll
            - name: EDGE_KEY
              valueFrom:
                secretKeyRef:
                  name: portainer-agent-edge-key
                  key: edge.key
          ports:
            - containerPort: 9001
              protocol: TCP
            - containerPort: 80
              protocol: TCP


@@ -0,0 +1,80 @@
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: portainer-sa-clusteradmin
    namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  type: LoadBalancer
  selector:
    app: portainer-agent
  ports:
    - name: http
      protocol: TCP
      port: 9001
      targetPort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent-headless
  namespace: portainer
spec:
  clusterIP: None
  selector:
    app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: portainer-agent
  template:
    metadata:
      labels:
        app: portainer-agent
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
        - name: portainer-agent
          image: portainer/agent:2.10.0
          imagePullPolicy: Always
          env:
            - name: LOG_LEVEL
              value: INFO
            - name: AGENT_CLUSTER_ADDR
              value: "portainer-agent-headless"
            - name: KUBERNETES_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 9001
              protocol: TCP


@@ -0,0 +1,81 @@
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: portainer-sa-clusteradmin
    namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  type: NodePort
  selector:
    app: portainer-agent
  ports:
    - name: http
      protocol: TCP
      port: 9001
      targetPort: 9001
      nodePort: 30778
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent-headless
  namespace: portainer
spec:
  clusterIP: None
  selector:
    app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: portainer-agent
  template:
    metadata:
      labels:
        app: portainer-agent
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
        - name: portainer-agent
          image: portainer/agent:2.4.0
          imagePullPolicy: Always
          env:
            - name: LOG_LEVEL
              value: INFO
            - name: AGENT_CLUSTER_ADDR
              value: "portainer-agent-headless"
            - name: KUBERNETES_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 9001
              protocol: TCP


@@ -0,0 +1,95 @@
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: portainer-sa-clusteradmin
    namespace: portainer
# Optional: can be added to expose the agent port 80 to associate an Edge key.
# ---
# apiVersion: v1
# kind: Service
# metadata:
#   name: portainer-agent
#   namespace: portainer
# spec:
#   type: LoadBalancer
#   selector:
#     app: portainer-agent
#   ports:
#     - name: http
#       protocol: TCP
#       port: 80
#       targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  clusterIP: None
  selector:
    app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: portainer-agent
  template:
    metadata:
      labels:
        app: portainer-agent
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
        - name: portainer-agent
          image: portainer/agent:2.4.0
          imagePullPolicy: Always
          env:
            - name: LOG_LEVEL
              value: INFO
            - name: KUBERNETES_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: EDGE
              value: "1"
            - name: AGENT_CLUSTER_ADDR
              value: "portainer-agent"
            - name: EDGE_ID
              valueFrom:
                configMapKeyRef:
                  name: portainer-agent-edge-id
                  key: edge.id
            - name: EDGE_KEY
              valueFrom:
                secretKeyRef:
                  name: portainer-agent-edge-key
                  key: edge.key
          ports:
            - containerPort: 9001
              protocol: TCP
            - containerPort: 80
              protocol: TCP


@@ -0,0 +1,80 @@
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: portainer-sa-clusteradmin
    namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  type: LoadBalancer
  selector:
    app: portainer-agent
  ports:
    - name: http
      protocol: TCP
      port: 9001
      targetPort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent-headless
  namespace: portainer
spec:
  clusterIP: None
  selector:
    app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: portainer-agent
  template:
    metadata:
      labels:
        app: portainer-agent
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
        - name: portainer-agent
          image: portainer/agent:2.4.0
          imagePullPolicy: Always
          env:
            - name: LOG_LEVEL
              value: INFO
            - name: AGENT_CLUSTER_ADDR
              value: "portainer-agent-headless"
            - name: KUBERNETES_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 9001
              protocol: TCP


@@ -0,0 +1,81 @@
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: portainer-sa-clusteradmin
    namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  type: NodePort
  selector:
    app: portainer-agent
  ports:
    - name: http
      protocol: TCP
      port: 9001
      targetPort: 9001
      nodePort: 30778
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent-headless
  namespace: portainer
spec:
  clusterIP: None
  selector:
    app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: portainer-agent
  template:
    metadata:
      labels:
        app: portainer-agent
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
        - name: portainer-agent
          image: portainer/agent:2.4.0
          imagePullPolicy: Always
          env:
            - name: LOG_LEVEL
              value: INFO
            - name: AGENT_CLUSTER_ADDR
              value: "portainer-agent-headless"
            - name: KUBERNETES_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 9001
              protocol: TCP


@@ -64,7 +64,7 @@ spec:
       serviceAccountName: portainer-sa-clusteradmin
       containers:
         - name: portainer-agent
-          image: portainer/agent:2.0.0
+          image: portainer/agent:2.33.1
           imagePullPolicy: Always
           env:
             - name: LOG_LEVEL


@@ -65,7 +65,7 @@ spec:
       serviceAccountName: portainer-sa-clusteradmin
       containers:
         - name: portainer-agent
-          image: portainer/agent:2.0.0
+          image: portainer/agent:2.33.1
           imagePullPolicy: Always
           env:
             - name: LOG_LEVEL


@@ -55,7 +55,7 @@ main() {
   [[ "$(command -v kubectl)" ]] || errorAndExit "Unable to find kubectl binary. Please ensure kubectl is installed before running this script."
   info "Downloading agent manifest..."
-  curl -L https://portainer.github.io/k8s/deploy/manifests/agent/portainer-agent-edge-k8s.yaml -o portainer-agent-edge-k8s.yaml || errorAndExit "Unable to download agent manifest"
+  curl -L https://portainer.github.io/k8s/deploy/manifests/agent/ee/portainer-agent-ee20-edge-k8s.yaml -o portainer-agent-edge-k8s.yaml || errorAndExit "Unable to download agent manifest"
   info "Creating Portainer namespace..."
   kubectl create namespace portainer


@@ -0,0 +1,80 @@
#!/usr/bin/env bash

# Script used to deploy the Portainer Edge agent inside a Kubernetes cluster.
# Requires:
#   curl
#   kubectl

### COLOR OUTPUT ###
ESeq="\x1b["
RCol="$ESeq"'0m' # Text Reset

# Regular            Bold                 Underline            High Intensity       BoldHigh Intens       Background           High Intensity Backgrounds
Bla="$ESeq"'0;30m'; BBla="$ESeq"'1;30m'; UBla="$ESeq"'4;30m'; IBla="$ESeq"'0;90m'; BIBla="$ESeq"'1;90m'; On_Bla="$ESeq"'40m'; On_IBla="$ESeq"'0;100m';
Red="$ESeq"'0;31m'; BRed="$ESeq"'1;31m'; URed="$ESeq"'4;31m'; IRed="$ESeq"'0;91m'; BIRed="$ESeq"'1;91m'; On_Red="$ESeq"'41m'; On_IRed="$ESeq"'0;101m';
Gre="$ESeq"'0;32m'; BGre="$ESeq"'1;32m'; UGre="$ESeq"'4;32m'; IGre="$ESeq"'0;92m'; BIGre="$ESeq"'1;92m'; On_Gre="$ESeq"'42m'; On_IGre="$ESeq"'0;102m';
Yel="$ESeq"'0;33m'; BYel="$ESeq"'1;33m'; UYel="$ESeq"'4;33m'; IYel="$ESeq"'0;93m'; BIYel="$ESeq"'1;93m'; On_Yel="$ESeq"'43m'; On_IYel="$ESeq"'0;103m';
Blu="$ESeq"'0;34m'; BBlu="$ESeq"'1;34m'; UBlu="$ESeq"'4;34m'; IBlu="$ESeq"'0;94m'; BIBlu="$ESeq"'1;94m'; On_Blu="$ESeq"'44m'; On_IBlu="$ESeq"'0;104m';
Pur="$ESeq"'0;35m'; BPur="$ESeq"'1;35m'; UPur="$ESeq"'4;35m'; IPur="$ESeq"'0;95m'; BIPur="$ESeq"'1;95m'; On_Pur="$ESeq"'45m'; On_IPur="$ESeq"'0;105m';
Cya="$ESeq"'0;36m'; BCya="$ESeq"'1;36m'; UCya="$ESeq"'4;36m'; ICya="$ESeq"'0;96m'; BICya="$ESeq"'1;96m'; On_Cya="$ESeq"'46m'; On_ICya="$ESeq"'0;106m';
Whi="$ESeq"'0;37m'; BWhi="$ESeq"'1;37m'; UWhi="$ESeq"'4;37m'; IWhi="$ESeq"'0;97m'; BIWhi="$ESeq"'1;97m'; On_Whi="$ESeq"'47m'; On_IWhi="$ESeq"'0;107m';

printSection() {
  echo -e "${BIYel}>>>> ${BIWhi}${1}${RCol}"
}

info() {
  echo -e "${BIWhi}${1}${RCol}"
}

success() {
  echo -e "${BIGre}${1}${RCol}"
}

error() {
  echo -e "${BIRed}${1}${RCol}"
}

errorAndExit() {
  echo -e "${BIRed}${1}${RCol}"
  exit 1
}
### !COLOR OUTPUT ###

main() {
  if [[ $# -lt 2 ]]; then
    error "Not enough arguments"
    error "Usage: ${0} <EDGE_ID> <EDGE_KEY> <EDGE_INSECURE_POLL:optional>"
    exit 1
  fi

  local EDGE_ID="$1"
  local EDGE_KEY="$2"
  local EDGE_INSECURE_POLL="$3"

  [[ "$(command -v curl)" ]] || errorAndExit "Unable to find curl binary. Please ensure curl is installed before running this script."
  [[ "$(command -v kubectl)" ]] || errorAndExit "Unable to find kubectl binary. Please ensure kubectl is installed before running this script."

  info "Downloading agent manifest..."
  curl -L https://portainer.github.io/k8s/deploy/manifests/agent/ee/portainer-agent-ee210-edge-k8s.yaml -o portainer-agent-edge-k8s.yaml || errorAndExit "Unable to download agent manifest"

  info "Creating Portainer namespace..."
  kubectl create namespace portainer

  info "Creating agent configuration..."
  kubectl create configmap portainer-agent-edge --from-literal="edge.id=$EDGE_ID" --from-literal="edge.insecure_poll=$EDGE_INSECURE_POLL" -n portainer

  info "Creating agent secret..."
  kubectl create secret generic portainer-agent-edge-key "--from-literal=edge.key=$EDGE_KEY" -n portainer

  info "Deploying agent..."
  kubectl apply -f portainer-agent-edge-k8s.yaml || errorAndExit "Unable to deploy agent manifest"

  success "Portainer Edge agent successfully deployed"
  exit 0
}

main "$@"
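
The script's argument handling accepts two required positional parameters plus an optional third. A standalone sketch of just that `$# -lt 2` check (`check_args` is a hypothetical name introduced for illustration, not part of the script):

```shell
#!/usr/bin/env bash
# Sketch of the script's argument validation: EDGE_ID and EDGE_KEY are
# required, EDGE_INSECURE_POLL is optional.
check_args() {
  if [[ $# -lt 2 ]]; then
    echo "Usage: deploy <EDGE_ID> <EDGE_KEY> <EDGE_INSECURE_POLL:optional>" >&2
    return 1
  fi
  return 0
}

check_args my-edge-id 2>/dev/null || echo "rejected: one argument"
check_args my-edge-id my-edge-key && echo "accepted: two arguments"
```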


@@ -0,0 +1,76 @@
#!/usr/bin/env bash

# Script used to deploy the Portainer Edge agent inside a Kubernetes cluster.
# Requires:
#   curl
#   kubectl

### COLOR OUTPUT ###
ESeq="\x1b["
RCol="$ESeq"'0m' # Text Reset

# Regular            Bold                 Underline            High Intensity       BoldHigh Intens       Background           High Intensity Backgrounds
Bla="$ESeq"'0;30m'; BBla="$ESeq"'1;30m'; UBla="$ESeq"'4;30m'; IBla="$ESeq"'0;90m'; BIBla="$ESeq"'1;90m'; On_Bla="$ESeq"'40m'; On_IBla="$ESeq"'0;100m';
Red="$ESeq"'0;31m'; BRed="$ESeq"'1;31m'; URed="$ESeq"'4;31m'; IRed="$ESeq"'0;91m'; BIRed="$ESeq"'1;91m'; On_Red="$ESeq"'41m'; On_IRed="$ESeq"'0;101m';
Gre="$ESeq"'0;32m'; BGre="$ESeq"'1;32m'; UGre="$ESeq"'4;32m'; IGre="$ESeq"'0;92m'; BIGre="$ESeq"'1;92m'; On_Gre="$ESeq"'42m'; On_IGre="$ESeq"'0;102m';
Yel="$ESeq"'0;33m'; BYel="$ESeq"'1;33m'; UYel="$ESeq"'4;33m'; IYel="$ESeq"'0;93m'; BIYel="$ESeq"'1;93m'; On_Yel="$ESeq"'43m'; On_IYel="$ESeq"'0;103m';
Blu="$ESeq"'0;34m'; BBlu="$ESeq"'1;34m'; UBlu="$ESeq"'4;34m'; IBlu="$ESeq"'0;94m'; BIBlu="$ESeq"'1;94m'; On_Blu="$ESeq"'44m'; On_IBlu="$ESeq"'0;104m';
Pur="$ESeq"'0;35m'; BPur="$ESeq"'1;35m'; UPur="$ESeq"'4;35m'; IPur="$ESeq"'0;95m'; BIPur="$ESeq"'1;95m'; On_Pur="$ESeq"'45m'; On_IPur="$ESeq"'0;105m';
Cya="$ESeq"'0;36m'; BCya="$ESeq"'1;36m'; UCya="$ESeq"'4;36m'; ICya="$ESeq"'0;96m'; BICya="$ESeq"'1;96m'; On_Cya="$ESeq"'46m'; On_ICya="$ESeq"'0;106m';
Whi="$ESeq"'0;37m'; BWhi="$ESeq"'1;37m'; UWhi="$ESeq"'4;37m'; IWhi="$ESeq"'0;97m'; BIWhi="$ESeq"'1;97m'; On_Whi="$ESeq"'47m'; On_IWhi="$ESeq"'0;107m';

printSection() {
  echo -e "${BIYel}>>>> ${BIWhi}${1}${RCol}"
}

info() {
  echo -e "${BIWhi}${1}${RCol}"
}

success() {
  echo -e "${BIGre}${1}${RCol}"
}

error() {
  echo -e "${BIRed}${1}${RCol}"
}

errorAndExit() {
  echo -e "${BIRed}${1}${RCol}"
  exit 1
}
### !COLOR OUTPUT ###

main() {
  if [[ $# -ne 2 ]]; then
    error "Not enough arguments"
    error "Usage: ${0} <EDGE_ID> <EDGE_KEY>"
    exit 1
  fi

  [[ "$(command -v curl)" ]] || errorAndExit "Unable to find curl binary. Please ensure curl is installed before running this script."
  [[ "$(command -v kubectl)" ]] || errorAndExit "Unable to find kubectl binary. Please ensure kubectl is installed before running this script."

  info "Downloading agent manifest..."
  curl -L https://portainer.github.io/k8s/deploy/manifests/agent/ee/portainer-agent-ee24-edge-k8s.yaml -o portainer-agent-edge-k8s.yaml || errorAndExit "Unable to download agent manifest"

  info "Creating Portainer namespace..."
  kubectl create namespace portainer

  info "Creating agent configuration..."
  kubectl create configmap portainer-agent-edge-id "--from-literal=edge.id=$1" -n portainer

  info "Creating agent secret..."
  kubectl create secret generic portainer-agent-edge-key "--from-literal=edge.key=$2" -n portainer

  info "Deploying agent..."
  kubectl apply -f portainer-agent-edge-k8s.yaml || errorAndExit "Unable to deploy agent manifest"

  success "Portainer Edge agent successfully deployed"
  exit 0
}

main "$@"


@@ -65,7 +65,7 @@ spec:
       serviceAccountName: portainer-sa-clusteradmin
       containers:
         - name: portainer-agent
-          image: portainer/agent:2.0.0
+          image: portainer/agent:2.33.1
           imagePullPolicy: Always
           env:
             - name: LOG_LEVEL


@@ -64,7 +64,7 @@ spec:
       serviceAccountName: portainer-sa-clusteradmin
       containers:
         - name: portainer-agent
-          image: portainer/agent:2.0.0
+          image: portainer/agent:2.33.1
           imagePullPolicy: Always
           env:
             - name: LOG_LEVEL


@@ -65,7 +65,7 @@ spec:
       serviceAccountName: portainer-sa-clusteradmin
       containers:
         - name: portainer-agent
-          image: portainer/agent:2.0.0
+          image: portainer/agent:2.33.1
           imagePullPolicy: Always
           env:
             - name: LOG_LEVEL


@@ -1,23 +1,9 @@
 ---
-# Source: portainer/templates/pvc.yaml
-kind: "PersistentVolumeClaim"
-apiVersion: "v1"
+# Source: portainer/templates/namespace.yaml
+apiVersion: v1
+kind: Namespace
 metadata:
   name: portainer
-  namespace: portainer
-  annotations:
-    volume.alpha.kubernetes.io/storage-class: "generic"
-  labels:
-    io.portainer.kubernetes.application.stack: portainer
-    app.kubernetes.io/name: portainer
-    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
-spec:
-  accessModes:
-    - "ReadWriteOnce"
-  resources:
-    requests:
-      storage: "10Gi"
 ---
 # Source: portainer/templates/serviceaccount.yaml
 apiVersion: v1
@@ -28,7 +14,27 @@ metadata:
   labels:
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
+---
+# Source: portainer/templates/pvc.yaml
+kind: "PersistentVolumeClaim"
+apiVersion: "v1"
+metadata:
+  name: portainer
+  namespace: portainer
+  annotations:
+    volume.alpha.kubernetes.io/storage-class: "generic"
+  labels:
+    io.portainer.kubernetes.application.stack: portainer
+    app.kubernetes.io/name: portainer
+    app.kubernetes.io/instance: portainer
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
+spec:
+  accessModes:
+    - "ReadWriteOnce"
+  resources:
+    requests:
+      storage: "10Gi"
 ---
 # Source: portainer/templates/rbac.yaml
 apiVersion: rbac.authorization.k8s.io/v1
@@ -38,15 +44,15 @@ metadata:
   labels:
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
-- kind: ServiceAccount
-  namespace: portainer
-  name: portainer-sa-clusteradmin
+  - kind: ServiceAccount
+    namespace: portainer
+    name: portainer-sa-clusteradmin
 ---
 # Source: portainer/templates/service.yaml
 apiVersion: v1
@@ -58,7 +64,7 @@ metadata:
     io.portainer.kubernetes.application.stack: portainer
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 spec:
   type: NodePort
   ports:
@@ -67,6 +73,11 @@ spec:
       protocol: TCP
       name: http
       nodePort: 30777
+    - port: 9443
+      targetPort: 9443
+      protocol: TCP
+      name: https
+      nodePort: 30779
     - port: 30776
       targetPort: 30776
       protocol: TCP
@@ -86,7 +97,7 @@ metadata:
     io.portainer.kubernetes.application.stack: portainer
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 spec:
   replicas: 1
   strategy:
@@ -101,16 +112,19 @@ spec:
         app.kubernetes.io/name: portainer
         app.kubernetes.io/instance: portainer
     spec:
+      nodeSelector:
+        {}
       serviceAccountName: portainer-sa-clusteradmin
       volumes:
-      - name: "data"
-        persistentVolumeClaim:
-          claimName: portainer
+        - name: "data"
+          persistentVolumeClaim:
+            claimName: portainer
       containers:
         - name: portainer
-          image: "portainer/portainer-ee:2.0.0"
-          imagePullPolicy: IfNotPresent
-          args: [ '--tunnel-port','30776' ]
+          image: "portainer/portainer-ee:2.33.1"
+          imagePullPolicy: Always
+          args:
+            - '--tunnel-port=30776'
           volumeMounts:
             - name: data
               mountPath: /data
@@ -118,17 +132,21 @@ spec:
         - name: http
           containerPort: 9000
           protocol: TCP
+        - name: https
+          containerPort: 9443
+          protocol: TCP
         - name: tcp-edge
           containerPort: 8000
-          protocol: TCP
+          protocol: TCP
       livenessProbe:
         httpGet:
           path: /
-          port: 9000
+          port: 9443
+          scheme: HTTPS
       readinessProbe:
         httpGet:
           path: /
-          port: 9000
+          port: 9443
+          scheme: HTTPS
       resources:
         {}


@@ -1,23 +1,9 @@
 ---
-# Source: portainer/templates/pvc.yaml
-kind: "PersistentVolumeClaim"
-apiVersion: "v1"
+# Source: portainer/templates/namespace.yaml
+apiVersion: v1
+kind: Namespace
 metadata:
   name: portainer
-  namespace: portainer
-  annotations:
-    volume.alpha.kubernetes.io/storage-class: "generic"
-  labels:
-    io.portainer.kubernetes.application.stack: portainer
-    app.kubernetes.io/name: portainer
-    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
-spec:
-  accessModes:
-    - "ReadWriteOnce"
-  resources:
-    requests:
-      storage: "10Gi"
 ---
 # Source: portainer/templates/serviceaccount.yaml
 apiVersion: v1
@@ -28,7 +14,27 @@ metadata:
   labels:
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
+---
+# Source: portainer/templates/pvc.yaml
+kind: "PersistentVolumeClaim"
+apiVersion: "v1"
+metadata:
+  name: portainer
+  namespace: portainer
+  annotations:
+    volume.alpha.kubernetes.io/storage-class: "generic"
+  labels:
+    io.portainer.kubernetes.application.stack: portainer
+    app.kubernetes.io/name: portainer
+    app.kubernetes.io/instance: portainer
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
+spec:
+  accessModes:
+    - "ReadWriteOnce"
+  resources:
+    requests:
+      storage: "10Gi"
 ---
 # Source: portainer/templates/rbac.yaml
 apiVersion: rbac.authorization.k8s.io/v1
@@ -38,15 +44,15 @@ metadata:
   labels:
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
-- kind: ServiceAccount
-  namespace: portainer
-  name: portainer-sa-clusteradmin
+  - kind: ServiceAccount
+    namespace: portainer
+    name: portainer-sa-clusteradmin
 ---
 # Source: portainer/templates/service.yaml
 apiVersion: v1
@@ -58,7 +64,7 @@ metadata:
     io.portainer.kubernetes.application.stack: portainer
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 spec:
   type: LoadBalancer
   ports:
@@ -66,6 +72,10 @@ spec:
       targetPort: 9000
       protocol: TCP
       name: http
+    - port: 9443
+      targetPort: 9443
+      protocol: TCP
+      name: https
     - port: 8000
       targetPort: 8000
       protocol: TCP
@@ -84,7 +94,7 @@ metadata:
     io.portainer.kubernetes.application.stack: portainer
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 spec:
   replicas: 1
   strategy:
@@ -99,15 +109,18 @@ spec:
         app.kubernetes.io/name: portainer
         app.kubernetes.io/instance: portainer
     spec:
+      nodeSelector:
+        {}
       serviceAccountName: portainer-sa-clusteradmin
       volumes:
-      - name: "data"
-        persistentVolumeClaim:
-          claimName: portainer
+        - name: "data"
+          persistentVolumeClaim:
+            claimName: portainer
       containers:
         - name: portainer
-          image: "portainer/portainer-ee:2.0.0"
-          imagePullPolicy: IfNotPresent
+          image: "portainer/portainer-ee:2.33.1"
+          imagePullPolicy: Always
+          args:
           volumeMounts:
             - name: data
               mountPath: /data
@@ -115,17 +128,21 @@ spec:
         - name: http
           containerPort: 9000
           protocol: TCP
+        - name: https
+          containerPort: 9443
+          protocol: TCP
         - name: tcp-edge
           containerPort: 8000
-          protocol: TCP
+          protocol: TCP
       livenessProbe:
         httpGet:
           path: /
-          port: 9000
+          port: 9443
+          scheme: HTTPS
       readinessProbe:
         httpGet:
           path: /
-          port: 9000
+          port: 9443
+          scheme: HTTPS
       resources:
         {}


@@ -1,23 +1,9 @@
 ---
-# Source: portainer/templates/pvc.yaml
-kind: "PersistentVolumeClaim"
-apiVersion: "v1"
+# Source: portainer/templates/namespace.yaml
+apiVersion: v1
+kind: Namespace
 metadata:
   name: portainer
-  namespace: portainer
-  annotations:
-    volume.alpha.kubernetes.io/storage-class: "generic"
-  labels:
-    io.portainer.kubernetes.application.stack: portainer
-    app.kubernetes.io/name: portainer
-    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
-spec:
-  accessModes:
-    - "ReadWriteOnce"
-  resources:
-    requests:
-      storage: "10Gi"
 ---
 # Source: portainer/templates/serviceaccount.yaml
 apiVersion: v1
@@ -28,7 +14,27 @@ metadata:
   labels:
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
+---
+# Source: portainer/templates/pvc.yaml
+kind: "PersistentVolumeClaim"
+apiVersion: "v1"
+metadata:
+  name: portainer
+  namespace: portainer
+  annotations:
+    volume.alpha.kubernetes.io/storage-class: "generic"
+  labels:
+    io.portainer.kubernetes.application.stack: portainer
+    app.kubernetes.io/name: portainer
+    app.kubernetes.io/instance: portainer
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
+spec:
+  accessModes:
+    - "ReadWriteOnce"
+  resources:
+    requests:
+      storage: "10Gi"
 ---
 # Source: portainer/templates/rbac.yaml
 apiVersion: rbac.authorization.k8s.io/v1
@@ -38,15 +44,15 @@ metadata:
   labels:
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
-- kind: ServiceAccount
-  namespace: portainer
-  name: portainer-sa-clusteradmin
+  - kind: ServiceAccount
+    namespace: portainer
+    name: portainer-sa-clusteradmin
 ---
 # Source: portainer/templates/service.yaml
 apiVersion: v1
@@ -58,7 +64,7 @@ metadata:
     io.portainer.kubernetes.application.stack: portainer
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 spec:
   type: LoadBalancer
   ports:
@@ -66,6 +72,10 @@ spec:
       targetPort: 9000
       protocol: TCP
       name: http
+    - port: 9443
+      targetPort: 9443
+      protocol: TCP
+      name: https
     - port: 8000
       targetPort: 8000
       protocol: TCP
@@ -84,7 +94,7 @@ metadata:
     io.portainer.kubernetes.application.stack: portainer
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 spec:
   replicas: 1
   strategy:
@@ -99,15 +109,18 @@ spec:
         app.kubernetes.io/name: portainer
         app.kubernetes.io/instance: portainer
     spec:
+      nodeSelector:
+        {}
       serviceAccountName: portainer-sa-clusteradmin
       volumes:
-      - name: "data"
-        persistentVolumeClaim:
-          claimName: portainer
+        - name: "data"
+          persistentVolumeClaim:
+            claimName: portainer
       containers:
         - name: portainer
-          image: "portainer/portainer-ce:latest"
-          imagePullPolicy: IfNotPresent
+          image: "portainer/portainer-ce:2.33.1"
+          imagePullPolicy: Always
+          args:
           volumeMounts:
             - name: data
               mountPath: /data
@@ -115,17 +128,21 @@ spec:
         - name: http
           containerPort: 9000
           protocol: TCP
+        - name: https
+          containerPort: 9443
+          protocol: TCP
         - name: tcp-edge
           containerPort: 8000
-          protocol: TCP
+          protocol: TCP
       livenessProbe:
         httpGet:
           path: /
-          port: 9000
+          port: 9443
+          scheme: HTTPS
       readinessProbe:
         httpGet:
           path: /
-          port: 9000
+          port: 9443
+          scheme: HTTPS
       resources:
         {}


@ -1,23 +1,9 @@
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolumeClaim"
apiVersion: "v1"
# Source: portainer/templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: portainer
namespace: portainer
annotations:
volume.alpha.kubernetes.io/storage-class: "generic"
labels:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "2.0.0"
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "10Gi"
---
# Source: portainer/templates/serviceaccount.yaml
apiVersion: v1
@@ -28,7 +14,27 @@ metadata:
   labels:
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
+---
+# Source: portainer/templates/pvc.yaml
+kind: "PersistentVolumeClaim"
+apiVersion: "v1"
+metadata:
+  name: portainer
+  namespace: portainer
+  annotations:
+    volume.alpha.kubernetes.io/storage-class: "generic"
+  labels:
+    io.portainer.kubernetes.application.stack: portainer
+    app.kubernetes.io/name: portainer
+    app.kubernetes.io/instance: portainer
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
+spec:
+  accessModes:
+    - "ReadWriteOnce"
+  resources:
+    requests:
+      storage: "10Gi"
 ---
 # Source: portainer/templates/rbac.yaml
 apiVersion: rbac.authorization.k8s.io/v1
@@ -38,15 +44,15 @@ metadata:
   labels:
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
-- kind: ServiceAccount
-  namespace: portainer
-  name: portainer-sa-clusteradmin
+  - kind: ServiceAccount
+    namespace: portainer
+    name: portainer-sa-clusteradmin
 ---
 # Source: portainer/templates/service.yaml
 apiVersion: v1
@@ -58,7 +64,7 @@ metadata:
     io.portainer.kubernetes.application.stack: portainer
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 spec:
   type: NodePort
   ports:
@@ -67,6 +73,11 @@ spec:
       protocol: TCP
       name: http
       nodePort: 30777
+    - port: 9443
+      targetPort: 9443
+      protocol: TCP
+      name: https
+      nodePort: 30779
     - port: 30776
       targetPort: 30776
       protocol: TCP
@@ -86,7 +97,7 @@ metadata:
     io.portainer.kubernetes.application.stack: portainer
     app.kubernetes.io/name: portainer
     app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
 spec:
   replicas: 1
   strategy:
@@ -101,16 +112,19 @@ spec:
         app.kubernetes.io/name: portainer
         app.kubernetes.io/instance: portainer
     spec:
+      nodeSelector:
+        {}
       serviceAccountName: portainer-sa-clusteradmin
       volumes:
-      - name: "data"
-        persistentVolumeClaim:
-          claimName: portainer
+        - name: "data"
+          persistentVolumeClaim:
+            claimName: portainer
       containers:
       - name: portainer
-        image: "portainer/portainer-ce:latest"
-        imagePullPolicy: IfNotPresent
-        args: [ '--tunnel-port','30776' ]
+        image: "portainer/portainer-ce:2.33.1"
+        imagePullPolicy: Always
+        args:
+          - '--tunnel-port=30776'
         volumeMounts:
         - name: data
           mountPath: /data
@@ -118,17 +132,21 @@ spec:
         - name: http
           containerPort: 9000
           protocol: TCP
+        - name: https
+          containerPort: 9443
+          protocol: TCP
         - name: tcp-edge
           containerPort: 8000
           protocol: TCP
         livenessProbe:
           httpGet:
             path: /
-            port: 9000
+            port: 9443
+            scheme: HTTPS
         readinessProbe:
           httpGet:
             path: /
-            port: 9000
+            port: 9443
+            scheme: HTTPS
         resources:
           {}
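The deployment hunks above move both probes from plain HTTP on port 9000 to HTTPS on port 9443 and add 9443 to the container's declared ports. As an illustration only (the helper name and dict layout below are made up, not part of the chart or of kubectl), the invariant those changes preserve — every HTTP probe targets a port the container actually declares — can be sketched in Python:

```python
def probes_match_ports(container: dict) -> bool:
    """Return True if every httpGet probe targets a declared containerPort."""
    declared = {p["containerPort"] for p in container.get("ports", [])}
    for probe_name in ("livenessProbe", "readinessProbe"):
        http_get = container.get(probe_name, {}).get("httpGet")
        if http_get is not None and http_get["port"] not in declared:
            return False
    return True

# The container spec as rendered by the manifest above: three ports,
# with both probes pointed at the HTTPS port 9443.
portainer = {
    "ports": [
        {"containerPort": 9000},
        {"containerPort": 9443},
        {"containerPort": 8000},
    ],
    "livenessProbe": {"httpGet": {"path": "/", "port": 9443, "scheme": "HTTPS"}},
    "readinessProbe": {"httpGet": {"path": "/", "port": 9443, "scheme": "HTTPS"}},
}
```

Had the diff changed only the probe ports without also adding `containerPort: 9443`, the check above would fail for the rendered spec.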

index.yaml (new file, 148 lines)

@@ -0,0 +1,148 @@
apiVersion: v1
entries:
portainer:
- apiVersion: v2
appVersion: 2.0.0
created: "2020-12-01T21:51:37.367634957Z"
description: Helm chart used to deploy the Portainer for Kubernetes
digest: f0e13dd3e7a05d17cb35c7879ffa623fd43b2c10ca968203e302b7a6c2764ddb
home: https://www.portainer.io
icon: https://github.com/portainer/portainer/raw/develop/app/assets/ico/apple-touch-icon.png
maintainers:
- email: davidy@funkypenguin.co.nz
name: funkypenguin
url: https://www.funkypenguin.co.nz
name: portainer
sources:
- https://github.com/portainer/k8s
type: application
urls:
- https://github.com/portainer/k8s/releases/download/portainer-1.0.6/portainer-1.0.6.tgz
version: 1.0.6
- apiVersion: v2
appVersion: 2.0.0
created: "2020-12-01T21:44:52.41671014Z"
description: Helm chart used to deploy the Portainer for Kubernetes
digest: 07c1f1bd60fe0a87f4ecf3be6d24f956cf3fe22aa8604797a9974baedc686b95
home: https://www.portainer.io
icon: https://github.com/portainer/portainer/raw/develop/app/assets/ico/apple-touch-icon.png
maintainers:
- email: davidy@funkypenguin.co.nz
name: funkypenguin
url: https://www.funkypenguin.co.nz
name: portainer
sources:
- https://github.com/portainer/k8s
type: application
urls:
- https://github.com/portainer/k8s/releases/download/portainer-1.0.6-pre1/portainer-1.0.6-pre1.tgz
version: 1.0.6-pre1
- apiVersion: v2
appVersion: 2.0.0
created: "2020-11-17T22:22:39.524509999Z"
description: Helm chart used to deploy the Portainer for Kubernetes
digest: 86cced0084dc8732a25af3e3a2d60edaf886f49d97e35787407f0758c4119cd2
home: https://www.portainer.io
icon: https://github.com/portainer/portainer/raw/develop/app/assets/ico/apple-touch-icon.png
maintainers:
- email: davidy@funkypenguin.co.nz
name: funkypenguin
url: https://www.funkypenguin.co.nz
name: portainer
sources:
- https://github.com/portainer/k8s
type: application
urls:
- https://github.com/portainer/k8s/releases/download/portainer-1.0.4/portainer-1.0.4.tgz
version: 1.0.4
- apiVersion: v2
appVersion: 2.0.0
created: "2020-09-29T23:54:51.913236184Z"
description: Helm chart used to deploy the Portainer for Kubernetes
digest: d58cd497858380c0fd1a065c503393aacfeb2c4f46efc74f26c72738dae1becc
home: https://www.portainer.io
icon: https://github.com/portainer/portainer/raw/develop/app/assets/ico/apple-touch-icon.png
maintainers:
- email: davidy@funkypenguin.co.nz
name: funkypenguin
url: https://www.funkypenguin.co.nz
name: portainer
sources:
- https://github.com/portainer/k8s
type: application
urls:
- https://github.com/portainer/k8s/releases/download/portainer-1.0.3/portainer-1.0.3.tgz
version: 1.0.3
- apiVersion: v2
appVersion: 2.0.0
created: "2020-08-31T03:07:15.090536611Z"
description: Helm chart used to deploy the Portainer for Kubernetes
digest: 5c3c00397ca1364c2907ab8cc514974be788671121d5122324d9174dce4c1038
home: https://www.portainer.io
icon: https://github.com/portainer/portainer/raw/develop/app/assets/ico/apple-touch-icon.png
maintainers:
- email: davidy@funkypenguin.co.nz
name: funkypenguin
url: https://www.funkypenguin.co.nz
name: portainer
sources:
- https://github.com/portainer/k8s
type: application
urls:
- https://github.com/portainer/k8s/releases/download/portainer-1.0.2/portainer-1.0.2.tgz
version: 1.0.2
- apiVersion: v2
appVersion: 2.0.0
created: "2020-08-28T03:09:57.770163753Z"
description: Helm chart used to deploy the Portainer for Kubernetes
digest: 94e86def2616805735d6e0ea454ade9343df91809f5a9cb16ae48e16356dba98
home: https://www.portainer.io
icon: https://github.com/portainer/portainer/raw/develop/app/assets/ico/apple-touch-icon.png
maintainers:
- email: davidy@funkypenguin.co.nz
name: funkypenguin
url: https://www.funkypenguin.co.nz
name: portainer
sources:
- https://github.com/portainer/k8s
type: application
urls:
- https://github.com/portainer/k8s/releases/download/portainer-1.0.1/portainer-1.0.1.tgz
version: 1.0.1
- apiVersion: v2
appVersion: 2.0.0
created: "2020-08-28T02:16:26.042169083Z"
description: Helm chart used to deploy the Portainer for Kubernetes
digest: cb9ee53a4148518685763b2411e33a1ce3c61024d1e3ba275f6637a1e7e57527
home: https://www.portainer.io
icon: https://github.com/portainer/portainer/raw/develop/app/assets/ico/apple-touch-icon.png
maintainers:
- email: davidy@funkypenguin.co.nz
name: funkypenguin
url: https://www.funkypenguin.co.nz
name: portainer
sources:
- https://github.com/portainer/k8s
type: application
urls:
- https://github.com/portainer/k8s/releases/download/portainer-1.0.0/portainer-1.0.0.tgz
version: 1.0.0
- apiVersion: v2
appVersion: 1.0.0
created: "2020-08-19T00:00:46.898833763Z"
description: Helm chart used to deploy the Portainer for Kubernetes
digest: edb0b7d0943bc6a97705c83c94bc8daa8fe842bd9fa56c133286f07e1be5fe42
home: https://www.portainer.io
icon: https://github.com/portainer/portainer/raw/develop/app/assets/ico/apple-touch-icon.png
maintainers:
- email: davidy@funkypenguin.co.nz
name: funkypenguin
url: https://www.funkypenguin.co.nz
name: portainer
sources:
- https://github.com/portainer/portainer-k8s
type: application
urls:
- https://github.com/portainer/charts/releases/download/portainer-0.0.1-fp1/portainer-0.0.1-fp1.tgz
version: 0.0.1-fp1
generated: "2020-08-19T00:00:46.754739363Z"
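The index above lists both `1.0.6` and `1.0.6-pre1`. Helm orders chart versions by SemVer 2.0.0 precedence, under which a pre-release sorts below the corresponding release, so `helm install` without an explicit `--version` would pick `1.0.6` here. A rough sketch of that ordering (not Helm's actual implementation; identifier handling is simplified):

```python
# Rough sketch of SemVer 2.0.0 precedence for the chart versions in the
# index.yaml above. NOT Helm's code; numeric pre-release identifiers
# compare numerically, all others lexically.
def semver_key(version: str):
    core, _, pre = version.partition("-")
    major, minor, patch = (int(p) for p in core.split("."))
    # Releases outrank their own pre-releases: "1.0.6-pre1" < "1.0.6",
    # hence the boolean `pre == ""` in the key.
    pre_ids = tuple(
        (0, int(p), "") if p.isdigit() else (1, 0, p)
        for p in pre.split(".")
    ) if pre else ()
    return (major, minor, patch, pre == "", pre_ids)

versions = ["1.0.6", "1.0.6-pre1", "1.0.4", "1.0.3",
            "1.0.2", "1.0.1", "1.0.0", "0.0.1-fp1"]
newest_first = sorted(versions, key=semver_key, reverse=True)
```

With this key, `newest_first` starts with `1.0.6`, followed by `1.0.6-pre1`, matching the order a Helm client would resolve.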