Compare commits


97 Commits

Author SHA1 Message Date
Steven Kang 47cac8b428
chore: version bump 2.33.1 (#188) 2025-08-27 17:39:38 +12:00
samdulam 3fa625a897
Release 2.33.0 (#187) 2025-08-20 10:50:41 +05:30
Yajith Dayarathna 7e6f805da9
Merge pull request #181 from portainer/chore/workflow-permissions-pull-requests
adjusting workflow permissions - pull-requests: write
2025-07-03 08:23:29 +12:00
Yajith Dayarathna afb12adcb6
adjusting workflow permissions 2025-07-03 08:21:35 +12:00
Yajith Dayarathna 00322e4686
Merge pull request #180 from portainer/chore/workflow-permissions
adjusting workflow permissions - 2
2025-07-03 08:15:44 +12:00
Yajith Dayarathna 66ca5a437b
Merge branch 'master' into chore/workflow-permissions 2025-07-03 08:13:42 +12:00
Yajith Dayarathna c481b0a01a
adjusting workflow permissions 2025-07-03 08:12:58 +12:00
Yajith Dayarathna fd81adc8ec
Merge pull request #179 from portainer/chore/workflow-permissions
adjusting workflow permissions
2025-07-03 08:09:41 +12:00
Yajith Dayarathna 46e3aa61ed
workflow update 2025-07-03 08:01:44 +12:00
samdulam 55aea28d1c
2.27.9 with trusted-origins added (#178)
* 2.27.9 with trusted-origins added

* typo fix
2025-07-02 20:43:19 +12:00
samdulam aa64a6225b
Release 2.27.8 (#177) 2025-06-25 14:29:03 +12:00
samdulam b36c9d2e86
release 2.27.7 (#175) 2025-06-17 19:36:21 +12:00
samdulam c3af48aa52
updates for release 2.27.6 (#171) 2025-05-09 07:15:43 +05:30
samdulam 6f039e99d8
2.27.5 Release Update (#170) 2025-05-02 10:42:14 +05:30
samdulam bc906681d9
release update (#167) 2025-04-15 12:22:03 +05:30
samdulam 7d76af4fc5
release 2.27.3 (#166) 2025-03-25 08:33:39 +05:30
samdulam c425562ecb
release 2.27.2 (#165) 2025-03-19 09:53:27 +05:30
samdulam 0e14d24fe8
2.27.1 Release + Ci Workflow Updates (#164)
* release 2.27.0

* 2.27.1 Release + ci workflow updates

* add helm version

* update actions

* update workflow

* change helm version to be compatible
2025-02-27 11:44:45 +05:30
samdulam 70eac4ed30
release 2.27.0 (#163) 2025-02-20 10:46:38 +05:30
samdulam c935295160
Release 2.21.5 (#158)
* Release 2.21.5

* python version fix
2024-12-20 14:25:59 +13:00
mwoudstra 962188051e
Add option to set tolerations (#155)
* Add option to set tolerations

* Bump chart version
2024-11-07 10:24:41 +05:30
samdulam aba1aa8f56
2.21.4 release (#156) 2024-10-25 08:33:01 +05:30
samdulam e0206df0d2
bump chart version (#153) 2024-10-18 15:09:14 +05:30
eMagiz a0248dda9f
Optional field for rbac resources (#151)
* Optional field for rbac resources

* Feedback changes

* Feedback II

* rbac to localMgmt in the values file

---------

Co-authored-by: Omar Gadelmawla <o.gadelmawla@emagiz.com>
2024-10-18 19:25:33 +13:00
James Carppe 5087dd9170
Update Helm chart for 2.21.3 (#152) 2024-10-08 17:21:35 +13:00
James Carppe 4272809504
Update for Release 2.21.2 and kind cluster version upgrade for test (#149)
Co-authored-by: samdulam <sam.dulam@portainer.io>
2024-09-24 14:02:48 +12:00
James Carppe c3a4bb19c5
Update for 2.21.1 (#148) 2024-09-10 08:02:05 +05:30
samdulam b9723d814d
Update for 2.21.0 (#147) 2024-08-27 07:53:38 +05:30
James Carppe 70d24842cb
Version bump for 2.19.5 (#143) 2024-04-22 10:04:47 +05:30
samdulam 6f34933803
add periodSeconds:30 to liveness and readiness probes (#142)
* add periodSeconds:30 to liveness and readiness probes to stop pod from restarting before its ready.

* change feature flags to an array and adjust the template

* change feature.flags to list

* fix template

* use range instead of toyaml so we can use squote

* increase probe times to 5
2024-04-08 15:28:19 +05:30
James Carppe 4923718d73
Version bump for 2.19.4 (#140) 2023-12-06 12:05:47 +13:00
James Carppe c90ad06472
Version bump for 2.19.3 (#139) 2023-11-22 12:00:37 +13:00
James Carppe 5c2ea01097
Version bump for 2.19.2 (#138) 2023-11-13 07:13:12 +05:30
samdulam 5f6fc03ec0
Version bump for 2.19.1 (#136) 2023-09-20 07:51:14 +05:30
samdulam e92f0da498
2.19.0 Release update (#135)
* 2.19.0 Release

* remove deprecated ways of specifying storage class @pchang388

* storageclassname to be only used when a storageclass is specified
2023-08-31 10:43:38 +05:30
Nicholas Malcolm 582a6f376f
Fix some whitepsace and formatting issues (#123)
Co-authored-by: samdulam <sam.dulam@portainer.io>
2023-08-31 09:57:53 +05:30
samdulam 50b1ba3f55
2.18.4 Release (#132) 2023-07-07 12:14:46 +12:00
samdulam 310d8f757e
2.18.3 (#130) 2023-05-22 16:50:16 +05:30
Steven Kang 9b1309fd80
feat(helm): update to `2.18.2` (#129) 2023-05-01 12:13:28 +12:00
samdulam edf9ad7fbe
2.18.1 with mtls and updated test workflow (#128)
* 2.18.1 with mtls

* update tests
2023-04-18 22:59:30 +12:00
James Carppe 6555aa9ac3
Update version to 2.17.1 (#126) 2023-02-22 15:27:48 +13:00
samdulam 8adaa4d4d0
update github workflow, add k8s 1.23/4/5 (#125)
* update and bump for 2.17

* Update testing, add 1.23/4/5
2023-02-07 07:43:44 +05:30
samdulam 8f9fd0bf4f
update and bump for 2.17 (#124) 2023-02-07 14:21:38 +13:00
samdulam 9dfed2a871
2.16.2 (#121) 2022-11-21 08:57:49 +05:30
James Carppe dd5a8e0ae0
Update for 2.16.1 (#120) 2022-11-09 18:06:35 +13:00
samdulam 1f9740d078
Update for 2.16 (#119) 2022-10-31 08:00:12 +05:30
James Carppe 6dcd5bd9e9
Update version to 2.15.1 (#118) 2022-09-16 12:55:58 +12:00
samdulam 136d8d7ec3
release 2.15 (#117) 2022-09-06 11:45:46 +12:00
samdulam 389ee1a163
Updates for 2.14.2 release (#116) 2022-07-26 22:09:12 +05:30
James Carppe e783f7b498
Update to 2.14.1 and chart bump (#115) 2022-07-12 14:15:06 +12:00
samdulam a45f047a03
Update to 2.14.0 and chart bump (#114) 2022-06-28 09:39:56 +05:30
samdulam f96411134a
2.13.1 Update (#112) 2022-05-12 15:43:22 +12:00
Steven Kang d230f5ddb2
Make Persistency Optional (#110) 2022-05-12 09:16:20 +12:00
samdulam 1d0aa74dcd
Chart and Manifest Update for 2.13 ce/ee (#111) 2022-05-09 21:51:46 +12:00
samdulam 046a02d6c2
Chart upgrade for EE Release 2.12.2 (#109)
* EE 2.12.0 Release Updates

* manifest updates

* Updates for 2.12.1

* update for 2.12.2
2022-04-04 11:09:45 +12:00
Oscar Zhou 1d3bd8b979
fix(k8s/helm): add semantic version string check (#108) 2022-03-29 14:58:11 +13:00
Oscar Zhou e7aa7b564b
fix(k8s/helm): change to https only causing service crash with helm install (#101) 2022-03-29 10:27:37 +13:00
samdulam 645923289e
Rel2.12.0 (#102)
* EE 2.12.0 Release Updates

* manifest updates

* Updates for 2.12.1
2022-03-15 10:40:10 +13:00
samdulam eb469ca85a
Update Chart and manifests for release EE-2.12.0 (#100)
* EE 2.12.0 Release Updates

* manifest updates
2022-03-09 02:24:55 +13:00
Steven Kang 607750b973
Removing CODEOWNERS 2022-02-19 10:53:59 +13:00
James Carppe f32ffbbf68
Bump version to 2.11.1 (#96) 2022-02-08 11:44:59 +13:00
samdulam 33666e00e8
Fix ingress 1.2x (#91)
* add condition for 1.21>=

* fix ingress object for v1

* fix

* https by default

* chart version bump

* 9443 only if tls.force is true

* typo

* typo

* fix indent

* lint

* tlsforced var

* fix version conditions

* typo

* typo

* typo

* typo

* remove extra end statement

* feat(helm): improved `ingress` to support a various of Kubernetes versions and added `ingressClass` support

* Update values.yaml

Co-authored-by: Steven Kang <stevenk@Stevens-MacBook-Pro.local>
2022-01-24 15:35:14 +13:00
Jevon Tane fb6da8e019
Merge pull request #85 from portainer/feat/ptd272/add-feature-flag
Introduce Portainer Feature Flag
2021-12-16 17:18:52 +13:00
Steven Kang b2ccdc7b4d Merge branch 'feat/ptd272/add-feature-flag' of https://github.com/portainer/k8s into feat/ptd272/add-feature-flag 2021-12-13 16:24:02 +13:00
Steven Kang 6e3cbc55c9 feat(helm): introduced `feature.flags` - `PTD-272` 2021-12-13 16:23:59 +13:00
Steven Kang 1e1a3693cd feat(helm): introduced `feature.flags` - `PTD-272` 2021-12-13 16:23:59 +13:00
samdulam 621b722666
Rel2.11 - Manifest Updates (#82)
* Release CE 2.11 Manifest Updates

* Manifest Updates for CE2.11

Co-authored-by: yi-portainer <yi.chen@portainer.io>
2021-12-09 13:42:42 +13:00
Steven Kang 6eecb3f387 feat(helm): introduced `feature.flags` - `PTD-272` 2021-12-01 17:26:36 +13:00
Steven Kang 834e513ef7 feat(helm): introduced `feature.flags` - `PTD-272` 2021-12-01 17:25:26 +13:00
Steven Kang cba266202b
feat(helm): introduce TLS only flag (#81)
Co-authored-by: samdulam <sam.dulam@portainer.io>
Co-authored-by: ssbkang <steven.kang@portainer.io>
2021-12-01 14:01:05 +13:00
samdulam 6e01446bc0
Update portainer-agent-edge-k8s.yaml (#80)
* Update portainer-agent-edge-k8s.yaml

* Update portainer-agent-k8s-lb.yaml

* Update portainer-agent-k8s-nodeport.yaml

* Release 2.9.3 Update
2021-11-22 11:33:19 +13:00
samdulam 56ee20b679
Manifest and Helm Updates for EE-2.10.0 (#78)
* Manifest and Helm Updates for EE-2.10.0

* Create portainer-agent-ee210-k8s-nodeport.yaml

* updates for ee2.10

* manifest files

Co-authored-by: Ubuntu <ubuntu@ip-172-31-17-39.ec2.internal>
2021-11-16 09:02:38 +13:00
Anthony Lapenna f6ca6c01b4
Fix EE manifests and Helm deployment template (#76)
* Fix EE manifests and Helm deployment template

* bump chart version

* fix invalid condition for the exposed ports

* add condition to only publish 9443 if ce

Co-authored-by: Sam Dulam <Sam.Dulam@portainer.io>
2021-10-12 10:57:06 +13:00
samdulam 78294f83dc
fix ee manifests and update for ce 2.9.1 (#75) 2021-10-12 08:48:55 +13:00
samdulam 0190fa934f
Merge pull request #73 from portainer/add-ssl
Update chart to support BYO SSL certificates
2021-09-27 15:23:50 +13:00
samdulam a158f5557a
Update on-push-lint-charts.yml
change uses: helm/kind-action@v1.1.0 to 1.2.0
2021-09-27 15:16:28 +13:00
David Young 41f944d116
Bump chart version for ssl changes
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-09-16 17:08:53 +12:00
David Young d62f43b5a1
Update httpsNodePort to 30779
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-09-16 17:08:24 +12:00
David Young 026f1c3dea
Only override ssl cert/key path if using existing cert
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-09-13 10:29:51 +12:00
David Young ce1dfc6b23
Switch probes to HTTPS scheme
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-09-13 09:47:19 +12:00
David Young facec87b81
First cut at chart supporting SSL
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-08-19 13:24:52 +12:00
David Young 143789a0bd
Add basic CODEOWNERS
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-08-12 10:36:31 +12:00
samdulam 85dba3221b
change versions in manifest files (#70)
* bump chart ver, agent tag in manifests for ce 2.6

* EE 2.7.0 and chart version bump

* Change Versions in manifest files for 2.7
2021-08-02 12:15:25 +12:00
samdulam 3ee87c0b99
Prep Chart for EE2.7 (#68)
* bump chart ver, agent tag in manifests for ce 2.6

* EE 2.7.0 and chart version bump
2021-07-29 18:29:38 +12:00
Stéphane Busso 4158132537
Merge pull request #60 from portainer/feat/EE-332/EE-562/edge-insecure-poll
feat(agent): support insecure poll flag
2021-07-15 17:17:24 +12:00
Chaim Lev-Ari 5f6b237169 feat: update to use the same filenames 2021-07-14 14:10:16 +03:00
Chaim Lev-Ari f3f7c426aa refactor: rename edge agent scripts 2021-07-14 13:31:42 +03:00
Chaim Lev-Ari 5dc1b40490 fix(agent): change version of agent 2021-07-14 13:31:42 +03:00
Chaim Lev-Ari 05ff2ecf3a feat(agent): add version number to files 2021-07-14 13:31:42 +03:00
Chaim Lev-Ari 0d72ea6b65 feat(agent): version files 2021-07-14 13:31:42 +03:00
Chaim Lev-Ari 810891be98 feat(agent): support insecure poll flag 2021-07-14 13:31:42 +03:00
samdulam 33e1410f50
bump chart ver, agent tag in manifests for ce 2.6 (#65) 2021-06-25 14:17:24 +12:00
Neil Cresswell 52d67fec4e
Update portainer-agent-edge-k8s.yaml 2021-05-24 17:52:44 +12:00
Yi Chen f59a62efda
Update ee version to 2.4.0 (#61)
* * update ee version to 2.4.0
* use file versioning for ee agent
* update ee agent version to 2.4.0

* * fix stack ee agent versions
2021-04-30 19:47:20 +12:00
samdulam 52a05b429f
Update Notes and Bump Chart Ver (#57) 2021-03-19 15:00:06 +13:00
Yi Chen 75ae994f57
* update chart to use ee 2.0.2 (#55) 2021-03-12 11:02:18 +13:00
David Young cdeaec80b2
Update EE in chart to 2.0.1 - Fixes #52 (#54)
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-02-23 20:48:36 +13:00
45 changed files with 1515 additions and 148 deletions

.ci/scripts/local-ct-lint.sh Executable file → Normal file (mode change only)

.ci/scripts/local-kube-score.sh Executable file → Normal file (mode change only)


@@ -4,22 +4,27 @@ on:
push:
paths:
- 'charts/**'
- '.github/**'
pull_request:
branches:
- master
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.10.0
HELM_VERSION: v3.4.1
HELM_VERSION: v3.10.1
jobs:
lint-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v1
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v1
uses: azure/setup-helm@v4.2.0
with:
version: ${{ env.HELM_VERSION }}
@@ -40,12 +45,14 @@ jobs:
--enable-optional-test container-security-context-privileged
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v2
- uses: actions/setup-python@v5.3.0
with:
python-version: 3.7
python-version: 3.13.1
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.0.1
uses: helm/chart-testing-action@v2.6.1
with:
version: v3.10.1
- name: Run chart-testing (list-changed)
id: list-changed
@@ -59,32 +66,42 @@ jobs:
run: ct lint --config .ci/ct-config.yaml
# Refer to https://github.com/kubernetes-sigs/kind/releases when updating the node_images
- name: Create 1.20 kind cluster
uses: helm/kind-action@v1.1.0
- name: Create 1.29 kind cluster
uses: helm/kind-action@v1.12.0
with:
node_image: kindest/node:v1.20.2@sha256:8f7ea6e7642c0da54f04a7ee10431549c0257315b3a634f6ef2fecaaedb19bab
cluster_name: kubernetes-1.20
node_image: kindest/node:v1.29.14@sha256:8703bd94ee24e51b778d5556ae310c6c0fa67d761fae6379c8e0bb480e6fea29
cluster_name: kubernetes-1.29
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (install) against 1.20
- name: Run chart-testing (install) against 1.29
run: ct install --config .ci/ct-config.yaml
- name: Create 1.19 kind cluster
uses: helm/kind-action@v1.1.0
- name: Create 1.30 kind cluster
uses: helm/kind-action@v1.12.0
with:
node_image: kindest/node:v1.19.7@sha256:a70639454e97a4b733f9d9b67e12c01f6b0297449d5b9cbbef87473458e26dca
cluster_name: kubernetes-1.19
node_image: kindest/node:v1.30.10@sha256:4de75d0e82481ea846c0ed1de86328d821c1e6a6a91ac37bf804e5313670e507
cluster_name: kubernetes-1.30
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (install) against 1.19
- name: Run chart-testing (install) against 1.30
run: ct install --config .ci/ct-config.yaml
- name: Create 1.18 kind cluster
uses: helm/kind-action@v1.1.0
- name: Create 1.31 kind cluster
uses: helm/kind-action@v1.12.0
with:
node_image: kindest/node:v1.18.15@sha256:5c1b980c4d0e0e8e7eb9f36f7df525d079a96169c8a8f20d8bd108c0d0889cc4
cluster_name: kubernetes-1.18
node_image: kindest/node:v1.31.6@sha256:28b7cbb993dfe093c76641a0c95807637213c9109b761f1d422c2400e22b8e87
cluster_name: kubernetes-1.31
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (install) against 1.18
- name: Run chart-testing (install) against 1.31
run: ct install --config .ci/ct-config.yaml
- name: Create 1.32 kind cluster
uses: helm/kind-action@v1.12.0
with:
node_image: kindest/node:v1.32.2@sha256:f226345927d7e348497136874b6d207e0b32cc52154ad8323129352923a3142f
cluster_name: kubernetes-1.32
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (install) against 1.32
run: ct install --config .ci/ct-config.yaml
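The four near-identical kind steps above are a natural fit for a build matrix; a hypothetical refactor (not part of this change), keeping the same action versions, might look like:

```yaml
# Sketch only: the sha256 digests pinned in the diff above are omitted,
# and the lint / list-changed steps are elided.
jobs:
  lint-test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - k8s: "1.29"
            node_image: kindest/node:v1.29.14
          - k8s: "1.32"
            node_image: kindest/node:v1.32.2
          # ...plus the 1.30 and 1.31 entries
    steps:
      - name: Create ${{ matrix.k8s }} kind cluster
        uses: helm/kind-action@v1.12.0
        with:
          node_image: ${{ matrix.node_image }}
          cluster_name: kubernetes-${{ matrix.k8s }}
      - name: Run chart-testing (install) against ${{ matrix.k8s }}
        run: ct install --config .ci/ct-config.yaml
```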


@@ -11,9 +11,11 @@ jobs:
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: write
pages: write
pull-requests: write
steps:
- uses: actions/checkout@v2
@@ -35,4 +37,4 @@ jobs:
with:
destination_branch: "gh-pages"
github_token: "${{ secrets.GITHUB_TOKEN }}"
pr_allow_empty: false


@@ -16,16 +16,16 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 1.0.10
version: 2.33.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: ce-latest-ee-2.0.0
appVersion: ce-latest-ee-2.33.1
sources:
- https://github.com/portainer/k8s
maintainers:
- name: funkypenguin
email: davidy@funkypenguin.co.nz
url: https://www.funkypenguin.co.nz
- name: Portainer
email: platform-team@portainer.io
url: https://www.portainer.io


@@ -64,25 +64,30 @@ The following table lists the configurable parameters of the Portainer chart and
| `nodeSelector` | Used to apply a nodeSelector to the deployment | `{}` |
| `serviceAccount.annotations` | Annotations to add to the service account | `null` |
| `serviceAccount.name` | The name of the service account to use | `portainer-sa-clusteradmin` |
| `localMgmt` | Enables or disables the creation of SA, Roles in local cluster where Portainer runs, only change when you don't need to manage the local cluster through this Portainer instance | `true` |
| `service.type` | Service Type for the main Portainer Service; ClusterIP, NodePort and LoadBalancer | `LoadBalancer` |
| `service.httpPort` | HTTP port for accessing Portainer Web | `9000` |
| `service.httpNodePort` | Static NodePort for accessing Portainer Web. Specify only if the type is NodePort | `30777` |
| `service.edgePort` | TCP port for accessing Portainer Edge | `8000` |
| `service.edgeNodePort` | Static NodePort for accessing Portainer Edge. Specify only if the type is NodePort | `30776` |
| `service.annotations` | Annotations to add to the service | `{}` |
| `feature.flags` | Enable one or more features separated by spaces. For instance, `--feat=open-amt` | `nil` |
| `ingress.enabled` | Create an ingress for Portainer | `false` |
| `ingress.ingressClassName` | For Kubernetes >= 1.18 you should specify the ingress-controller via the field `ingressClassName`. For instance, `nginx` | `nil` |
| `ingress.annotations` | Annotations to add to the ingress. For instance, `kubernetes.io/ingress.class: nginx` | `{}` |
| `ingress.hosts.host` | URL for Portainer Web. For instance, `portainer.example.io` | `nil` |
| `ingress.hosts.paths.path` | Path for the Portainer Web. | `/` |
| `ingress.hosts.paths.port` | Port for the Portainer Web. | `9000` |
| `ingress.tls` | TLS support on ingress. Must create a secret with TLS certificates in advance | `[]` |
| `resources` | Portainer resource requests and limits | `{}` |
| `tls.force` | Force Portainer to be configured to use TLS only | `false` |
| `tls.existingSecret` | Mount the existing TLS secret into the pod | `""` |
| `mtls.enable` | Option to specify mTLS certs to be used by Portainer | `false` |
| `mtls.existingSecret` | Mount the existing mtls secret into the pod | `""` |
| `persistence.enabled` | Whether to enable data persistence | `true` |
| `persistence.existingClaim` | Name of an existing PVC to use for data persistence | `nil` |
| `persistence.size` | Size of the PVC used for persistence | `10Gi` |
| `persistence.annotations` | Annotations to apply to PVC used for persistence | `{}` |
| `persistence.storageClass` | StorageClass to apply to PVC used for persistence | `default` |
| `persistence.accessMode` | AccessMode for persistence | `ReadWriteOnce` |
| `persistence.selector` | Selector for persistence | `nil` |
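As a worked example of the parameters above, a hypothetical values override (hostname and secret name are illustrative, not taken from this change):

```yaml
service:
  type: ClusterIP            # ingress terminates external traffic instead
tls:
  force: true                # HTTPS only, port 9443
  existingSecret: portainer-tls
ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS   # needed when tls.force=true
  hosts:
    - host: portainer.example.io
      paths:
        - path: "/"
persistence:
  enabled: true
  size: 10Gi
```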


@@ -1,21 +1,27 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ if .port }}:{{ .port }}{{ else }}:9000{{ end }}{{.path}}
Use the URL below to access the application
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ if .port }}:{{ .port }}{{ else }}{{ end }}{{.path}}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "portainer.fullname" . }})
Get the application URL by running these commands:
{{- if .Values.tls.force }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "portainer.fullname" . }})
{{- else }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[1].nodePort}" services {{ include "portainer.fullname" . }})
{{- end}}
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
echo https://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
Get the application URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "portainer.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "portainer.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.httpPort }}
echo https://$SERVICE_IP:{{ .Values.service.httpsPort }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "portainer.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:9000 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9000:9000
Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "portainer.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit https://127.0.0.1:9443 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9443:9443
{{- end }}


@@ -71,4 +71,17 @@ Provide a pre-defined claim or a claim based on the Release
{{- else -}}
{{- template "portainer.fullname" . }}
{{- end -}}
{{- end -}}
{{/*
Generate a right Ingress apiVersion
*/}}
{{- define "ingress.apiVersion" -}}
{{- if semverCompare ">=1.20-0" .Capabilities.KubeVersion.GitVersion -}}
networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
networking.k8s.io/v1beta1
{{- else -}}
extensions/v1beta1
{{- end }}
{{- end -}}


@@ -18,16 +18,31 @@ spec:
labels:
{{- include "portainer.selectorLabels" . | nindent 8 }}
spec:
nodeSelector: {{- toYaml .Values.nodeSelector | nindent 8 -}}
{{- with .Values.imagePullSecrets }}
nodeSelector: {{- toYaml .Values.nodeSelector | nindent 8 }}
tolerations: {{- toYaml .Values.tolerations | nindent 8 -}}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.localMgmt }}
serviceAccountName: {{ include "portainer.serviceAccountName" . }}
{{- end }}
volumes:
- name: "data"
persistentVolumeClaim:
claimName: {{ template "portainer.pvcName" . }}
{{- if .Values.persistence.enabled }}
- name: "data"
persistentVolumeClaim:
claimName: {{ template "portainer.pvcName" . }}
{{- end }}
{{- if .Values.tls.existingSecret }}
- name: certs
secret:
secretName: {{ .Values.tls.existingSecret }}
{{- end }}
{{- if .Values.mtls.existingSecret }}
- name: mtlscerts
secret:
secretName: {{ .Values.mtls.existingSecret }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
{{- if .Values.enterpriseEdition.enabled }}
@@ -37,26 +52,150 @@ spec:
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- end }}
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.edgeNodePort))) }}
args: [ '--tunnel-port','{{ .Values.service.edgeNodePort }}' ]
{{- end }}
args:
{{- if .Values.tls.force }}
- --http-disabled
{{- end }}
{{- if .Values.tls.existingSecret }}
- --sslcert=/certs/tls.crt
- --sslkey=/certs/tls.key
{{- end }}
{{- if .Values.mtls.existingSecret }}
- --mtlscacert=/certs/mtls/mtlsca.crt
- --mtlscert=/certs/mtls/mtlscert.crt
- --mtlskey=/certs/mtls/mtlskey.key
{{- end }}
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.edgeNodePort))) }}
- '--tunnel-port={{ .Values.service.edgeNodePort }}'
{{- end }}
{{- if and .Values.trusted_origins.enabled (not (empty .Values.trusted_origins.domains)) }}
- '--trusted-origins={{ .Values.trusted_origins.domains | trim | quote }}'
{{- end }}
{{- range .Values.feature.flags }}
- {{ . | squote }}
{{- end }}
volumeMounts:
{{- if .Values.persistence.enabled }}
- name: data
mountPath: /data
{{- end }}
{{- if .Values.tls.existingSecret }}
- name: certs
mountPath: /certs
readOnly: true
{{- end }}
{{- if .Values.mtls.existingSecret }}
- name: mtlscerts
mountPath: /certs/mtls
readOnly: true
{{- end }}
ports:
{{- if not .Values.tls.force }}
- name: http
containerPort: 9000
protocol: TCP
{{- end }}
- name: https
containerPort: 9443
protocol: TCP
- name: tcp-edge
containerPort: 8000
protocol: TCP
livenessProbe:
failureThreshold: 5
initialDelaySeconds: 45
periodSeconds: 30
httpGet:
path: /
{{- if .Values.tls.force }}
port: 9443
scheme: HTTPS
{{- else }}
{{- if .Values.enterpriseEdition.enabled }}
{{- if regexMatch "^[0-9]+\\.[0-9]+\\.[0-9]+$" .Values.enterpriseEdition.image.tag }}
{{- if eq (semver .Values.enterpriseEdition.image.tag | (semver "2.7.0").Compare) -1 }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- else }}
{{- if eq .Values.enterpriseEdition.image.tag "latest" }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- end}}
{{- else }}
{{- if regexMatch "^[0-9]+\\.[0-9]+\\.[0-9]+$" .Values.image.tag }}
{{- if eq (semver .Values.image.tag | (semver "2.6.0").Compare) -1 }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end}}
{{- else }}
{{- if eq .Values.image.tag "latest" }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- end }}
{{- end }}
{{- end }}
readinessProbe:
failureThreshold: 5
initialDelaySeconds: 45
periodSeconds: 30
httpGet:
path: /
{{- if .Values.tls.force }}
port: 9443
scheme: HTTPS
{{- else }}
{{- if .Values.enterpriseEdition.enabled }}
{{- if regexMatch "^[0-9]+\\.[0-9]+\\.[0-9]+$" .Values.enterpriseEdition.image.tag }}
{{- if eq (semver .Values.enterpriseEdition.image.tag | (semver "2.7.0").Compare) -1 }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- else }}
{{- if eq .Values.enterpriseEdition.image.tag "latest" }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- end}}
{{- else }}
{{- if regexMatch "^[0-9]+\\.[0-9]+\\.[0-9]+$" .Values.image.tag }}
{{- if eq (semver .Values.image.tag | (semver "2.6.0").Compare) -1 }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end}}
{{- else }}
{{- if eq .Values.image.tag "latest" }}
port: 9443
scheme: HTTPS
{{- else }}
port: 9000
scheme: HTTP
{{- end }}
{{- end }}
{{- end }}
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
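To make the probe branching above concrete: with `tls.force: true` every branch collapses to the HTTPS path, so the rendered container fragment should reduce to roughly the following (a sketch, not captured `helm template` output):

```yaml
args:
  - --http-disabled
ports:
  - name: https
    containerPort: 9443
    protocol: TCP
  - name: tcp-edge
    containerPort: 8000
    protocol: TCP
livenessProbe:
  failureThreshold: 5
  initialDelaySeconds: 45
  periodSeconds: 30
  httpGet:
    path: /
    port: 9443
    scheme: HTTPS
readinessProbe:
  failureThreshold: 5
  initialDelaySeconds: 45
  periodSeconds: 30
  httpGet:
    path: /
    port: 9443
    scheme: HTTPS
```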


@@ -1,10 +1,8 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "portainer.fullname" . -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
{{- $tlsforced := .Values.tls.force -}}
{{- $apiVersion := include "ingress.apiVersion" . -}}
apiVersion: {{ $apiVersion }}
kind: Ingress
metadata:
name: {{ $fullName }}
@@ -16,6 +14,9 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- with .Values.ingress.ingressClassName }}
ingressClassName: {{ . }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
@@ -33,9 +34,27 @@ spec:
paths:
{{- range .paths }}
- path: {{ .path | default "/" }}
{{- if eq $apiVersion "networking.k8s.io/v1" }}
pathType: Prefix
{{- end }}
backend:
{{- if eq $apiVersion "networking.k8s.io/v1" }}
service:
name: {{ $fullName }}
port:
{{- if $tlsforced }}
number: {{ .port | default 9443 }}
{{- else }}
number: {{ .port | default 9000 }}
{{- end }}
{{- else }}
serviceName: {{ $fullName }}
{{- if $tlsforced }}
servicePort: {{ .port | default 9443 }}
{{- else }}
servicePort: {{ .port | default 9000 }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
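For clusters resolving to `networking.k8s.io/v1` with `tls.force` enabled, the rules above should render along these lines (release name and host are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portainer            # {{ $fullName }}
spec:
  ingressClassName: nginx
  rules:
    - host: portainer.example.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portainer
                port:
                  number: 9443   # 9000 when tls.force is false
```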


@@ -3,4 +3,6 @@ apiVersion: v1
kind: Namespace
metadata:
name: portainer
labels:
pod-security.kubernetes.io/enforce: privileged
{{ end }}


@@ -1,30 +1,30 @@
{{- if .Values.persistence.enabled -}}
{{- if not .Values.persistence.existingClaim -}}
---
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
name: {{ template "portainer.fullname" . }}
namespace: {{ .Release.Namespace }}
annotations:
{{- if .Values.persistence.storageClass }}
volume.beta.kubernetes.io/storage-class: {{ .Values.persistence.storageClass | quote }}
{{- else }}
volume.alpha.kubernetes.io/storage-class: "generic"
{{- end }}
{{- if .Values.persistence.annotations }}
{{ toYaml .Values.persistence.annotations | indent 2 }}
{{ end }}
labels:
io.portainer.kubernetes.application.stack: portainer
{{- include "portainer.labels" . | nindent 4 }}
spec:
accessModes:
- {{ default "ReadWriteOnce" .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.storageClass }}
storageClassName: {{ .Values.persistence.storageClass | quote }}
{{ end }}
{{- if .Values.persistence.selector }}
selector:
{{ toYaml .Values.persistence.selector | indent 4 }}
{{ end }}
{{- end }}
{{- end }}


@@ -1,3 +1,4 @@
{{- if .Values.localMgmt }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
@@ -11,4 +12,5 @@ roleRef:
subjects:
- kind: ServiceAccount
namespace: {{ .Release.Namespace }}
name: {{ include "portainer.serviceAccountName" . }}
{{- end }}


@@ -15,6 +15,7 @@ metadata:
spec:
type: {{ .Values.service.type }}
ports:
{{- if not .Values.tls.force }}
- port: {{ .Values.service.httpPort }}
targetPort: 9000
protocol: TCP
@@ -22,7 +23,15 @@ spec:
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.httpNodePort))) }}
nodePort: {{ .Values.service.httpNodePort}}
{{- end }}
{{- if (eq .Values.service.type "NodePort") }}
{{- end }}
- port: {{ .Values.service.httpsPort }}
targetPort: 9443
protocol: TCP
name: https
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.httpsNodePort))) }}
nodePort: {{ .Values.service.httpsNodePort}}
{{- end }}
{{- if (eq .Values.service.type "NodePort") }}
- port: {{ .Values.service.edgeNodePort }}
targetPort: {{ .Values.service.edgeNodePort }}
{{- else }}
@@ -33,6 +42,6 @@ spec:
name: edge
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.edgeNodePort))) }}
nodePort: {{ .Values.service.edgeNodePort }}
{{- end }}
selector:
{{- include "portainer.selectorLabels" . | nindent 4 }}
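With the chart defaults (`service.type: NodePort`) and `tls.force: true`, the http block above is skipped entirely and the rendered Service should reduce to roughly this (default ports from values.yaml assumed):

```yaml
spec:
  type: NodePort
  ports:
    - port: 9443
      targetPort: 9443
      protocol: TCP
      name: https
      nodePort: 30779
    - port: 30776        # for NodePort services, edgeNodePort doubles as the port
      targetPort: 30776
      name: edge
      nodePort: 30776
```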


@@ -1,3 +1,4 @@
{{- if .Values.localMgmt }}
apiVersion: v1
kind: ServiceAccount
metadata:
@@ -9,3 +10,4 @@ metadata:
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@@ -9,44 +9,81 @@ enterpriseEdition:
enabled: false
image:
repository: portainer/portainer-ee
tag: 2.0.0
tag: 2.33.1
pullPolicy: Always
image:
repository: portainer/portainer-ce
tag: latest
tag: 2.33.1
pullPolicy: Always
imagePullSecrets: []
nodeSelector: {}
tolerations: []
serviceAccount:
annotations: {}
name: portainer-sa-clusteradmin
# This flag provides the ability to enable or disable RBAC-related resources during the deployment of the Portainer application
# If you are using Portainer to manage the K8s cluster it is deployed to, this flag must be set to true
localMgmt: true
service:
# Set the httpNodePort and edgeNodePort only if the type is NodePort
# For Ingress, set the type to be ClusterIP and set ingress.enabled to true
# For Cloud Providers, set the type to be LoadBalancer
type: NodePort
httpPort: 9000
httpsPort: 9443
httpNodePort: 30777
httpsNodePort: 30779
edgePort: 8000
edgeNodePort: 30776
annotations: {}
tls:
# If set, Portainer will be configured to use TLS only
force: false
# If set, will mount the existing secret into the pod
existingSecret: ""
trusted_origins:
# If set, Portainer will be configured to trust the domains specified in domains
enabled: false
# specify (in a comma-separated list) the domain(s) used to access Portainer when it is behind a reverse proxy
# example: portainer.mydomain.com,portainer.example.com
domains: ""
mtls:
# If set, Portainer will be configured to use mTLS only
enable: false
# If set, will mount the existing secret into the pod
existingSecret: ""
feature:
flags: []
ingress:
enabled: false
ingressClassName: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# Only use below if tls.force=true
# nginx.ingress.kubernetes.io/backend-protocol: HTTPS
# Note: Hosts and paths are of type array
hosts:
- host:
paths: []
# - path: "/"
tls: []
resources: {}
persistence:
enabled: true
size: "10Gi"
annotations: {}
storageClass:
existingClaim:
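For reference, the values introduced in this diff (forced TLS, trusted origins) can be exercised with a `helm install` along these lines. This is a sketch only: the chart repo URL, release name, and example domain are assumptions, not part of the diff.

```shell
# Sketch: repo URL, release name, and domain are assumed, not taken from the diff.
helm repo add portainer https://portainer.github.io/k8s/
helm repo update
# NodePort install with forced TLS and trusted origins, matching the values above.
helm upgrade --install portainer portainer/portainer \
  --create-namespace --namespace portainer \
  --set service.type=NodePort \
  --set tls.force=true \
  --set trusted_origins.enabled=true \
  --set trusted_origins.domains="portainer.example.com"
```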


@ -0,0 +1,25 @@
version: '3.3'
services:
agent:
image: portainer/agent:2.0.0
ports:
- target: 9001
published: 9001
protocol: tcp
volumes:
- type: npipe
source: \\.\pipe\docker_engine
target: \\.\pipe\docker_engine
- type: bind
source: C:\ProgramData\docker\volumes
target: C:\ProgramData\docker\volumes
networks:
- agent_network
deploy:
mode: global
placement:
constraints: [node.platform.os == windows]
networks:
agent_network:
driver: overlay


@ -0,0 +1,24 @@
version: '3.2'
services:
agent:
image: portainer/agent:2.0.0
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker/volumes:/var/lib/docker/volumes
ports:
- target: 9001
published: 9001
protocol: tcp
mode: host
networks:
- portainer_agent
deploy:
mode: global
placement:
constraints: [node.platform.os == linux]
networks:
portainer_agent:
driver: overlay
attachable: true


@ -0,0 +1,25 @@
version: '3.3'
services:
agent:
image: portainer/agent:2.4.0
ports:
- target: 9001
published: 9001
protocol: tcp
volumes:
- type: npipe
source: \\.\pipe\docker_engine
target: \\.\pipe\docker_engine
- type: bind
source: C:\ProgramData\docker\volumes
target: C:\ProgramData\docker\volumes
networks:
- agent_network
deploy:
mode: global
placement:
constraints: [node.platform.os == windows]
networks:
agent_network:
driver: overlay


@ -0,0 +1,24 @@
version: '3.2'
services:
agent:
image: portainer/agent:2.4.0
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker/volumes:/var/lib/docker/volumes
ports:
- target: 9001
published: 9001
protocol: tcp
mode: host
networks:
- portainer_agent
deploy:
mode: global
placement:
constraints: [node.platform.os == linux]
networks:
portainer_agent:
driver: overlay
attachable: true


@ -0,0 +1,25 @@
version: '3.3'
services:
agent:
image: portainer/agent:2.33.1
ports:
- target: 9001
published: 9001
protocol: tcp
volumes:
- type: npipe
source: \\.\pipe\docker_engine
target: \\.\pipe\docker_engine
- type: bind
source: C:\ProgramData\docker\volumes
target: C:\ProgramData\docker\volumes
networks:
- agent_network
deploy:
mode: global
placement:
constraints: [node.platform.os == windows]
networks:
agent_network:
driver: overlay


@ -0,0 +1,24 @@
version: '3.2'
services:
agent:
image: portainer/agent:2.33.1
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker/volumes:/var/lib/docker/volumes
ports:
- target: 9001
published: 9001
protocol: tcp
mode: host
networks:
- portainer_agent
deploy:
mode: global
placement:
constraints: [node.platform.os == linux]
networks:
portainer_agent:
driver: overlay
attachable: true
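A stack file like the one above is normally deployed from a Swarm manager node. The local filename below is an assumption; any name works as long as it points at the saved stack file.

```shell
# Run on a Swarm manager node; the filename is an assumed local copy of the stack above.
docker stack deploy --compose-file portainer-agent-stack.yml portainer_agent
```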


@ -65,7 +65,7 @@ spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.0.0
image: portainer/agent:2.33.1
imagePullPolicy: Always
env:
- name: LOG_LEVEL


@ -0,0 +1,95 @@
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: portainer-crb-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: portainer-sa-clusteradmin
namespace: portainer
# Optional: can be added to expose the agent port 80 to associate an Edge key.
# ---
# apiVersion: v1
# kind: Service
# metadata:
# name: portainer-agent
# namespace: portainer
# spec:
# type: LoadBalancer
# selector:
# app: portainer-agent
# ports:
# - name: http
# protocol: TCP
# port: 80
# targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent
namespace: portainer
spec:
clusterIP: None
selector:
app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-agent
namespace: portainer
spec:
selector:
matchLabels:
app: portainer-agent
template:
metadata:
labels:
app: portainer-agent
spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.0.0
imagePullPolicy: Always
env:
- name: LOG_LEVEL
value: INFO
- name: KUBERNETES_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: EDGE
value: "1"
- name: AGENT_CLUSTER_ADDR
value: "portainer-agent"
- name: EDGE_ID
valueFrom:
configMapKeyRef:
name: portainer-agent-edge-id
key: edge.id
- name: EDGE_KEY
valueFrom:
secretKeyRef:
name: portainer-agent-edge-key
key: edge.key
ports:
- containerPort: 9001
protocol: TCP
- containerPort: 80
protocol: TCP


@ -0,0 +1,80 @@
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: portainer-crb-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent
namespace: portainer
spec:
type: LoadBalancer
selector:
app: portainer-agent
ports:
- name: http
protocol: TCP
port: 9001
targetPort: 9001
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent-headless
namespace: portainer
spec:
clusterIP: None
selector:
app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-agent
namespace: portainer
spec:
selector:
matchLabels:
app: portainer-agent
template:
metadata:
labels:
app: portainer-agent
spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.0.0
imagePullPolicy: Always
env:
- name: LOG_LEVEL
value: DEBUG
- name: AGENT_CLUSTER_ADDR
value: "portainer-agent-headless"
- name: KUBERNETES_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
ports:
- containerPort: 9001
protocol: TCP


@ -0,0 +1,81 @@
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: portainer-crb-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent
namespace: portainer
spec:
type: NodePort
selector:
app: portainer-agent
ports:
- name: http
protocol: TCP
port: 9001
targetPort: 9001
nodePort: 30778
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent-headless
namespace: portainer
spec:
clusterIP: None
selector:
app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-agent
namespace: portainer
spec:
selector:
matchLabels:
app: portainer-agent
template:
metadata:
labels:
app: portainer-agent
spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.0.0
imagePullPolicy: Always
env:
- name: LOG_LEVEL
value: DEBUG
- name: AGENT_CLUSTER_ADDR
value: "portainer-agent-headless"
- name: KUBERNETES_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
ports:
- containerPort: 9001
protocol: TCP


@ -0,0 +1,100 @@
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: portainer-crb-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: portainer-sa-clusteradmin
namespace: portainer
# Optional: can be added to expose the agent port 80 to associate an Edge key.
# ---
# apiVersion: v1
# kind: Service
# metadata:
# name: portainer-agent
# namespace: portainer
# spec:
# type: LoadBalancer
# selector:
# app: portainer-agent
# ports:
# - name: http
# protocol: TCP
# port: 80
# targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent
namespace: portainer
spec:
clusterIP: None
selector:
app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-agent
namespace: portainer
spec:
selector:
matchLabels:
app: portainer-agent
template:
metadata:
labels:
app: portainer-agent
spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.10.0
imagePullPolicy: Always
env:
- name: LOG_LEVEL
value: INFO
- name: KUBERNETES_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: EDGE
value: "1"
- name: AGENT_CLUSTER_ADDR
value: "portainer-agent"
- name: EDGE_ID
valueFrom:
configMapKeyRef:
name: portainer-agent-edge
key: edge.id
- name: EDGE_INSECURE_POLL
valueFrom:
configMapKeyRef:
name: portainer-agent-edge
key: edge.insecure_poll
- name: EDGE_KEY
valueFrom:
secretKeyRef:
name: portainer-agent-edge-key
key: edge.key
ports:
- containerPort: 9001
protocol: TCP
- containerPort: 80
protocol: TCP


@ -0,0 +1,80 @@
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: portainer-crb-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent
namespace: portainer
spec:
type: LoadBalancer
selector:
app: portainer-agent
ports:
- name: http
protocol: TCP
port: 9001
targetPort: 9001
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent-headless
namespace: portainer
spec:
clusterIP: None
selector:
app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-agent
namespace: portainer
spec:
selector:
matchLabels:
app: portainer-agent
template:
metadata:
labels:
app: portainer-agent
spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.10.0
imagePullPolicy: Always
env:
- name: LOG_LEVEL
value: INFO
- name: AGENT_CLUSTER_ADDR
value: "portainer-agent-headless"
- name: KUBERNETES_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
ports:
- containerPort: 9001
protocol: TCP


@ -0,0 +1,81 @@
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: portainer-crb-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent
namespace: portainer
spec:
type: NodePort
selector:
app: portainer-agent
ports:
- name: http
protocol: TCP
port: 9001
targetPort: 9001
nodePort: 30778
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent-headless
namespace: portainer
spec:
clusterIP: None
selector:
app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-agent
namespace: portainer
spec:
selector:
matchLabels:
app: portainer-agent
template:
metadata:
labels:
app: portainer-agent
spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.4.0
imagePullPolicy: Always
env:
- name: LOG_LEVEL
value: INFO
- name: AGENT_CLUSTER_ADDR
value: "portainer-agent-headless"
- name: KUBERNETES_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
ports:
- containerPort: 9001
protocol: TCP


@ -0,0 +1,95 @@
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: portainer-crb-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: portainer-sa-clusteradmin
namespace: portainer
# Optional: can be added to expose the agent port 80 to associate an Edge key.
# ---
# apiVersion: v1
# kind: Service
# metadata:
# name: portainer-agent
# namespace: portainer
# spec:
# type: LoadBalancer
# selector:
# app: portainer-agent
# ports:
# - name: http
# protocol: TCP
# port: 80
# targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent
namespace: portainer
spec:
clusterIP: None
selector:
app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-agent
namespace: portainer
spec:
selector:
matchLabels:
app: portainer-agent
template:
metadata:
labels:
app: portainer-agent
spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.4.0
imagePullPolicy: Always
env:
- name: LOG_LEVEL
value: INFO
- name: KUBERNETES_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: EDGE
value: "1"
- name: AGENT_CLUSTER_ADDR
value: "portainer-agent"
- name: EDGE_ID
valueFrom:
configMapKeyRef:
name: portainer-agent-edge-id
key: edge.id
- name: EDGE_KEY
valueFrom:
secretKeyRef:
name: portainer-agent-edge-key
key: edge.key
ports:
- containerPort: 9001
protocol: TCP
- containerPort: 80
protocol: TCP


@ -0,0 +1,80 @@
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: portainer-crb-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent
namespace: portainer
spec:
type: LoadBalancer
selector:
app: portainer-agent
ports:
- name: http
protocol: TCP
port: 9001
targetPort: 9001
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent-headless
namespace: portainer
spec:
clusterIP: None
selector:
app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-agent
namespace: portainer
spec:
selector:
matchLabels:
app: portainer-agent
template:
metadata:
labels:
app: portainer-agent
spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.4.0
imagePullPolicy: Always
env:
- name: LOG_LEVEL
value: INFO
- name: AGENT_CLUSTER_ADDR
value: "portainer-agent-headless"
- name: KUBERNETES_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
ports:
- containerPort: 9001
protocol: TCP


@ -0,0 +1,81 @@
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: portainer-crb-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent
namespace: portainer
spec:
type: NodePort
selector:
app: portainer-agent
ports:
- name: http
protocol: TCP
port: 9001
targetPort: 9001
nodePort: 30778
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent-headless
namespace: portainer
spec:
clusterIP: None
selector:
app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-agent
namespace: portainer
spec:
selector:
matchLabels:
app: portainer-agent
template:
metadata:
labels:
app: portainer-agent
spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.4.0
imagePullPolicy: Always
env:
- name: LOG_LEVEL
value: INFO
- name: AGENT_CLUSTER_ADDR
value: "portainer-agent-headless"
- name: KUBERNETES_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
ports:
- containerPort: 9001
protocol: TCP


@ -64,7 +64,7 @@ spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.0.0
image: portainer/agent:2.33.1
imagePullPolicy: Always
env:
- name: LOG_LEVEL


@ -65,7 +65,7 @@ spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.0.0
image: portainer/agent:2.33.1
imagePullPolicy: Always
env:
- name: LOG_LEVEL


@ -55,7 +55,7 @@ main() {
[[ "$(command -v kubectl)" ]] || errorAndExit "Unable to find kubectl binary. Please ensure kubectl is installed before running this script."
info "Downloading agent manifest..."
curl -L https://portainer.github.io/k8s/deploy/manifests/agent/portainer-agent-edge-k8s.yaml -o portainer-agent-edge-k8s.yaml || errorAndExit "Unable to download agent manifest"
curl -L https://portainer.github.io/k8s/deploy/manifests/agent/ee/portainer-agent-ee20-edge-k8s.yaml -o portainer-agent-edge-k8s.yaml || errorAndExit "Unable to download agent manifest"
info "Creating Portainer namespace..."
kubectl create namespace portainer


@ -0,0 +1,80 @@
#!/usr/bin/env bash
# Script used to deploy the Portainer Edge agent inside a Kubernetes cluster.
# Requires:
# curl
# kubectl
### COLOR OUTPUT ###
ESeq="\x1b["
RCol="$ESeq"'0m' # Text Reset
# Regular Bold Underline High Intensity BoldHigh Intens Background High Intensity Backgrounds
Bla="$ESeq"'0;30m'; BBla="$ESeq"'1;30m'; UBla="$ESeq"'4;30m'; IBla="$ESeq"'0;90m'; BIBla="$ESeq"'1;90m'; On_Bla="$ESeq"'40m'; On_IBla="$ESeq"'0;100m';
Red="$ESeq"'0;31m'; BRed="$ESeq"'1;31m'; URed="$ESeq"'4;31m'; IRed="$ESeq"'0;91m'; BIRed="$ESeq"'1;91m'; On_Red="$ESeq"'41m'; On_IRed="$ESeq"'0;101m';
Gre="$ESeq"'0;32m'; BGre="$ESeq"'1;32m'; UGre="$ESeq"'4;32m'; IGre="$ESeq"'0;92m'; BIGre="$ESeq"'1;92m'; On_Gre="$ESeq"'42m'; On_IGre="$ESeq"'0;102m';
Yel="$ESeq"'0;33m'; BYel="$ESeq"'1;33m'; UYel="$ESeq"'4;33m'; IYel="$ESeq"'0;93m'; BIYel="$ESeq"'1;93m'; On_Yel="$ESeq"'43m'; On_IYel="$ESeq"'0;103m';
Blu="$ESeq"'0;34m'; BBlu="$ESeq"'1;34m'; UBlu="$ESeq"'4;34m'; IBlu="$ESeq"'0;94m'; BIBlu="$ESeq"'1;94m'; On_Blu="$ESeq"'44m'; On_IBlu="$ESeq"'0;104m';
Pur="$ESeq"'0;35m'; BPur="$ESeq"'1;35m'; UPur="$ESeq"'4;35m'; IPur="$ESeq"'0;95m'; BIPur="$ESeq"'1;95m'; On_Pur="$ESeq"'45m'; On_IPur="$ESeq"'0;105m';
Cya="$ESeq"'0;36m'; BCya="$ESeq"'1;36m'; UCya="$ESeq"'4;36m'; ICya="$ESeq"'0;96m'; BICya="$ESeq"'1;96m'; On_Cya="$ESeq"'46m'; On_ICya="$ESeq"'0;106m';
Whi="$ESeq"'0;37m'; BWhi="$ESeq"'1;37m'; UWhi="$ESeq"'4;37m'; IWhi="$ESeq"'0;97m'; BIWhi="$ESeq"'1;97m'; On_Whi="$ESeq"'47m'; On_IWhi="$ESeq"'0;107m';
printSection() {
echo -e "${BIYel}>>>> ${BIWhi}${1}${RCol}"
}
info() {
echo -e "${BIWhi}${1}${RCol}"
}
success() {
echo -e "${BIGre}${1}${RCol}"
}
error() {
echo -e "${BIRed}${1}${RCol}"
}
errorAndExit() {
echo -e "${BIRed}${1}${RCol}"
exit 1
}
### !COLOR OUTPUT ###
main() {
if [[ $# -lt 2 ]]; then
error "Not enough arguments"
error "Usage: ${0} <EDGE_ID> <EDGE_KEY> <EDGE_INSECURE_POLL:optional>"
exit 1
fi
local EDGE_ID="$1"
local EDGE_KEY="$2"
local EDGE_INSECURE_POLL="$3"
[[ "$(command -v curl)" ]] || errorAndExit "Unable to find curl binary. Please ensure curl is installed before running this script."
[[ "$(command -v kubectl)" ]] || errorAndExit "Unable to find kubectl binary. Please ensure kubectl is installed before running this script."
info "Downloading agent manifest..."
curl -L https://portainer.github.io/k8s/deploy/manifests/agent/ee/portainer-agent-ee210-edge-k8s.yaml -o portainer-agent-edge-k8s.yaml || errorAndExit "Unable to download agent manifest"
info "Creating Portainer namespace..."
kubectl create namespace portainer
info "Creating agent configuration..."
kubectl create configmap portainer-agent-edge --from-literal="edge.id=$EDGE_ID" --from-literal="edge.insecure_poll=$EDGE_INSECURE_POLL" -n portainer
info "Creating agent secret..."
kubectl create secret generic portainer-agent-edge-key "--from-literal=edge.key=$EDGE_KEY" -n portainer
info "Deploying agent..."
kubectl apply -f portainer-agent-edge-k8s.yaml || errorAndExit "Unable to deploy agent manifest"
success "Portainer Edge agent successfully deployed"
exit 0
}
main "$@"
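A hedged usage sketch for the script above: it takes the Edge ID and Edge key generated by the Portainer server, plus an optional insecure-poll flag. The script filename and credential values here are placeholders.

```shell
# Hypothetical invocation; script name and Edge credentials are placeholders.
./portainer-edge-agent-setup.sh "<EDGE_ID>" "<EDGE_KEY>" "1"
```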


@ -0,0 +1,76 @@
#!/usr/bin/env bash
# Script used to deploy the Portainer Edge agent inside a Kubernetes cluster.
# Requires:
# curl
# kubectl
### COLOR OUTPUT ###
ESeq="\x1b["
RCol="$ESeq"'0m' # Text Reset
# Regular Bold Underline High Intensity BoldHigh Intens Background High Intensity Backgrounds
Bla="$ESeq"'0;30m'; BBla="$ESeq"'1;30m'; UBla="$ESeq"'4;30m'; IBla="$ESeq"'0;90m'; BIBla="$ESeq"'1;90m'; On_Bla="$ESeq"'40m'; On_IBla="$ESeq"'0;100m';
Red="$ESeq"'0;31m'; BRed="$ESeq"'1;31m'; URed="$ESeq"'4;31m'; IRed="$ESeq"'0;91m'; BIRed="$ESeq"'1;91m'; On_Red="$ESeq"'41m'; On_IRed="$ESeq"'0;101m';
Gre="$ESeq"'0;32m'; BGre="$ESeq"'1;32m'; UGre="$ESeq"'4;32m'; IGre="$ESeq"'0;92m'; BIGre="$ESeq"'1;92m'; On_Gre="$ESeq"'42m'; On_IGre="$ESeq"'0;102m';
Yel="$ESeq"'0;33m'; BYel="$ESeq"'1;33m'; UYel="$ESeq"'4;33m'; IYel="$ESeq"'0;93m'; BIYel="$ESeq"'1;93m'; On_Yel="$ESeq"'43m'; On_IYel="$ESeq"'0;103m';
Blu="$ESeq"'0;34m'; BBlu="$ESeq"'1;34m'; UBlu="$ESeq"'4;34m'; IBlu="$ESeq"'0;94m'; BIBlu="$ESeq"'1;94m'; On_Blu="$ESeq"'44m'; On_IBlu="$ESeq"'0;104m';
Pur="$ESeq"'0;35m'; BPur="$ESeq"'1;35m'; UPur="$ESeq"'4;35m'; IPur="$ESeq"'0;95m'; BIPur="$ESeq"'1;95m'; On_Pur="$ESeq"'45m'; On_IPur="$ESeq"'0;105m';
Cya="$ESeq"'0;36m'; BCya="$ESeq"'1;36m'; UCya="$ESeq"'4;36m'; ICya="$ESeq"'0;96m'; BICya="$ESeq"'1;96m'; On_Cya="$ESeq"'46m'; On_ICya="$ESeq"'0;106m';
Whi="$ESeq"'0;37m'; BWhi="$ESeq"'1;37m'; UWhi="$ESeq"'4;37m'; IWhi="$ESeq"'0;97m'; BIWhi="$ESeq"'1;97m'; On_Whi="$ESeq"'47m'; On_IWhi="$ESeq"'0;107m';
printSection() {
echo -e "${BIYel}>>>> ${BIWhi}${1}${RCol}"
}
info() {
echo -e "${BIWhi}${1}${RCol}"
}
success() {
echo -e "${BIGre}${1}${RCol}"
}
error() {
echo -e "${BIRed}${1}${RCol}"
}
errorAndExit() {
echo -e "${BIRed}${1}${RCol}"
exit 1
}
### !COLOR OUTPUT ###
main() {
if [[ $# -ne 2 ]]; then
error "Not enough arguments"
error "Usage: ${0} <EDGE_ID> <EDGE_KEY>"
exit 1
fi
[[ "$(command -v curl)" ]] || errorAndExit "Unable to find curl binary. Please ensure curl is installed before running this script."
[[ "$(command -v kubectl)" ]] || errorAndExit "Unable to find kubectl binary. Please ensure kubectl is installed before running this script."
info "Downloading agent manifest..."
curl -L https://portainer.github.io/k8s/deploy/manifests/agent/ee/portainer-agent-ee24-edge-k8s.yaml -o portainer-agent-edge-k8s.yaml || errorAndExit "Unable to download agent manifest"
info "Creating Portainer namespace..."
kubectl create namespace portainer
info "Creating agent configuration..."
kubectl create configmap portainer-agent-edge-id "--from-literal=edge.id=$1" -n portainer
info "Creating agent secret..."
kubectl create secret generic portainer-agent-edge-key "--from-literal=edge.key=$2" -n portainer
info "Deploying agent..."
kubectl apply -f portainer-agent-edge-k8s.yaml || errorAndExit "Unable to deploy agent manifest"
success "Portainer Edge agent successfully deployed"
exit 0
}
main "$@"


@ -65,7 +65,7 @@ spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.0.0
image: portainer/agent:2.33.1
imagePullPolicy: Always
env:
- name: LOG_LEVEL


@ -64,7 +64,7 @@ spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.0.0
image: portainer/agent:2.33.1
imagePullPolicy: Always
env:
- name: LOG_LEVEL


@ -65,7 +65,7 @@ spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.0.0
image: portainer/agent:2.33.1
imagePullPolicy: Always
env:
- name: LOG_LEVEL


@ -14,21 +14,21 @@ metadata:
labels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.0.0"
app.kubernetes.io/version: "ce-latest-ee-2.33.1"
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
name: portainer
namespace: portainer
namespace: portainer
annotations:
volume.alpha.kubernetes.io/storage-class: "generic"
labels:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.0.0"
app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
accessModes:
- "ReadWriteOnce"
@ -44,15 +44,15 @@ metadata:
labels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.0.0"
app.kubernetes.io/version: "ce-latest-ee-2.33.1"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
namespace: portainer
name: portainer-sa-clusteradmin
- kind: ServiceAccount
namespace: portainer
name: portainer-sa-clusteradmin
---
# Source: portainer/templates/service.yaml
apiVersion: v1
@ -64,7 +64,7 @@ metadata:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.0.0"
app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
type: NodePort
ports:
@ -73,6 +73,11 @@ spec:
protocol: TCP
name: http
nodePort: 30777
- port: 9443
targetPort: 9443
protocol: TCP
name: https
nodePort: 30779
- port: 30776
targetPort: 30776
protocol: TCP
@ -92,7 +97,7 @@ metadata:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.0.0"
app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
replicas: 1
strategy:
@ -111,14 +116,15 @@ spec:
{}
serviceAccountName: portainer-sa-clusteradmin
volumes:
- name: "data"
persistentVolumeClaim:
claimName: portainer
- name: "data"
persistentVolumeClaim:
claimName: portainer
containers:
- name: portainer
image: "portainer/portainer-ee:2.0.0"
image: "portainer/portainer-ee:2.33.1"
imagePullPolicy: Always
args: [ '--tunnel-port','30776' ]
args:
- '--tunnel-port=30776'
volumeMounts:
- name: data
mountPath: /data
@ -126,17 +132,21 @@ spec:
- name: http
containerPort: 9000
protocol: TCP
- name: https
containerPort: 9443
protocol: TCP
- name: tcp-edge
containerPort: 8000
protocol: TCP
protocol: TCP
livenessProbe:
httpGet:
path: /
port: 9000
port: 9443
scheme: HTTPS
readinessProbe:
httpGet:
path: /
port: 9000
port: 9443
scheme: HTTPS
resources:
{}
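The NodePort manifest above can be applied directly with kubectl; the local filename is an assumption. Per the diff, Portainer then serves HTTPS on nodePort 30779 (container port 9443) and HTTP on 30777 (container port 9000).

```shell
# Sketch: the local filename is an assumption.
kubectl apply -f portainer-nodeport.yaml
kubectl -n portainer get pods
```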


@ -14,21 +14,21 @@ metadata:
labels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.0.0"
app.kubernetes.io/version: "ce-latest-ee-2.33.1"
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
name: portainer
namespace: portainer
namespace: portainer
annotations:
volume.alpha.kubernetes.io/storage-class: "generic"
labels:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.0.0"
app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
accessModes:
- "ReadWriteOnce"
@ -44,15 +44,15 @@ metadata:
labels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.0.0"
app.kubernetes.io/version: "ce-latest-ee-2.33.1"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
namespace: portainer
name: portainer-sa-clusteradmin
- kind: ServiceAccount
namespace: portainer
name: portainer-sa-clusteradmin
---
# Source: portainer/templates/service.yaml
apiVersion: v1
@ -64,7 +64,7 @@ metadata:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.0.0"
app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
type: LoadBalancer
ports:
@ -72,6 +72,10 @@ spec:
targetPort: 9000
protocol: TCP
name: http
- port: 9443
targetPort: 9443
protocol: TCP
name: https
- port: 8000
targetPort: 8000
protocol: TCP
@ -90,7 +94,7 @@ metadata:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.0.0"
app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
replicas: 1
strategy:
@ -109,13 +113,14 @@ spec:
{}
serviceAccountName: portainer-sa-clusteradmin
volumes:
- name: "data"
persistentVolumeClaim:
claimName: portainer
- name: "data"
persistentVolumeClaim:
claimName: portainer
containers:
- name: portainer
image: "portainer/portainer-ee:2.0.0"
imagePullPolicy: Always
image: "portainer/portainer-ee:2.33.1"
imagePullPolicy: Always
args:
volumeMounts:
- name: data
mountPath: /data
@ -123,17 +128,21 @@ spec:
- name: http
containerPort: 9000
protocol: TCP
- name: https
containerPort: 9443
protocol: TCP
- name: tcp-edge
containerPort: 8000
protocol: TCP
protocol: TCP
livenessProbe:
httpGet:
path: /
port: 9000
port: 9443
scheme: HTTPS
readinessProbe:
httpGet:
path: /
port: 9000
port: 9443
scheme: HTTPS
resources:
{}


@@ -14,21 +14,21 @@ metadata:
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "ce-latest-ee-2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: portainer
  namespace: portainer
  annotations:
    volume.alpha.kubernetes.io/storage-class: "generic"
  labels:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "ce-latest-ee-2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
  accessModes:
    - "ReadWriteOnce"
@@ -44,15 +44,15 @@ metadata:
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "ce-latest-ee-2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    namespace: portainer
    name: portainer-sa-clusteradmin
---
# Source: portainer/templates/service.yaml
apiVersion: v1
@@ -64,7 +64,7 @@ metadata:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "ce-latest-ee-2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
  type: LoadBalancer
  ports:
@@ -72,6 +72,10 @@ spec:
      targetPort: 9000
      protocol: TCP
      name: http
+    - port: 9443
+      targetPort: 9443
+      protocol: TCP
+      name: https
    - port: 8000
      targetPort: 8000
      protocol: TCP
@@ -90,7 +94,7 @@ metadata:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "ce-latest-ee-2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
  replicas: 1
  strategy:
@@ -109,13 +113,14 @@ spec:
        {}
      serviceAccountName: portainer-sa-clusteradmin
      volumes:
        - name: "data"
          persistentVolumeClaim:
            claimName: portainer
      containers:
        - name: portainer
-          image: "portainer/portainer-ce:latest"
+          image: "portainer/portainer-ce:2.33.1"
          imagePullPolicy: Always
          args:
          volumeMounts:
            - name: data
              mountPath: /data
@@ -123,17 +128,21 @@ spec:
            - name: http
              containerPort: 9000
              protocol: TCP
+            - name: https
+              containerPort: 9443
+              protocol: TCP
            - name: tcp-edge
              containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
-              port: 9000
+              port: 9443
+              scheme: HTTPS
          readinessProbe:
            httpGet:
              path: /
-              port: 9000
+              port: 9443
+              scheme: HTTPS
          resources:
            {}

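The Service hunk above adds a `9443 → 9443` mapping alongside the existing 9000 and 8000 entries. A LoadBalancer port is only useful if its `targetPort` resolves to a port the container actually listens on; a minimal sketch of that cross-check (values copied from the diff; the `edge` name for the 8000 entry is an assumption, as that hunk is truncated):

```python
# Illustrative only: values copied from the Service and Deployment hunks above.
# The "edge" name for the 8000 entry is an assumption (the hunk is truncated).
service_ports = [
    {"name": "http", "port": 9000, "targetPort": 9000},
    {"name": "https", "port": 9443, "targetPort": 9443},
    {"name": "edge", "port": 8000, "targetPort": 8000},
]
container_ports = {9000, 9443, 8000}  # containerPorts declared on the pod

# A Service entry whose targetPort matches no containerPort forwards
# traffic to a closed port, so flag any dangling mappings.
unmatched = [p["name"] for p in service_ports if p["targetPort"] not in container_ports]
assert not unmatched, f"dangling targetPorts: {unmatched}"
print("all targetPorts resolve to containerPorts")
```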

@@ -14,21 +14,21 @@ metadata:
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "ce-latest-ee-2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: portainer
  namespace: portainer
  annotations:
    volume.alpha.kubernetes.io/storage-class: "generic"
  labels:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "ce-latest-ee-2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
  accessModes:
    - "ReadWriteOnce"
@@ -44,15 +44,15 @@ metadata:
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "ce-latest-ee-2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    namespace: portainer
    name: portainer-sa-clusteradmin
---
# Source: portainer/templates/service.yaml
apiVersion: v1
@@ -64,7 +64,7 @@ metadata:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "ce-latest-ee-2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
  type: NodePort
  ports:
@@ -73,6 +73,11 @@ spec:
      protocol: TCP
      name: http
      nodePort: 30777
+    - port: 9443
+      targetPort: 9443
+      protocol: TCP
+      name: https
+      nodePort: 30779
    - port: 30776
      targetPort: 30776
      protocol: TCP
@@ -92,7 +97,7 @@ metadata:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
-    app.kubernetes.io/version: "ce-latest-ee-2.0.0"
+    app.kubernetes.io/version: "ce-latest-ee-2.33.1"
spec:
  replicas: 1
  strategy:
@@ -111,14 +116,15 @@ spec:
        {}
      serviceAccountName: portainer-sa-clusteradmin
      volumes:
        - name: "data"
          persistentVolumeClaim:
            claimName: portainer
      containers:
        - name: portainer
-          image: "portainer/portainer-ce:latest"
+          image: "portainer/portainer-ce:2.33.1"
          imagePullPolicy: Always
-          args: [ '--tunnel-port','30776' ]
+          args:
+            - '--tunnel-port=30776'
          volumeMounts:
            - name: data
              mountPath: /data
@@ -126,17 +132,21 @@ spec:
            - name: http
              containerPort: 9000
              protocol: TCP
+            - name: https
+              containerPort: 9443
+              protocol: TCP
            - name: tcp-edge
              containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
-              port: 9000
+              port: 9443
+              scheme: HTTPS
          readinessProbe:
            httpGet:
              path: /
-              port: 9000
+              port: 9443
+              scheme: HTTPS
          resources:
            {}
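The args hunk above replaces the flow-style two-element list `[ '--tunnel-port','30776' ]` with a single block-style item `--tunnel-port=30776`. Both shapes arrive in the container as argv entries that conventional flag parsers treat identically; Portainer parses its flags in Go, so the Python `argparse` sketch below is only an analogy:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--tunnel-port")

# Old chart form: flag and value as two separate argv entries.
old_form = parser.parse_args(["--tunnel-port", "30776"])
# New chart form: a single "--flag=value" entry.
new_form = parser.parse_args(["--tunnel-port=30776"])

# Both argv shapes yield the same parsed value.
assert old_form.tunnel_port == new_form.tunnel_port == "30776"
print(new_form.tunnel_port)  # → 30776
```

The `=` form is slightly more robust in YAML, since the flag and its value can never be split or reordered as separate list items.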