This repo contains helm charts and YAML manifests for deploying Portainer into a Kubernetes environment. Follow the instructions below for your edition and deployment method:
Deploying with Helm
Ensure you're using at least helm v3.2, which includes support for the --create-namespace argument.
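You can check which version you have installed with:
helm version --short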
Add the repository:
helm repo add portainer https://portainer.github.io/k8s/
helm repo update
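To confirm the chart is now visible to helm, you can optionally run:
helm search repo portainer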
Community Edition
Install the helm chart:
Using NodePort on a local/remote cluster
helm install --create-namespace -n portainer portainer portainer/portainer
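To find the NodePorts that were allocated (assuming the default release and service name of portainer, as used above):
kubectl get svc -n portainer portainer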
Using a cloud provider's loadbalancer
helm install --create-namespace -n portainer portainer portainer/portainer \
--set service.type=LoadBalancer
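Provisioning the load balancer can take a few minutes; you can watch for the external address with:
kubectl get svc -n portainer portainer -w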
Using ClusterIP with an ingress
helm install --create-namespace -n portainer portainer portainer/portainer \
--set service.type=ClusterIP
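With ClusterIP the service is only reachable from inside the cluster, so you'll need to provide an ingress yourself. A minimal sketch, assuming an nginx ingress controller and the chart's default service name portainer with Portainer's HTTP port 9000 (the hostname is a placeholder):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portainer
  namespace: portainer
spec:
  ingressClassName: nginx  # adjust to match your ingress controller
  rules:
  - host: portainer.example.com  # placeholder; replace with your own hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: portainer
            port:
              number: 9000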
For advanced helm customization, see the chart README
Enterprise Edition
Using NodePort on a local/remote cluster
helm install --create-namespace -n portainer portainer portainer/portainer \
--set enterpriseEdition.enabled=true
Using a cloud provider's loadbalancer
helm install --create-namespace -n portainer portainer portainer/portainer \
--set enterpriseEdition.enabled=true \
--set service.type=LoadBalancer
Using ClusterIP with an ingress
helm install --create-namespace -n portainer portainer portainer/portainer \
--set enterpriseEdition.enabled=true \
--set service.type=ClusterIP
For advanced helm customization, see the chart README
Deploying with manifests
If you're not using helm, you can install Portainer using the manifests directly, as follows:
Community Edition
Using NodePort on a local/remote cluster
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
Using a cloud provider's loadbalancer
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-lb.yaml
Enterprise Edition
Using NodePort on a local/remote cluster
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-ee.yaml
Using a cloud provider's loadbalancer
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-lb-ee.yaml
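Whichever edition you deploy, you can verify the Portainer pod is running with:
kubectl get pods -n portainer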
A note on persisting data
The charts/manifests will create a persistent volume for storing Portainer data, using the default StorageClass.
In some Kubernetes clusters (e.g. MicroK8s), the default StorageClass simply creates hostPath volumes, which are not explicitly tied to a particular node. In a multi-node cluster, this can create an issue when the pod is terminated and rescheduled on a different node, "leaving" all the persistent data behind and starting the pod with an "empty" volume.
While this behaviour is inherently a limitation of using hostPath volumes, a suitable workaround is to add a nodeSelector to the deployment, which effectively "pins" the Portainer pod to a particular node.
The nodeSelector can be added in any of the following ways:
- Edit your own values.yaml and set the value of nodeSelector like this:
  nodeSelector:
    kubernetes.io/hostname: <YOUR NODE NAME>
- Explicitly set the target node when deploying/upgrading the helm chart on the CLI, by including the following (the dots in the label key must be escaped so helm doesn't treat them as separators):
  --set 'nodeSelector.kubernetes\.io/hostname=<YOUR NODE NAME>'
- If you've deployed Portainer via manifests, without Helm, run the following one-liner to "patch" the deployment, forcing the pod to always be scheduled on the node it's currently running on:
  kubectl patch deployments -n portainer portainer -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "'$(kubectl get pods -n portainer -o jsonpath='{ ..nodeName }')'"}}}}}' || (echo Failed to identify current node of portainer pod; exit 1)
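Whichever method you use, you can confirm the nodeSelector took effect by inspecting the deployment:
kubectl get deployment -n portainer portainer -o jsonpath='{.spec.template.spec.nodeSelector}'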