This repository contains a Helm chart template designed specifically for Lido Finance applications. It provides a standardized way to deploy and manage Lido Finance services on Kubernetes clusters.
The template includes pre-configured settings for:
- Deployment configurations
- Service definitions
- Health checks and probes
- Resource management
- Ingress configurations
- Prometheus monitoring integration
- Pod Disruption Budget
- Horizontal Pod Autoscaler
- Service Monitor for Prometheus
- Security Context configurations
- Persistent Volume Claims for storage
- OpenBao (Vault) Agent Injector for secret management
Prerequisites:
- Kubernetes cluster (version 1.19+)
- Helm 3.x
- Access to the Lido Finance container registry
- Prometheus Operator (for ServiceMonitor support)
Testing
- Run Helm lint:

  ```shell
  helm lint helm-chart/
  ```

- Test template rendering:

  ```shell
  helm template lido-app helm-chart/
  ```

- Validate values:

  ```shell
  helm template lido-app helm-chart/ --values helm-chart/values.yaml
  ```
Build and Package
- Package the chart:

  ```shell
  helm package helm-chart/
  ```

- Create the repository index file:

  ```shell
  helm repo index . --url https://lido-artifactory/lido-app-template
  ```
The following table lists the configurable parameters of the chart and their default values.
| Parameter | Description | Default |
|---|---|---|
| `name` | Application name | `OVERRIDE-ME` |
| `replicas` | Number of replicas | `1` |
| `maxSurge` | Max surge for deployment | `1` |
| `maxUnavailable` | Max unavailable for deployment | `1` |
| `minAvailable` | Min available for deployment | `1` |
| `image.name` | Container registry/image | `OVERRIDE-ME` |
| `image.tag` | Container image tag | `OVERRIDE-ME` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `service.type` | Kubernetes service type | `ClusterIP` |
| `service.ports` | Service ports configuration | See values.yaml |
| `resources` | CPU/memory resource requests/limits | See values.yaml |
| `terminationGracePeriodSeconds` | Pod termination grace period | `30` |
| `securityContext` | Pod security context settings | See values.yaml |
| `serviceAccount.name` | Service account name | `sa-lido-default` |
| `pvc.enabled` | Enable or disable PVC | `false` |
| `pvcs` | List of PVCs with their parameters | See values.yaml |
| `containers` | List of containers with their parameters | See values.yaml |
| `servicemonitor.endpoints` | List of ServiceMonitor endpoints | See values.yaml |
| `openbao.enabled` | Enable OpenBao secret injection | `false` |
| `openbao.annotations` | OpenBao agent annotations | `{}` |
The chart includes pre-configured health checks:
- Startup probe: `/healthz` endpoint (port 8080)
  - failureThreshold: 3
  - periodSeconds: 3
- Liveness probe: `/healthz` endpoint (port 8080)
  - initialDelaySeconds: 3
  - periodSeconds: 3
- Readiness probe: `/healthz` endpoint (port 8080)
  - initialDelaySeconds: 3
  - periodSeconds: 3
Prometheus monitoring is enabled by default with the following features:
- Service Monitor for Prometheus Operator integration (can be configured with additional endpoints)
- Default metrics endpoint: `/_metrics`
- Liveness probe metrics: `/_livenessProbe`
- Prometheus scrape annotations on the deployment
Pod Disruption Budget is enabled by default with:
- maxUnavailable: 1
The budget should be tuned per app and per environment: critical apps should typically set minAvailable >= 1, though there are exceptions such as singleton apps. Keep in mind that maxUnavailable and minAvailable cannot both be set at the same time.
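As a hedged sketch, a per-app override for a critical service might switch the budget to `minAvailable` (top-level key names taken from the parameter table; clearing the `maxUnavailable` default with `null` relies on Helm's behavior of deleting keys set to null in an override file):

```yaml
# custom-values.yaml (sketch): critical app, keep at least one pod up
# during voluntary disruptions
minAvailable: 1
# clear the chart's default maxUnavailable, since both cannot be set at once
maxUnavailable: null
```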
Horizontal Pod Autoscaler is enabled by default with:
- minReplicas: 1
- maxReplicas: 3
- averageUtilization: 70%
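The HPA key layout is not spelled out in this README; as a sketch only (the `hpa` key names below are assumptions, check values.yaml for the real schema), scaling a busier service up might look like:

```yaml
# sketch: key names under hpa are assumed, not confirmed by this chart
hpa:
  minReplicas: 2
  maxReplicas: 6
  averageUtilization: 70
```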
PersistentVolumeClaim is disabled by default. To enable it:
- Set `pvc.enabled` to `true`
- Set the list of PVCs with their parameters under the `pvcs` value
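A minimal sketch of the two steps above (the per-PVC field names are assumptions; values.yaml holds the authoritative examples):

```yaml
pvc:
  enabled: true
pvcs:
  # field names below (size, storageClassName, accessModes) are assumed
  - name: data
    size: 1Gi
    storageClassName: standard
    accessModes:
      - ReadWriteOnce
```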
Please keep in mind that `readOnlyRootFilesystem: true` will be enforced in the future, so if your containers need read-write access to some directories (e.g. cache or temp files), you need to mount them separately; see values.yaml for examples.
Ingress is disabled by default. To enable it:
- Set `ingress.enabled` to `true`
- Configure your host and paths in the `ingress.rules` section
- Optionally configure TLS
- Default ingress class: `nginx-internal`
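The steps above might look like the following sketch (the exact shape of `ingress.rules` and the TLS block are assumptions; see values.yaml for the real structure):

```yaml
ingress:
  enabled: true
  className: nginx-internal  # default ingress class
  rules:
    # rule shape is assumed, not confirmed by this chart
    - host: my-app.internal.example.com
      paths:
        - /
  tls: []
```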
The template supports multiple containers within one Pod. You can set a list of containers under the `containers` value, each with its own name, image, env, tags, probes, volumes, etc. See values.yaml for examples.
OpenBao Agent Injector is disabled by default. To enable it:
- Set `openbao.enabled` to `true`
- Configure annotations in the `openbao.annotations` section
Example configuration:

```yaml
openbao:
  enabled: true
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "<TEAMNAME>-team-ro"
    vault.hashicorp.com/agent-inject-secret-app: "secret/data/<TEAMNAME>-team/<APPNAME>-app/<SECRETS>"
    vault.hashicorp.com/agent-pre-populate: "true"
    vault.hashicorp.com/template-static-secret-render-interval: "30s"
    vault.hashicorp.com/agent-inject-template-app: |
      {{`{{- with secret "secret/data/<TEAMNAME>-team/<APPNAME>-app/<SECRETS>" -}}`}}
      {{`{{- range $k, $v := .Data.data -}}`}}
      {{`export `}}{{`{{ $k }}`}}{{`="{{ $v }}"`}}
      {{`{{ end -}}`}}
      {{`{{- end -}}`}}
```

Using secrets in your container:
```yaml
containers:
  - name: my-app
    image:
      name: nginx
      tag: 1.29.3
    command: ["/bin/bash", "-c"]
    args:
      - |
        set -euo pipefail
        # Wait for secrets to be injected
        while [ ! -f /vault/secrets/app ]; do
          sleep 0.1
        done
        # Load secrets as environment variables
        . /vault/secrets/app
        # Start your application
        exec nginx -g 'daemon off;'
```

Optional: Reload application on secret update

To reload your application when secrets are updated, add the reload command annotation:
```yaml
openbao:
  enabled: true
  annotations:
    # ... other annotations ...
    vault.hashicorp.com/agent-inject-command-app: |
      kill -HUP $(pidof nginx)
```

Default security context settings:
- runAsUser: 65534
- runAsGroup: 65534
- fsGroup: 65534
- fsGroupChangePolicy: OnRootMismatch
- readOnlyRootFilesystem: true (controls whether the container's root filesystem is mounted as read-only)
- runAsNonRoot: true (force non-root user)
- allowPrivilegeEscalation: false (block `setuid` or `sudo` actions)
- capabilities: drop: ["ALL"] (drop all capabilities)
- seccompProfile: type: RuntimeDefault (default seccomp profile)
- appArmorProfile: type: RuntimeDefault (default apparmor profile)
To customize the deployment, create a custom values file:

```yaml
# custom-values.yaml
name: my-service
replicas: 2
image:
  name: my-service
  tag: v1.0.0
```

Then install using:

```shell
helm install lido-app oci://ghcr.io/lidofinance/helm-charts --version 1.3.6 --values custom-values.yaml
```

Installation as a Helm dependency (Chart.yaml example):
```yaml
apiVersion: v2
name: lido-app
version: 1.0.0
type: application
dependencies:
  - name: k8s-helm-charts-template
    alias: overrides
    version: 1.3.6
    repository: "oci://ghcr.io/lidofinance/helm-charts"
```

- Implement automated version bumping (bumpversion)
- Implement automated documentation updates (helm-docs)
- Add support for multiple environments (dev, staging, prod)
This project is licensed under the MIT License - see the LICENSE file for details.
For support, please contact the Lido Finance DevOps team or create an issue in this repository.