Helm Charts

4 patterns

Chart structure, values.yaml, templates, helpers, and chart dependencies. You'll hit these patterns when you find yourself copy-pasting Kubernetes manifests across environments instead of parameterizing them.
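
For orientation, here is a trimmed version of the layout that helm create scaffolds (mychart is a placeholder name):

mychart/
  Chart.yaml        # chart metadata: name, version, appVersion
  values.yaml       # default configuration values
  charts/           # chart dependencies (subcharts)
  templates/        # manifests rendered against the values
    _helpers.tpl    # named template (helper) definitions
    deployment.yaml
    service.yaml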

Avoid
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: web
          image: myapp:1.2.3
          resources:
            limits:
              memory: 256Mi

Prefer
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
Why avoid

Hardcoded values in templates defeat the purpose of Helm. You can't install the same chart with different configurations without editing the template files. Every environment needs its own copy of the manifests, creating maintenance burden and drift.

Why prefer

Templating with {{ .Values.* }} lets you customize deployments per environment by overriding values.yaml. The same chart works for dev, staging, and production. {{ .Release.Name }} prevents name collisions when installing multiple releases.
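
A values.yaml backing this template might look like the following sketch; the key names are assumptions chosen to match the references above:

# values.yaml
replicaCount: 3

image:
  repository: myapp
  tag: "1.2.3"

resources:
  limits:
    memory: 256Mi

Each environment then overrides only what differs, for example helm upgrade --install web ./mychart -f values-prod.yaml, or --set replicaCount=5 for one-off changes.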

Helm docs: Values files
Avoid
# templates/deployment.yaml
metadata:
  labels:
    app.kubernetes.io/name: myapp
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}

# templates/service.yaml
metadata:
  labels:
    app.kubernetes.io/name: myapp
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}

Prefer
# templates/_helpers.tpl
{{- define "myapp.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

# templates/deployment.yaml
metadata:
  labels:
    {{- include "myapp.labels" . | nindent 4 }}

# templates/service.yaml
metadata:
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
Why avoid

Duplicating labels across every template file means updating them in multiple places when the label scheme changes. It's easy to miss a file, leading to inconsistent labels that break label selectors and monitoring queries.

Why prefer

Named templates in _helpers.tpl define reusable snippets like standard labels and selectors. Changing the label scheme requires editing one place. The include function inserts the template and nindent handles YAML indentation correctly.
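
A common companion, and part of what helm create scaffolds, is a separate selector-labels helper: a Deployment's spec.selector is immutable after creation, so it should include only stable labels, never mutable ones like the version. A sketch reusing the names above:

# templates/_helpers.tpl
{{- define "myapp.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

# templates/deployment.yaml
spec:
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}

Running helm template . shows the rendered labels before anything is installed.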

Helm docs: Named templates
Avoid
# Run migration in init container
# Runs on EVERY pod restart
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      initContainers:
        - name: migrate
          image: myapp:1.0
          command: ["./migrate", "up"]
      containers:
        - name: web
          image: myapp:1.0

Prefer
# Run migration once per upgrade
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myapp:1.0
          command: ["./migrate", "up"]
Why avoid

Init containers run every time a Pod starts. With 3 replicas, the migration runs 3 times concurrently during a rollout, which can cause race conditions or lock contention. Any later Pod replacement (eviction, node drain, scale-up) triggers another migration attempt long after the upgrade finished.

Why prefer

Helm hooks run Jobs at specific lifecycle points. pre-upgrade runs the migration once before new Pods start. hook-delete-policy: hook-succeeded cleans up the Job after success. The migration runs exactly once per upgrade, not per Pod restart.
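
One caveat: a pre-upgrade hook does not fire on the initial helm install, so charts that also need the migration on first deploy typically register both events. Only the hook annotation changes:

metadata:
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded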

Helm docs: Chart hooks
Avoid
# Always creates Ingress and HPA
# even when not needed

# templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
spec:
  rules:
    - host: {{ .Values.host }}

# templates/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}

Prefer
# templates/ingress.yaml
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  rules:
    - host: {{ .Values.ingress.host }}
{{- end }}

# templates/hpa.yaml
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}
{{- end }}
Why avoid

Unconditionally creating every resource means dev environments get Ingress and HPA objects they don't need. Those objects also misbehave quietly when their dependencies are missing: an Ingress without an ingress controller routes nothing, and an HPA without a metrics server can't read metrics and never scales.

Why prefer

Wrapping resources in {{- if .Values.*.enabled }} makes them optional. Dev environments can disable Ingress and autoscaling while production enables them. The chart adapts to each environment without maintaining separate templates.
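
The matching values.yaml typically ships with both features disabled; the keys below are assumptions that mirror the templates above:

# values.yaml
ingress:
  enabled: false
  host: ""
  annotations: {}

autoscaling:
  enabled: false

Production then opts in with an override file or --set ingress.enabled=true.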

Helm docs: Control structures