
ConfigMaps & Secrets

4 patterns

ConfigMaps, Secrets, resource requests and limits, and pod scheduling. You'll hit these patterns when your pod gets OOMKilled, can't read its config, or lands on the wrong node.

Avoid
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp:1.0
      env:
        - name: LOG_LEVEL
          value: "info"
        - name: MAX_RETRIES
          value: "3"
        - name: CACHE_TTL
          value: "300"

Prefer
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  MAX_RETRIES: "3"
  CACHE_TTL: "300"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp:1.0
      envFrom:
        - configMapRef:
            name: app-config
Why avoid

Hardcoding environment variables in the Pod spec means every config change requires editing the Deployment and triggering a rollout. Shared config must be duplicated across every Deployment that needs it, leading to drift.

Why prefer

ConfigMaps separate configuration from Pod specs. Using envFrom injects all keys as environment variables automatically. You can update the ConfigMap independently, share it across Deployments, and manage it with GitOps tools.
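When an application reads config files rather than environment variables, the same ConfigMap can be mounted as a volume instead; a minimal sketch (the mountPath is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp:1.0
      volumeMounts:
        - name: config
          mountPath: /etc/app   # each key becomes a file, e.g. /etc/app/LOG_LEVEL
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: app-config
```

Unlike environment variables, volume-mounted keys are refreshed in the running Pod when the ConfigMap changes, which matters for the mutability discussion below.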

Kubernetes docs: ConfigMap
Avoid
# No quotas on namespace
# Any deployment can claim
# unlimited resources

apiVersion: v1
kind: Namespace
metadata:
  name: dev-team

Prefer
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev-team
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
Why avoid

Without quotas, a runaway deployment or a developer testing with 100 replicas can exhaust cluster resources. Other teams' workloads get evicted or can't schedule. Quotas are essential in multi-tenant clusters.

Why prefer

ResourceQuotas cap the total resources a namespace can consume. This prevents one team from monopolizing cluster capacity. It also forces developers to set resource requests/limits on their Pods, since Pods without them are rejected.
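Because a quota on requests/limits rejects Pods that don't set them, it's common to pair the quota with a LimitRange that supplies per-container defaults; a sketch with illustrative values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-defaults
  namespace: dev-team
spec:
  limits:
    - type: Container
      defaultRequest:       # applied when a container sets no requests
        cpu: 250m
        memory: 256Mi
      default:              # applied when a container sets no limits
        cpu: 500m
        memory: 512Mi
```

With this in place, Pods that omit resources still schedule under the quota instead of being rejected outright.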

Kubernetes docs: Resource quotas
Avoid
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
data:
  ENABLE_BETA: "true"
  MAX_UPLOAD_SIZE: "10mb"
  # Mutable: any kubectl edit
  # takes effect cluster-wide

Prefer
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags-v2
immutable: true
data:
  ENABLE_BETA: "true"
  MAX_UPLOAD_SIZE: "10mb"
  # Immutable: changes require
  # a new ConfigMap + rollout
Why avoid

A mutable ConfigMap can be edited by anyone with access. Changes propagate to Pods automatically (when mounted as volumes), potentially breaking running applications. There's no audit trail of what changed and no easy rollback path.

Why prefer

Immutable ConfigMaps cannot be changed after creation. This prevents accidental edits that propagate to all consuming Pods. Changes require creating a new ConfigMap and updating Deployments, giving you a clear audit trail and the ability to roll back.
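The rollout then amounts to pointing the Deployment at the new name; a sketch of the relevant Deployment fragment, assuming the previous ConfigMap was named feature-flags-v1:

```yaml
# Deployment spec fragment: changing the referenced ConfigMap name
# triggers a normal rolling update when applied
containers:
  - name: app
    image: myapp:1.0
    envFrom:
      - configMapRef:
          name: feature-flags-v2   # was: feature-flags-v1
```

Rolling back is the reverse edit: point the reference back at the old ConfigMap, which still exists unchanged.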

Kubernetes docs: Immutable ConfigMaps
Avoid
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myapp:1.0
      # No topology constraints
      # All pods might land on one node

Prefer
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: myapp:1.0
Why avoid

Without topology constraints, the scheduler might place all 6 Pods on a single node for efficiency. If that node fails, you lose 100% of capacity. Even with multiple replicas, you have a single point of failure at the node level.

Why prefer

Topology spread constraints distribute Pods evenly across nodes (or zones). maxSkew: 1 ensures the difference in Pod count between any two nodes is at most 1. If a node goes down, only a fraction of your capacity is lost.
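The same mechanism spreads Pods across availability zones; assuming nodes carry the well-known topology.kubernetes.io/zone label, a sketch:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone  # spread across zones, not nodes
    whenUnsatisfiable: ScheduleAnyway         # soft: best effort if a zone is full
    labelSelector:
      matchLabels:
        app: web
```

ScheduleAnyway makes the constraint a scheduling preference rather than a hard requirement, so Pods still schedule during a zone outage instead of going Pending.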

Kubernetes docs: Topology spread