Pods & Deployments

4 patterns

Pod specs, Deployments, ReplicaSets, and rolling updates. You'll hit these patterns when you need zero-downtime deploys or your Pods keep crashing for no obvious reason.

Avoid
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: myapp:1.0
      ports:
        - containerPort: 8080

Prefer
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myapp:1.0
          ports:
            - containerPort: 8080
Why avoid

A bare Pod is not managed by any controller. If it crashes or its node goes down, nothing recreates it. You can't scale it, roll back a bad deploy, or do zero-downtime updates. Bare Pods should only be used for one-off debugging.

Why prefer

Deployments manage Pod replicas, rolling updates, and rollbacks. If a Pod crashes, the Deployment controller creates a replacement. Scaling is a single field change. This is the standard way to run stateless workloads in Kubernetes.

Kubernetes docs: Deployment
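The self-healing behavior described above can be sketched as a toy reconciliation loop (illustrative Python, not the actual controller logic; the `web-N` naming is a stand-in for the controller's generated Pod names):

```python
# Toy sketch of Deployment-style reconciliation (illustrative only):
# the controller compares desired replicas to running Pods and creates
# replacements until the counts match. A bare Pod has no such loop.
def reconcile(desired: int, running: set) -> set:
    running = set(running)
    i = 0
    while len(running) < desired:
        name = f"web-{i}"       # stand-in for generated Pod names
        if name not in running:
            running.add(name)   # create a replacement Pod
        i += 1
    return running

pods = reconcile(3, {"web-0", "web-2"})  # "web-1" crashed
assert pods == {"web-0", "web-1", "web-2"}
```

Scaling is the same loop with a different `desired`; a rollback simply points the loop at the previous ReplicaSet's Pod template.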
Avoid
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myapp:1.0
          # No resource constraints

Prefer
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myapp:1.0
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
Why avoid

Without resource constraints, a single container can consume all available CPU and memory on a node, causing other Pods to be evicted or throttled. The scheduler can't make informed placement decisions, leading to overloaded nodes.

Why prefer

Resource requests guarantee minimum resources for scheduling. Limits cap maximum usage to prevent runaway containers from starving others. The scheduler uses requests to place Pods on nodes with enough capacity, ensuring stable performance.

Kubernetes docs: Resource management
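To see how the scheduler uses requests, here is a back-of-the-envelope calculation with the requests above (the node sizes are hypothetical, and the real scheduler also weighs taints, affinity, and already-placed Pods):

```python
# Rough scheduling arithmetic: how many Pods with these requests fit
# on one node. Node capacity figures below are hypothetical examples.
NODE_CPU_M = 4000         # 4 cores, in millicores
NODE_MEM_MI = 8192        # 8 GiB, in MiB

POD_CPU_REQUEST_M = 100   # requests.cpu: "100m"
POD_MEM_REQUEST_MI = 128  # requests.memory: "128Mi"

fit_by_cpu = NODE_CPU_M // POD_CPU_REQUEST_M    # 40
fit_by_mem = NODE_MEM_MI // POD_MEM_REQUEST_MI  # 64
pods_per_node = min(fit_by_cpu, fit_by_mem)     # CPU is the bottleneck
print(pods_per_node)  # 40
```

Without requests, the scheduler has no numbers to feed into this calculation and treats every Pod as free, which is how nodes end up overcommitted.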
Avoid
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myapp:2.0

Prefer
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myapp:2.0
Why avoid

Recreate kills all existing Pods before creating new ones. This causes downtime equal to the startup time of the new Pods. If the new version has a bug, you have zero running Pods until you roll back. This is only appropriate for workloads that can't run two versions simultaneously.

Why prefer

RollingUpdate with maxUnavailable: 0 ensures all existing Pods keep running while new ones start. maxSurge: 1 creates one extra Pod at a time. This gives you zero-downtime deployments. If the new version fails health checks, the rollout pauses automatically.

Kubernetes docs: Deployment strategy
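The interaction of maxSurge and maxUnavailable can be checked with a small simulation (a simplified Python model of this exact configuration, assuming each surged Pod becomes ready before the next old Pod is terminated):

```python
# Simulate a RollingUpdate with replicas=4, maxSurge=1, maxUnavailable=0
# (simplified model: one surged Pod becomes ready, then one old Pod is
# terminated, repeating until all Pods run the new version).
replicas, max_surge, max_unavailable = 4, 1, 0

old, new = replicas, 0
min_available = replicas        # lowest ready count seen during rollout
while old > 0:
    # surge: start new Pods, up to replicas + maxSurge total
    new = min(replicas, replicas + max_surge - old)
    # only then terminate an old Pod, so readiness never dips
    old -= 1
    min_available = min(min_available, old + new)

assert (old, new) == (0, replicas)
assert min_available >= replicas - max_unavailable  # never below 4 ready
```

Run the same model with `type: Recreate` semantics (all old Pods killed first) and `min_available` drops to 0, which is the downtime window the Avoid example suffers.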
Avoid
# No PDB defined
# During node drain, all pods
# can be evicted simultaneously

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myapp:1.0

Prefer
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myapp:1.0
Why avoid

Without a PDB, kubectl drain or a cluster autoscaler can evict all Pods simultaneously during node maintenance. This causes a complete outage even though you have 3 replicas, defeating the purpose of running multiple instances.

Why prefer

A PodDisruptionBudget guarantees that at least 2 Pods remain available during voluntary disruptions like node upgrades, cluster autoscaling, or kubectl drain. The API server blocks eviction requests that would violate the budget.

Kubernetes docs: PDB
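The eviction check a PDB enforces boils down to simple arithmetic, sketched here in Python (a simplified model of the API server's decision, not its actual implementation):

```python
# Simplified model of the PDB check: a voluntary eviction is allowed
# only if the budget still holds after the Pod is removed.
def eviction_allowed(healthy_pods: int, min_available: int) -> bool:
    return healthy_pods - 1 >= min_available

# With replicas=3 and minAvailable=2, only one Pod may be down at a time:
assert eviction_allowed(3, 2) is True    # first eviction during a drain
assert eviction_allowed(2, 2) is False   # concurrent eviction is blocked
```

A blocked eviction makes `kubectl drain` wait and retry, so node maintenance proceeds one Pod at a time instead of taking out all three replicas at once.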