Common Mistakes

5 patterns

Misusing latest tags, ignoring .dockerignore, running as root, and other orchestration anti-patterns. You'll hit this when a deploy fails in production but works on your machine.

Avoid
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      containers:
        - name: web
          image: myapp:latest
          imagePullPolicy: Always

Prefer
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      containers:
        - name: web
          image: myapp:1.2.3
          # Or use SHA:
          # image: myapp@sha256:abc123...
Why avoid

latest is a mutable tag: different nodes can pull different versions of the same Deployment if the tag is updated between pulls. Rollbacks redeploy whatever latest currently points to, not the previous version. And imagePullPolicy: Always adds a registry round-trip to every pod start, so pods can fail to start when the registry is unreachable.

Why prefer

Pinning a specific version tag (or image digest) ensures every deployment uses the exact same image. Rollbacks go to a known version. You can audit which version is running in each environment. Image digests provide cryptographic guarantees.
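With an immutable tag, re-checking the registry on every pod start buys nothing. A sketch of the same container spec using IfNotPresent (the tag name is illustrative):

```yaml
# Sketch: pinned tag plus IfNotPresent.
# Because myapp:1.2.3 never changes, there is no need to
# contact the registry on every pod start.
containers:
  - name: web
    image: myapp:1.2.3
    imagePullPolicy: IfNotPresent
```

Note the interaction: IfNotPresent with a mutable tag would serve stale images from the node cache, which is one more reason the tag must be pinned first.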

Kubernetes docs: Image names
Avoid
FROM python:3.12
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# Running as root (default)
CMD ["python", "app.py"]

Prefer
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

RUN useradd --create-home appuser
USER appuser

CMD ["python", "app.py"]
Why avoid

Running as root gives the application (and any attacker who compromises it) full control over the container filesystem. With certain misconfigurations, this can escalate to host-level access. Most security scanning tools flag root containers.

Why prefer

Creating a dedicated user and switching with USER ensures the application runs with minimal privileges. If an attacker exploits a vulnerability, they can't modify system files, install packages, or access sensitive host resources.
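If you deploy to Kubernetes, the same constraint can be enforced at the pod level as well, so a root image is rejected instead of silently running. A sketch assuming a Deployment like the earlier examples (container name and image are illustrative):

```yaml
spec:
  securityContext:
    runAsNonRoot: true   # kubelet refuses to start a container running as UID 0
  containers:
    - name: web
      image: myapp:1.2.3
      securityContext:
        allowPrivilegeEscalation: false  # blocks setuid-based escalation paths
```

This complements the Dockerfile USER instruction: the image declares the unprivileged user, and the cluster verifies it.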

Docker docs: USER best practices
Avoid
services:
  app:
    build: .
    # User uploads stored in container
    # Lost on restart or scaling

  # Uploaded files at /app/uploads/
  # Session data in memory
  # Generated reports in /tmp/

Prefer
services:
  app:
    build: .
    volumes:
      - uploads:/app/uploads
    environment:
      SESSION_STORE: redis://redis:6379
      REPORT_BUCKET: s3://reports

  redis:
    image: redis:7-alpine

volumes:
  uploads:
Why avoid

Storing uploads, sessions, and files inside the container means data is lost on restart. Scaling to multiple replicas means each one has different data. Users get inconsistent behavior depending on which container handles their request.

Why prefer

Externalizing state to volumes, databases, and object storage lets containers be truly ephemeral. Any replica can handle any request because state lives outside the container. Scaling, restarts, and deployments don't lose data.
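The Kubernetes analogue of the named Compose volume above is a PersistentVolumeClaim. A sketch (the claim name is illustrative):

```yaml
spec:
  containers:
    - name: app
      volumeMounts:
        - name: uploads
          mountPath: /app/uploads
  volumes:
    - name: uploads
      persistentVolumeClaim:
        claimName: uploads-pvc  # needs a ReadWriteMany access mode for >1 replica
```

As with Compose, the container filesystem stays disposable; only the mounted path survives restarts and rescheduling.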

Twelve-Factor App: Processes
Avoid
# No .dockerignore
# Build context includes:
COPY . .

# Sends to daemon:
# node_modules/  (500 MB)
# .git/          (100 MB)
# .env           (secrets)
# build/         (stale output)
# __pycache__/   (bytecode)
# .vscode/       (editor config)

Prefer
# .dockerignore
node_modules
.git
.env
.env.*
build
dist
__pycache__
*.pyc
.vscode
.idea
*.log
README.md
docker-compose*.yml
Dockerfile
Why avoid

Without .dockerignore, every build sends the entire project directory to the Docker daemon. This includes secrets in .env, hundreds of MB of node_modules and .git, and editor configs. Builds are slow and images contain unnecessary files.

Why prefer

A comprehensive .dockerignore shrinks the build context from hundreds of MB to just the files the image needs. Builds are faster, secrets in .env never reach image layers, and stale host build artifacts can't overwrite freshly built ones inside the container.
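An allowlist-style .dockerignore can be safer than enumerating junk, because anything new (a fresh secrets file, a new build directory) is excluded by default. A sketch (the re-included paths are illustrative):

```text
# .dockerignore (allowlist variant): exclude everything,
# then re-include only what the build actually copies
*
!src
!requirements.txt
```

With BuildKit the Dockerfile is transferred separately from the build context, so excluding it here does not break the build.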

Docker docs: .dockerignore
Avoid
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci

# npm wraps node, eats SIGTERM
CMD ["npm", "start"]

Prefer
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci

# Node receives SIGTERM directly
# and can clean up gracefully
CMD ["node", "server.js"]

# In server.js:
# process.on('SIGTERM', () => {
#   server.close(() => process.exit(0));
# });
Why avoid

npm start spawns a shell that wraps the node process. The shell receives SIGTERM but doesn't forward it to the child process. After the 10-second grace period, Docker sends SIGKILL, abruptly terminating the app mid-request.

Why prefer

Running node directly as PID 1 ensures it receives SIGTERM from Docker. The app can finish in-flight requests, close database connections, and flush logs before exiting. Docker gives it 10 seconds (configurable) before sending SIGKILL.
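When a wrapper process is genuinely unavoidable (shell scripts, process managers), a minimal init can run as PID 1 and forward signals instead. A sketch using tini, which Docker also provides via docker run --init (the package name is as shipped by Alpine):

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci

# tini runs as PID 1, forwards SIGTERM to node, and reaps zombie processes
RUN apk add --no-cache tini
ENTRYPOINT ["tini", "--"]
CMD ["node", "server.js"]
```

The graceful-shutdown handler in server.js is still needed; tini only guarantees the signal arrives.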

Docker docs: CMD