Networking
Bridge networks, overlay networks, port mapping, and DNS resolution. You'll hit this when containers can't reach each other or your app is exposed on the wrong port.
```yaml
services:
  app:
    build: .
    environment:
      # Hardcoded IP, will break
      DB_HOST: "172.18.0.3"
  db:
    image: postgres:16
```

```yaml
services:
  app:
    build: .
    environment:
      # Docker DNS resolves service names
      DB_HOST: "db"
  db:
    image: postgres:16
```

Container IPs are assigned dynamically and change on every restart or recreation. Hardcoding an IP address means your app breaks as soon as the container gets a different IP, which happens frequently during development.
Docker Compose creates a DNS entry for each service name. Using db as the hostname lets Docker resolve it to the correct container IP automatically. This works even when containers are recreated and get new IPs.
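DNS resolution only guarantees that the name resolves, not that the database is ready to accept connections. One way to bridge that gap is `depends_on` with a health condition; this is a sketch, and the `pg_isready` healthcheck details (user, intervals) are illustrative:

```yaml
services:
  app:
    build: .
    environment:
      DB_HOST: "db"
    depends_on:
      # Wait until db's healthcheck passes before starting app
      db:
        condition: service_healthy
  db:
    image: postgres:16
    healthcheck:
      # pg_isready exits 0 once postgres accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```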
```yaml
services:
  db:
    image: postgres:16
    ports:
      # Exposed on all interfaces
      - "5432:5432"
```

```yaml
services:
  db:
    image: postgres:16
    ports:
      # Only accessible from localhost
      - "127.0.0.1:5432:5432"
```

Omitting the bind address exposes the port on 0.0.0.0, which means every network interface. Your development database becomes accessible to anyone on the same network, including Wi-Fi networks at coffee shops or coworking spaces.
Binding to 127.0.0.1 restricts the port to the host's loopback interface. The database is accessible from the host machine for development but not from the network. This prevents accidental exposure of development databases.
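You can go a step further and keep even the localhost binding out of the shared compose file. Docker Compose automatically merges a `docker-compose.override.yml` from the same directory, so the port mapping can live only in a local, uncommitted override; a sketch:

```yaml
# docker-compose.override.yml
# Merged automatically by `docker compose up`.
# The base docker-compose.yml publishes no db ports at all.
services:
  db:
    ports:
      - "127.0.0.1:5432:5432"
```

This keeps the default configuration safe by construction: anyone who runs the base file without the override gets a database that is reachable only from other containers.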
```yaml
services:
  backend:
    build: .
    # Published to host, but only
    # other containers need access
    ports:
      - "8080:8080"
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
```

```yaml
services:
  backend:
    build: .
    # Only visible to other containers
    expose:
      - "8080"
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
```

Publishing the backend port to the host lets users bypass the frontend and hit the API directly. This increases the attack surface unnecessarily. Internal services should only be reachable from other containers, not from the host network.
`expose` documents that a service listens on a port without publishing it to the host. Note that containers on the same Docker network can already reach each other on any listening port, so `expose` is primarily documentation; the real fix here is removing the host-published `ports` mapping from the backend. Only the frontend needs a host-published port since it's the entry point for users.
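For stricter isolation, Compose can place services on an `internal` network, which has no gateway to the host. This is a sketch; the network name is illustrative, and note that `internal: true` also cuts the backend off from the internet, which may or may not be what you want:

```yaml
services:
  backend:
    build: .
    networks:
      - private
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    networks:
      - private

networks:
  # internal: true means no external connectivity:
  # containers on this network can only talk to each other
  private:
    internal: true
```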
```shell
# Using the default bridge network
docker run -d --name app1 myapp
docker run -d --name app2 myapp
# Containers on the default bridge must use
# --link (deprecated) or IP addresses;
# there is no automatic DNS resolution
```

```shell
# Create a custom bridge network
docker network create mynet
docker run -d --name app1 --network mynet myapp
docker run -d --name app2 --network mynet myapp
# Containers resolve each other by name
# and are isolated from other containers
```

The default bridge network doesn't provide automatic DNS resolution. Containers must use --link (deprecated) or IP addresses to communicate. All containers on the default bridge can reach each other, providing no isolation between unrelated workloads.
Custom bridge networks provide automatic DNS resolution between containers, better isolation from unrelated containers, and the ability to connect/disconnect containers at runtime. Containers on different custom networks cannot communicate by default.
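Compose applies the same idea declaratively: defining multiple networks segments services so that, for example, the frontend can reach the backend but never the database directly. A sketch, with illustrative network names:

```yaml
services:
  frontend:
    build: ./frontend
    networks: [web]
  backend:
    build: .
    # On both networks, so it bridges frontend and db
    networks: [web, data]
  db:
    image: postgres:16
    networks: [data]

networks:
  web:
  data:
```

Since `frontend` and `db` share no network, they cannot communicate at all, giving you the same isolation as separate custom bridge networks created by hand.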