Docker Compose is a tool for defining and running multi-container Docker applications. It uses a YAML file to configure application services, networks, and volumes, allowing you to launch and manage multiple containers with a single command.
With Docker Compose, you define a complete application stack in a declarative way, making it easier to:
- Deploy complex applications with interdependent services
- Maintain consistent development, testing, and production environments
- Simplify application configuration and deployment workflows
- Scale individual services as needed
- Manage the entire application lifecycle from a single file

Services:
- Individual containers that make up your application
- Defined in docker-compose.yml with their own configuration
- Can scale independently (run multiple instances)
- Share networks for communication and volumes for data
- Each service can have its own build instructions, environment variables, and runtime configurations
- Automatically assigned DNS names for service discovery

Networks:
- Automatic network creation for each Compose project
- Service discovery built in (services can communicate using service names)
- Custom network configuration (drivers, subnets, gateways)
- Isolated environments for security and resource management
- Support for external networks and network aliases
- Control over network attachments for each service

Volumes:
- Persistent data storage that survives container restarts
- Shared between services for data exchange
- Named volumes for easier management and backups
- Bind mount support for development workflows
- Volume drivers for cloud and specialized storage
- Configuration options for performance and security

The docker-compose.yml file is the core configuration file that defines your multi-container application. It follows a structured YAML format with predefined keys and hierarchies.
Basic structure of a docker-compose.yml file:
version: '3.8'
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./www:/usr/share/nginx/html
    depends_on:
      - db
    restart: unless-stopped
    environment:
      - NGINX_HOST=example.com
    networks:
      - frontend
      - backend
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: myapp
      MYSQL_USER: user
      MYSQL_PASSWORD: password
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - backend
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
networks:
  frontend:
    # Default bridge network
  backend:
    # Network for internal communication
    internal: true
volumes:
  db_data:
    # Persistent named volume for database
This example defines:
- Two services (web and db) with their configurations
- Two networks (frontend and backend), with the backend being internal
- One named volume (db_data) for persistent storage
- Dependencies between services with depends_on
- Health checks to ensure service availability
- Network segmentation for security

Docker Compose provides a rich set of commands to manage the complete lifecycle of your multi-container applications:
# Start services (interactive mode)
docker-compose up
# This command:
# - Builds or pulls images if needed
# - Creates networks and volumes
# - Starts containers
# - Attaches to containers' output
# - Ctrl+C stops all containers
# Start in detached mode (background)
docker-compose up -d
# Runs containers in the background
# Stop services but keep containers, networks, and volumes
docker-compose stop
# Stop and remove containers, networks, but preserve volumes
docker-compose down
# Stop and remove everything including volumes
docker-compose down --volumes
# View logs of all services
docker-compose logs
# Follow logs of specific service
docker-compose logs -f web
# Scale a specific service to multiple instances
docker-compose up -d --scale web=3 --scale worker=2
# List containers in the current project
docker-compose ps
# Execute command in running service container
docker-compose exec web sh
# Run one-off command in new container
docker-compose run --rm web npm test
# Rebuild services
docker-compose build
# Check config file validity
docker-compose config
# List the services defined in the Compose file
docker-compose config --services
# List running processes in containers
docker-compose top
These commands can be combined with various options to provide precise control over your application stack.
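For example, to rebuild images and recreate containers in a single step during iteration, the standard docker-compose up options can be combined (a sketch; adjust the service name to your project):
# Rebuild images, recreate containers, and remove orphaned containers in one step
docker-compose up -d --build --force-recreate --remove-orphans
# Limit the operation to a single service and its dependencies
docker-compose up -d --build web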
Docker Compose supports environment variables in multiple ways:
- .env file in the project directory
- Shell environment variables
- Environment file specified with --env-file
- environment section in the compose file
- Variable substitution within the compose file

Docker Compose has a specific order of precedence for environment variables:
1. Variables defined in the shell environment
2. Variables set with the --env-file flag
3. Variables defined in the .env file
4. Variables defined in the environment section of services
5. Default values in the Compose file

Example .env file:
# .env file
MYSQL_ROOT_PASSWORD=secret
MYSQL_DATABASE=myapp
MYSQL_USER=user
MYSQL_PASSWORD=password
NGINX_PORT=80
APP_ENV=development
You can use environment variables throughout your compose file:
version: '3.8'
services:
  web:
    image: nginx:alpine
    ports:
      - "${NGINX_PORT:-80}:80"  # Use NGINX_PORT or default to 80
    environment:
      - APP_ENV=${APP_ENV}
      - DEBUG=${DEBUG:-false}
  db:
    image: mysql:${MYSQL_VERSION:-8.0}
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
# For development
docker-compose --env-file .env.development up
# For production
docker-compose --env-file .env.production up
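As a quick sanity check of this precedence, a variable exported in the shell should override the same variable from .env during substitution; for instance, assuming the NGINX_PORT variable from the example .env above (the rendered output format varies between Compose versions):
# The shell value (8080) overrides the .env value (80) during substitution
NGINX_PORT=8080 docker-compose config | grep 8080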
services:
  app:
    build:
      context: ./app              # Build context (directory containing Dockerfile)
      dockerfile: Dockerfile.dev  # Custom Dockerfile name
      args:                       # Build arguments passed to Dockerfile
        ENV: development
        VERSION: 1.0
      cache_from:                 # Images to use as cache sources
        - myapp:builder
      target: dev-stage           # Specific stage to build in multi-stage Dockerfile
      network: host               # Network for build-time commands
      shm_size: '2gb'             # Shared memory size for build
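To build with these settings, or to override an argument at build time, the regular build command applies; --build-arg and --no-cache are standard docker-compose build options (the service name app comes from the sketch above):
# Build the app service using the settings above
docker-compose build app
# Override a build argument and skip the layer cache
docker-compose build --no-cache --build-arg VERSION=1.1 app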
services:
  web:
    networks:
      frontend:                      # Attach to frontend network
        ipv4_address: 172.16.238.10  # Fixed IP address
        aliases:
          - website                  # Additional DNS name
      backend:                       # Also attach to backend network
        priority: 1000               # Network connection priority
    dns:
      - 8.8.8.8                      # Custom DNS servers
    extra_hosts:
      - "somehost:162.242.195.82"    # Add entry to /etc/hosts
    hostname: web-service            # Container hostname
networks:
  frontend:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: frontend_bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
          gateway: 172.16.238.1
  backend:
    driver: bridge
    internal: true  # No external connectivity
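Compose prefixes network names with the project name (by default the directory name), so the networks above can be examined with the regular docker network commands; "myproject" below is a placeholder for your actual project name:
# List networks created by Compose (prefixed with the project name)
docker network ls
# Inspect the custom frontend network
docker network inspect myproject_frontend
# Show which networks a running service container is attached to
docker inspect --format '{{json .NetworkSettings.Networks}}' $(docker-compose ps -q web)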
services:
  db:
    volumes:
      - db_data:/var/lib/mysql                              # Named volume
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro  # Bind mount (read-only)
      - type: tmpfs                                         # In-memory tmpfs mount
        target: /tmp
        tmpfs:
          size: 100M
          mode: 1777
      - type: volume                                        # Alternative syntax
        source: logs
        target: /var/log
        volume:
          nocopy: true                                      # Don't copy existing content
volumes:
  db_data:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/mnt/db-data'
  logs:
    external: true  # Use pre-existing volume
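Named volumes receive the same project-name prefix and can be inspected or backed up with standard docker commands; the volume name and archive path below are illustrative:
# List and inspect volumes managed by Compose
docker volume ls
docker volume inspect myproject_db_data
# Back up a named volume to a tar archive using a throwaway container
docker run --rm -v myproject_db_data:/data -v "$(pwd)":/backup alpine tar czf /backup/db_data.tar.gz -C /data .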
services:
  web:
    depends_on:
      db:                           # Enhanced dependency with condition
        condition: service_healthy  # Wait for db to be healthy
      redis:
        condition: service_started  # Just wait for redis to start
  worker:
    depends_on:
      - db                          # Simple dependency (just wait for startup)
      - redis
Health checks help Docker determine if a container is functioning properly:
services:
  web:
    image: nginx:alpine
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]  # Command to run
      interval: 1m30s    # Time between checks
      timeout: 10s       # Time to wait for response
      retries: 3         # Number of consecutive failures needed to report unhealthy
      start_period: 40s  # Grace period for startup
  db:
    image: postgres:13
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  redis:
    image: redis:alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
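Health status is visible from the CLI: docker-compose ps shows it next to the container state, and docker inspect exposes the full check history (the web service name is taken from the example above):
# Show service state including health (e.g. "Up (healthy)")
docker-compose ps
# Inspect the detailed health-check history of a service container
docker inspect --format '{{json .State.Health}}' $(docker-compose ps -q web)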
Control the resources each service can use:
services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          cpus: '0.50'   # Use at most 50% of a CPU core
          memory: 512M   # Use at most 512MB of memory
        reservations:
          cpus: '0.25'   # Reserve at least 25% of a CPU core
          memory: 256M   # Reserve at least 256MB of memory
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
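Note that deploy.resources originated as a Swarm-only section: recent Compose releases apply these limits to plain containers, while older docker-compose versions only do so when started with the --compatibility flag. Either way, actual usage can be compared against the configured limits with docker stats:
# Older docker-compose releases apply deploy.resources only with --compatibility
docker-compose --compatibility up -d
# Observe live CPU and memory usage against the configured limits
docker stats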
Securely provide sensitive data to services:
services:
  web:
    image: nginx:alpine
    secrets:
      - source: site.key
        target: /etc/nginx/ssl/site.key
        mode: 0400
      - site.crt
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
secrets:
  site.key:
    file: ./certs/site.key
  site.crt:
    file: ./certs/site.crt
  db_password:
    file: ./secrets/db_password.txt
    # Or external: true to use pre-existing Docker secrets
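Outside Swarm mode, Compose typically bind-mounts file-based secrets like these into the container under /run/secrets/<name>; a quick way to confirm a secret is available to the db service from the example is:
# Verify that the secret file is mounted inside the db container
docker-compose exec db ls -l /run/secrets/
docker-compose exec db cat /run/secrets/db_password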
Share configuration files with services:
services:
  web:
    image: nginx:alpine
    configs:
      - source: nginx_conf
        target: /etc/nginx/nginx.conf
      - source: site_config
        target: /etc/nginx/conf.d/site.conf
        mode: 0440
configs:
  nginx_conf:
    file: ./config/nginx.conf
  site_config:
    file: ./config/site.conf
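Depending on your Compose version, file-based configs are mounted into the container at the given target path in much the same way, which can be checked directly (paths taken from the example above):
# Confirm the config files are present inside the web container
docker-compose exec web ls -l /etc/nginx/nginx.conf /etc/nginx/conf.d/site.conf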
Use version control for compose files
- Store all compose files in your project repository
- Include documentation about configuration options
- Track changes to your application stack over time
- Consider using git hooks to validate compose files

Separate development and production configs
- Use base and override files (docker-compose.yml + docker-compose.override.yml)
- Create environment-specific files (docker-compose.prod.yml, docker-compose.dev.yml)
- Use the --file flag to specify configurations: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
- Keep development-specific settings (volumes for code, debug ports) separate

Use environment variables for secrets and configuration
- Never hardcode sensitive information in compose files
- Use .env files for local development (but don't commit them)
- Use environment variable substitution throughout configurations
- Consider using a secrets management tool for production

Define service dependencies properly
- Use depends_on with health checks to ensure proper startup order
- Implement retry logic in applications for resilience
- Document service interdependencies
- Use wait scripts for services that need precise startup timing

Set resource limits and reservations
- Prevent resource contention between services
- Establish baseline resource requirements with reservations
- Avoid unbounded resource consumption with limits
- Monitor resource usage to refine settings over time

Use health checks for all services
- Implement proper health check endpoints in applications
- Configure appropriate intervals and timeouts
- Use depends_on conditions to respect health check results
- Add monitoring that leverages health check results

Name your volumes and networks explicitly
- Use descriptive names that indicate purpose
- Implement consistent naming conventions
- Document volume contents and lifecycle
- Consider backup strategies for important volumes

Optimize for container startup order
- Implement proper depends_on relationships
- Use health checks to ensure services are ready
- Add retry logic in applications for resilience
- Consider using entrypoint scripts with connection checks

Secure your Compose deployments
- Use internal networks for service-to-service communication
- Limit exposed ports to only what's necessary
- Implement least privilege principles for volumes and capabilities
- Use secrets for sensitive data

A development-oriented Compose configuration typically includes:
Volume mounts for code to enable real-time changes
volumes:
  - ./src:/app/src
Debug ports exposed for troubleshooting
ports:
  - "9229:9229"  # Node.js debug port
Development tools and dependencies included
build:
  target: development
environment:
  NODE_ENV: development
  DEBUG: "app:*"
Hot reload enabled for faster development cycles
Minimal resource limits to simplify development
# Often no resource limits specified in dev
Verbose logging for debugging
environment:
  LOG_LEVEL: debug
Local services instead of cloud services
# Use local database instead of RDS
db:
  image: postgres:13
A production-oriented configuration, by contrast, typically has:

Built images used with specific version tags
image: mycompany/myapp:1.2.3
Limited port exposure for security
ports:
  # Only expose essential ports
  - "80:80"
No source code mounting (code baked into image)
# No development volume mounts
Resource limits strictly set
deploy:
  resources:
    limits:
      cpus: '0.5'
      memory: 512M
Health checks enabled for reliability
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost/health"]
  interval: 30s
Restart policies configured
Proper logging configuration
logging:
  driver: "json-file"
  options:
    max-size: "10m"
    max-file: "3"
Secrets properly managed
# Base configuration
docker-compose.yml
# Development overrides (automatically applied)
docker-compose.override.yml
# Production configuration
docker-compose.prod.yml
# Start with production configuration
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
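Before starting a stack this way, it can be useful to preview exactly what the merged configuration looks like; docker-compose config prints the result of combining the listed files:
# Preview the merged result of the base and production files
docker-compose -f docker-compose.yml -f docker-compose.prod.yml config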
Example development override:
# docker-compose.override.yml
services:
  web:
    build:
      context: ./
      target: development
    volumes:
      - ./src:/app/src
    environment:
      NODE_ENV: development
    ports:
      - "9229:9229"
    command: npm run dev
Example production file:
# docker-compose.prod.yml
services:
  web:
    image: mycompany/myapp:${TAG:-latest}
    restart: unless-stopped
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
    environment:
      NODE_ENV: production
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
A typical web application with frontend, backend, database, and caching:
version: '3.8'
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    depends_on:
      - app
    networks:
      - frontend
    restart: unless-stopped
  app:
    build: ./app
    environment:
      - NODE_ENV=production
      - DB_HOST=db
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_NAME=${DB_NAME}
      - REDIS_HOST=redis
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - frontend
      - backend
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
  db:
    image: postgres:13
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./db/init:/docker-entrypoint-initdb.d
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    networks:
      - backend
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  redis:
    image: redis:alpine
    volumes:
      - redis_data:/data
    networks:
      - backend
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
  certbot:
    image: certbot/certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
networks:
  frontend:
  backend:
    internal: true
volumes:
  postgres_data:
  redis_data:
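To bring this stack up, the referenced credentials (DB_USER, DB_PASSWORD, DB_NAME) would typically live in a local .env file; one possible startup sequence looks like this:
# Start the full stack in the background and check that services become healthy
docker-compose up -d
docker-compose ps
# Follow the main application and certificate logs
docker-compose logs -f app nginx certbot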
A pattern for decomposed services:
version: '3.8'
services:
  api-gateway:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./gateway/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - frontend
      - backend
  auth-service:
    build: ./auth-service
    environment:
      - DB_HOST=auth-db
    networks:
      - backend
    depends_on:
      - auth-db
  user-service:
    build: ./user-service
    environment:
      - DB_HOST=user-db
    networks:
      - backend
    depends_on:
      - user-db
  product-service:
    build: ./product-service
    environment:
      - DB_HOST=product-db
    networks:
      - backend
    depends_on:
      - product-db
  auth-db:
    image: mongo:4
    volumes:
      - auth-db-data:/data/db
    networks:
      - backend
  user-db:
    image: postgres:13
    volumes:
      - user-db-data:/var/lib/postgresql/data
    networks:
      - backend
  product-db:
    image: mysql:8
    volumes:
      - product-db-data:/var/lib/mysql
    networks:
      - backend
networks:
  frontend:
  backend:
    internal: true
volumes:
  auth-db-data:
  user-db-data:
  product-db-data:
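Because the individual services publish no host ports and sit behind the gateway, a stateless service can be scaled horizontally; Compose's built-in DNS resolves the service name to all replicas (exact load distribution depends on the client's resolver):
# Run three replicas of the user service behind the API gateway
docker-compose up -d --scale user-service=3
# Confirm the replicas are running
docker-compose ps user-service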
A pattern for data processing workflows:
version: '3.8'
services:
  producer:
    build: ./producer
    volumes:
      - ./data:/data
    environment:
      - KAFKA_BROKERS=kafka:9092
    depends_on:
      - kafka
    networks:
      - pipeline
  kafka:
    image: confluentinc/cp-kafka:7.0.0
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
    depends_on:
      - zookeeper
    networks:
      - pipeline
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.0
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
    networks:
      - pipeline
  processor:
    build: ./processor
    environment:
      - KAFKA_BROKERS=kafka:9092
      - ELASTICSEARCH_HOST=elasticsearch:9200
    depends_on:
      - kafka
      - elasticsearch
    networks:
      - pipeline
  elasticsearch:
    image: elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
    volumes:
      - es-data:/usr/share/elasticsearch/data
    networks:
      - pipeline
  dashboard:
    image: kibana:7.17.0
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    networks:
      - pipeline
networks:
  pipeline:
volumes:
  es-data:
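Since Kafka and Elasticsearch take time to become ready and this example uses simple depends_on (startup order only, no readiness checks), one pragmatic approach is to start the infrastructure first and the producers/consumers afterwards:
# Start the infrastructure services first
docker-compose up -d zookeeper kafka elasticsearch
# Once they are ready, start the rest of the pipeline
docker-compose up -d producer processor dashboard
# Watch the processor consume from the topic
docker-compose logs -f processor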
Service Won't Start
- Check service logs: docker-compose logs service_name
- Verify image existence: docker-compose pull service_name
- Check for port conflicts: docker-compose ps and netstat -tulpn
- Validate the Compose file: docker-compose config

Network Connectivity Problems
- Check if services are on the same network: docker network inspect network_name
- Verify container DNS resolution: docker-compose exec service_name ping other_service
- Check for network isolation issues: docker-compose exec service_name curl http://other_service:port
- Inspect network drivers and configuration: docker network ls and docker network inspect

Environment Variable Issues
- Validate that environment variables are set: docker-compose config
- Check for variable substitution: echo $VARIABLE to see if the shell has the variable
- Verify the .env file is in the correct location (same directory as your compose file)
- Check variable priority and overrides

Volume Permission Problems
- Check ownership and permissions: ls -la ./volume-path
- Fix permissions: sudo chown -R user:group ./volume-path
- Check if SELinux is blocking access (on Linux): ls -Z ./volume-path
- Verify bind mount paths exist on the host

Service Dependency Issues
- Use depends_on with condition checks for proper startup order
- Implement retry logic in applications
- Check logs for connection errors: docker-compose logs service_name
- Add health checks to ensure services are fully initialized

Resource Constraints
- Check container resource usage: docker stats
- Verify the host has enough resources: free -m, df -h, etc.
- Adjust resource limits in the compose file
- Check for CPU throttling or OOM (Out of Memory) kills: dmesg | grep -i kill

Build Issues
- Check the build context path
- Verify the Dockerfile exists: ls -la ./path/to/Dockerfile
- Validate build arguments
- View build logs: docker-compose build --no-cache service_name

Unexpected Container Restarts
- Check for application crashes: docker-compose logs service_name
- Verify healthcheck configuration
- Check for resource constraints (OOM kills)
- Review restart policy settings
# Validate the Compose file and view the rendered
# configuration with environment variables resolved
docker-compose config
# View detailed service information
docker-compose ps
# Follow logs from all services
docker-compose logs -f
# Check container health status
docker inspect --format '{{.State.Health.Status}}' container_id
# Enter container for debugging
docker-compose exec service_name sh
# Check network connectivity between services
docker-compose exec service_name ping other_service
# View mounted volumes
docker-compose exec service_name mount | grep "/path/in/container"