
Docker Development Workflows

Leveraging Docker to streamline development environments and workflows

Docker in Development Environments

Docker has fundamentally transformed how developers build, test, and deploy applications by providing consistent, reproducible environments across the entire development lifecycle. A well-designed Docker development workflow eliminates the notorious "works on my machine" problems that have plagued software development teams for decades. By containerizing applications and their dependencies, Docker creates portable environments that behave identically across developer workstations, CI/CD pipelines, staging environments, and production servers.

The containerization approach offers several transformative benefits for development teams:

  1. Environment Standardization: Every developer works with exactly the same versions of languages, libraries, and system dependencies, regardless of their local operating system or configuration.
  2. Accelerated Onboarding: New team members can become productive in hours rather than days, simply by running a few Docker commands to spin up the complete development environment.
  3. Isolated Experimentation: Developers can safely experiment with new dependencies or configurations without risking their primary development setup.
  4. Workflow Portability: Development workflows can be documented as Docker configurations, making them executable, testable, and version-controlled alongside application code.

The core benefit of Docker in development is environment parity—ensuring that development, testing, and production environments are identical at the foundational level. This parity dramatically reduces the "but it worked in development" surprises when deploying to production and helps catch environment-specific issues early in the development process. Instead of spending days debugging mysterious production issues that can't be reproduced locally, teams can focus on delivering features with confidence.

Environment parity addresses several key challenges:

  1. Dependency Hell: Eliminates conflicts between application dependencies by isolating them in containers.
  2. Operating System Differences: Minimizes issues caused by differences between developer operating systems (Windows, macOS, Linux) by standardizing on Linux containers.
  3. Infrastructure Configuration: Captures infrastructure requirements as code, making them transparent and reproducible.
  4. Service Integration: Enables consistent local testing with dependent services through container orchestration.
  5. Production Simulation: Allows developers to test against production-like environments locally, increasing confidence in deployments.

When implemented properly, Docker-based development workflows become a competitive advantage, significantly reducing development cycle time and improving software quality by eliminating an entire class of environment-related issues.

Implementing Docker effectively in development workflows requires understanding several key patterns and best practices that balance productivity, performance, and consistency. Each organization must find the right equilibrium between strict production parity and development velocity, as excessive container complexity can slow down developers, while oversimplified containers might not adequately represent production environments.

The most successful Docker development workflows tend to follow these principles:

  1. Simplicity First: Start with simple configurations and add complexity only when needed
  2. Performance Optimized: Ensure that container operations (builds, restarts) are fast enough for comfortable development
  3. Developer Experience: Prioritize usability and convenience for daily development tasks
  4. Production Relevance: Maintain appropriate similarity to production environments
  5. Flexibility: Allow customization for different developer needs and project types
  6. Standardization: Create consistent patterns that work across multiple projects
  7. Documentation: Thoroughly document the development workflow for new team members

The following sections explore proven implementation patterns that satisfy these principles for various development scenarios and technology stacks.

Setting Up Development Containers

Development containers provide isolated, consistent environments that can be easily shared among team members:

Dedicated Development Dockerfiles

  • Create separate Dockerfiles for development and production
  • Include development tools, debuggers, and live-reload capabilities
  • Keep development dependencies separate from production
  • Use build stages to maintain consistency between environments
  • Example development Dockerfile:
    FROM node:18
    
    WORKDIR /app
    
    # Install development dependencies
    COPY package*.json ./
    RUN npm install
    
    # Install development tools
    RUN npm install -g nodemon
    
    # Set development environment
    ENV NODE_ENV=development
    
    # Copy source code
    COPY . .
    
    # Expose development port
    EXPOSE 3000
    
    # Command with live-reloading
    CMD ["nodemon", "src/index.js"]
    

A well-crafted docker-compose.yml for development addresses several important requirements:

  1. Service Dependencies: Automatically starts and configures all required services like databases, caches, message queues, and mock APIs, eliminating the need for developers to manually configure these services.
  2. Volume Mapping: Enables real-time code changes without rebuilding containers, drastically improving development iteration speed.
  3. Environment Configuration: Manages environment variables in a central location, ensuring all services have consistent configuration.
  4. Network Configuration: Creates a virtual network that simulates production connectivity between services, allowing realistic inter-service communication testing.
  5. Resource Allocation: Controls memory and CPU allocation to simulate resource constraints or ensure adequate resources for development tools.
  6. Persistent Data: Preserves database contents between container restarts through named volumes, allowing developers to maintain state during development.

By combining all these aspects in a declarative configuration file, teams ensure that every developer works with an identical environment setup, regardless of their local machine configuration.

These development-specific Dockerfiles include additional tooling that wouldn't be appropriate in production, such as:

  • Debugging utilities that would increase the attack surface in production
  • Development-only packages like hot-reloading libraries
  • Build tools that aren't needed at runtime
  • Configuration settings optimized for development experience rather than security or performance
  • Verbose logging and error reporting

This separation ensures that development conveniences don't accidentally leak into production environments, while still providing developers with the tools they need to be productive.
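One way to keep the two environments aligned while preserving this separation is a single multi-stage Dockerfile with distinct development and production targets. The sketch below follows the Node.js examples in this guide; the stage names and `src/index.js` entry point are illustrative assumptions:

```dockerfile
# Shared base keeps the runtime version and workdir identical everywhere
FROM node:18 AS base
WORKDIR /app
COPY package*.json ./

# Development target: full dependencies plus live-reload tooling
FROM base AS development
RUN npm install && npm install -g nodemon
COPY . .
ENV NODE_ENV=development
CMD ["nodemon", "src/index.js"]

# Production target: runtime dependencies only, no dev tooling
FROM base AS production
RUN npm ci --omit=dev
COPY . .
ENV NODE_ENV=production
CMD ["node", "src/index.js"]
```

Either flavor builds from the same file: `docker build --target development -t myapp:dev .` for local work, `--target production` in CI.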

Development-Specific Docker Compose

  • Create a docker-compose.yml file for local development
  • Configure service dependencies (databases, caches, etc.)
  • Define volumes for live code updates
  • Set development-specific environment variables
  • Example docker-compose.yml:
    version: '3.8'
    
    services:
      app:
        build:
          context: .
          dockerfile: Dockerfile.dev
        ports:
          - "3000:3000"
        volumes:
          - ./src:/app/src
          - ./package.json:/app/package.json
        environment:
          - DATABASE_URL=postgres://user:password@db:5432/devdb
          - REDIS_URL=redis://cache:6379
        depends_on:
          - db
          - cache
      
      db:
        image: postgres:14
        environment:
          - POSTGRES_USER=user
          - POSTGRES_PASSWORD=password
          - POSTGRES_DB=devdb
        volumes:
          - postgres-data:/var/lib/postgresql/data
      
      cache:
        image: redis:6
        volumes:
          - redis-data:/data
    
    volumes:
      postgres-data:
      redis-data:
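
With this file in place, the whole environment is driven by a few commands (shown with the `docker compose` CLI; older installations use the `docker-compose` binary):

```shell
# Build images if needed and start every service in the background
docker compose up --build -d

# Follow logs for just the application service
docker compose logs -f app

# Stop and remove containers and networks; named volumes are preserved
docker compose down
```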
    

VS Code's Dev Containers feature has revolutionized containerized development by allowing developers to work inside containers while maintaining the rich development experience they expect from modern IDEs. The benefits include:

  1. Consistent Toolchain: Every developer uses identical compiler versions, linters, formatters, and other language-specific tools.
  2. Extension Persistence: Team-recommended extensions are automatically installed for everyone, ensuring consistent code quality tools.
  3. Seamless Debugging: Integrated debugging works inside containers with breakpoints, variable inspection, and other debugging features.
  4. Terminal Integration: Integrated terminal sessions run inside the container with access to all development tools.
  5. Git Integration: Source control operations work seamlessly between the host and container.
  6. Performance Optimization: The extension intelligently syncs only necessary files between host and container to maintain performance.

This approach eliminates the "but it works with my extensions/tools" problem that can occur even when applications run in containers but development tools vary between team members.

VS Code Dev Containers

  • Use Visual Studio Code's Dev Containers extension
  • Edit code inside containerized environments
  • Integrate debugging, extensions, and terminals
  • Share consistent configurations across the team
  • Example .devcontainer/devcontainer.json:
    {
      "name": "My Project Dev Container",
      "dockerComposeFile": "../docker-compose.yml",
      "service": "app",
      "workspaceFolder": "/app",
      "customizations": {
        "vscode": {
          "extensions": [
            "dbaeumer.vscode-eslint",
            "esbenp.prettier-vscode",
            "ms-azuretools.vscode-docker"
          ],
          "settings": {
            "terminal.integrated.defaultProfile.linux": "bash",
            "editor.formatOnSave": true
          }
        }
      }
    }
    

GitHub Codespaces Integration

  • Configure Codespaces with Dev Containers
  • Provide cloud-based development environments
  • Enable instant onboarding for new team members
  • Create consistent environments for PR reviews
  • Configure with .devcontainer directory in repository
  • Eliminate local setup completely with browser-based development
  • Scale compute resources dynamically based on workload needs
  • Provide secure, ephemeral environments for contribution review
  • Enable collaborative development through shared cloud workspaces
  • Support development from any device with a web browser
  • Reduce onboarding time from days to minutes for new contributors
  • Enforce security policies and access controls centrally
  • Isolate development environments from production credentials

GitHub Codespaces takes containerized development to the next level by hosting the entire development environment in the cloud. This approach is particularly valuable for:

  1. Open Source Projects: Contributors can immediately begin working without complex local setup
  2. Security-Sensitive Projects: Development happens in isolated, controlled environments
  3. Resource-Intensive Applications: Development containers can access more computing resources than local machines
  4. Geographically Distributed Teams: Everyone gets the same experience regardless of their local infrastructure
  5. Complex Microservice Architectures: The entire system can be spun up consistently for any developer
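Because Codespaces reads the same `.devcontainer` directory used locally, opting a repository in can be as small as one file. A minimal illustrative configuration — the forwarded port, post-create command, and machine size below are assumptions, not requirements:

```json
{
  "name": "Project Codespace",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/app",
  "forwardPorts": [3000],
  "postCreateCommand": "npm install",
  "hostRequirements": {
    "cpus": 4,
    "memory": "8gb"
  }
}
```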

Volume Mounting Strategies

Efficient volume mounting is crucial for a productive Docker development workflow:

# Basic volume mount for source code
docker run -v $(pwd):/app -p 3000:3000 myapp:dev

# Multiple targeted mounts for better performance
docker run \
  -v $(pwd)/src:/app/src \
  -v $(pwd)/public:/app/public \
  -v $(pwd)/package.json:/app/package.json \
  -p 3000:3000 myapp:dev

# Mount directories with different bind-mount consistency flags:
# 'cached' optimizes reads from the container, 'delegated' optimizes
# writes from the container, 'consistent' enforces strict consistency
# (slower). These flags mainly affected older Docker Desktop releases
# on macOS; recent versions treat them as no-ops.
docker run \
  -v $(pwd)/src:/app/src:cached \
  -v $(pwd)/node_modules:/app/node_modules:delegated \
  -v $(pwd)/logs:/app/logs:delegated \
  -v $(pwd)/critical-data:/app/critical-data:consistent \
  -p 3000:3000 myapp:dev

# Named volumes for dependencies to preserve between runs
docker run \
  -v $(pwd)/src:/app/src \
  -v node_modules:/app/node_modules \
  -p 3000:3000 myapp:dev
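The named-volume trick in the last example translates directly to Compose: the volume shadows the host's `node_modules`, so host-installed and container-installed dependencies never collide. A minimal sketch, reusing the service names from this guide:

```yaml
services:
  app:
    volumes:
      - ./src:/app/src                 # live source code from the host
      - node_modules:/app/node_modules # container-installed dependencies persist here

volumes:
  node_modules:
```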

Real-Time Code Reloading

Implementing real-time code reloading lets developers see their changes without rebuilding or restarting containers.
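For Node.js services, a common combination is a bind-mounted source tree plus a file watcher such as nodemon. The override file below is an illustrative sketch; the polling settings matter because file-change events often fail to cross the bind-mount boundary on Docker Desktop:

```yaml
# docker-compose.override.yml — development-only reload settings
services:
  app:
    volumes:
      - ./src:/app/src            # edits on the host appear instantly in the container
    environment:
      - CHOKIDAR_USEPOLLING=true  # watcher falls back to polling inside the container
    command: ["nodemon", "--legacy-watch", "src/index.js"]
```

The `--legacy-watch` flag switches nodemon itself to polling for the same reason; other stacks substitute their own watcher tooling.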

Debugging Containerized Applications

Effective debugging is essential for productive development:

Remote Debugging

  • Configure debugger to connect to containerized application
  • Expose debugging ports in Docker configuration
  • Set up source maps for compiled languages
  • Example Node.js debugging in Docker:
    FROM node:18
    
    WORKDIR /app
    
    COPY package*.json ./
    RUN npm install
    
    COPY . .
    
    # Expose application, debugging, and metrics ports
    EXPOSE 3000 9229 9545
    
    # Enable garbage collection metrics for memory profiling
    ENV NODE_OPTIONS="--expose-gc"
    
    # Start with the debugger listening on all interfaces
    CMD ["node", "--inspect=0.0.0.0:9229", "src/index.js"]
    

    This enhanced debugging configuration provides:
    1. Multiple exposed ports: Application traffic (3000), debugging protocol (9229), and metrics (9545)
    2. Advanced debugging options: Remote debugging with garbage collection visibility
    3. Environment configuration: Development-specific environment settings
    4. Network accessibility: Makes the debug port available on all interfaces for remote debugging

    The combination of these settings enables developers to:
    • Connect debuggers from any device on the network
    • Monitor memory usage and garbage collection patterns
    • Profile application performance with standard tools
    • Use the same debugging workflow regardless of the host operating system

IDE Integration

  • Configure VS Code launch.json for Docker debugging
  • Set up JetBrains IDEs with Docker interpreters
  • Use language-specific debugging extensions
  • Example VS Code launch.json:
    {
      "version": "0.2.0",
      "configurations": [
        {
          "type": "node",
          "request": "attach",
          "name": "Docker: Attach to Node",
          "port": 9229,
          "address": "localhost",
          "localRoot": "${workspaceFolder}",
          "remoteRoot": "/app",
          "restart": true,
          "sourceMaps": true,
          "resolveSourceMapLocations": [
            "${workspaceFolder}/**",
            "!**/node_modules/**"
          ],
          "skipFiles": [
            "<node_internals>/**",
            "**/node_modules/**"
          ],
          "outFiles": [
            "${workspaceFolder}/dist/**/*.js"
          ],
          "trace": true,
          "smartStep": true,
          "internalConsoleOptions": "openOnSessionStart",
          "sourceMapPathOverrides": {
            "webpack:///./*": "${workspaceFolder}/*",
            "webpack:///src/*": "${workspaceFolder}/src/*"
          }
        },
        {
          "type": "chrome",
          "request": "attach",
          "name": "Docker: Attach to Chrome",
          "port": 9222,
          "webRoot": "${workspaceFolder}/public",
          "sourceMapPathOverrides": {
            "webpack:///./~/*": "${webRoot}/node_modules/*",
            "webpack:///./*": "${webRoot}/*",
            "webpack:///src/*": "${webRoot}/src/*"
          }
        }
      ],
      "compounds": [
        {
          "name": "Full-Stack Debug",
          "configurations": ["Docker: Attach to Node", "Docker: Attach to Chrome"]
        }
      ]
    }
    

    This enhanced debugging configuration provides sophisticated capabilities:
    1. Source map integration: Maps compiled/transpiled code back to original source
    2. Skip files configuration: Ignores third-party code when stepping through
    3. Smart stepping: Skips uninteresting code automatically
    4. Path overrides: Correctly maps paths in webpack-bundled code
    5. Compound debugging: Simultaneously debug both frontend and backend
    6. Trace mode: Provides detailed information about the debugging process
    7. Console integration: Automatically opens debug console when session starts

    This level of debugging configuration eliminates the friction between containerized development and the debugging experience developers expect from traditional local setups.

Log Collection and Analysis

  • Configure centralized logging in development
  • Use Docker's logging drivers for collection
  • Implement structured logging for easier analysis
  • Example docker-compose logging configuration:
    services:
      app:
        # ... other configuration
        logging:
          driver: "json-file"
          options:
            max-size: "10m"
            max-file: "3"
            labels: "app,environment,service"
            env: "HOSTNAME,NODE_ENV"
    
    # Alternative: Fluentd driver for an ELK stack (replaces the service's logging block)
    logging:
      driver: "fluentd"
      options:
        fluentd-address: "localhost:24224"
        fluentd-async: "true"
        fluentd-buffer-limit: "8MB"
        fluentd-retry-wait: "1s"
        fluentd-max-retries: "30"
        tag: "docker.{{.Name}}"
        labels: "app,component,environment"
    
    # Alternative: Loki driver for Grafana integration (replaces the service's logging block)
    logging:
      driver: "loki"
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"
        loki-retries: "5"
        loki-batch-size: "400"
        loki-external-labels: "job=dockerlogs,container_name={{.Name}},image_name={{.ImageName}}"
    

    This comprehensive logging configuration provides:
    1. Structured logging: Labels and environment variables included in logs
    2. Log rotation: Prevents disk space issues with size and file count limits
    3. Integration options: Multiple driver configurations for different scenarios
    4. Performance settings: Buffer limits and async options for high-volume logs
    5. Retry logic: Ensures logs aren't lost during temporary outages
    6. Contextual tagging: Makes logs easily filterable in aggregation systems

    Proper logging configuration is essential for troubleshooting containerized applications, as it preserves the context of when and where log messages originated across a distributed system.

Interactive Container Sessions

  • Connect to running containers for debugging
  • Inspect environment variables and file system
  • Execute diagnostic commands as needed
  • Example commands:
    # Connect to a running container
    docker exec -it container_name bash
    
    # Inspect environment variables
    docker exec container_name env
    
    # View logs with follow
    docker logs -f container_name
    
    # Inspect network connections
    docker exec container_name netstat -tulpn
    

Multi-Service Development

Most modern applications consist of multiple services. Docker Compose is the primary tool for managing multi-service development environments:

# docker-compose.yml for multi-service development
version: '3.8'

services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    volumes:
      - ./frontend/src:/app/src
    ports:
      - "3000:3000"
    environment:
      - API_URL=http://backend:4000
    depends_on:
      - backend

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    volumes:
      - ./backend/src:/app/src
    ports:
      - "4000:4000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/devdb
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache

  db:
    image: postgres:14
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=devdb
    ports:
      - "5432:5432"

  cache:
    image: redis:6
    volumes:
      - redis-data:/data
    ports:
      - "6379:6379"

volumes:
  postgres-data:
  redis-data:
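
Day-to-day work against a multi-service stack like this mostly comes down to a few Compose commands (the `db:migrate` script below is a hypothetical example):

```shell
# Start only the backend and whatever it depends_on (db and cache follow)
docker compose up -d backend

# Rebuild one service after changing its Dockerfile, then restart it
docker compose build frontend && docker compose up -d frontend

# Run a one-off command inside a running service container
docker compose exec backend npm run db:migrate
```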

Local Development Best Practices

Testing in Docker Environments

Comprehensive testing within Docker environments ensures consistency across the development lifecycle:

  1. Containerized test suites
    • Run tests inside containers
    • Execute tests as part of CI/CD pipelines
    • Ensure consistent test environments
    • Isolate test dependencies from development environment
    • Run parallel test suites in separate containers
    • Persist test results and coverage reports with volumes
    • Implement testing matrix with different configurations
    • Example comprehensive test Dockerfile:
      FROM node:18
      
      WORKDIR /app
      
      # Install Chrome for browser testing
      RUN apt-get update && apt-get install -y \
          wget gnupg \
          && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
          && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list \
          && apt-get update && apt-get install -y \
          google-chrome-stable \
          && rm -rf /var/lib/apt/lists/*
      
      # Install global test dependencies
      RUN npm install -g jest nyc codecov
      
      # Install project dependencies first (better caching)
      COPY package*.json ./
      RUN npm ci
      
      # Copy test configuration
      COPY jest.config.js .nycrc.json ./
      
      # Copy source and test files
      COPY src/ ./src/
      COPY tests/ ./tests/
      
      # Set environment variables
      ENV NODE_ENV=test
      ENV CI=true
      ENV JEST_JUNIT_OUTPUT_DIR=./test-results/
      
      # Create directory for test results
      RUN mkdir -p ./test-results ./coverage
      
      # Allow different test commands via Docker CMD override
      ENTRYPOINT ["npm"]
      CMD ["test"]
      

    This approach enables various testing scenarios:
     # Run unit tests
     docker run --rm -v $(pwd)/test-results:/app/test-results myapp:test test:unit
     
     # Run integration tests
     docker run --rm -v $(pwd)/test-results:/app/test-results myapp:test test:integration
     
     # Run with coverage reporting
     docker run --rm -v $(pwd)/test-results:/app/test-results -v $(pwd)/coverage:/app/coverage myapp:test test:coverage
     
     # Run a specific test suite
     docker run --rm myapp:test test -- --testPathPattern=auth
     
     # Run with debugging enabled
     docker run --rm -p 9229:9229 myapp:test test:debug
    
  2. Integration testing with Docker Compose
    • Define test-specific compose configurations
    • Initialize test databases and dependencies
    • Execute end-to-end tests against containerized services
    • Implement parallel test execution across services
    • Simulate network conditions and failure scenarios
    • Create isolated testing networks
    • Capture and analyze test results automatically
    • Example comprehensive docker-compose.test.yml:
      version: '3.8'
      
      services:
        app:
          build:
            context: .
            dockerfile: Dockerfile.test
          depends_on:
            test-db:
              condition: service_healthy
            test-cache:
              condition: service_healthy
          environment:
            - NODE_ENV=test
            - DATABASE_URL=postgres://test:test@test-db:5432/testdb
            - REDIS_URL=redis://test-cache:6379
            - TEST_MODE=integration
            - LOG_LEVEL=info
            - API_TIMEOUT=5000
          volumes:
            - ./test-results:/app/test-results
            - ./coverage:/app/coverage
          networks:
            - test-network
          command: ["test:integration"]
        
        test-db:
          image: postgres:14
          environment:
            - POSTGRES_USER=test
            - POSTGRES_PASSWORD=test
            - POSTGRES_DB=testdb
          tmpfs:
            - /var/lib/postgresql/data
          volumes:
            - ./test/
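
The `condition: service_healthy` entries above only take effect if the dependency services define healthchecks. A typical pair, using the standard `pg_isready` and `redis-cli ping` probes:

```yaml
  test-db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test -d testdb"]
      interval: 5s
      timeout: 3s
      retries: 10

  test-cache:
    image: redis:6
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 10
```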
      

Development to Production Workflow

A complete Docker development workflow seamlessly transitions between development and production:

Development Stage

  • Developers work with Docker Compose locally
  • Use development-specific Dockerfiles
  • Implement hot-reloading and debugging
  • Focus on fast iteration and developer experience

Continuous Integration

  • Build production images in CI pipeline
  • Run containerized tests against the built image
  • Implement security scanning and quality checks
  • Tag images with commit/build identifiers
  • Example CI pipeline (GitHub Actions):
    name: CI Pipeline
    
    on:
      push:
        branches: [ main, develop ]
      pull_request:
        branches: [ main, develop ]
    
    jobs:
      build-and-test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          
          - name: Build Docker image
            run: docker build -t myapp:${{ github.sha }} .
          
          - name: Run tests
            run: |
              docker-compose -f docker-compose.test.yml up \
                --abort-on-container-exit --exit-code-from app
          
          - name: Security scan
            uses: aquasecurity/trivy-action@master
            with:
              image-ref: myapp:${{ github.sha }}
              format: 'table'
              exit-code: '1'
              severity: 'CRITICAL,HIGH'
    

Staging Environment

  • Deploy to staging using production containers
  • Validate in an environment similar to production
  • Test integration with external services
  • Perform user acceptance testing

Production Deployment

  • Deploy validated container images to production
  • Implement proper versioning and rollback strategies
  • Monitor deployed containers for performance and errors
  • Example deployment workflow:
    # Tag the validated image for production
    docker tag myapp:$COMMIT_HASH myapp:production
    
    # Push both the immutable commit tag and the production alias
    docker push myapp:$COMMIT_HASH
    docker push myapp:production
    
    # Deploy to production (using Kubernetes, for example)
    kubectl apply -f k8s/production/
    

Team Collaboration with Docker

Docker enhances team collaboration by providing consistent environments for all team members:

  1. Onboarding new developers
    • Document Docker-based setup in README
    • Provide single-command environment setup
    • Include sample data and initial configuration
    • Example onboarding instructions:
      # Development Setup
      
      1. Install Docker and Docker Compose
      2. Clone this repository
      3. Run `./dev.sh start`
      4. Run `./dev.sh seed` to populate development data
      5. Access the application at http://localhost:3000
      
  2. Consistent code review environments
    • Use Docker for PR preview environments
    • Ensure reviewers see the same environment
    • Simplify testing of changes across services
    • Example PR workflow with Docker:
      name: PR Preview
      
      on:
        pull_request:
          types: [opened, synchronize]
      
      jobs:
        deploy-preview:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v3
            
            - name: Build and tag PR image
              run: |
                docker build -t myapp:pr-${{ github.event.pull_request.number }} .
                docker push myapp:pr-${{ github.event.pull_request.number }}
            
            - name: Deploy preview environment
              run: |
                # Deploy to preview environment
                echo "Preview deployed to https://pr-${{ github.event.pull_request.number }}.preview.example.com"
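
The `./dev.sh` helper referenced in the onboarding steps is not shown in this guide; a minimal illustrative version (the subcommand names and the `npm run seed` script are assumptions) could look like:

```shell
#!/usr/bin/env bash
# dev.sh — thin wrapper around Docker Compose for common development tasks
set -euo pipefail

case "${1:-}" in
  start)
    docker compose up --build -d
    ;;
  seed)
    # hypothetical seed script shipped with the application
    docker compose exec app npm run seed
    ;;
  stop)
    docker compose down
    ;;
  *)
    echo "Usage: ./dev.sh {start|seed|stop}" >&2
    exit 1
    ;;
esac
```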
      

Docker-based development workflows provide consistent, reproducible environments that improve developer productivity, team collaboration, and software quality. By implementing these patterns and practices, teams can minimize environment-related issues and focus on delivering value through their applications.