Docker Networking Advanced Patterns

Advanced networking configurations, patterns, and best practices for Docker container deployments

Understanding Docker Networking Architecture

Docker networking provides the communication layer that allows containers to interact with each other and with the outside world. While the basic networking modes (bridge, host, none) cover many use cases, advanced deployments often require more sophisticated networking patterns to address complex requirements for security, performance, and scalability.

At its core, Docker networking is built on a pluggable architecture that leverages several Linux kernel features:

  1. Network Namespaces: Provide isolation of network interfaces, routing tables, and firewall rules
  2. Virtual Ethernet Devices (veth): Create virtual network cable pairs to connect containers to bridges
  3. Linux Bridges: Act as virtual switches connecting multiple network interfaces
  4. iptables: Provides network address translation (NAT) and firewall capabilities
  5. Routing: Enables packet forwarding between different networks

This pluggable architecture is implemented through libnetwork, Docker's implementation of the Container Network Model (CNM), which plays a role analogous to the Container Network Interface (CNI) used by Kubernetes. It allows for extensive customization and extension of Docker's networking capabilities through driver plugins. Understanding these foundational components is essential for implementing advanced networking patterns effectively.
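
These primitives can be assembled by hand, which illustrates what Docker automates. The following sketch recreates a minimal container network on a Linux host (run as root; all names and addresses are illustrative):

# Create a network namespace (the isolation boundary a container receives)
ip netns add demo

# Create a veth pair and move one end into the namespace
ip link add veth-host type veth peer name veth-cont
ip link set veth-cont netns demo

# Create a Linux bridge and attach the host end of the pair
ip link add br-demo type bridge
ip link set veth-host master br-demo
ip link set br-demo up
ip link set veth-host up

# Address the namespace end and route its traffic through the bridge
ip netns exec demo ip addr add 10.99.0.2/24 dev veth-cont
ip netns exec demo ip link set veth-cont up
ip addr add 10.99.0.1/24 dev br-demo
ip netns exec demo ip route add default via 10.99.0.1

# NAT outbound traffic with iptables, as Docker's bridge driver does
iptables -t nat -A POSTROUTING -s 10.99.0.0/24 ! -o br-demo -j MASQUERADE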

Network Overlay Technologies

Container overlay networks enable seamless communication between containers across multiple hosts, providing a foundation for distributed applications:

Docker Swarm Overlay Networks

  • Native multi-host networking solution for Docker Swarm
  • Uses VXLAN encapsulation for overlay traffic
  • Provides automatic encryption options for secure communication
  • Handles service discovery and load balancing automatically
  • Scales to thousands of nodes and tens of thousands of containers
  • Offers simplified management through Docker CLI
  • Example overlay network creation:
    # Create an encrypted overlay network
    docker network create --driver overlay --opt encrypted=true \
      --attachable --subnet=10.10.0.0/16 --gateway=10.10.0.1 \
      my-overlay-network
    

Flannel

  • Simple overlay network focused on Kubernetes compatibility
  • Uses various backend options (VXLAN, host-gw, UDP)
  • Provides IP-per-pod networking model
  • Offers straightforward setup with minimal configuration
  • Well-suited for smaller deployments and development environments
  • Supports multiple operating systems and environments
  • Example Flannel configuration:
    net-conf.json: |
      {
        "Network": "10.244.0.0/16",
        "Backend": {
          "Type": "vxlan",
          "VNI": 1,
          "Port": 8472
        }
      }
    

Calico

  • High-performance, scalable networking solution
  • Uses standard IP routing rather than overlay encapsulation
  • Provides advanced network policy enforcement
  • Offers excellent performance for large-scale deployments
  • Integrates with service meshes and other cloud-native technologies
  • Supports multiple data planes (Linux, Windows, eBPF)
  • Example Calico network policy:
    apiVersion: projectcalico.org/v3
    kind: NetworkPolicy
    metadata:
      name: allow-specific-traffic
    spec:
      selector: app == 'database'
      ingress:
      - action: Allow
        protocol: TCP
        source:
          selector: app == 'api'
        destination:
          ports:
            - 5432
      egress:
      - action: Allow
        protocol: TCP
        destination:
          selector: app == 'monitoring'
          ports:
            - 9090
    

Weave Net

  • Mesh overlay network for container communications
  • Provides automatic discovery and configuration
  • Features fast data path with encryption options
  • Includes DNS-based service discovery
  • Offers automatic IP address management (IPAM)
  • Supports partial network connectivity scenarios
  • Example Weave Net deployment:
    # Install Weave Net on Docker host
    docker run -d --name=weave \
      --privileged \
      --network=host \
      --pid=host \
      weaveworks/weave:latest launch
    
    # Connect a container to Weave network
    docker run --network=weave myapp
    

Advanced Network Configurations

For complex deployments, Docker offers several advanced networking configurations that enable precise control over container communications:

# Create a custom bridge network with specific subnet
docker network create --driver=bridge \
  --subnet=172.28.0.0/16 \
  --ip-range=172.28.5.0/24 \
  --gateway=172.28.5.254 \
  custom-bridge

# Run container with specific IP address
docker run --network=custom-bridge --ip=172.28.5.10 nginx

# Create a network with custom MTU
docker network create --driver=bridge \
  --opt com.docker.network.driver.mtu=1400 \
  low-mtu-network

# Create a network with isolated subnets
docker network create --driver=bridge \
  --internal \
  isolated-network

# Connect a container to multiple networks
docker run -d --name=multi-home-container \
  --network=frontend-network \
  nginx
docker network connect backend-network multi-home-container

# Create a macvlan network for direct connection to physical network
docker network create --driver=macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  --ip-range=192.168.1.128/25 \
  -o parent=eth0 \
  macvlan-network
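
After creating networks like these, docker network inspect verifies the effective configuration:

# Show the IPAM configuration of a network
docker network inspect custom-bridge --format '{{json .IPAM.Config}}'

# List the containers attached to a network with their addresses
docker network inspect custom-bridge --format '{{range .Containers}}{{.Name}} {{.IPv4Address}} {{end}}'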

Network Segmentation and Isolation Patterns

Implementing proper network segmentation is crucial for container security and performance optimization:
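
A common pattern gives each tier its own network and marks back-end networks as internal so they have no external route. A minimal sketch (network, container, and image names are illustrative):

# Public-facing tier with outbound access
docker network create frontend-net

# Internal tier: containers here cannot reach external networks
docker network create --internal backend-net

# The web container joins both tiers and is the only crossing point
docker run -d --name web --network frontend-net nginx
docker network connect backend-net web

# The database is reachable only from containers on backend-net
docker run -d --name db --network backend-net postgres:16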

Service Discovery and DNS Patterns

Effective service discovery is essential for container orchestration and microservices architecture:

Docker DNS Service Discovery

  • Automatic DNS registration for containers and services
  • Built-in DNS server (127.0.0.11) in each container
  • Container name resolution within user-defined networks
  • Aliases for providing additional DNS names
  • Round-robin DNS for scaled services
  • Example service discovery usage:
    # Create a network with DNS
    docker network create app-network
    
    # Run services with specific names
    docker run -d --name api --network app-network api-service
    docker run -d --name redis --network app-network redis
    
    # Connect to services by name
    docker run --rm --network app-network alpine ping -c 1 api
    
    # Use DNS round-robin for replicas
    docker run -d --name api-1 --network app-network --network-alias=api api-service
    docker run -d --name api-2 --network app-network --network-alias=api api-service
    

External Service Discovery Integration

  • Integration with HashiCorp Consul for service registration and health checks
  • Automatic container registration via tools like Registrator
  • Centralized service discovery for complex deployments
  • Dynamic reconfiguration with service changes
  • Example Consul integration:
    version: '3.8'
    
    services:
      consul:
        image: consul:latest
        ports:
          - "8500:8500"
        command: agent -server -ui -bootstrap -client=0.0.0.0
    
      registrator:
        image: gliderlabs/registrator:latest
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock
        command: -internal consul://consul:8500
        depends_on:
          - consul
    
      web:
        image: nginx:latest
        environment:
          - SERVICE_NAME=web
          - SERVICE_TAGS=production
        depends_on:
          - registrator
    

DNS Customization and Configuration

  • Custom DNS settings for containers
  • Override default nameservers and search domains
  • Implement DNS caching and forwarding
  • Configure DNS failover strategies
  • Example DNS customization:
    # Run container with custom DNS configuration
    docker run -d --name app \
      --dns 8.8.8.8 \
      --dns 8.8.4.4 \
      --dns-search example.com \
      --dns-opt ndots:2 \
      --dns-opt timeout:3 \
      nginx
    
    # Configure DNS globally in daemon.json
    {
      "dns": ["8.8.8.8", "8.8.4.4"],
      "dns-search": ["example.com"],
      "dns-opts": ["ndots:2", "timeout:3"]
    }
    

Advanced DNS Patterns

  • Split-horizon DNS for different network views
  • DNS-based blue/green deployments
  • Geographical DNS routing for distributed applications
  • Canary deployments using DNS weights
  • Example complex DNS setup with custom DNS server:
    version: '3.8'
    
    services:
      coredns:
        image: coredns/coredns:latest
        volumes:
          - ./coredns/Corefile:/etc/coredns/Corefile
          - ./coredns/zones:/etc/coredns/zones
        ports:
          - "53:53/udp"
          - "53:53/tcp"
        networks:
          app-network:
            ipv4_address: 172.20.0.2  # Pin the address the app's dns setting references
    
      app:
        image: myapp:latest
        dns:
          - 172.20.0.2  # CoreDNS container IP
        dns_search:
          - service.local
        networks:
          - app-network
    
    networks:
      app-network:
        driver: bridge
        ipam:
          config:
            - subnet: 172.20.0.0/16
    

Advanced Networking for Microservices

Microservices architectures require sophisticated networking patterns to handle complex inter-service communications:

  1. Service Mesh Integration
    • Deploy a service mesh for advanced traffic management
    • Implement fine-grained routing and load balancing
    • Enable mutual TLS between services
    • Add circuit breaking and retry logic
    • Example with Istio running on Docker:
      version: '3.8'
      
      services:
        istiod:
          image: istio/pilot:1.13.0
          ports:
            - "15010:15010"
            - "15012:15012"
          environment:
            - POD_NAMESPACE=istio-system
        
        app:
          image: myapp:latest
          depends_on:
            - istiod
          volumes:
            - ./istio-proxy-init.sh:/istio-proxy-init.sh
          entrypoint: ["/istio-proxy-init.sh"]
      
  2. Circuit Breaking and Bulkheading
    • Implement network-level circuit breakers
    • Isolate failures through network bulkheading
    • Configure timeouts and retries at the network layer
    • Monitor network health for circuit state decisions
    • Example with Envoy proxy sidecar (a minimal envoy.yaml circuit-breaker sketch follows this list):
      version: '3.8'
      
      services:
        app:
          image: myapp:latest
        
        envoy-proxy:
          image: envoyproxy/envoy:v1.20-latest
          volumes:
            - ./envoy.yaml:/etc/envoy/envoy.yaml
          network_mode: "service:app"  # Share network namespace
      
  3. API Gateway Patterns
    • Implement API gateway for edge routing
    • Configure rate limiting and request throttling
    • Add authentication and authorization at the gateway
    • Enable request transformation and normalization
    • Example with Traefik as API gateway:
      version: '3.8'
      
      services:
        traefik:
          image: traefik:v2.5
          command:
            - "--providers.docker=true"
            - "--providers.docker.exposedbydefault=false"
            - "--entrypoints.web.address=:80"
          ports:
            - "80:80"
            - "8080:8080"
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock:ro
        
        api-service:
          image: myapi:latest
          labels:
            - "traefik.enable=true"
            - "traefik.http.routers.api.rule=PathPrefix(`/api`)"
            - "traefik.http.routers.api.entrypoints=web"
            - "traefik.http.routers.api.middlewares=api-ratelimit"
            - "traefik.http.middlewares.api-ratelimit.ratelimit.average=100"
            - "traefik.http.middlewares.api-ratelimit.ratelimit.burst=50"
      

Network Security and Encryption

Securing container networks is essential for protecting sensitive workloads:
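
Two controls available directly in the Docker Engine are transparent encryption of overlay traffic and restricting communication on bridge networks (network names are illustrative):

# Swarm overlay network with IPsec encryption of inter-node container traffic
docker network create --driver overlay --opt encrypted secure-overlay

# Bridge network with inter-container communication disabled
docker network create \
  --opt com.docker.network.bridge.enable_icc=false \
  restricted-bridge

# Internal network with no external connectivity at all
docker network create --internal vault-net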

Performance Optimization Techniques

Optimizing container networking performance is crucial for high-throughput and latency-sensitive applications:

Kernel Tuning for Container Networking

  • Optimize kernel parameters for network performance
  • Increase connection tracking table sizes
  • Tune TCP/IP stack parameters
  • Configure appropriate buffer sizes
  • Example sysctl configurations for networking:
    # Apply network optimizations to Docker host
    cat > /etc/sysctl.d/99-network-performance.conf << EOF
    # Increase Linux autotuning TCP buffer limits
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.core.rmem_default = 262144
    net.core.wmem_default = 262144
    
    # Increase TCP max buffer size
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    
    # Increase number of connections
    net.core.somaxconn = 4096
    net.ipv4.tcp_max_syn_backlog = 8192
    
    # Reuse TIME-WAIT sockets
    net.ipv4.tcp_tw_reuse = 1
    
    # Increase connection tracking table size
    net.netfilter.nf_conntrack_max = 1000000
    net.nf_conntrack_max = 1000000
    
    # Increase the local port range
    net.ipv4.ip_local_port_range = 1024 65535
    EOF
    
    # Apply settings
    sysctl -p /etc/sysctl.d/99-network-performance.conf
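
Namespaced net.* sysctls can also be applied per container rather than host-wide, keeping the tuning scoped to the workloads that need it (values are illustrative):

# Set network sysctls inside a single container's namespace
docker run -d --name tuned-app \
  --sysctl net.core.somaxconn=4096 \
  --sysctl net.ipv4.tcp_tw_reuse=1 \
  nginx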
    

Network Driver Selection

  • Choose appropriate network drivers for specific workloads
  • Use host networking for maximum performance
  • Select macvlan for near-native performance with isolation
  • Apply ipvlan for high-density environments
  • Compare network driver performance:
    Driver Type      Throughput      Latency      Isolation      Use Case
    -----------------------------------------------------------------------------
    host             100%            Lowest       None           High-performance apps
    macvlan          95-99%          Very low     High           Near-native performance
    ipvlan           90-95%          Low          High           High-density environments
    bridge           70-85%          Medium       Medium         General purpose
    overlay          50-70%          Higher       High           Multi-host communication
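
These figures vary with hardware, kernel version, and traffic pattern, so it is worth measuring on your own hosts. A quick comparison with iperf3 (networkstatic/iperf3 is one community image; any iperf3 container works):

# Measure throughput on a user-defined bridge network
docker network create bench-net
docker run -d --name iperf-server --network bench-net networkstatic/iperf3 -s
docker run --rm --network bench-net networkstatic/iperf3 -c iperf-server

# Repeat with host networking to establish the upper bound
docker run -d --name iperf-server-host --network host networkstatic/iperf3 -s
docker run --rm --network host networkstatic/iperf3 -c 127.0.0.1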
    

CPU and NUMA Considerations

  • Pin container network processing to specific CPUs
  • Align network cards and containers to same NUMA nodes
  • Avoid CPU contention for network-intensive containers
  • Configure IRQ affinity for network interfaces
  • Example CPU pinning configuration:
    version: '3.8'
    
    services:
      network-intensive-app:
        image: myapp:latest
        deploy:
          resources:
            limits:
              cpus: '4'
            reservations:
              cpus: '2'
        cpuset: "0-3"  # Pin to first 4 CPUs
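
The IRQ affinity point above is configured on the host by steering NIC interrupts to the same CPUs the container is pinned to (IRQ numbers and CPU masks are system-specific and shown for illustration):

# Find the IRQ numbers assigned to the network interface
grep eth0 /proc/interrupts

# Pin IRQ 24 to CPUs 0-3 (hex mask 0f); repeat for each queue's IRQ
echo 0f > /proc/irq/24/smp_affinity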
    

Advanced Network Performance Tools

  • Use eBPF for network performance monitoring
  • Implement XDP (eXpress Data Path) for packet processing
  • Configure DPDK for userspace network processing
  • Apply SR-IOV for direct hardware access
  • Example with eBPF performance monitoring:
    # Install bcc tools
    apt-get install -y bpfcc-tools
    
    # Monitor TCP connections with eBPF
    /usr/share/bcc/tools/tcpconnect
    
    # Measure TCP connection setup latency
    /usr/share/bcc/tools/tcpconnlat
    
    # Trace TCP connection lifecycle events
    /usr/share/bcc/tools/tcptracer
    
    # Analyze TCP retransmits
    /usr/share/bcc/tools/tcpretrans
    

Network Troubleshooting and Debugging

Effective troubleshooting techniques are essential for resolving container networking issues:

  1. Container Network Inspection Tools
    • Use specialized tools for container network debugging
    • Analyze network namespaces and virtual interfaces
    • Trace packet flows between containers
    • Inspect network configuration and routing
    • Example debugging session:
      # Find container network namespace
      docker inspect --format '{{.State.Pid}}' my-container
      
      # Enter container network namespace
      nsenter -t $(docker inspect --format '{{.State.Pid}}' my-container) -n ip addr
      
      # Trace packet path
      nsenter -t $(docker inspect --format '{{.State.Pid}}' my-container) -n \
        traceroute -n google.com
      
      # Capture packets in container network namespace
      nsenter -t $(docker inspect --format '{{.State.Pid}}' my-container) -n \
        tcpdump -i eth0 -n
      
  2. Common Network Issues and Solutions
    • DNS resolution problems
    • Container-to-container connectivity issues
    • External network access failures
    • Port mapping conflicts
    • Overlay network encapsulation problems
    • Network policy misconfiguration
    • MTU mismatches between networks
    • Example MTU troubleshooting:
      # Check MTU on host
      ip link show
      
      # Check MTU inside container
      docker exec my-container ip link show
      
      # Test with different packet sizes
      docker exec my-container ping -c 1 -s 1472 -M do google.com
      
      # Configure custom MTU for Docker network
      docker network create --opt com.docker.network.driver.mtu=1400 low-mtu-net
      
  3. Diagnostic Container Pattern
    • Deploy specialized diagnostic containers
    • Use network troubleshooting tools in sidecar containers
    • Create dedicated network inspection environments
    • Implement network visibility dashboards
    • Example diagnostic container:
      # Run a network diagnostic container
      docker run -it --rm --network container:target-container \
        nicolaka/netshoot
      
      # Run comprehensive network diagnostics
      docker run --rm --cap-add NET_ADMIN \
        --network container:target-container \
        nicolaka/netshoot \
        /bin/bash -c "
          echo '=== Interface Information ===';
          ip addr;
          echo '=== Routing Table ===';
          ip route;
          echo '=== DNS Configuration ===';
          cat /etc/resolv.conf;
          echo '=== Connectivity Tests ===';
          ping -c 3 8.8.8.8 || echo 'Internet connectivity failed';
          echo '=== DNS Resolution ===';
          dig +short google.com || echo 'DNS resolution failed';
          echo '=== Listening Ports ===';
          netstat -tuln;
        "
      

Enterprise Network Integration Patterns

Integrating Docker networks with enterprise infrastructure requires specialized patterns:
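
For example, containers can join an existing enterprise VLAN directly by binding a macvlan network to an 802.1Q sub-interface (the VLAN ID, interface, and addressing below are illustrative):

# Create an 802.1Q sub-interface for VLAN 100 on the physical NIC
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 up

# Attach containers to the VLAN with near-native performance
docker network create -d macvlan \
  --subnet=10.40.100.0/24 \
  --gateway=10.40.100.1 \
  -o parent=eth0.100 \
  enterprise-vlan100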

Hybrid and Multi-Cloud Networking

Extending Docker networks across multiple environments creates unique challenges and solutions:

VPN Connectivity Between Environments

  • Connect container networks across clouds and data centers
  • Implement secure VPN tunnels for container traffic
  • Configure routing between different container networks
  • Enable transparent communication across environments
  • Example with WireGuard VPN connecting Docker networks:
    # Set up WireGuard on Docker host 1 (10.0.1.0/24)
    docker run -d --name wireguard \
      --cap-add=NET_ADMIN \
      --cap-add=SYS_MODULE \
      -e PUID=1000 -e PGID=1000 \
      -e SERVERURL=host1.example.com \
      -e PEERS=1 \
      -p 51820:51820/udp \
      -v ./wireguard:/config \
      --sysctl net.ipv4.ip_forward=1 \
      --sysctl net.ipv4.conf.all.src_valid_mark=1 \
      linuxserver/wireguard
    
    # Set up WireGuard on Docker host 2 (10.0.2.0/24)
    # (with peer configuration from host 1)
    
    # Add routes for container networks
    ip route add 10.0.2.0/24 via 10.9.0.2
    

Cloud-Native Networking Extensions

  • Integrate with AWS VPC, Azure VNET, and GCP VPC
  • Use cloud-native load balancers and gateways
  • Implement cloud-specific security controls
  • Leverage managed networking services
  • Example with AWS VPC integration:
    # Create Docker network using AWS VPC
    docker network create --driver=bridge \
      --subnet=172.31.0.0/16 \
      --opt com.docker.network.bridge.enable_icc=true \
      aws-integrated-network
    
    # Configure AWS security groups for container traffic
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef \
      --protocol tcp \
      --port 8080 \
      --cidr 172.31.0.0/16
    
    # Use AWS Transit Gateway to connect multiple VPCs
    # with Docker networks in different regions
    aws ec2 create-transit-gateway-vpc-attachment \
      --transit-gateway-id tgw-0123456789abcdef \
      --vpc-id vpc-0123456789abcdef \
      --subnet-ids subnet-0123456789abcdef
    

Multi-Region Service Mesh

  • Deploy service mesh across multiple regions/clouds
  • Implement cross-region service discovery
  • Configure traffic routing with latency awareness
  • Enable global load balancing for container services
  • Example with Istio multi-cluster setup:
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: region1-istiocontrolplane
    spec:
      # Global mesh network configuration
      meshConfig:
        accessLogFile: /dev/stdout
        enableTracing: true
      # Multi-primary configuration
      values:
        global:
          meshID: mesh1
          multiCluster:
            clusterName: region1
          network: network1
    

Hybrid Cloud Container Networking

  • Connect on-premises container environments to cloud
  • Implement consistent networking across environments
  • Configure traffic prioritization for hybrid connectivity
  • Enable disaster recovery between environments
  • Example hybrid cloud network configuration:
    # On-premises Docker host
    # Create overlay network with VXLAN ID
    docker network create \
      --driver overlay \
      --subnet=10.10.0.0/16 \
      --opt encrypted=true \
      --opt com.docker.network.driver.overlay.vxlanid_list=4097 \
      hybrid-overlay
    
    # Cloud Docker host (with VPN connection to on-premises)
    # Create compatible overlay network
    docker network create \
      --driver overlay \
      --subnet=10.20.0.0/16 \
      --opt encrypted=true \
      --opt com.docker.network.driver.overlay.vxlanid_list=4097 \
      hybrid-overlay
    
    # Configure routing between overlay networks
    ip route add 10.10.0.0/16 via $VPN_GATEWAY
    

Network Automation and Orchestration

Automating network configuration and management is essential for scalable container deployments:

  1. Infrastructure as Code for Networks
    • Define container networks as code
    • Version control network configurations
    • Implement CI/CD for network changes
    • Test network configurations before deployment
    • Example with Terraform for Docker networks:
      # Terraform configuration for Docker networks
      resource "docker_network" "frontend_network" {
        name       = "frontend"
        driver     = "bridge"
        internal   = false
        ipam_config {
          subnet   = "172.28.0.0/24"
          gateway  = "172.28.0.1"
        }
        options = {
          "com.docker.network.bridge.enable_icc" = "true"
          "com.docker.network.bridge.enable_ip_masquerade" = "true"
        }
      }
      
      resource "docker_network" "backend_network" {
        name       = "backend"
        driver     = "bridge"
        internal   = true
        ipam_config {
          subnet   = "172.28.1.0/24"
          gateway  = "172.28.1.1"
        }
      }
      
  2. Network Observability
    • Implement network monitoring for container traffic
    • Collect metrics on bandwidth, latency, and errors
    • Create dashboards for network performance
    • Set up alerts for network anomalies
    • Example with Prometheus and Grafana (a minimal prometheus.yml sketch follows this list):
      version: '3.8'
      
      services:
        prometheus:
          image: prom/prometheus:latest
          volumes:
            - ./prometheus.yml:/etc/prometheus/prometheus.yml
          ports:
            - "9090:9090"
        
        grafana:
          image: grafana/grafana:latest
          volumes:
            - ./grafana-provisioning:/etc/grafana/provisioning
          ports:
            - "3000:3000"
        
        cadvisor:
          image: gcr.io/cadvisor/cadvisor:latest
          volumes:
            - /:/rootfs:ro
            - /var/run:/var/run:ro
            - /sys:/sys:ro
            - /var/lib/docker/:/var/lib/docker:ro
          ports:
            - "8080:8080"
        
        node-exporter:
          image: prom/node-exporter:latest
          ports:
            - "9100:9100"
          command:
            - "--path.procfs=/host/proc"
            - "--path.sysfs=/host/sys"
            - "--collector.netdev"
            - "--collector.netstat"
          volumes:
            - /proc:/host/proc:ro
            - /sys:/host/sys:ro
      
  3. Network Policy Automation
    • Generate network policies automatically
    • Implement intelligent traffic analysis
    • Apply machine learning for policy optimization
    • Create self-adjusting network configurations
    • Example with network policy generation:
      #!/bin/bash
      # Analyze container communications and generate network policies
      
      # Docker network to analyze (first script argument, defaults to bridge)
      NETWORK=${1:-bridge}
      
      # Get all running containers
      CONTAINERS=$(docker ps --format "{{.Names}}")
      
      # Create connections map
      for CONTAINER in $CONTAINERS; do
        echo "Analyzing connections for $CONTAINER"
        
        # Capture the container's established connections (remote address:port)
        CONNECTIONS=$(docker exec "$CONTAINER" netstat -tn | grep ESTABLISHED | awk '{print $5}')
        
        # Generate allowed connections list
        for CONN in $CONNECTIONS; do
          IP=$(echo "$CONN" | cut -d: -f1)
          PORT=$(echo "$CONN" | cut -d: -f2)
          
          # Find the container that owns this IP on the target network
          TARGET=$(docker ps --format "{{.Names}}" --filter "network=$NETWORK" | \
                   xargs -I{} docker inspect --format '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' {} | \
                   grep "$IP" | awk '{print $1}')
          
          if [ -n "$TARGET" ]; then
            echo "Connection from $CONTAINER to $TARGET:$PORT detected"
            echo "Generating network policy..."
            # Generate policy based on observed connections
          fi
        done
      done
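
The compose file in pattern 2 mounts a prometheus.yml that defines the scrape targets. A minimal sketch matching the services above (job names and interval are illustrative):

global:
  scrape_interval: 15s

scrape_configs:
  # Container-level metrics, including per-container network counters
  - job_name: cadvisor
    static_configs:
      - targets: ['cadvisor:8080']

  # Host-level metrics, including the netdev/netstat collectors enabled above
  - job_name: node-exporter
    static_configs:
      - targets: ['node-exporter:9100']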
      

Emerging Trends in Container Networking

The container networking landscape continues to evolve with several emerging trends:

  1. eBPF for Container Networking
    • Leverage eBPF for high-performance packet processing
    • Implement programmable networking directly in the kernel
    • Create custom load balancing and routing logic
    • Enable advanced observability without performance overhead
    • Example with Cilium using eBPF:
      apiVersion: cilium.io/v2
      kind: CiliumNetworkPolicy
      metadata:
        name: api-service-policy
      spec:
        endpointSelector:
          matchLabels:
            app: api-service
        ingress:
        - fromEndpoints:
          - matchLabels:
              app: frontend
          toPorts:
          - ports:
            - port: "8080"
              protocol: TCP
            rules:
              http:
              - method: "GET"
                path: "/api/v1/.*"
      
  2. Serverless Networking Models
    • Implement event-driven network configurations
    • Create auto-scaling network capacity
    • Enable per-request network isolation
    • Apply dynamic security policies based on function identity
    • Example architecture for serverless networking:
      Client Request → API Gateway → Function Instance
                                    ↓
      Network Policy Enforcer → Dynamic Network Attachment
                                ↓
      Ephemeral Network Namespace with Just-in-Time Policies
      
  3. 5G and Edge Computing Integration
    • Connect container networks to 5G infrastructure
    • Implement network slicing for container workloads
    • Optimize for ultra-low latency communications
    • Enable location-aware container deployments
    • Example 5G-integrated container deployment:
      version: '3.8'
      
      services:
        edge-application:
          image: edge-app:latest
          networks:
            - edge-net
          deploy:
            placement:
              constraints:
                - node.labels.mec-zone == "zone1"
          labels:
            - "5g.network.slice=urllc"
            - "5g.qos.class=guaranteed"
            - "edge.latency.max=10ms"
      
      networks:
        edge-net:
          driver: macvlan
          driver_opts:
            parent: eth0
            mtu: 9000
          ipam:
            config:
              - subnet: 10.200.0.0/24
                gateway: 10.200.0.1
                ip_range: 10.200.0.128/25
      

Docker networking has evolved from simple container connectivity to a sophisticated ecosystem of networking technologies that can support the most demanding enterprise requirements. By understanding and implementing these advanced networking patterns, organizations can build container infrastructures that are secure, performant, and scalable across diverse environments.

The future of container networking will continue to evolve toward more programmable, automated, and intelligent networking capabilities, driven by the growing demands of distributed, cloud-native applications and edge computing workloads.