
Multi-Architecture Builds

Learn how to create and distribute Docker images that run on multiple hardware architectures

Docker Multi-Architecture Builds

Multi-architecture (multi-arch) builds allow you to create Docker images that can run on different hardware architectures, enabling true "build once, run anywhere" capabilities across diverse environments. Instead of maintaining separate image repositories for each architecture, multi-arch images provide a seamless experience where the container runtime automatically selects the appropriate image variant for the host architecture.

Why Multi-Architecture Matters

Cross-Platform Compatibility

  • Support for x86_64 (Intel/AMD): Traditional server and desktop architecture, widely used in datacenters and enterprise environments
  • ARM64 support (Apple Silicon, AWS Graviton): Growing in popularity due to performance and power efficiency; critical for macOS development and ARM-based cloud instances
  • 32-bit ARM support (IoT devices): Essential for edge computing, Raspberry Pi, and embedded systems
  • IBM Power and s390x architectures: Used in enterprise mainframe environments with specific performance characteristics
  • Single image reference for all platforms: Users can pull the same image tag regardless of their hardware platform

Cloud Flexibility

  • Run containers on cost-effective ARM instances: ARM-based instances like AWS Graviton can offer up to 40% better price-performance ratio
  • Seamless transition between cloud providers: Switch between providers or architectures without changing deployment configurations
  • No architecture-specific image tags needed: Eliminates confusion and simplifies CI/CD pipelines
  • Consistent development experience: Developers can work locally on x86 or ARM machines while ensuring production compatibility
  • Future-proof container strategy: Ready for emerging architectures without rebuilding your container infrastructure

BuildKit and Multi-Architecture Support

BuildKit is Docker's next-generation build system that enables advanced features like multi-architecture builds, enhanced caching, and parallel building. It's the foundation for buildx, which is Docker's CLI plugin for building multi-architecture images.

# Enable BuildKit (if not already enabled)
export DOCKER_BUILDKIT=1
# Or enable it for a single command only
DOCKER_BUILDKIT=1 docker build .

# Check available buildx builders
docker buildx ls
# Output shows your available builders and supported platforms

# Create a new builder instance with enhanced capabilities
docker buildx create --name mybuilder --use
# This creates a new builder that can build for multiple platforms

# Inspect available platforms and bootstrap the builder
docker buildx inspect --bootstrap
# Shows all architectures this builder supports (typically includes linux/amd64, linux/arm64, linux/arm/v7)

The --bootstrap flag initializes the builder, preparing it to build for all supported platforms. BuildKit uses either QEMU emulation or native builds to create images for different architectures.
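Before relying on emulated builds, it is worth checking whether QEMU handlers are actually registered on the host. A minimal sketch (Linux-only, since it reads the binfmt_misc registrations under /proc):

```shell
#!/bin/sh
# List the QEMU binfmt handlers registered on this host; these registrations
# are what let BuildKit emulate foreign architectures during a build.
qemu_handlers() {
  for f in /proc/sys/fs/binfmt_misc/qemu-*; do
    [ -e "$f" ] && basename "$f"
  done
  return 0
}

handlers=$(qemu_handlers)
if [ -n "$handlers" ]; then
  echo "QEMU emulation available for: $handlers"
else
  echo "No QEMU handlers registered; register them once per host with:"
  echo "  docker run --privileged --rm tonistiigi/binfmt --install all"
fi
```

If the list is empty, the tonistiigi/binfmt registration command shown above (and used later in this guide) installs handlers for all common architectures.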

Creating Multi-Architecture Images

Building with Docker Compose

Docker Compose can also be used to build multi-architecture images when used with Buildx. This is particularly useful for applications with multiple services:

# docker-compose.yml with platform support
version: '3.8'
services:
  app:
    build:
      context: .
      platforms:
        - linux/amd64  # Intel/AMD 64-bit
        - linux/arm64  # ARM 64-bit (Apple Silicon, Graviton)
    image: username/myapp:latest

# To build and push with docker-compose:
# DOCKER_BUILDKIT=1 docker-compose build
# docker-compose push

When using this approach, you need to:

  1. Ensure BuildKit is enabled
  2. Build the images with docker-compose build
  3. Push the images with docker-compose push
  4. For complex multi-service applications, you can specify different platform requirements per service

The Compose file can also include platform-specific build arguments or configurations if needed.

Manifest Lists

Manifest lists (also called "fat manifests") are the underlying mechanism that enables multi-architecture support. They act as pointers to architecture-specific image variants.

Creating Manifest Lists Manually

# Build architecture-specific images with separate tags
docker build -t username/myapp:amd64 --platform linux/amd64 .
docker build -t username/myapp:arm64 --platform linux/arm64 .

# Create a manifest list that references both architecture variants
docker manifest create username/myapp:latest \
  username/myapp:amd64 \
  username/myapp:arm64

# Push the manifest list to the registry
docker manifest push username/myapp:latest

This manual approach gives you fine-grained control when:

  • You need to build images separately for each architecture
  • Different architectures require different build processes
  • You want to test architecture-specific images before creating the manifest
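The per-architecture build and manifest steps above can be driven from one small script. A sketch, with the image name and architecture list as placeholders; DRY_RUN=echo (the default here) prints each docker command instead of running it, so you can preview the sequence without a daemon:

```shell
#!/bin/sh
# Build one image per architecture, then stitch the variants into a manifest
# list. Set DRY_RUN="" to actually execute against a Docker daemon.
DRY_RUN=${DRY_RUN:-echo}
IMAGE=username/myapp
ARCHES="amd64 arm64"

tags=""
for arch in $ARCHES; do
  $DRY_RUN docker build -t "$IMAGE:$arch" --platform "linux/$arch" .
  tags="$tags $IMAGE:$arch"
done

# Create the manifest list referencing every variant, then push it
$DRY_RUN docker manifest create "$IMAGE:latest" $tags
$DRY_RUN docker manifest push "$IMAGE:latest"
```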

Inspecting Manifests

# View manifest details including all architecture variants
docker manifest inspect username/myapp:latest

# Output shows details like:
# - Supported architectures and OS
# - Digest (content hash) for each variant
# - Size of each variant
# - Platform-specific annotations

The inspect command is valuable for verifying that your manifest includes all expected architectures and for debugging any issues with architecture-specific variants.
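One common verification is extracting just the architecture list from the inspect output. A sketch using plain grep/sed (the inlined JSON is an abridged, hypothetical sample of the manifest-list format; in practice you would capture `docker manifest inspect username/myapp:latest` instead):

```shell
#!/bin/sh
# Pull the architecture names out of `docker manifest inspect` output
# without extra tooling. Abridged sample manifest for illustration:
manifest='{
  "manifests": [
    { "platform": { "architecture": "amd64", "os": "linux" } },
    { "platform": { "architecture": "arm64", "os": "linux" } }
  ]
}'
# In practice: manifest=$(docker manifest inspect username/myapp:latest)

archs=$(printf '%s\n' "$manifest" \
  | grep -o '"architecture": *"[^"]*"' \
  | sed 's/.*"\([^"]*\)"/\1/')
echo "$archs"
```

Comparing this list against your expected platforms is a quick sanity check after every push.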

Base Image Considerations

Cross-Compilation vs. QEMU Emulation

Docker offers two main approaches for building multi-architecture images, each with different trade-offs:

# Building with QEMU emulation (simpler but slower)
# Requires: docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx build --platform linux/arm64 \
  -t username/myapp:arm64 \
  --load .

QEMU emulation:

  • ✅ Works with any Dockerfile without modifications
  • ✅ Simpler to set up and use
  • ✅ Compatible with most build processes
  • ❌ Significantly slower (5-10x) than native builds
  • ❌ May have compatibility issues with some system calls

# Cross-compilation (faster but more complex)
# Example for Go application with native cross-compilation
FROM --platform=$BUILDPLATFORM golang:1.18 AS builder
ARG TARGETPLATFORM
ARG BUILDPLATFORM
WORKDIR /app
COPY . .
RUN echo "Building on $BUILDPLATFORM for $TARGETPLATFORM" && \
    case "$TARGETPLATFORM" in \
      "linux/amd64") GOARCH=amd64 ;; \
      "linux/arm64") GOARCH=arm64 ;; \
      "linux/arm/v7") GOARCH=arm ;; \
      *) echo "Unsupported platform: $TARGETPLATFORM" && exit 1 ;; \
    esac && \
    CGO_ENABLED=0 GOOS=linux GOARCH=$GOARCH go build -o app .

FROM alpine:3.16
COPY --from=builder /app/app /app
ENTRYPOINT ["/app"]

Cross-compilation:

  • ✅ Much faster builds (near-native speed)
  • ✅ No emulation overhead
  • ✅ Better for large applications
  • ❌ Requires language/toolchain support for cross-compilation
  • ❌ More complex Dockerfile with multi-stage builds
  • ❌ May require platform-specific code paths

The example above demonstrates:

  1. Using $BUILDPLATFORM - the architecture where the build runs
  2. Using $TARGETPLATFORM - the architecture for which we're building
  3. Multi-stage build to keep final image size small
  4. Platform-specific compilation flags
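The platform-to-GOARCH mapping in the Dockerfile can also be kept as a standalone helper, which makes it reusable across Dockerfiles and testable outside the build. A sketch mirroring the case statement above (GOARM=7 added for 32-bit ARM, following Go's cross-compilation convention):

```shell
#!/bin/sh
# Map a BuildKit TARGETPLATFORM value to the matching Go cross-compilation
# settings, mirroring the case statement in the Dockerfile above.
goarch_for() {
  case "$1" in
    linux/amd64)  echo "GOARCH=amd64" ;;
    linux/arm64)  echo "GOARCH=arm64" ;;
    linux/arm/v7) echo "GOARCH=arm GOARM=7" ;;
    *) echo "unsupported platform: $1" >&2; return 1 ;;
  esac
}

goarch_for linux/arm64   # prints: GOARCH=arm64
```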

CI/CD Integration for Multi-Architecture Builds

Integrating multi-architecture builds into CI/CD pipelines ensures consistent image creation and distribution across different platforms.

GitHub Actions Example

name: Build Multi-Arch Image

on:
  push:
    branches: [ main ]
    # Optionally trigger on tags for releases
    tags:
      - 'v*'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
        # This installs QEMU static binaries for multi-architecture builds
        
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        # Creates a new builder instance with multi-architecture support
        
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
          # Authenticate to enable pushing to Docker Hub
          
      - name: Extract metadata for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: username/myapp
          # Automatically generate tags based on branches and version tags
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=ref,event=branch
          
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          platforms: linux/amd64,linux/arm64  # Specify target architectures
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=registry,ref=username/myapp:buildcache
          cache-to: type=registry,ref=username/myapp:buildcache,mode=max
          # Caching improves build performance for frequent builds

The GitHub Actions example includes:

  • Automatic tag generation based on Git tags and branches
  • Build caching to speed up repeated builds
  • QEMU setup for architecture emulation
  • Registry authentication for pushing images

GitLab CI Example

build-multi-arch:
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind  # Docker-in-Docker service
  variables:
    DOCKER_BUILDKIT: 1
    DOCKER_TLS_CERTDIR: "/certs"  # Enable TLS for Docker-in-Docker
    BUILDX_VERSION: "0.9.1"       # Specify buildx version
  before_script:
    # Login to registry
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    
    # Setup buildx with multi-architecture support
    - mkdir -p ~/.docker/cli-plugins
    - wget -O ~/.docker/cli-plugins/docker-buildx https://github.com/docker/buildx/releases/download/v${BUILDX_VERSION}/buildx-v${BUILDX_VERSION}.linux-amd64
    - chmod +x ~/.docker/cli-plugins/docker-buildx
    - docker context create builder-context
    - docker buildx create --name mybuilder --use builder-context
    - docker buildx inspect --bootstrap
    
    # Set tag based on CI_COMMIT_REF_NAME
    - |
      if [[ $CI_COMMIT_TAG ]]; then
        export IMAGE_TAG=$CI_COMMIT_TAG
      else
        export IMAGE_TAG=$CI_COMMIT_REF_NAME
      fi
  script:
    - docker buildx build --platform linux/amd64,linux/arm64 
      -t $CI_REGISTRY_IMAGE:$IMAGE_TAG 
      -t $CI_REGISTRY_IMAGE:latest 
      --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') 
      --build-arg VCS_REF=$CI_COMMIT_SHA 
      --push .
  # Run on both branch pushes and tags
  rules:
    - if: $CI_COMMIT_BRANCH || $CI_COMMIT_TAG

The GitLab CI example includes:

  • Manual setup of buildx (for fine-grained control)
  • Tag generation based on Git branches and tags
  • Build arguments for image metadata
  • TLS security for Docker-in-Docker service
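The tag-selection logic in before_script is easy to factor out and verify on its own. A sketch using the same CI variables as inputs:

```shell
#!/bin/sh
# Choose the image tag the way the before_script above does:
# prefer CI_COMMIT_TAG when present, otherwise use the branch name.
image_tag() {
  tag="$1"   # CI_COMMIT_TAG (empty on branch pipelines)
  ref="$2"   # CI_COMMIT_REF_NAME
  if [ -n "$tag" ]; then
    echo "$tag"
  else
    echo "$ref"
  fi
}

image_tag "v2.0.1" "main"   # prints: v2.0.1
image_tag "" "feature-x"    # prints: feature-x
```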

Testing Multi-Architecture Images

Testing multi-architecture images is crucial to ensure they work correctly on all target platforms. Docker allows you to emulate different architectures for testing purposes:

# Test arm64 image on amd64 machine using emulation
docker run --platform linux/arm64 username/myapp:latest
# This forces Docker to use the ARM64 variant, even on an x86_64 host

# Check architecture inside container to verify emulation
docker run --platform linux/arm64 username/myapp:latest uname -m
# Should output: aarch64 (ARM64 architecture)

# Run architecture-specific tests
docker run --platform linux/arm64 username/myapp:latest ./run-tests.sh

# Performance testing (note: emulation will be slower than native execution)
docker run --platform linux/arm64 username/myapp:latest benchmark

# Test with different architectures to verify all variants
for arch in linux/amd64 linux/arm64 linux/arm/v7; do
  echo "Testing $arch..."
  docker run --platform $arch username/myapp:latest ./verify-platform.sh
done

Remember that testing under emulation has limitations:

  1. Performance will be significantly slower than native execution
  2. Some architecture-specific issues might only appear on real hardware
  3. System calls and hardware-specific features may behave differently
  4. Memory usage patterns might vary between emulated and native environments

For critical applications, consider testing on actual hardware of each target architecture in addition to emulation testing.
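When scripting the emulation tests above, it helps to assert the machine type each variant reports. The mapping between Docker platform strings and `uname -m` output can live in a small helper; a sketch:

```shell
#!/bin/sh
# Expected `uname -m` output for each Docker platform string, for use in a
# verification loop such as:
#   [ "$(docker run --platform "$p" img uname -m)" = "$(expected_machine "$p")" ]
expected_machine() {
  case "$1" in
    linux/amd64)  echo x86_64 ;;
    linux/arm64)  echo aarch64 ;;
    linux/arm/v7) echo armv7l ;;
    *) return 1 ;;
  esac
}
```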

Best Practices

Advanced Techniques

Architecture-Specific Optimizations

FROM --platform=$TARGETPLATFORM python:3.10-slim

# TARGETARCH (amd64, arm64, arm, ...) is provided by BuildKit, but must be
# redeclared after FROM before this stage can use it
ARG TARGETARCH

# Install architecture-specific optimizations
RUN case "$(uname -m)" in \
      "x86_64") \
        apt-get update && apt-get install -y --no-install-recommends \
        libjemalloc2 && \
        # Use jemalloc memory allocator for better performance on x86_64
        echo "/usr/lib/x86_64-linux-gnu/libjemalloc.so.2" > /etc/ld.so.preload \
        ;; \
      "aarch64") \
        apt-get update && apt-get install -y --no-install-recommends \
        # libatomic1 is needed for some operations on ARM64
        libatomic1 \
        # ARM-specific optimizations
        && apt-get install -y --no-install-recommends libneon27 \
        ;; \
      "armv7l") \
        # 32-bit ARM specific packages
        apt-get update && apt-get install -y --no-install-recommends \
        libatomic1 libarmmem-${TARGETARCH} \
        ;; \
      *) \
        echo "Architecture $(uname -m) not explicitly optimized, using defaults" \
        ;; \
    esac

# Continue with common setup
COPY requirements.txt .
RUN pip install -r requirements.txt

This Dockerfile demonstrates:

  • Architecture detection using uname -m
  • Installation of architecture-specific performance libraries
  • Fallback for unsupported architectures
  • Memory allocator optimizations for x86_64
  • ARM-specific libraries for better performance

Platform-Specific Builds

# docker-compose.override.yml
services:
  app:
    build:
      args:
        - TARGETARCH=${TARGETARCH:-amd64}
        # Default to amd64 if not specified
        
        # Additional platform-specific build args
        - EXTRA_FEATURES=${EXTRA_FEATURES:-}
        # Can be set differently per architecture in CI/CD
    
    # Conditionally apply platform-specific volumes or configurations
    volumes:
      - ${PLATFORM_SPECIFIC_VOLUME:-/tmp}:/opt/platform-specific

Platform-specific builds can also leverage:

#!/bin/bash
# Build script to generate platform-specific configurations
TARGETARCH=${TARGETARCH:-$(uname -m)}

case "$TARGETARCH" in
  "x86_64"|"amd64")
    # Generate x86-specific configuration
    echo "Generating x86_64 optimized configuration"
    cat base-config.json | jq '.optimizations.simd = true' > config.json
    ;;
  "aarch64"|"arm64")
    # Generate ARM64-specific configuration
    echo "Generating ARM64 optimized configuration"
    cat base-config.json | jq '.optimizations.neon = true' > config.json
    ;;
esac

# Run platform-specific build steps
docker-compose build

This approach allows you to:

  1. Generate platform-specific configuration files before building
  2. Pass different build arguments based on target architecture
  3. Apply conditional logic outside the Dockerfile
  4. Create customized deployments for each platform

Troubleshooting