Docker is a key component in modern CI/CD pipelines, enabling consistent builds, efficient testing, and reliable deployments across environments. By containerizing applications, Docker provides a consistent environment throughout the development lifecycle, from a developer's local machine through testing and production deployment.
Docker's role in CI/CD includes:
- Providing isolated, reproducible build environments
- Standardizing application packaging across environments
- Enabling efficient distribution of application images
- Facilitating immutable infrastructure patterns
- Supporting microservices architectures and scalable deployments

**Same environment across development, testing, and production**
- An identical runtime environment at every stage eliminates environment-specific bugs
- Container images contain all dependencies, libraries, and configurations
- The "build once, deploy anywhere" philosophy becomes a practical reality
- Configuration differences are managed through environment variables or config files

**Elimination of "works on my machine" problems**
- Containers package all dependencies and runtime environments together
- Developers, testers, and ops teams work with identical containers
- System-level dependencies are encapsulated within the container
- Container orchestration ensures consistent deployment behavior

**Reproducible builds and tests**
- Containerized build processes provide isolated, consistent build environments
- Build artifacts depend only on the source code, not the build machine
- Deterministic builds produce identical outputs from identical inputs
- Automated testing runs in containers matching the production environment

**Deterministic deployments**
- Immutable container images prevent drift between environments
- Container orchestration platforms enforce the desired state
- Infrastructure-as-Code practices ensure consistent infrastructure
- Declarative configuration reduces manual intervention and errors

**Version-controlled infrastructure**
- Dockerfiles and Compose files are stored in version control
- Image tags provide clear versioning and traceability
- Container configurations are managed alongside application code
- A complete audit trail of environment changes

**Cached layers for faster builds**
- Docker's layer caching dramatically accelerates repeated builds
- Only changed layers need to be rebuilt
- Dockerfiles can be optimized for efficient caching
- BuildKit processes independent stages of multi-stage builds in parallel

**Parallel testing in containers**
- Run multiple test suites simultaneously in isolated containers
- Matrix testing across different configurations or versions
- Resource-efficient test environments through containerization
- Ephemeral test environments spin up and tear down quickly

**Efficient image distribution**
- Only changed layers are transferred when updating images
- Distributed registries improve pull performance
- Layer deduplication reduces storage requirements
- Content-addressable storage ensures integrity

**Rapid deployment with container orchestration**
- Container orchestrators automate deployment processes
- Rolling updates minimize downtime
- Blue-green and canary deployment patterns
- Self-healing capabilities automatically recover from failures

**Fast rollbacks when needed**
- Immutable images enable instant rollbacks to previous versions
- Version tagging provides clear rollback targets
- Registry history retains all previous versions
- Orchestration platforms support automated rollback on failure

**Isolated build environments**
- Each build runs in a clean, isolated container
- No interference between concurrent builds
- Environment variables control build-specific parameters
- Builds are reproducible regardless of the underlying CI server

**Separate test environments**
- Tests run in isolated containers that match production
- Test data isolation prevents cross-test contamination
- Parallel test execution without interference
- Resource limits prevent noisy-neighbor problems

**Dependency isolation**
- Application dependencies are contained within images
- No conflicts between applications requiring different versions
- Explicit dependency declaration in Dockerfiles
- Containers can use different versions of the same dependency

**Environment-specific configurations**
- Environment variables are injected at runtime
- Config files are mounted or included per environment
- Secrets management integrates with container platforms
- The same image runs in different environments with different configuration

**Clean state for each build**
- Every build and test starts from a known clean state
- No leftover artifacts from previous builds
- Ephemeral build environments prevent state accumulation
- Clear separation between persistent data and application code

A typical Docker CI/CD pipeline includes:
1. **Source code checkout**
   - Clone the repository from the version control system
   - Fetch dependencies and submodules
   - Retrieve build configuration files
   - Apply branching strategies (feature branches, release branches)
   - Validate source code integrity
2. **Image building**
   - Execute the Docker build process using the Dockerfile
   - Implement multi-stage builds for optimization
   - Apply appropriate tags based on branch, commit, or version
   - Leverage BuildKit and layer caching for efficiency
   - Build platform-specific or multi-architecture images
3. **Automated testing**
   - Run unit tests within containers
   - Execute integration tests with containerized dependencies
   - Perform end-to-end testing with complete container stacks
   - Validate image functionality with container-specific tests
   - Conduct performance and load testing in isolated environments
4. **Security scanning**
   - Scan images for known vulnerabilities (CVEs)
   - Check for sensitive data and secrets
   - Validate image configuration against security best practices
   - Verify image provenance and integrity
   - Ensure compliance with security policies
5. **Image registry storage**
   - Push verified images to a container registry
   - Apply appropriate tags and metadata
   - Sign images for authenticity verification
   - Implement registry access controls
   - Configure image retention policies
6. **Deployment to environments**
   - Deploy to development, staging, and production environments
   - Implement container orchestration (Kubernetes, Swarm)
   - Execute deployment strategies (rolling, blue/green, canary)
   - Apply environment-specific configurations
   - Validate successful deployment
7. **Monitoring and verification**
   - Monitor container health and performance
   - Verify functionality through smoke tests
   - Implement observability (logs, metrics, traces)
   - Track deployment success metrics
   - Trigger automated rollback if necessary
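The image-building stage typically relies on a multi-stage Dockerfile so that build and test tooling never reaches the runtime image. A minimal sketch for a Node.js application — the stage names, paths, and `dist/server.js` entry point are illustrative, not prescribed by any of the pipelines below:

```dockerfile
# Build stage: install dependencies and compile
FROM node:18-alpine AS build
WORKDIR /app
# Copy manifests first so the dependency layer caches independently of source edits
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Test stage: CI can target this with `docker build --target test`
FROM build AS test
CMD ["npm", "test"]

# Runtime stage: only built artifacts and production dependencies
FROM node:18-alpine AS runtime
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
```

CI pipelines can then build the `test` target for the testing stage and the final `runtime` target for the image that gets pushed and deployed.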
An example GitHub Actions workflow implementing these stages:

```yaml
name: Docker CI/CD
on:
  push:
    branches: [main]
    # Optionally trigger on tags for releases
    tags: ['v*.*.*']
  pull_request:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          # Fetch all history for proper versioning
          fetch-depth: 0
      # Extract metadata for Docker
      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: username/app
          # Generate tags based on branch, git tag, and commit SHA
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha,format=short
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          # Host networking can speed up builds on some runners
          driver-opts: network=host
      - name: Login to Docker Hub
        # Only run this step when pushing to the registry (not on PRs)
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
          # Optionally add logout: true for security
      # Cache Docker layers to speed up builds
      - name: Cache Docker layers
        uses: actions/cache@v3
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      # Optional: run tests before building
      - name: Run tests
        run: |
          docker build -t username/app:test --target test .
          docker run username/app:test
      # Scan for vulnerabilities before pushing
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'username/app:test'
          format: 'table'
          exit-code: '1'
          ignore-unfixed: true
          severity: 'CRITICAL,HIGH'
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          # Only push if not a PR and tests/scans passed
          push: ${{ github.event_name != 'pull_request' }}
          # Use metadata for intelligent tagging
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          # Use a registry cache for efficient builds
          cache-from: type=registry,ref=username/app:buildcache
          cache-to: type=registry,ref=username/app:buildcache,mode=max
          # Build arguments if needed. Note: shell substitutions such as
          # $(date ...) are NOT evaluated here; compute dates in an earlier
          # step and pass them through an output or environment variable.
          build-args: |
            BUILD_VERSION=${{ github.ref_name }}
          # Build multi-architecture images with BuildKit
          platforms: linux/amd64,linux/arm64
          # Add provenance attestation
          provenance: true
      # Deploy to staging after a successful build and push
      - name: Deploy to staging
        if: github.event_name != 'pull_request' && github.ref == 'refs/heads/main'
        run: |
          echo "Deploying to staging environment"
          # Use kubectl, helm, or another deployment tool, e.g.:
          # kubectl set image deployment/app-deployment app=username/app:${{ github.sha }}
```
An equivalent pipeline in GitLab CI (`.gitlab-ci.yml`):

```yaml
# Define all pipeline stages
stages:
  - build
  - test
  - security
  - deploy
  - verify

# Global variables used across jobs
variables:
  # Use the commit SHA for unique image tagging
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  # Tag with semver if a tag is pushed
  RELEASE_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  # Use a Docker Hub mirror to avoid rate limits
  DOCKER_HUB_MIRROR: https://mirror.gcr.io
  # Enable Docker BuildKit for efficient builds
  DOCKER_BUILDKIT: 1

# Build stage: compile and package the application
build:
  stage: build
  # Use Docker-in-Docker to build images
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  before_script:
    # Install dependencies if needed
    - apk add --no-cache git curl jq
    # Compute custom build arguments here: command substitution is not
    # evaluated inside the `variables:` section
    - export BUILD_ARGS="--build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') --build-arg VCS_REF=$CI_COMMIT_SHORT_SHA"
    # Set up Docker Buildx for multi-platform builds
    - docker buildx create --use
    # Login to the GitLab Container Registry
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    # Cache from previously built images
    - |
      CACHE_FROM=""
      if docker pull $CI_REGISTRY_IMAGE:latest &>/dev/null; then
        CACHE_FROM="--cache-from $CI_REGISTRY_IMAGE:latest"
      fi
    # Build the application image with metadata
    - >
      docker build
      $CACHE_FROM
      $BUILD_ARGS
      --build-arg CI_PIPELINE_ID=$CI_PIPELINE_ID
      --label org.opencontainers.image.created="$(date -u +'%Y-%m-%dT%H:%M:%SZ')"
      --label org.opencontainers.image.revision="$CI_COMMIT_SHA"
      --label org.opencontainers.image.version="${CI_COMMIT_TAG:-$CI_COMMIT_SHORT_SHA}"
      -t $DOCKER_IMAGE
      -t $CI_REGISTRY_IMAGE:latest
      .
    # Push both tags to the registry
    - docker push $DOCKER_IMAGE
    - docker push $CI_REGISTRY_IMAGE:latest
    # Tag with the version if this is a tag build
    - |
      if [ -n "$CI_COMMIT_TAG" ]; then
        docker tag $DOCKER_IMAGE $RELEASE_IMAGE
        docker push $RELEASE_IMAGE
      fi
    # Record image metadata to pass to later stages
    - echo "{\"image\":\"$DOCKER_IMAGE\"}" > docker-image-info.json
  artifacts:
    paths:
      - docker-image-info.json
    expire_in: 1 week
  # Cache dependencies to speed up future builds
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
      - .npm/
  only:
    - main
    - tags
    - merge_requests

# Testing stage: run automated tests
test:
  stage: test
  image: $DOCKER_IMAGE
  # Define test environment variables
  variables:
    NODE_ENV: test
    # Use an in-memory database for tests
    DATABASE_URL: "sqlite://:memory:"
  before_script:
    # Prepare the test environment
    - echo "Setting up test environment"
    - npm install --only=dev
  script:
    # Run unit tests
    - npm run test:unit
    # Run integration tests
    - npm run test:integration
    # Skip end-to-end tests on merge requests (for speed)
    - '[ "$CI_PIPELINE_SOURCE" = "merge_request_event" ] || npm run test:e2e'
    # Generate a code coverage report
    - npm run coverage
  # Publish test reports as artifacts
  artifacts:
    reports:
      junit: junit-*.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
    paths:
      - coverage/
  coverage: '/Statements\s+:\s+(\d+.?\d*)%/'
  only:
    - main
    - tags
    - merge_requests

# Security scanning stage
security:
  stage: security
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  variables:
    # Fail on critical vulnerabilities
    TRIVY_EXIT_CODE: 1
    TRIVY_FORMAT: json
    TRIVY_OUTPUT: trivy-results.json
    # Only report high and critical findings
    TRIVY_SEVERITY: CRITICAL,HIGH
  script:
    # Trivy pulls the image from the registry itself; for a private registry,
    # provide credentials via TRIVY_USERNAME/TRIVY_PASSWORD
    - trivy image --format $TRIVY_FORMAT --output $TRIVY_OUTPUT --exit-code $TRIVY_EXIT_CODE $DOCKER_IMAGE
  # Publish the security report
  artifacts:
    paths:
      - trivy-results.json
    reports:
      container_scanning: trivy-results.json
  allow_failure: true
  only:
    - main
    - tags

# Deployment stage
deploy:
  stage: deploy
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  before_script:
    # Install deployment tools
    - apk add --no-cache curl bash
    # Login to the registry
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    # Pull the image to deploy
    - docker pull $DOCKER_IMAGE
    # For production releases, promote the image to the stable tag
    - |
      if [ -n "$CI_COMMIT_TAG" ]; then
        docker tag $DOCKER_IMAGE $CI_REGISTRY_IMAGE:stable
        docker push $CI_REGISTRY_IMAGE:stable
        DEPLOY_ENV="production"
      else
        DEPLOY_ENV="staging"
      fi
    # Deploy to the appropriate environment
    - echo "Deploying to $DEPLOY_ENV environment"
    # Example using docker compose or kubectl
    - |
      if [ "$DEPLOY_ENV" = "production" ]; then
        echo "Performing production deployment"
        # kubectl set image deployment/app app=$DOCKER_IMAGE
      else
        echo "Performing staging deployment"
        # docker compose -f docker-compose.staging.yml up -d
      fi
    # Export DEPLOY_ENV so later jobs can reuse it
    - echo "DEPLOY_ENV=$DEPLOY_ENV" > deploy.env
  artifacts:
    reports:
      dotenv: deploy.env
  environment:
    name: $DEPLOY_ENV
    url: https://$DEPLOY_ENV.example.com
  only:
    - main
    - tags

# Verification stage
verify:
  stage: verify
  image: curlimages/curl
  script:
    # Wait for the deployment to stabilize
    - sleep 30
    # Perform a health check
    - curl -f https://$DEPLOY_ENV.example.com/health || exit 1
    # Run smoke tests
    - curl -f https://$DEPLOY_ENV.example.com/api/status | grep -q "OK"
  environment:
    name: $DEPLOY_ENV
    url: https://$DEPLOY_ENV.example.com
  only:
    - main
    - tags
```
The same flow as a Jenkins declarative pipeline (Jenkinsfile):

```groovy
pipeline {
    // Use a dynamic agent with Docker capabilities
    agent {
        docker {
            image 'docker:20.10.16-dind'
            args '-v /var/run/docker.sock:/var/run/docker.sock -v jenkins-docker-certs:/certs/client -v jenkins-data:/var/jenkins_home'
        }
    }
    // Environment variables for the pipeline
    environment {
        DOCKER_REGISTRY = 'docker.io'
        DOCKER_IMAGE = 'myusername/myapp'
        DOCKER_CREDENTIALS_ID = 'docker-hub-credentials'
        // Use timestamped tags for versioning
        BUILD_VERSION = "${env.BUILD_NUMBER}-${new Date().format('yyyyMMddHHmmss')}"
        // Enable Docker BuildKit
        DOCKER_BUILDKIT = '1'
    }
    // Define the stages of the pipeline
    stages {
        // Prepare the environment
        stage('Prepare') {
            steps {
                // Clean the workspace
                cleanWs()
                // Checkout code from SCM
                checkout scm
                // Install any necessary tools
                sh 'apk add --no-cache git curl jq'
                // Set up Docker Buildx for multi-platform builds
                sh 'docker buildx create --name cibuilder --use || true'
                // Print environment information for debugging
                sh 'docker version'
                sh 'docker info'
                sh 'git log -1'
                // Extract version information from the code
                script {
                    // Read the semantic version from package.json if it exists
                    if (fileExists('package.json')) {
                        def packageJson = readJSON file: 'package.json'
                        env.APP_VERSION = packageJson.version
                        echo "Building application version: ${env.APP_VERSION}"
                    }
                }
            }
        }
        // Build the Docker image
        stage('Build') {
            steps {
                // Reuse layers from previous builds if possible
                sh '''
                    if docker pull ${DOCKER_IMAGE}:latest; then
                        CACHE_FROM="--cache-from ${DOCKER_IMAGE}:latest"
                    else
                        CACHE_FROM=""
                    fi
                    # Build the image with metadata
                    docker build \
                        $CACHE_FROM \
                        --build-arg BUILD_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
                        --build-arg VCS_REF=$(git rev-parse --short HEAD) \
                        --build-arg VERSION=${APP_VERSION:-0.1.0} \
                        --label org.opencontainers.image.created=$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
                        --label org.opencontainers.image.revision=$(git rev-parse HEAD) \
                        --label org.opencontainers.image.version=${APP_VERSION:-0.1.0} \
                        -t myapp:${BUILD_VERSION} \
                        -t myapp:latest \
                        .
                '''
                // Display image details
                sh 'docker image ls myapp'
            }
        }
        // Run tests inside the built container
        stage('Test') {
            steps {
                // Run different types of tests
                sh '''
                    # Unit tests
                    docker run --rm myapp:latest npm test
                    # Integration tests if available
                    if [ -f "integration-test.sh" ]; then
                        docker run --rm myapp:latest ./integration-test.sh
                    fi
                    # Run linting if available
                    if docker run --rm myapp:latest which eslint > /dev/null; then
                        docker run --rm myapp:latest eslint .
                    fi
                '''
                // Run a security scan
                sh '''
                    # Optional: run the Trivy scanner
                    if which trivy > /dev/null; then
                        trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest
                    else
                        echo "Trivy not installed, skipping security scan"
                    fi
                '''
            }
        }
        // Push the image to the registry
        stage('Push') {
            steps {
                // Use Jenkins credentials for a secure login
                withCredentials([usernamePassword(credentialsId: "${DOCKER_CREDENTIALS_ID}", usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASSWORD')]) {
                    sh '''
                        # Login to Docker Hub securely
                        echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USER}" --password-stdin ${DOCKER_REGISTRY}
                        # Tag with appropriate names for the registry
                        docker tag myapp:latest ${DOCKER_IMAGE}:latest
                        docker tag myapp:latest ${DOCKER_IMAGE}:${BUILD_VERSION}
                        # Determine the git tag for this commit, if any
                        GIT_TAG=$(git describe --exact-match --tags HEAD 2>/dev/null || true)
                        # If this is a tag build, also tag with the git tag
                        if [ -n "${GIT_TAG}" ]; then
                            docker tag myapp:latest ${DOCKER_IMAGE}:${GIT_TAG}
                            docker push ${DOCKER_IMAGE}:${GIT_TAG}
                        fi
                        # Push all tags to the registry
                        docker push ${DOCKER_IMAGE}:latest
                        docker push ${DOCKER_IMAGE}:${BUILD_VERSION}
                        # Logout for security
                        docker logout ${DOCKER_REGISTRY}
                    '''
                }
            }
        }
        // Deploy the application to the target environment
        stage('Deploy') {
            steps {
                // Different deployment strategies based on branch/environment
                script {
                    if (env.BRANCH_NAME == 'main' || env.BRANCH_NAME == 'master') {
                        echo "Deploying to production environment"
                        // Use Docker Compose for the production deployment
                        withCredentials([file(credentialsId: 'production-env-file', variable: 'ENV_FILE')]) {
                            sh '''
                                # Copy the environment file into the workspace
                                cp ${ENV_FILE} .env.production
                                # Update the image tag in the compose file
                                sed -i "s|image: ${DOCKER_IMAGE}:.*|image: ${DOCKER_IMAGE}:${BUILD_VERSION}|g" docker-compose.prod.yml
                                # Deploy with minimal downtime
                                docker-compose -f docker-compose.prod.yml up -d --remove-orphans
                                # Wait for the health check
                                timeout 60s bash -c 'until docker-compose -f docker-compose.prod.yml exec -T app wget -q -O- http://localhost:3000/health | grep -q "ok"; do sleep 2; done'
                            '''
                        }
                    } else if (env.BRANCH_NAME == 'staging') {
                        echo "Deploying to staging environment"
                        sh 'docker-compose -f docker-compose.staging.yml up -d'
                    } else {
                        echo "Branch ${env.BRANCH_NAME} doesn't trigger deployment"
                    }
                }
            }
        }
    }
    // Post-build actions
    post {
        always {
            // Clean up Docker resources
            sh '''
                docker system prune -f
                docker image rm -f myapp:${BUILD_VERSION} myapp:latest || true
            '''
        }
        success {
            echo "Pipeline completed successfully!"
            // Notify on success (Slack, email, etc.)
            slackSend(color: 'good', message: "Build Successful: ${env.JOB_NAME} #${env.BUILD_NUMBER} - ${env.BUILD_URL}")
        }
        failure {
            echo "Pipeline failed!"
            // Notify on failure
            slackSend(color: 'danger', message: "Build Failed: ${env.JOB_NAME} #${env.BUILD_NUMBER} - ${env.BUILD_URL}")
        }
    }
}
```
For multi-architecture images, a dedicated Jenkins pipeline can drive Buildx directly:

```groovy
pipeline {
    agent any
    environment {
        DOCKER_REGISTRY = 'docker.io'
        DOCKER_IMAGE = 'myusername/myapp'
        DOCKER_CREDENTIALS_ID = 'docker-hub-credentials'
        PLATFORMS = 'linux/amd64,linux/arm64,linux/arm/v7'
    }
    stages {
        stage('Prepare BuildX') {
            steps {
                // Install Docker Buildx if needed
                sh '''
                    # Set up Docker Buildx for multi-platform builds
                    docker buildx version || {
                        BUILDX_VERSION="v0.10.0"
                        mkdir -p ~/.docker/cli-plugins
                        curl -sSLo ~/.docker/cli-plugins/docker-buildx \
                            https://github.com/docker/buildx/releases/download/${BUILDX_VERSION}/buildx-${BUILDX_VERSION}.linux-amd64
                        chmod +x ~/.docker/cli-plugins/docker-buildx
                    }
                    # Create and use a builder instance with the docker-container driver
                    docker buildx create --name multiarch-builder --driver docker-container --use || true
                    docker buildx inspect multiarch-builder --bootstrap
                '''
            }
        }
        stage('Login to Registry') {
            steps {
                withCredentials([usernamePassword(credentialsId: "${DOCKER_CREDENTIALS_ID}", usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASSWORD')]) {
                    sh 'echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USER}" --password-stdin ${DOCKER_REGISTRY}'
                }
            }
        }
        stage('Build Multi-arch Images') {
            steps {
                // Build and push the multi-architecture image in one step
                sh '''
                    # Use the git tag as the version, falling back to the short commit SHA
                    if git describe --exact-match --tags HEAD > /dev/null 2>&1; then
                        VERSION=$(git describe --exact-match --tags HEAD)
                    else
                        VERSION=$(git rev-parse --short HEAD)
                    fi
                    echo "Building multi-architecture image version: $VERSION"
                    # Build and push all platforms with one command
                    docker buildx build \
                        --platform ${PLATFORMS} \
                        --build-arg BUILD_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
                        --build-arg VCS_REF=$(git rev-parse --short HEAD) \
                        --build-arg VERSION=${VERSION} \
                        --tag ${DOCKER_IMAGE}:${VERSION} \
                        --tag ${DOCKER_IMAGE}:latest \
                        --push \
                        .
                '''
            }
        }
    }
}
```
A comprehensive CircleCI pipeline for Docker-based applications:
```yaml
version: 2.1

# Define reusable commands
commands:
  docker_build:
    description: "Build and tag Docker image"
    parameters:
      image_name:
        type: string
        default: "myapp"
      dockerfile:
        type: string
        default: "Dockerfile"
      build_args:
        type: string
        default: ""
    steps:
      - run:
          name: Build Docker image
          command: |
            # Calculate tags
            COMMIT_TAG=${CIRCLE_SHA1:0:8}
            VERSION_TAG=${CIRCLE_TAG:-$COMMIT_TAG}
            # Use the cache from previous builds if available
            CACHE_FROM=""
            if docker pull $DOCKERHUB_USERNAME/<< parameters.image_name >>:latest &>/dev/null; then
              CACHE_FROM="--cache-from $DOCKERHUB_USERNAME/<< parameters.image_name >>:latest"
            fi
            # Build with appropriate tags and labels
            docker build \
              $CACHE_FROM \
              -f << parameters.dockerfile >> \
              --build-arg BUILDKIT_INLINE_CACHE=1 \
              --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
              --build-arg VCS_REF=${CIRCLE_SHA1} \
              --build-arg VERSION=${VERSION_TAG} \
              << parameters.build_args >> \
              -t << parameters.image_name >>:${COMMIT_TAG} \
              -t << parameters.image_name >>:latest \
              -t $DOCKERHUB_USERNAME/<< parameters.image_name >>:${COMMIT_TAG} \
              -t $DOCKERHUB_USERNAME/<< parameters.image_name >>:latest \
              .
            # Tag with the version if this is a tag build
            if [ -n "${CIRCLE_TAG}" ]; then
              docker tag << parameters.image_name >>:${COMMIT_TAG} $DOCKERHUB_USERNAME/<< parameters.image_name >>:${CIRCLE_TAG}
            fi

# Define executor environments
executors:
  docker-builder:
    docker:
      - image: cimg/base:2023.03
    resource_class: medium+

# Define job definitions
jobs:
  build:
    executor: docker-builder
    steps:
      - checkout
      - setup_remote_docker:
          version: 20.10.14
          docker_layer_caching: true
      - docker_build:
          image_name: myapp
          build_args: "--build-arg NODE_ENV=production"
      - run:
          name: Save image for later jobs
          command: |
            COMMIT_TAG=${CIRCLE_SHA1:0:8}
            mkdir -p /tmp/workspace
            docker save myapp:${COMMIT_TAG} | gzip > /tmp/workspace/myapp-image.tar.gz
      - persist_to_workspace:
          root: /tmp/workspace
          paths:
            - myapp-image.tar.gz
  test:
    docker:
      - image: cimg/base:2023.03
    steps:
      - setup_remote_docker:
          version: 20.10.14
      - attach_workspace:
          at: /tmp/workspace
      - run:
          name: Load Docker image
          command: |
            docker load < /tmp/workspace/myapp-image.tar.gz
      - run:
          name: Run unit tests
          command: |
            COMMIT_TAG=${CIRCLE_SHA1:0:8}
            docker run --rm myapp:${COMMIT_TAG} npm run test:unit
      - run:
          name: Run integration tests
          command: |
            COMMIT_TAG=${CIRCLE_SHA1:0:8}
            docker run --rm myapp:${COMMIT_TAG} npm run test:integration
      - run:
          name: Run security scan
          command: |
            COMMIT_TAG=${CIRCLE_SHA1:0:8}
            docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy:latest image --severity HIGH,CRITICAL --exit-code 0 myapp:${COMMIT_TAG}
  push:
    executor: docker-builder
    steps:
      - checkout
      - setup_remote_docker:
          version: 20.10.14
      - attach_workspace:
          at: /tmp/workspace
      - run:
          name: Load Docker image
          command: |
            docker load < /tmp/workspace/myapp-image.tar.gz
      - run:
          name: Push to Docker Hub
          command: |
            COMMIT_TAG=${CIRCLE_SHA1:0:8}
            echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
            # Re-tag the loaded image for the registry: only the
            # myapp:${COMMIT_TAG} reference survives docker save/load
            docker tag myapp:${COMMIT_TAG} $DOCKERHUB_USERNAME/myapp:${COMMIT_TAG}
            docker tag myapp:${COMMIT_TAG} $DOCKERHUB_USERNAME/myapp:latest
            docker push $DOCKERHUB_USERNAME/myapp:${COMMIT_TAG}
            docker push $DOCKERHUB_USERNAME/myapp:latest
            # Push the version tag if this is a tag build
            if [ -n "${CIRCLE_TAG}" ]; then
              docker tag myapp:${COMMIT_TAG} $DOCKERHUB_USERNAME/myapp:${CIRCLE_TAG}
              docker push $DOCKERHUB_USERNAME/myapp:${CIRCLE_TAG}
            fi
            # Always logout for security
            docker logout
  deploy-staging:
    docker:
      - image: cimg/base:2023.03
    steps:
      - checkout
      - run:
          name: Install deployment tools
          command: |
            sudo apt-get update
            sudo apt-get install -y curl jq
            # Install kubectl for Kubernetes deployments
            curl -LO "https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl"
            chmod +x kubectl
            sudo mv kubectl /usr/local/bin/
      - run:
          name: Deploy to staging
          command: |
            COMMIT_TAG=${CIRCLE_SHA1:0:8}
            echo "Deploying $DOCKERHUB_USERNAME/myapp:${COMMIT_TAG} to staging"
            # Example deployment using kubectl
            # kubectl config use-context staging
            # kubectl set image deployment/myapp myapp=$DOCKERHUB_USERNAME/myapp:${COMMIT_TAG} --record
            # kubectl rollout status deployment/myapp
  deploy-production:
    docker:
      - image: cimg/base:2023.03
    steps:
      - checkout
      - run:
          name: Install deployment tools
          command: |
            sudo apt-get update
            sudo apt-get install -y curl jq
            curl -LO "https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl"
            chmod +x kubectl
            sudo mv kubectl /usr/local/bin/
      - run:
          name: Deploy to production
          command: |
            VERSION_TAG=${CIRCLE_TAG}
            echo "Deploying $DOCKERHUB_USERNAME/myapp:${VERSION_TAG} to production"
            # Example production deployment
            # kubectl config use-context production
            # kubectl set image deployment/myapp myapp=$DOCKERHUB_USERNAME/myapp:${VERSION_TAG} --record
            # kubectl rollout status deployment/myapp

# Define workflow orchestration
workflows:
  version: 2
  build-test-deploy:
    jobs:
      - build:
          filters:
            tags:
              only: /^v.*/
      - test:
          requires:
            - build
          filters:
            tags:
              only: /^v.*/
      - push:
          requires:
            - test
          filters:
            tags:
              only: /^v.*/
            branches:
              only: main
      - deploy-staging:
          requires:
            - push
          filters:
            branches:
              only: main
      - approve-production:
          type: approval
          requires:
            - push
          filters:
            tags:
              only: /^v.*/
            branches:
              ignore: /.*/
      - deploy-production:
          requires:
            - approve-production
          filters:
            tags:
              only: /^v.*/
            branches:
              ignore: /.*/
```
Optimize your Docker builds in CI/CD:

- **Use BuildKit for faster, parallel builds**
  - Enable with the `DOCKER_BUILDKIT=1` environment variable
  - Parallel execution of independent build stages
  - More efficient caching mechanisms
  - Secret mounting without leaking secrets into layers
  - Improved build output and status reporting
  - Example: `export DOCKER_BUILDKIT=1 && docker build .`
- **Implement layer caching**
  - Store and reuse layers between builds
  - Configure registry-based caching for CI/CD
  - Optimize the Dockerfile for effective layer caching
  - Use BuildKit's inline cache metadata
  - Set up CI-specific cache storage
  - Example: `docker build --cache-from myapp:cache --build-arg BUILDKIT_INLINE_CACHE=1 .`
- **Use multi-stage builds for smaller images**
  - Separate the build environment from the runtime environment
  - Keep only the necessary artifacts in the final image
  - Reduce attack surface and image size
  - Share built artifacts across multiple final images
  - Example: see the multi-stage Dockerfile example in the optimization section
- **Build only what changed with cache-aware Dockerfiles**
  - Order Dockerfile instructions from least to most frequently changed
  - Separate dependency installation from application code
  - Use specific COPY commands instead of `COPY . .`
  - Use mount caching for package managers
  - Example: `docker build --build-arg BUILDKIT_INLINE_CACHE=1 .`
- **Implement build arguments for environment-specific builds**
  - Parameterize builds with ARG instructions
  - Configure different behavior based on environment
  - Set build-time variables for versioning
  - Avoid hardcoding environment-specific values
  - Example: `docker build --build-arg NODE_ENV=production .`
- **Tag images with meaningful, traceable identifiers**
  - Use semantic versioning for release tags
  - Include the git commit SHA for traceability
  - Consider timestamp-based tags for CI builds
  - Apply multiple tags for different purposes
  - Use immutable tags for production deployments
  - Example: `docker tag myapp:latest myapp:1.2.3-a7c45b9`
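The tagging conventions above can be sketched as a small shell helper that derives an image tag from a git ref, similar to what the metadata actions in the pipelines compute; the `derive_tag` function, ref strings, and SHA value here are illustrative, not part of any CI product:

```shell
# Hypothetical helper: derive an image tag from a git ref.
# Releases get the semver from the tag; branch builds get branch + short SHA.
derive_tag() {
  ref=$1
  sha=$2
  case "$ref" in
    refs/tags/v*) printf '%s\n' "${ref#refs/tags/v}" ;;       # release: strip refs/tags/v
    refs/heads/*) printf '%s\n' "${ref#refs/heads/}-$sha" ;;  # branch build: branch-SHA
    *)            printf '%s\n' "$sha" ;;                     # fallback: bare short SHA
  esac
}

derive_tag "refs/tags/v1.2.3" "a7c45b9"   # prints 1.2.3
derive_tag "refs/heads/main" "a7c45b9"    # prints main-a7c45b9
```

Keeping this logic in one place gives every pipeline stage the same, predictable tag for a given commit.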
Docker enables consistent, isolated testing environments that match production. Here's a comprehensive testing setup:
```yaml
version: '3.8'
services:
  # Main application service
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: development  # Use the development stage with testing dependencies
    depends_on:
      db:
        condition: service_healthy  # Wait for the database to be fully ready
      redis:
        condition: service_healthy
    environment:
      - NODE_ENV=test
      - DB_HOST=db
      - DB_PORT=5432
      - DB_USER=test
      - DB_PASSWORD=test
      - DB_NAME=test_db
      - REDIS_HOST=redis
      - REDIS_PORT=6379
    # Mount source code for hot reloading during development
    volumes:
      - ./src:/app/src:ro
      - ./test:/app/test:ro
      - node_modules:/app/node_modules
    # Expose the debugger port
    ports:
      - "9229:9229"
    # Add a health check for readiness
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 5s
      timeout: 3s
      retries: 5
      start_period: 10s
  # Database service for testing
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=test
      - POSTGRES_DB=test_db
    volumes:
      - ./test/fixtures/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
      - postgres-data:/var/lib/postgresql/data
    # The health check ensures the database is ready before tests run
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test -d test_db"]
      interval: 5s
      timeout: 3s
      retries: 5
      start_period: 10s
  # Cache service
  redis:
    image: redis:alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  # Unit tests
  test-unit:
    image: myapp:test
    depends_on:
      app:
        condition: service_healthy
    environment:
      - NODE_ENV=test
      - TEST_TYPE=unit
    volumes:
      - ./test/reports:/app/test/reports
    command: npm run test:unit
  # Integration tests
  test-integration:
    image: myapp:test
    depends_on:
      app:
        condition: service_healthy
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      - NODE_ENV=test
      - TEST_TYPE=integration
      - DB_HOST=db
      - REDIS_HOST=redis
    volumes:
      - ./test/reports:/app/test/reports
    command: npm run test:integration
  # End-to-end tests
  test-e2e:
    image: myapp:test
    depends_on:
      app:
        condition: service_healthy
      db:
        condition: service_healthy
    environment:
      - NODE_ENV=test
      - TEST_TYPE=e2e
      - APP_URL=http://app:3000
    volumes:
      - ./test/reports:/app/test/reports
      - ./test/screenshots:/app/test/screenshots
    command: npm run test:e2e
volumes:
  node_modules:
  postgres-data:
```
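All of the healthchecks in this Compose file follow the same poll-until-ready pattern: run a probe, wait an interval, give up after a retry budget. As a standalone sketch of that pattern — `wait_for` is a hypothetical helper, and `true` stands in for a real probe such as `pg_isready` or `redis-cli ping`:

```shell
# Poll a command until it succeeds or the retry budget runs out,
# mirroring Compose's interval/retries healthcheck semantics.
wait_for() {
  retries=$1
  interval=$2
  shift 2
  attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$retries" ]; then
      return 1  # retry budget exhausted
    fi
    sleep "$interval"
  done
}

# Stand-in probe: in CI this would be e.g. `pg_isready -U test -d test_db`
wait_for 5 0 true && echo "service ready"
```

The same loop is useful in deploy scripts that cannot rely on Compose healthchecks, such as the post-deploy verification stages shown earlier.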
Integrating security scanning into your CI/CD pipeline helps identify vulnerabilities early in the development process:
```shell
# Basic scan with Trivy in CI
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy:latest image myapp:latest

# Advanced scan with filtering and formatted output
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy:latest image \
  --severity HIGH,CRITICAL \
  --exit-code 1 \
  --ignore-unfixed \
  --format json \
  --output trivy-results.json \
  myapp:latest

# Pipeline integration with a vulnerability threshold
if docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy:latest image --severity CRITICAL \
  --exit-code 1 --no-progress myapp:latest; then
  echo "No critical vulnerabilities found"
else
  echo "Critical vulnerabilities found, failing build"
  exit 1
fi

# Scan and add an HTML report to the build artifacts
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)/reports:/reports" \
  aquasec/trivy:latest image \
  --format template \
  --template "@/contrib/html.tpl" \
  -o /reports/trivy-report.html \
  myapp:latest

# Run Docker Bench Security to check host and container configuration
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc:/etc \
  -v /usr/bin/containerd:/usr/bin/containerd \
  -v /usr/bin/runc:/usr/bin/runc \
  -v /usr/lib/systemd:/usr/lib/systemd \
  -v /var/lib:/var/lib \
  docker/docker-bench-security

# Run with output redirected to a file for CI integration (drop -it in CI)
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc:/etc \
  docker/docker-bench-security > docker-bench-results.txt

# Filter for warnings only
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/docker-bench-security | grep "\[WARN\]" > security-warnings.txt
```
Implement image signing
- Use Docker Content Trust for image signing and verification
- Configure CI/CD to sign images automatically
- Store signing keys securely in CI secrets or HSMs
- Implement key rotation and management procedures
- Example: DOCKER_CONTENT_TRUST=1 docker push myorg/myapp:1.0.0
Verify image integrity
- Validate image signatures before deployment
- Check image digests for immutability
- Implement chain-of-custody verification
- Integrate with CI/CD through automated validation steps
- Example: docker trust inspect --pretty myorg/myapp:1.0.0
Set up admission controllers
- Implement Kubernetes admission controllers for runtime validation
- Configure validating and mutating webhooks
- Check images against security policies before deployment
- Example Kubernetes configuration:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-policy-webhook
webhooks:
  - name: image-policy.k8s.io
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
        scope: "Namespaced"
    clientConfig:
      service:
        namespace: image-policy
        name: image-policy-webhook
        path: /validate
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
    timeoutSeconds: 5
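An admission webhook enforces policy at the cluster boundary; a lighter-weight guard with the same intent can run in CI itself, before the image ever reaches the cluster. The sketch below (the helper name and image references are illustrative, not part of any Docker API) refuses any image reference that is not pinned to an immutable digest:

```shell
#!/bin/sh
# Reject image references that rely on mutable tags instead of digests.
# is_digest_pinned is a hypothetical helper; the names are illustrative.
is_digest_pinned() {
  case "$1" in
    *@sha256:*) return 0 ;;  # pinned: repo@sha256:<digest>
    *)          return 1 ;;  # tag-only references can change under you
  esac
}

# Check every image reference passed on the command line
for ref in "$@"; do
  if is_digest_pinned "$ref"; then
    echo "OK: $ref"
  else
    echo "REJECTED (not digest-pinned): $ref"
    exit 1
  fi
done
```

In a pipeline this would run over the image references a deployment manifest is about to use, failing the job before the webhook ever sees them.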
Use policy engines like OPA
- Define declarative policies with Open Policy Agent (OPA)
- Integrate with Kubernetes using Gatekeeper
- Enforce organizational security standards
- Example OPA policy:
package kubernetes.admission

deny[msg] {
  input.request.kind.kind == "Pod"
  image := input.request.object.spec.containers[_].image
  not startswith(image, "approved-registry.com/")
  msg := sprintf("image '%v' comes from untrusted registry", [image])
}
Enforce security standards
- Implement compliance checks for industry standards
- Validate against the CIS Docker Benchmark
- Enforce organization-specific security policies
- Automate compliance verification in CI/CD
- Generate compliance reports for audit purposes
- Example CI step:
compliance-check:
  runs-on: ubuntu-latest
  steps:
    - name: Run CIS Docker Benchmark
      run: |
        docker run --rm \
          -v /var/run/docker.sock:/var/run/docker.sock \
          docker/docker-bench-security -c check_5
    - name: Verify approved base images
      run: |
        docker inspect myapp:latest | jq -r '.[0].Config.Image' | grep -q "^approved-base-image:"
Docker enables consistent deployments across environments through environment-specific configuration rather than environment-specific builds. This pattern supports the "build once, deploy anywhere" principle:
Development
- Local developer environments
- Feature branch deployments
- Rapid iteration and debugging
- Additional development tools and verbosity
Testing/QA
- Automated test environments
- Manual QA verification
- Performance testing
- Security testing environments
Staging
- Production-like environment
- Final validation before production
- User acceptance testing
- Pre-production data migrations
Production
- Live customer-facing environment
- High availability configuration
- Production-grade security
- Monitoring and observability
#!/bin/bash
# Example deployment script with advanced environment handling
set -eo pipefail

# Set variables based on environment
case "$ENVIRONMENT" in
  "dev")
    DOCKER_COMPOSE_FILE="docker-compose.dev.yml"
    REPLICAS=1
    RESOURCES="--cpus=0.5 --memory=512m"
    REGISTRY="dev-registry.example.com"
    ;;
  "qa")
    DOCKER_COMPOSE_FILE="docker-compose.qa.yml"
    REPLICAS=2
    RESOURCES="--cpus=1 --memory=1g"
    REGISTRY="qa-registry.example.com"
    ;;
  "staging")
    DOCKER_COMPOSE_FILE="docker-compose.staging.yml"
    REPLICAS=2
    RESOURCES="--cpus=2 --memory=2g"
    REGISTRY="staging-registry.example.com"
    ;;
  "production")
    DOCKER_COMPOSE_FILE="docker-compose.prod.yml"
    REPLICAS=5
    RESOURCES="--cpus=4 --memory=4g"
    REGISTRY="production-registry.example.com"
    # Additional production safeguards
    DEPLOY_TIMEOUT="--timeout 300"
    HEALTHCHECK="--health-cmd 'curl -f http://localhost/health || exit 1' --health-interval=10s --health-retries=5"
    ;;
  *)
    echo "Unknown environment: $ENVIRONMENT"
    exit 1
    ;;
esac

# Pull the specific image version
IMAGE_TAG=${VERSION:-latest}
docker pull "$REGISTRY/myapp:$IMAGE_TAG"

# Apply environment-specific configurations
envsubst < "${DOCKER_COMPOSE_FILE}.template" > "${DOCKER_COMPOSE_FILE}"

# Deploy with environment-specific settings
if [ "$ENVIRONMENT" == "production" ]; then
  # Production uses a more careful deployment strategy
  echo "Deploying to production with rolling update"

  # Verify image security before deployment
  docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    aquasec/trivy:latest image --severity HIGH,CRITICAL \
    --exit-code 1 "$REGISTRY/myapp:$IMAGE_TAG" || { echo "Security check failed"; exit 1; }

  # Back up the current state
  docker-compose -f docker-compose.prod.yml config > docker-compose.prev.yml

  # Deploy the new version
  docker-compose -f "$DOCKER_COMPOSE_FILE" up -d --remove-orphans $DEPLOY_TIMEOUT

  # Verify deployment health (double quotes so the outer shell expands the variable)
  timeout 60s bash -c "until docker-compose -f $DOCKER_COMPOSE_FILE ps | grep -q '(healthy)'; do sleep 2; done"

  # Run smoke tests
  ./run_smoke_tests.sh || {
    echo "Smoke tests failed, rolling back"
    docker-compose -f docker-compose.prev.yml up -d
    exit 1
  }
else
  # Development and staging environments
  echo "Deploying to $ENVIRONMENT"
  docker-compose -f "$DOCKER_COMPOSE_FILE" up -d --remove-orphans
fi

# Cleanup
if [ "$ENVIRONMENT" != "production" ]; then
  echo "Pruning old images from $ENVIRONMENT"
  docker image prune -a -f --filter "until=24h"
fi

echo "Deployment to $ENVIRONMENT complete"
# Base docker-compose configuration
# docker-compose.base.yml
version: '3.8'
services:
  app:
    image: ${REGISTRY}/myapp:${IMAGE_TAG:-latest}
    restart: unless-stopped
    environment:
      - NODE_ENV=${ENVIRONMENT}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: "${CPU_LIMIT:-0.5}"
          memory: "${MEMORY_LIMIT:-512M}"

# Development-specific overrides
# docker-compose.dev.yml
version: '3.8'
services:
  app:
    extends:
      file: docker-compose.base.yml
      service: app
    environment:
      - NODE_ENV=development
      - DEBUG=app:*
      - LOG_LEVEL=debug
    volumes:
      - ./src:/app/src:ro
      - ./config:/app/config:ro
    ports:
      - "3000:3000"
      - "9229:9229"  # Debug port
    command: ["npm", "run", "dev"]

# Production-specific overrides
# docker-compose.prod.yml
version: '3.8'
services:
  app:
    extends:
      file: docker-compose.base.yml
      service: app
    environment:
      - NODE_ENV=production
      - LOG_LEVEL=info
    secrets:
      - app_config
      - db_credentials
    deploy:
      replicas: 5
      update_config:
        order: start-first
        failure_action: rollback
        delay: 10s
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
        window: 120s
      resources:
        limits:
          cpus: '4'
          memory: 4G
        reservations:
          cpus: '2'
          memory: 2G

secrets:
  app_config:
    external: true
  db_credentials:
    external: true
Automated build and test
- Automatic code verification at every commit
- Comprehensive test suite execution
- Code quality and security checks
- Artifact generation and validation
- Example: npm test && npm run lint && npm run build
Manual approval for production
- Human decision point before production deployment
- Approval gates with required reviewers
- Compliance and change management integration
- Documentation of the approval process
- Example CI configuration:
deploy-production:
  needs: [build, test]
  environment: production
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
Ready-to-deploy artifacts
- Immutable container images
- Versioned and labeled for traceability
- Pre-validated in lower environments
- Stored in a secure container registry
- Example image tagging strategy:
docker tag myapp:latest myapp:1.2.3-${CI_COMMIT_SHORT_SHA}
docker push myapp:1.2.3-${CI_COMMIT_SHORT_SHA}
Environment promotion

Implement blue-green deployments with Docker by running two identical environments and switching traffic between them:
# Deploy new version (green)
docker-compose -f docker-compose.green.yml up -d
# Run smoke tests
./run_smoke_tests.sh
# Switch traffic to green
nginx -s reload
# Take down old version (blue)
docker-compose -f docker-compose.blue.yml down
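The "switch traffic" step above implies an nginx configuration change before the reload. One common sketch is a symlinked upstream file; the file names are illustrative, and the config directory is parameterized here so the commands can be tried outside a real nginx install:

```shell
#!/bin/sh
# Flip the live upstream from blue to green by swapping a symlink, then
# reload nginx. NGINX_CONF_DIR defaults to a scratch directory so the
# sketch runs anywhere; in production it would be /etc/nginx/conf.d.
NGINX_CONF_DIR="${NGINX_CONF_DIR:-$(mktemp -d)}"
touch "$NGINX_CONF_DIR/upstream-blue.conf" "$NGINX_CONF_DIR/upstream-green.conf"
ACTIVE="$NGINX_CONF_DIR/upstream-active.conf"

ln -sfn "$NGINX_CONF_DIR/upstream-green.conf" "$ACTIVE"   # green goes live
readlink "$ACTIVE"
# nginx -t && nginx -s reload    # validate config, then zero-downtime reload
```

Because the symlink swap is atomic, requests keep flowing to blue until the reload, which makes the rollback step equally simple: point the symlink back and reload again.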
Implement canary deployments with Docker:
1. Deploy the new version to a small subset of servers
2. Monitor performance and errors
3. Gradually increase traffic to the new version
4. Automatically roll back if issues are detected
5. Complete the rollout when stable
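The steps above reduce to a loop that widens the canary in stages and bails out on the first failed health check. In this sketch the deploy and health-check commands are stubs standing in for real ones (for example scaling a Docker service and querying your error-rate metrics):

```shell
#!/bin/sh
# Canary rollout control flow. deploy_canary and canary_healthy are stubs
# for real commands (e.g. 'docker service scale' plus a metrics query).
set -e

deploy_canary()  { echo "shifting $1% of traffic to the new version"; }
canary_healthy() { return 0; }   # stub: replace with a real error-rate check
rollback()       { echo "canary unhealthy, rolling back"; exit 1; }

for pct in 5 25 50 100; do
  deploy_canary "$pct"
  # In practice: wait a bake period and watch metrics before widening
  canary_healthy || rollback
done
echo "rollout complete"
```

The stage percentages and bake periods are policy decisions; the important property is that every widening step is gated on the health check, so a regression surfaces while it affects only a small slice of traffic.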
Manage sensitive configuration with Docker secrets rather than environment variables:

version: '3.8'
services:
  app:
    image: myapp:latest
    secrets:
      - db_password
      - api_key

secrets:
  db_password:
    external: true
  api_key:
    external: true
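Secrets marked `external: true` are created out of band (for example `docker secret create db_password -` on a Swarm manager) and appear inside the container as files under /run/secrets. Below is a sketch of the matching app-side read; the directory is parameterized so it also works in local development, and the environment-variable fallback naming is an assumption of this sketch, not a Docker convention:

```shell
#!/bin/sh
# Read a Docker secret from the file Docker mounts at /run/secrets/<name>,
# falling back to an environment variable for local development.
# SECRETS_DIR is parameterized so the sketch runs outside a container.
SECRETS_DIR="${SECRETS_DIR:-/run/secrets}"

read_secret() {
  name="$1"
  if [ -f "$SECRETS_DIR/$name" ]; then
    cat "$SECRETS_DIR/$name"
  else
    # Fallback: db_password -> DB_PASSWORD (assumed naming convention)
    eval "printf '%s' \"\$$(printf '%s' "$name" | tr '[:lower:]' '[:upper:]')\""
  fi
}
```

Reading from files rather than process environment keeps secrets out of `docker inspect` output and crash dumps, which is the main reason to prefer this pattern in production.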
Popular CI/CD platforms with first-class Docker support include:
- Jenkins with Docker agents
- Drone CI
- Tekton
- Concourse CI
- GitLab CI/CD Runners
- GitHub Actions
- CircleCI
- AWS CodeBuild/CodePipeline
- Google Cloud Build
- Azure DevOps Pipelines
Follow these best practices for Docker in CI/CD:
- Use specific image tags, not 'latest'
- Implement proper caching strategies
- Keep CI/CD pipelines fast
- Scan images for vulnerabilities
- Test in production-like environments
- Version-control your Docker configurations
- Implement appropriate monitoring
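For the first practice, a common approach is to derive the tag from version and commit metadata so every image is traceable and immutable. A sketch, where the version source and repository name are placeholders:

```shell
#!/bin/sh
# Build a specific, traceable tag instead of relying on 'latest'.
# VERSION would normally come from a VERSION file or CI variable;
# the fallbacks here are placeholders so the sketch runs anywhere.
VERSION="${VERSION:-1.2.3}"
SHA="$(git rev-parse --short HEAD 2>/dev/null || echo nogit)"
TAG="myapp:${VERSION}-${SHA}"
echo "$TAG"
# docker build -t "$TAG" . && docker push "$TAG"   # push the same immutable tag
```

Because the tag embeds the commit, any running container can be traced back to the exact source that produced it, and redeploying an old tag reproduces that build byte for byte.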
# Matrix builds example with GitHub Actions
strategy:
  matrix:
    node-version: [14.x, 16.x, 18.x]
    os: [ubuntu-latest, windows-latest]
steps:
  - uses: actions/checkout@v3
  - name: Use Node.js ${{ matrix.node-version }}
    uses: actions/setup-node@v3
    with:
      node-version: ${{ matrix.node-version }}
  - run: docker build --build-arg NODE_VERSION=${{ matrix.node-version }} -t myapp:${{ matrix.node-version }} .
Track key delivery metrics and signals:
- Deployment frequency
- Lead time for changes
- Change failure rate
- Mean time to recovery
- Build duration
- Failed build notifications
- Deployment notifications
- Performance impact alerts
- Error rate monitoring
- User feedback collection
Common Docker CI/CD issues and solutions:
- Docker socket permissions
- Registry authentication failures
- Resource constraints in CI
- Network connectivity issues
- Cache invalidation problems
- Image size and pull time issues
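Several of these have stock first-response checks. The snippet below sketches a few of them; the registry host and group names are typical Linux defaults, and the remediation commands are left commented because they need daemon access or credentials that a generic sketch cannot assume:

```shell
#!/bin/sh
# First-response checks for common Docker-in-CI failures.

# Socket permissions: the CI user must be able to reach the Docker socket.
if [ -S /var/run/docker.sock ]; then
  ls -l /var/run/docker.sock   # typically root:docker, mode 660
  # sudo usermod -aG docker "$USER"   # grant access (requires re-login)
else
  echo "no docker socket: is the daemon running / mounted into the runner?"
fi

# Registry auth: re-login non-interactively from a CI secret.
# echo "$REGISTRY_TOKEN" | docker login registry.example.com -u ci --password-stdin

# Disk pressure from stale layers: reclaim space between jobs.
# docker system prune -af --filter "until=24h"
```

Running checks like these as an early pipeline step turns the vague "build failed" into a specific, actionable message before the real work starts.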