Complete guide to containerizing microservices applications for production deployment on AWS ECS, covering Flask API development, Nginx reverse proxy configuration, Docker best practices, and ECR image management.
Welcome to Part 3 of our comprehensive series on building production-grade microservices on AWS ECS. In this installment, we’ll focus on containerizing our applications using Docker best practices, creating a robust Flask API with database integration, configuring Nginx as a reverse proxy, and preparing our containerized applications for deployment to ECS Fargate.
In this phase, we’ll create a production-ready microservices application consisting of a Flask REST API, an Nginx reverse proxy, a PostgreSQL database, and a Redis cache.
We’ll containerize these services using Docker best practices, test them locally with Docker Compose, and push them to Amazon ECR for ECS deployment.
Our containerized microservices architecture follows modern best practices:
┌─────────────────────────────────────────────────────────────────┐
│                        Internet Traffic                         │
└────────────────────────────────┬────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                       Nginx Reverse Proxy                       │
│   • Security Headers                                            │
│   • Gzip Compression                                            │
│   • Load Balancing                                              │
│   • Health Checks                                               │
└────────────────────────────────┬────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                        Flask API Service                        │
│   • RESTful Endpoints                                           │
│   • Database Integration                                        │
│   • Redis Caching                                               │
│   • Health Monitoring                                           │
└────────────────────────────────┬────────────────────────────────┘
                                 │
                ┌────────────────┴────────────────┐
                ▼                                 ▼
    ┌───────────────────────┐         ┌───────────────────────┐
    │      PostgreSQL       │         │         Redis         │
    │       Database        │         │         Cache         │
    │  • Multi-AZ           │         │  • Session Store      │
    │  • Encrypted          │         │  • Performance        │
    │  • Backups            │         │  • Scalability        │
    └───────────────────────┘         └───────────────────────┘
Before we begin, ensure you have Docker and Docker Compose installed, the AWS CLI configured, and the infrastructure from Part 2 provisioned with Terraform (we’ll need its ECR repository outputs later).
Ensure your AWS credentials have the following permissions:
ecr:GetAuthorizationToken, ecr:BatchGetImage, ecr:PutImage
ecs:DescribeClusters, ecs:DescribeServices
iam:PassRole (for ECS task execution)
Let’s build our production-ready containerized applications step by step.
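Before diving in, here is a quick way to confirm your credentials and ECR access (a minimal check, assuming the AWS CLI is already configured for the ap-south-1 region used in this series):
# Confirm the identity the AWS CLI will use
aws sts get-caller-identity
# Confirm you can obtain an ECR authorization token (exercises ecr:GetAuthorizationToken)
aws ecr get-login-password --region ap-south-1 > /dev/null && echo "ECR auth OK"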
First, let’s create a well-organized project structure for our microservices:
# Navigate to your project directory
cd ecs-cicd-project
# Create application directory structure
mkdir -p application/{flask-app,nginx,scripts}
mkdir -p application/flask-app/{app,tests,config}
# Create additional directories for best practices
mkdir -p application/{docs,monitoring,scripts}
This structure separates the Flask application, the Nginx configuration, and helper scripts, with dedicated directories for tests, configuration, documentation, and monitoring assets.
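To confirm the layout came out as expected, you can list the directories you just created (an optional sanity check):
# Show the directory tree created above
find application -type d | sort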
We’ll create a robust Flask application with comprehensive features for production deployment.
Flask Application (application/flask-app/app.py)
Our Flask API includes RESTful endpoints, PostgreSQL integration with connection retry logic, Redis caching, visit tracking, and a health check endpoint:
import os
import sys
import time
from datetime import datetime
from flask import Flask, jsonify, request
import psycopg2
from psycopg2.extras import RealDictCursor
import redis
from functools import wraps
app = Flask(__name__)
# Environment variables
DB_HOST = os.getenv('DB_HOST', 'localhost')
DB_PORT = os.getenv('DB_PORT', '5432')
DB_NAME = os.getenv('DB_NAME', 'microservices_db')
DB_USER = os.getenv('DB_USER', 'dbadmin')
DB_PASSWORD = os.getenv('DB_PASSWORD', '')
REDIS_HOST = os.getenv('REDIS_HOST', 'localhost')
REDIS_PORT = int(os.getenv('REDIS_PORT', '6379'))
# Initialize connections
db_conn = None
redis_client = None
def get_db_connection():
"""Get database connection with retry logic"""
global db_conn
max_retries = 5
retry_delay = 5
for attempt in range(max_retries):
try:
if db_conn is None or db_conn.closed:
db_conn = psycopg2.connect(
host=DB_HOST,
port=DB_PORT,
database=DB_NAME,
user=DB_USER,
password=DB_PASSWORD,
cursor_factory=RealDictCursor
)
return db_conn
except psycopg2.OperationalError as e:
if attempt < max_retries - 1:
print(f"Database connection attempt {attempt + 1} failed. Retrying in {retry_delay}s...")
time.sleep(retry_delay)
else:
print(f"Failed to connect to database after {max_retries} attempts: {e}")
return None
def get_redis_connection():
"""Get Redis connection with retry logic"""
global redis_client
max_retries = 5
retry_delay = 5
for attempt in range(max_retries):
try:
if redis_client is None:
redis_client = redis.Redis(
host=REDIS_HOST,
port=REDIS_PORT,
decode_responses=True,
socket_connect_timeout=5
)
redis_client.ping()
return redis_client
except redis.ConnectionError as e:
if attempt < max_retries - 1:
print(f"Redis connection attempt {attempt + 1} failed. Retrying in {retry_delay}s...")
time.sleep(retry_delay)
else:
print(f"Failed to connect to Redis after {max_retries} attempts: {e}")
return None
def initialize_database():
"""Initialize database with sample tables"""
conn = get_db_connection()
if conn is None:
return False
try:
cursor = conn.cursor()
# Create visits table
cursor.execute("""
CREATE TABLE IF NOT EXISTS visits (
id SERIAL PRIMARY KEY,
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
endpoint VARCHAR(255),
ip_address VARCHAR(45)
)
""")
# Create sample users table
cursor.execute("""
CREATE TABLE IF NOT EXISTS users (
id SERIAL PRIMARY KEY,
username VARCHAR(100) UNIQUE NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
""")
conn.commit()
cursor.close()
print("Database initialized successfully")
return True
except Exception as e:
print(f"Error initializing database: {e}")
return False
# Initialize database on startup
initialize_database()
# Decorator for tracking visits
def track_visit(f):
@wraps(f)
def decorated_function(*args, **kwargs):
try:
conn = get_db_connection()
if conn:
cursor = conn.cursor()
cursor.execute(
"INSERT INTO visits (endpoint, ip_address) VALUES (%s, %s)",
(request.path, request.remote_addr)
)
conn.commit()
cursor.close()
except Exception as e:
print(f"Error tracking visit: {e}")
return f(*args, **kwargs)
return decorated_function
@app.route('/')
@track_visit
def home():
"""Home endpoint"""
return jsonify({
'message': 'Welcome to ECS Microservices Demo',
'service': 'flask-api',
'version': '1.0.0',
'timestamp': datetime.utcnow().isoformat()
})
@app.route('/health')
def health():
"""Health check endpoint"""
health_status = {
'status': 'healthy',
'service': 'flask-api',
'timestamp': datetime.utcnow().isoformat(),
'checks': {}
}
# Check database
try:
conn = get_db_connection()
if conn:
cursor = conn.cursor()
cursor.execute('SELECT 1')
cursor.close()
health_status['checks']['database'] = 'connected'
else:
health_status['checks']['database'] = 'disconnected'
health_status['status'] = 'degraded'
except Exception as e:
health_status['checks']['database'] = f'error: {str(e)}'
health_status['status'] = 'degraded'
# Check Redis
try:
r = get_redis_connection()
if r:
r.ping()
health_status['checks']['redis'] = 'connected'
else:
health_status['checks']['redis'] = 'disconnected'
health_status['status'] = 'degraded'
except Exception as e:
health_status['checks']['redis'] = f'error: {str(e)}'
health_status['status'] = 'degraded'
status_code = 200 if health_status['status'] == 'healthy' else 503
return jsonify(health_status), status_code
@app.route('/api/stats')
@track_visit
def stats():
"""Get visit statistics"""
try:
conn = get_db_connection()
if conn:
cursor = conn.cursor()
cursor.execute('SELECT COUNT(*) as total_visits FROM visits')
result = cursor.fetchone()
cursor.close()
return jsonify({
'total_visits': result['total_visits'],
'timestamp': datetime.utcnow().isoformat()
})
else:
return jsonify({'error': 'Database not available'}), 503
except Exception as e:
return jsonify({'error': str(e)}), 500
@app.route('/api/cache-test')
@track_visit
def cache_test():
"""Test Redis caching"""
try:
r = get_redis_connection()
if r is None:
return jsonify({'error': 'Redis not available'}), 503
key = 'cache_test_counter'
# Increment counter in Redis
counter = r.incr(key)
r.expire(key, 3600) # Expire in 1 hour
return jsonify({
'message': 'Cache working',
'counter': counter,
'timestamp': datetime.utcnow().isoformat()
})
except Exception as e:
return jsonify({'error': str(e)}), 500
@app.route('/api/users', methods=['GET', 'POST'])
@track_visit
def users():
"""Manage users"""
conn = get_db_connection()
if conn is None:
return jsonify({'error': 'Database not available'}), 503
try:
if request.method == 'GET':
cursor = conn.cursor()
cursor.execute('SELECT * FROM users ORDER BY created_at DESC LIMIT 10')
users_list = cursor.fetchall()
cursor.close()
return jsonify({
'users': users_list,
'count': len(users_list)
})
elif request.method == 'POST':
data = request.get_json()
username = data.get('username')
if not username:
return jsonify({'error': 'Username required'}), 400
cursor = conn.cursor()
cursor.execute(
'INSERT INTO users (username) VALUES (%s) RETURNING id, username, created_at',
(username,)
)
new_user = cursor.fetchone()
conn.commit()
cursor.close()
return jsonify({
'message': 'User created',
'user': new_user
}), 201
    except Exception as e:
        # Roll back so a failed insert (e.g. a duplicate username) doesn't leave the shared connection in an aborted transaction
        conn.rollback()
        return jsonify({'error': str(e)}), 500
@app.route('/api/info')
@track_visit
def info():
"""Get application info"""
return jsonify({
'service': 'flask-api',
'version': '1.0.0',
'environment': os.getenv('ENVIRONMENT', 'development'),
'hostname': os.getenv('HOSTNAME', 'unknown'),
'database': {
'host': DB_HOST,
'port': DB_PORT,
'database': DB_NAME
},
'redis': {
'host': REDIS_HOST,
'port': REDIS_PORT
},
'timestamp': datetime.utcnow().isoformat()
})
@app.errorhandler(404)
def not_found(error):
return jsonify({'error': 'Not found'}), 404
@app.errorhandler(500)
def internal_error(error):
return jsonify({'error': 'Internal server error'}), 500
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000, debug=True)
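Before building the image, a quick byte-compile check catches syntax errors early (run from the project root; this only checks syntax and does not start the app):
python3 -m py_compile application/flask-app/app.py && echo "app.py compiles cleanly"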
Python Dependencies (application/flask-app/requirements.txt)
Our production dependencies include:
# Web Framework
Flask==3.0.0
# Database Connectivity
psycopg2-binary==2.9.9
# Caching
redis==5.0.1
# Production WSGI Server
gunicorn==21.2.0
# Additional Production Dependencies
Werkzeug==3.0.1
Jinja2==3.1.2
MarkupSafe==2.1.3
itsdangerous==2.1.2
click==8.1.7
blinker==1.7.0
Key dependencies explained: Flask provides the web framework, psycopg2-binary handles PostgreSQL connectivity, redis is the cache client, and gunicorn is the production WSGI server; the remaining packages are pinned Flask dependencies.
Flask Dockerfile (application/flask-app/Dockerfile)
Our production Dockerfile follows security and performance best practices:
# Use official Python 3.11 slim image for smaller size
FROM python:3.11-slim
# Set working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
postgresql-client \
curl \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
# Copy requirements first for better layer caching
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY app.py .
# Create non-root user for security
RUN useradd -m -u 1000 appuser \
&& chown -R appuser:appuser /app
# Switch to non-root user
USER appuser
# Expose port
EXPOSE 5000
# Health check for container orchestration
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
CMD curl -f http://localhost:5000/health || exit 1
# Production WSGI server configuration
CMD ["gunicorn", \
"--bind", "0.0.0.0:5000", \
"--workers", "2", \
"--worker-class", "sync", \
"--worker-connections", "1000", \
"--timeout", "60", \
"--keep-alive", "2", \
"--max-requests", "1000", \
"--max-requests-jitter", "100", \
"--preload", \
"app:app"]
Dockerfile best practices applied: a slim base image for a smaller footprint, requirements copied before the application code for better layer caching, a non-root appuser for security, a HEALTHCHECK for container orchestration, and Gunicorn (with preloading and worker recycling) as the production WSGI server.
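A short local sanity check of these practices (a sketch, run from the project root; the dev tag is just an example):
# Build the image and inspect its size
docker build -t flask-app:dev ./application/flask-app
docker image ls flask-app:dev
# Confirm the container runs as the non-root appuser (expect uid=1000)
docker run --rm flask-app:dev id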
Nginx will serve as our reverse proxy, providing load balancing, security headers, compression, and SSL termination capabilities.
Nginx Configuration (application/nginx/nginx.conf)
Our Nginx configuration includes security headers, gzip compression, an upstream to the Flask service, sensible proxy timeouts, and a dedicated health check endpoint:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml text/javascript
application/json application/javascript application/xml+rss;
# Upstream Flask application
upstream flask_app {
server flask-app:5000;
}
server {
listen 80;
server_name _;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
# Proxy settings
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Health check endpoint
location /nginx-health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Proxy all requests to Flask
location / {
proxy_pass http://flask_app;
proxy_redirect off;
proxy_buffering off;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# Error pages
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
}
Nginx Dockerfile (application/nginx/Dockerfile)
Our Nginx container is optimized for production use:
# Use official Nginx Alpine image for smaller size
FROM nginx:1.25-alpine
# Remove default configuration
RUN rm /etc/nginx/nginx.conf
# Copy custom production configuration
COPY nginx.conf /etc/nginx/nginx.conf
# Create custom error pages
RUN echo '<!DOCTYPE html>\
<html>\
<head>\
<title>Service Unavailable</title>\
<style>body{font-family:Arial,sans-serif;text-align:center;padding:50px;}</style>\
</head>\
<body>\
<h1>Service Temporarily Unavailable</h1>\
<p>We are experiencing high traffic. Please try again later.</p>\
</body>\
</html>' > /usr/share/nginx/html/50x.html
# Create additional error pages
RUN echo '<!DOCTYPE html>\
<html>\
<head>\
<title>Bad Gateway</title>\
<style>body{font-family:Arial,sans-serif;text-align:center;padding:50px;}</style>\
</head>\
<body>\
<h1>Bad Gateway</h1>\
<p>The server received an invalid response from the upstream server.</p>\
</body>\
</html>' > /usr/share/nginx/html/502.html
# Expose port
EXPOSE 80
# Health check for container orchestration
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost/nginx-health || exit 1
# Start Nginx in foreground mode
CMD ["nginx", "-g", "daemon off;"]
Nginx container features: a lightweight Alpine base image, a custom production configuration, baked-in error pages, and a health check suited to container orchestration.
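You can build the proxy image on its own and confirm the custom error page was baked in (an optional check run from the project root; the image tag is illustrative):
docker build -t nginx-proxy:dev ./application/nginx
docker run --rm nginx-proxy:dev cat /usr/share/nginx/html/50x.html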
We’ll use Docker Compose to create a complete local development environment that mirrors our production setup.
Docker Compose Configuration (application/docker-compose.yml)
Our Docker Compose configuration provides PostgreSQL, Redis, the Flask API, and the Nginx proxy on a shared bridge network, with health checks gating startup order:
version: "3.8"
services:
# PostgreSQL Database
postgres:
image: postgres:15-alpine
container_name: local-postgres
environment:
POSTGRES_DB: microservices_db
POSTGRES_USER: dbadmin
POSTGRES_PASSWORD: localpassword123
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U dbadmin"]
interval: 10s
timeout: 5s
retries: 5
networks:
- app-network
# Redis Cache
redis:
image: redis:7-alpine
container_name: local-redis
ports:
- "6379:6379"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
networks:
- app-network
# Flask Application
flask-app:
build:
context: ./flask-app
dockerfile: Dockerfile
container_name: local-flask-app
environment:
DB_HOST: postgres
DB_PORT: 5432
DB_NAME: microservices_db
DB_USER: dbadmin
DB_PASSWORD: localpassword123
REDIS_HOST: redis
REDIS_PORT: 6379
ENVIRONMENT: development
ports:
- "5000:5000"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- app-network
restart: unless-stopped
# Nginx Reverse Proxy
nginx:
build:
context: ./nginx
dockerfile: Dockerfile
container_name: local-nginx
ports:
- "80:80"
depends_on:
- flask-app
networks:
- app-network
restart: unless-stopped
volumes:
postgres_data:
networks:
app-network:
driver: bridge
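Before starting the stack, you can validate the Compose file itself (a quick optional check; the -f flag lets you run it from the project root):
docker-compose -f application/docker-compose.yml config --quiet && echo "docker-compose.yml is valid"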
Let’s thoroughly test our containerized applications to ensure they work correctly before pushing to ECR.
cd application
docker-compose up --build
This command builds the Flask and Nginx images, pulls the PostgreSQL and Redis images, and starts all four containers on a shared bridge network.
Expected output: the postgres and redis health checks pass, the Flask container logs “Database initialized successfully”, and Nginx starts accepting connections on port 80.
Open a new terminal and run these tests to validate all functionality:
# Test 1: Basic connectivity through Nginx
echo "Testing Nginx proxy..."
curl -v http://localhost/
# Test 2: Health check endpoint
echo "Testing health endpoint..."
curl -v http://localhost/health
# Test 3: Database connectivity and statistics
echo "Testing database stats..."
curl -v http://localhost/api/stats
# Test 4: Redis caching functionality
echo "Testing Redis cache..."
curl -v http://localhost/api/cache-test
# Test 5: User management (POST)
echo "Creating a test user..."
curl -X POST http://localhost/api/users \
-H "Content-Type: application/json" \
-d '{"username": "testuser123"}' \
-v
# Test 6: User management (GET)
echo "Retrieving users..."
curl -v http://localhost/api/users
# Test 7: Application information and environment
echo "Getting application info..."
curl -v http://localhost/api/info
# Test 8: Error handling
echo "Testing error handling..."
curl -v http://localhost/nonexistent-endpoint
Expected results: every endpoint returns a JSON response with HTTP 200 (201 for the user-creation POST), and the nonexistent endpoint returns the JSON 404 error from our error handler.
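Beyond status codes, you can also confirm the proxy-level features from nginx.conf, namely security headers and gzip compression (illustrative checks; jq is optional):
# Security headers added by Nginx
curl -sI http://localhost/ | grep -iE 'x-frame-options|x-content-type-options|x-xss-protection'
# Gzip compression on JSON responses (look for Content-Encoding: gzip)
curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' http://localhost/api/info | grep -i content-encoding
# Overall health status as reported by the Flask service
curl -s http://localhost/health | jq -r '.status'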
# Check container status and health
echo "Checking container status..."
docker-compose ps
# Monitor Flask application logs
echo "Monitoring Flask logs..."
docker-compose logs -f flask-app
# Monitor Nginx logs
echo "Monitoring Nginx logs..."
docker-compose logs -f nginx
# Check database logs
echo "Checking database logs..."
docker-compose logs postgres
# Check Redis logs
echo "Checking Redis logs..."
docker-compose logs redis
# Test container health checks
echo "Testing container health..."
docker inspect --format='{{.State.Health.Status}}' local-flask-app
docker inspect --format='{{.State.Health.Status}}' local-nginx
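To double-check that startup initialization actually created the tables, you can query PostgreSQL directly inside its container (the credentials come from docker-compose.yml):
# List tables created by initialize_database() (expect: visits, users)
docker-compose exec postgres psql -U dbadmin -d microservices_db -c '\dt'
# Count tracked visits so far
docker-compose exec postgres psql -U dbadmin -d microservices_db -c 'SELECT COUNT(*) FROM visits;'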
# Stop all services
docker-compose down
# Remove volumes (database data) - use with caution
docker-compose down -v
# Remove all images (optional)
docker-compose down --rmi all
# Clean up unused Docker resources
docker system prune -f
Now that our applications are tested and working locally, let’s push them to Amazon ECR for production deployment.
# Get AWS account ID
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
echo "AWS Account ID: $AWS_ACCOUNT_ID"
# Login to ECR
aws ecr get-login-password --region ap-south-1 | \
docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.ap-south-1.amazonaws.com
ECR authentication: get-login-password retrieves a temporary authorization token (valid for 12 hours) and pipes it straight to docker login, so no long-lived registry credentials are stored.
cd ../terraform
terraform output flask_app_repository_url
terraform output nginx_repository_url
Save these URLs; you’ll need them when tagging and pushing the images below.
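To avoid copy-pasting, you can capture the two outputs into shell variables while still in the terraform directory (a convenience sketch; run the later docker commands from the same shell):
FLASK_ECR_URL=$(terraform output -raw flask_app_repository_url)
NGINX_ECR_URL=$(terraform output -raw nginx_repository_url)
echo "Flask repo: $FLASK_ECR_URL"
echo "Nginx repo: $NGINX_ECR_URL"
With these set, $FLASK_ECR_URL and $NGINX_ECR_URL can stand in for the <FLASK_ECR_URL> and <NGINX_ECR_URL> placeholders below.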
cd ../application
# Build Flask image
docker build -t flask-app:latest ./flask-app
# Build Nginx image
docker build -t nginx:latest ./nginx
# Tag for ECR (replace with your repository URLs)
docker tag flask-app:latest <FLASK_ECR_URL>:latest
docker tag nginx:latest <NGINX_ECR_URL>:latest
# Push Flask image
docker push <FLASK_ECR_URL>:latest
# Push Nginx image
docker push <NGINX_ECR_URL>:latest
# List Flask images
aws ecr list-images \
--repository-name ecs-microservices/flask-app \
--region ap-south-1
# List Nginx images
aws ecr list-images \
--repository-name ecs-microservices/nginx \
--region ap-south-1
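For a bit more detail than list-images, describe-images shows tags and push timestamps (an optional check):
aws ecr describe-images \
  --repository-name ecs-microservices/flask-app \
  --region ap-south-1 \
  --query 'sort_by(imageDetails,&imagePushedAt)[].{tags:imageTags,pushedAt:imagePushedAt}' \
  --output table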
Create application/build-and-push.sh:
#!/bin/bash
# Exit on error
set -e
# Configuration
AWS_REGION="ap-south-1"
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
PROJECT_NAME="ecs-microservices"
# ECR URLs
FLASK_REPO="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${PROJECT_NAME}/flask-app"
NGINX_REPO="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${PROJECT_NAME}/nginx"
# Get image tag (default to 'latest' or use git commit hash)
IMAGE_TAG="${1:-latest}"
echo "Building and pushing images with tag: ${IMAGE_TAG}"
# Login to ECR
echo "Logging in to ECR..."
aws ecr get-login-password --region ${AWS_REGION} | \
docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
# Build Flask image
echo "Building Flask image..."
docker build -t flask-app:${IMAGE_TAG} ./flask-app
docker tag flask-app:${IMAGE_TAG} ${FLASK_REPO}:${IMAGE_TAG}
# Build Nginx image
echo "Building Nginx image..."
docker build -t nginx:${IMAGE_TAG} ./nginx
docker tag nginx:${IMAGE_TAG} ${NGINX_REPO}:${IMAGE_TAG}
# Push images
echo "Pushing Flask image..."
docker push ${FLASK_REPO}:${IMAGE_TAG}
echo "Pushing Nginx image..."
docker push ${NGINX_REPO}:${IMAGE_TAG}
echo "✅ Images successfully built and pushed!"
echo "Flask: ${FLASK_REPO}:${IMAGE_TAG}"
echo "Nginx: ${NGINX_REPO}:${IMAGE_TAG}"
Make it executable:
chmod +x build-and-push.sh
# Usage:
./build-and-push.sh # Push with 'latest' tag
./build-and-push.sh v1.0.0 # Push with 'v1.0.0' tag
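Tagging with the current Git commit, as the script’s comment suggests, makes every pushed image traceable back to a revision:
# Push images tagged with the short commit hash
./build-and-push.sh "$(git rev-parse --short HEAD)"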
A few common issues and their fixes:
Issue: the Flask container can’t connect to PostgreSQL. Solution: ensure the database credentials in docker-compose.yml match the Flask service’s environment variables, then check the logs:
docker-compose logs postgres
docker-compose logs flask-app
Issue: Redis connection errors during startup. Solution: wait for the Redis container to become healthy, then check its status:
docker-compose ps
Issue: pushing to ECR fails with an authorization error. Solution: confirm your AWS credentials have the required ECR permissions:
aws ecr describe-repositories --region ap-south-1
Issue: images built on Apple Silicon (ARM) won’t run on x86-based ECS Fargate tasks. Solution: build explicitly for AMD64:
docker build --platform linux/amd64 -t flask-app:latest ./flask-app
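You can confirm what architecture a local image was built for before pushing (useful on Apple Silicon):
docker image inspect --format '{{.Os}}/{{.Architecture}}' flask-app:latest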
Health check endpoints /health and /nginx-health are available for monitoring.
✅ Part 3 Complete! You now have a production-ready Flask API behind an Nginx reverse proxy, a tested local Docker Compose environment, and both images pushed to Amazon ECR.
Proceed to Part 4: ECS Deployment, where we’ll deploy these containerized applications to ECS Fargate and configure the complete production environment.
This foundation provides a robust, scalable, and secure platform for deploying microservices on AWS ECS. The containerized applications are now ready for production deployment in the next phase!
Ready for deployment? Head over to Part 4 to continue!
Questions or feedback? Feel free to reach out in the comments below!