AWS Production-Grade Migration: DMS Database Migration (Part 4)

Learn how to migrate databases using AWS Database Migration Service (DMS). This comprehensive guide covers DMS replication instances, endpoint configuration, full load migration, and data validation for production-grade database migration.

Overview

In this comprehensive guide, we’ll use AWS Database Migration Service (DMS) to migrate the PostgreSQL database from the source environment to the production Multi-AZ RDS instance in the target environment. We’ll perform a Full Load migration, transferring all tables and data from the legacy database to the production database with complete data integrity validation.

Estimated Time: 60-90 minutes (including replication instance launch and data migration)
Region: ap-south-1 (Mumbai)
Difficulty: Intermediate


Why This Matters for Cloud Migration

AWS Database Migration Service (DMS) is the enterprise-grade solution for database migration, offering:

  • Zero-downtime migration with change data capture (CDC) capabilities
  • Data validation ensuring complete data integrity
  • Schema and data transformation for complex migration scenarios
  • Production-grade reliability with AWS-managed infrastructure
  • Comprehensive monitoring with CloudWatch integration

Prerequisites Checklist

  • Phase 1, 2, and 3 completed successfully
  • Source database (legacy-db) has sample data
  • Target database (production-db) is available (Multi-AZ)
  • VPC peering is configured and tested
  • DMS subnet group created (from Phase 2)
  • Can connect to both databases from source web server

Cost Warning

⚠️ Resources that will incur charges:

  • DMS Replication Instance (dms.t3.micro): $0.036/hour ($26.28/month)
  • Data transfer between AZs: ~$0.01/GB
  • RDS storage for both databases while both are running

Estimated DMS Cost: ~$1 per day during active migration

💡 Cost Tip: DMS is pay-as-you-go. Delete the replication instance immediately after migration completes to stop charges.


How AWS DMS Works

Architecture Flow

┌────────────────────────────────────────────────────────────────┐
│ Source Database (on-prem-vpc)                                  │
│ ┌────────────────────────────────────────────────────────┐    │
│ │ RDS: legacy-db (PostgreSQL)                            │    │
│ │ - Database: legacydb                                   │    │
│ │ - Tables: users, products, orders                      │    │
│ │ - Private subnet: 10.0.2.0/24                         │    │
│ └────────────────────────────────────────────────────────┘    │
└────────────────────────────────────────────────────────────────┘
                            │
                            │ VPC Peering
                            ▼
┌────────────────────────────────────────────────────────────────┐
│ DMS Replication Instance (aws-target-vpc)                      │
│ ┌────────────────────────────────────────────────────────┐    │
│ │ EC2-based DMS Instance (dms.t3.micro)                  │    │
│ │ - Connects to source database                          │    │
│ │ - Reads all table data                                 │    │
│ │ - Transforms data (if needed)                          │    │
│ │ - Writes to target database                            │    │
│ │ - Public subnet: 10.1.1.0/24                          │    │
│ └────────────────────────────────────────────────────────┘    │
└────────────────────────────────────────────────────────────────┘
                            │
                            │ Private Connection
                            ▼
┌────────────────────────────────────────────────────────────────┐
│ Target Database (aws-target-vpc)                               │
│ ┌────────────────────────────────────────────────────────┐    │
│ │ RDS: production-db (PostgreSQL Multi-AZ)               │    │
│ │ - Database: productiondb                               │    │
│ │ - Tables: (will receive migrated data)                 │    │
│ │ - Private subnets: 10.1.21.0/24, 10.1.22.0/24         │    │
│ └────────────────────────────────────────────────────────┘    │
└────────────────────────────────────────────────────────────────┘

Key Concepts

  1. DMS Replication Instance: EC2-based instance that runs the migration software
  2. Source Endpoint: Configuration that tells DMS how to connect to the source database
  3. Target Endpoint: Configuration that tells DMS how to connect to the target database
  4. Migration Task: The actual job that reads from source and writes to target
  5. Full Load: One-time copy of all existing data (what we’ll use)
  6. CDC (Change Data Capture): Continuous replication of ongoing changes (for zero-downtime migrations)
  7. Table Mappings: Rules that define which tables/schemas to migrate


Step 1: Verify Database Connectivity

Before starting DMS, let’s ensure both databases are accessible.

1.1 SSH into Source Web Server

ssh -i on-prem-web-key.pem ec2-user@YOUR_SOURCE_PUBLIC_IP

1.2 Test Source Database Connection

psql --host=YOUR_SOURCE_RDS_ENDPOINT --port=5432 --username=postgres --dbname=legacydb -c "\dt"

Expected output: List of tables (users, products, orders)

1.3 Test Target Database Connection

psql --host=YOUR_TARGET_RDS_ENDPOINT --port=5432 --username=postgres --dbname=productiondb -c "\dt"

Expected output: Did not find any relations. (empty database)

1.4 Count Source Data

Document the baseline data to verify after migration:

# Count users
psql --host=YOUR_SOURCE_RDS_ENDPOINT --port=5432 --username=postgres --dbname=legacydb -c "SELECT COUNT(*) FROM users;"

# Count products
psql --host=YOUR_SOURCE_RDS_ENDPOINT --port=5432 --username=postgres --dbname=legacydb -c "SELECT COUNT(*) FROM products;"

# Count orders
psql --host=YOUR_SOURCE_RDS_ENDPOINT --port=5432 --username=postgres --dbname=legacydb -c "SELECT COUNT(*) FROM orders;"

📝 Note these counts:

  • Users: 4
  • Products: 5
  • Orders: 5
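
If you prefer a single command, the sketch below captures all three baseline counts at once; it is the same UNION query used in the Quick Reference section at the end of this guide.

# Capture all three baseline row counts in one psql call
psql --host=YOUR_SOURCE_RDS_ENDPOINT --port=5432 --username=postgres --dbname=legacydb \
  -c "SELECT 'users' AS tbl, COUNT(*) FROM users
      UNION ALL SELECT 'products', COUNT(*) FROM products
      UNION ALL SELECT 'orders', COUNT(*) FROM orders;"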

Step 2: Create DMS Replication Instance

The replication instance is the compute resource that performs the actual data migration.

2.1 Navigate to DMS Console

  1. Go to Services → Database Migration Service
  2. Ensure you’re in the ap-south-1 region
  3. If this is your first time using DMS, you may see a welcome page - open the navigation menu (☰) to continue

2.2 Create Replication Instance

  1. Click Replication instances in the left sidebar
  2. Click Create replication instance (orange button)

2.3 Configure Instance Details

Instance configuration:

  • Name: legacy-to-production-replication
  • Description: Replication instance for database migration from legacy-db to production-db
  • Instance class: dms.t3.micro
    • ⚠️ For production migrations, use dms.t3.medium or larger
    • dms.t3.micro: 2 vCPUs, 1 GB RAM - sufficient for our small dataset

High Availability:

  • Multi-AZ: No (single-AZ is sufficient for one-time migration)
  • For ongoing replication or production, enable Multi-AZ

Engine version:

  • Engine version: 3.5.2 (or latest available)

Allocated storage:

  • Storage: 50 GB (default, sufficient)
    • Used for replication logs and temporary storage

2.4 Configure Connectivity

Network and connectivity:

  • VPC: Select aws-target-vpc
  • Replication subnet group: Select dms-target-subnet-group (created in Phase 2)
  • Publicly accessible: No (we’re using VPC peering for private connectivity)

Advanced security and network configuration:

  • Availability zone: No preference (let AWS choose)
  • VPC security group(s): Select target-dms-sg
  • KMS key: (default) aws/dms (for encryption at rest)

2.5 Maintenance and Monitoring

Maintenance:

  • Auto minor version upgrade: Yes
  • Maintenance window: No preference

Tags (optional):

  • Key: Project, Value: Database-Migration
  • Key: Environment, Value: Migration

2.6 Create Instance

  1. Review all settings
  2. Click Create replication instance
  3. You’ll see the instance in “Creating” status
  4. Wait for status to change to “Available” (takes 5-10 minutes)
  5. ☕ Good time for a quick break!

2.7 Note Instance Details

Once available, click on the instance name and note:

  • 📝 Replication instance ARN: arn:aws:dms:ap-south-1:…
  • 📝 Private IP address: 10.1.x.x
  • Status: Available
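
If you prefer the AWS CLI over the console, the settings above translate roughly to the sketch below. The subnet group name comes from Phase 2; the security group ID is a placeholder you must look up in your account.

# Sketch: create the replication instance from the CLI (mirrors the console settings above)
aws dms create-replication-instance \
  --replication-instance-identifier legacy-to-production-replication \
  --replication-instance-class dms.t3.micro \
  --allocated-storage 50 \
  --no-multi-az \
  --no-publicly-accessible \
  --replication-subnet-group-identifier dms-target-subnet-group \
  --vpc-security-group-ids sg-xxxxxxxx \
  --tags Key=Project,Value=Database-Migration Key=Environment,Value=Migration \
  --region ap-south-1

# Poll until the instance reports "available"
aws dms describe-replication-instances \
  --filters Name=replication-instance-id,Values=legacy-to-production-replication \
  --query "ReplicationInstances[0].ReplicationInstanceStatus" \
  --region ap-south-1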

Step 3: Create Source Endpoint

The source endpoint tells DMS how to connect to the legacy database.

3.1 Navigate to Endpoints

  1. In DMS console, click Endpoints in the left sidebar
  2. Click Create endpoint (orange button)

3.2 Configure Endpoint Type

  • Endpoint type: Source endpoint
  • Endpoint identifier: legacy-db-source-endpoint
  • Descriptive Amazon Resource Name (ARN) - optional: Leave blank
  • Source engine: PostgreSQL

3.3 Configure Access Method

  • Access to endpoint database: Provide access information manually

3.4 Configure Server Connection

Server name: Enter your source RDS endpoint

  • Example: legacy-db.abc123xyz.ap-south-1.rds.amazonaws.com
  • (The endpoint you noted in Phase 1)

Port: 5432

SSL mode: require

  • This ensures encrypted connection

Database name: legacydb

3.5 Configure Authentication

User name: postgres

Password: Enter the password you created for legacy-db in Phase 1

3.6 Configure Test Endpoint Connection (Important!)

This verifies DMS can reach the source database through VPC peering.

Test endpoint connection:

  • VPC: Select aws-target-vpc
  • Replication instance: Select legacy-to-production-replication (Automatically selected)

3.7 Create and Test

  1. Click Run test button (at the bottom)
  2. Wait for the test to complete (30-60 seconds)
  3. Status should show “successful”

If the test fails, see troubleshooting section below

  1. Once successful, click Create endpoint
  2. You’ll see the endpoint listed with status “Active”
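
The console steps above can also be scripted. A hedged CLI sketch follows; the target endpoint in Step 4 uses the same pattern with --endpoint-type target and the production values. The ARNs are the ones you noted after creating the replication instance and endpoint.

# Sketch: create and test the source endpoint from the CLI
aws dms create-endpoint \
  --endpoint-identifier legacy-db-source-endpoint \
  --endpoint-type source \
  --engine-name postgres \
  --server-name YOUR_SOURCE_RDS_ENDPOINT \
  --port 5432 \
  --username postgres \
  --password 'YOUR_SOURCE_DB_PASSWORD' \
  --database-name legacydb \
  --ssl-mode require \
  --region ap-south-1

# Test connectivity through the replication instance
aws dms test-connection \
  --replication-instance-arn YOUR_REPLICATION_INSTANCE_ARN \
  --endpoint-arn YOUR_SOURCE_ENDPOINT_ARN \
  --region ap-south-1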

Step 4: Create Target Endpoint

The target endpoint tells DMS how to connect to the production database.

4.1 Create New Endpoint

  1. Click Endpoints → Create endpoint

4.2 Configure Endpoint Type

  • Endpoint type: Target endpoint
  • Endpoint identifier: production-db-target-endpoint
  • Target engine: PostgreSQL

4.3 Configure Access Method

  • Access to endpoint database: Provide access information manually

4.4 Configure Server Connection

Server name: Enter your target RDS endpoint

  • Example: production-db.xyz789abc.ap-south-1.rds.amazonaws.com
  • (The endpoint you noted in Phase 2)

Port: 5432

SSL mode: require

Database name: productiondb

4.5 Configure Authentication

User name: postgres

Password: Enter the password you created for production-db in Phase 2

4.6 Endpoint Settings (Important for PostgreSQL)

Click Endpoint settings to expand.

Add this important setting:

  • Extra connection attributes: Leave empty for now
  • But be aware you can add settings like:
    • captureDDLs=Y - to capture DDL changes
    • maxFileSize=512000 - for large objects

4.7 Test Connection

Test endpoint connection:

  • VPC: Select aws-target-vpc
  • Replication instance: Select legacy-to-production-replication
  1. Click Run test
  2. Wait for “successful” status
  3. Click Create endpoint

Step 5: Create Database Migration Task

The migration task is the actual job that moves data from source to target.

5.1 Navigate to Database Migration Tasks

  1. Click Database migration tasks in the left sidebar
  2. Click Create task (orange button)

5.2 Configure Task Configuration

Task identifier: migrate-legacy-to-production

Replication instance: Select legacy-to-production-replication

Source database endpoint: Select legacy-db-source-endpoint

Target database endpoint: Select production-db-target-endpoint

Migration type:

  • Select Migrate existing data (Full Load)
  • This is a one-time copy of all data
  • For zero-downtime production migrations, you’d use “Migrate existing data and replicate ongoing changes” (Full Load + CDC)

5.3 Configure Task Settings

Task settings:

Editing mode: Wizard (easier) or JSON (for advanced users)

Target table preparation mode:

  • Select Drop tables on target
  • This tells DMS to drop and recreate tables if they exist
  • Alternatives:
    • “Do nothing” - leaves existing target tables and their data untouched
    • “Truncate” - deletes data but keeps table structure

Include LOB columns in replication:

  • Select Limited LOB mode
  • Max LOB size: 32 KB (default, sufficient for our data)
  • LOB = Large Objects (like text, JSON, binary data)

Enable validation: ☑ Check this (highly recommended)

  • DMS will validate that data was copied correctly
  • Compares row counts and checksums

Enable CloudWatch logs: ☑ Check this

  • Essential for troubleshooting
  • Creates logs in CloudWatch for task progress and errors

5.4 Configure Table Mappings

This defines which tables/schemas to migrate.

Editing mode: Wizard (we’ll use guided approach)

Click “Add new selection rule”:

Selection rule 1:

  • Schema: Enter schema name: public
    • In PostgreSQL, default schema is “public”
  • Table name: % (wildcard - all tables)
  • Action: Include

This will migrate all tables in the public schema (users, products, orders).

The migration preview should show:

  • Schema: public
  • Tables: All tables (users, products, orders)
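
If you switch the editing mode to JSON (or later drive DMS from the CLI), the same selection rule looks roughly like the sketch below; rule-id and rule-name are arbitrary identifiers you choose.

# Sketch: JSON equivalent of the wizard selection rule above, saved for later CLI use
cat > table-mappings.json <<'EOF'
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-public-schema",
      "object-locator": {
        "schema-name": "public",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
EOF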

5.5 Configure Migration Task Startup

Migration task startup configuration:

  • Select Automatically on create
  • The task will start immediately after creation
  • Alternative: “Manually later” if you want to review first

5.6 Create and Start Task

  1. Review all settings
  2. Click Create task
  3. The task will be created and immediately start
  4. You’ll be redirected to the task details page
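
For completeness, a hedged CLI equivalent of creating and starting the task is sketched below. It reuses the table-mappings.json written in Step 5.4 and the ARNs noted earlier.

# Sketch: create the full-load task and start it from the CLI
aws dms create-replication-task \
  --replication-task-identifier migrate-legacy-to-production \
  --source-endpoint-arn YOUR_SOURCE_ENDPOINT_ARN \
  --target-endpoint-arn YOUR_TARGET_ENDPOINT_ARN \
  --replication-instance-arn YOUR_REPLICATION_INSTANCE_ARN \
  --migration-type full-load \
  --table-mappings file://table-mappings.json \
  --region ap-south-1

# Start the task once it reaches "ready"
aws dms start-replication-task \
  --replication-task-arn YOUR_TASK_ARN \
  --start-replication-task-type start-replication \
  --region ap-south-1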

Step 6: Monitor Migration Progress

6.1 View Task Details

  1. Click on the task: migrate-legacy-to-production
  2. You’ll see the task overview with status

Task Status Progression:

  1. Creating → Task is being set up
  2. Starting → Initializing connections
  3. Running → Actively copying data
  4. Load complete → All data migrated successfully
  5. Stopped → Task finished (either success or failure)

6.2 Monitor Table Statistics

Scroll down to the Table statistics tab:

You should see a table showing:

| Schema | Table Name | Full Load Rows | Inserts | Deletes | Updates | DDLs | Validation State |
|--------|------------|----------------|---------|---------|---------|------|------------------|
| public | users      | 4              | 0       | 0       | 0       | 0    | Validated        |
| public | products   | 5              | 0       | 0       | 0       | 0    | Validated        |
| public | orders     | 5              | 0       | 0       | 0       | 0    | Validated        |

What these columns mean:

  • Full Load Rows: Number of rows migrated in initial load
  • Inserts: New rows (for CDC only)
  • Deletes: Deleted rows (for CDC only)
  • Updates: Updated rows (for CDC only)
  • Validation State: Whether data was validated successfully
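
The same per-table statistics can be polled from the command line; a sketch, assuming you have the task ARN handy:

# Sketch: poll per-table migration statistics from the CLI
aws dms describe-table-statistics \
  --replication-task-arn YOUR_TASK_ARN \
  --query "TableStatistics[].[SchemaName,TableName,FullLoadRows,ValidationState]" \
  --output table \
  --region ap-south-1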

6.3 View Real-Time Progress

The progress updates every few seconds. For our small dataset:

  • Expected time: 2-5 minutes for complete migration
  • Larger databases can take hours or days

6.4 Monitor CloudWatch Logs

  1. Click on the Logs tab
  2. You’ll see links to CloudWatch log groups
  3. Click on any log group to see detailed logs
  4. Look for messages like:
    • Starting migration task
    • Loading table public.users
    • Finished loading table public.users, loaded 4 rows
    • All tables loaded successfully

6.5 Wait for Completion

Status will change to “Load complete” when finished.

✅ Success indicators:

  • Status: Load complete
  • All tables show “Validated” in Table statistics
  • CloudWatch logs show “Load complete”
  • No errors in logs
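
You can also check the overall task status and full-load progress from the CLI; a hedged sketch:

# Sketch: check task status and full-load progress percentage
aws dms describe-replication-tasks \
  --filters Name=replication-task-id,Values=migrate-legacy-to-production \
  --query "ReplicationTasks[0].[Status,ReplicationTaskStats.FullLoadProgressPercent]" \
  --region ap-south-1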

Step 7: Verify Migrated Data

Now let’s verify the data was migrated correctly to the production database.

7.1 Connect to Production Database

SSH into your source web server (or bastion host):

ssh -i on-prem-web-key.pem ec2-user@YOUR_SOURCE_OR_BASTION_IP

Connect to the production database:

psql --host=YOUR_TARGET_RDS_ENDPOINT --port=5432 --username=postgres --dbname=productiondb

7.2 Verify Tables Exist

-- List all tables
\dt

-- Expected output: users, products, orders

7.3 Verify Row Counts

-- Count users
SELECT COUNT(*) FROM users;
-- Expected: 4

-- Count products
SELECT COUNT(*) FROM products;
-- Expected: 5

-- Count orders
SELECT COUNT(*) FROM orders;
-- Expected: 5

7.4 Verify Actual Data

-- Check users table
SELECT * FROM users ORDER BY id;
-- Should see: amodh, john_doe, jane_smith, admin

-- Check products table
SELECT product_name, price FROM products ORDER BY id;
-- Should see: Laptop, Mouse, Keyboard, Monitor, Headphones

-- Check orders with JOIN
SELECT
    o.id,
    u.username,
    p.product_name,
    o.quantity
FROM orders o
JOIN users u ON o.user_id = u.id
JOIN products p ON o.product_id = p.id
ORDER BY o.id;
-- Should see all 5 orders with correct joins

7.5 Verify Constraints and Indexes

-- Check primary keys
SELECT
    tc.table_name,
    kcu.column_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
    ON tc.constraint_name = kcu.constraint_name
WHERE tc.constraint_type = 'PRIMARY KEY'
ORDER BY tc.table_name;

-- Check foreign keys
SELECT
    tc.table_name,
    kcu.column_name,
    ccu.table_name AS foreign_table_name,
    ccu.column_name AS foreign_column_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
    ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage ccu
    ON ccu.constraint_name = tc.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY'
ORDER BY tc.table_name;

-- Exit psql
\q

✅ If all data matches, the migration was successful!


Step 8: Update Application Configuration

Now that the database is migrated, update the application to use the new production database.

8.1 SSH into Migrated Web Server

You need to connect to the migrated server (from Phase 3), not the source server.

Use one of these methods:

  • Bastion host: SSH to the bastion, then SSH to the migrated server’s private IP (10.1.11.x)
  • Session Manager: AWS Console → Systems Manager → Session Manager
  • Temporary Elastic IP: If you attached one for testing

8.2 Update Database Connection String

# Backup the original configuration
sudo cp /var/www/cgi-bin/db_app.py /var/www/cgi-bin/db_app.py.backup

# Update the database endpoint
sudo sed -i 's|DB_HOST = ".*"|DB_HOST = "YOUR_PRODUCTION_DB_ENDPOINT"|g' /var/www/cgi-bin/db_app.py

# Update the database name
sudo sed -i 's|DB_NAME = "legacydb"|DB_NAME = "productiondb"|g' /var/www/cgi-bin/db_app.py

# Update the password if it changed
sudo sed -i 's|DB_PASS = ".*"|DB_PASS = "YOUR_PRODUCTION_DB_PASSWORD"|g' /var/www/cgi-bin/db_app.py

Or edit manually:

sudo vi /var/www/cgi-bin/db_app.py

Change these lines:

DB_HOST = "production-db.xyz789abc.ap-south-1.rds.amazonaws.com"
DB_NAME = "productiondb"
DB_USER = "postgres"
DB_PASS = "ProductionDB2024!"  # Your production password
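
Before restarting Apache, you can optionally confirm the new values actually work from this server. The sketch below uses PGPASSWORD for a one-off check only; avoid leaving real passwords in shell history on production systems.

# Optional sanity check: verify the production endpoint, database, and password
PGPASSWORD='YOUR_PRODUCTION_DB_PASSWORD' psql \
  --host=YOUR_PRODUCTION_DB_ENDPOINT --port=5432 \
  --username=postgres --dbname=productiondb -c "SELECT COUNT(*) FROM users;"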

8.3 Restart Web Server

sudo systemctl restart httpd

8.4 Test Application

# Test locally on the server
curl http://localhost/cgi-bin/db_app.py

Expected: HTML page showing all users, products, and orders from the production database

8.5 Test from Load Balancer (if created)

If you created an ALB in Phase 3:

# From your local machine
curl http://YOUR_ALB_DNS_NAME/cgi-bin/db_app.py

# Or open in browser
http://YOUR_ALB_DNS_NAME/cgi-bin/db_app.py

✅ Application is now fully migrated and using the production database!


Step 9: Use Migration Hub for Tracking

9.1 View Database Migration in Migration Hub

  1. Go to Services → Migration Hub
  2. Click Databases in the left sidebar
  3. You should see your database migration listed:
    • Source: legacy-db
    • Target: production-db
    • Status: Complete
    • Records migrated: 14 (4 users + 5 products + 5 orders)

9.2 View Complete Migration Project

  1. Click Dashboard in Migration Hub
  2. You should see a summary:
    • Servers migrated: 1 (from Phase 3 - MGN)
    • Databases migrated: 1 (from Phase 4 - DMS)
    • Status: Complete

9.3 Generate Migration Report

  1. In Migration Hub, click Reports
  2. Click Create report
  3. Select:
    • Report type: Migration summary
    • Resources: Select your migrated server and database
  4. Click Generate report
  5. Download the PDF report for documentation

Step 10: Post-Migration Database Tasks

10.1 Update PostgreSQL Statistics

After a large data load, update database statistics for query optimization:

psql --host=YOUR_PRODUCTION_DB_ENDPOINT --port=5432 --username=postgres --dbname=productiondb
-- Analyze all tables to update statistics
ANALYZE users;
ANALYZE products;
ANALYZE orders;

-- Or analyze all tables at once
ANALYZE;

-- Vacuum to reclaim space (optional)
VACUUM ANALYZE;

\q

10.2 Create Additional Indexes (if needed)

-- Connect to production database
psql --host=YOUR_PRODUCTION_DB_ENDPOINT --port=5432 --username=postgres --dbname=productiondb
-- Create indexes for common queries
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_products_name ON products(product_name);
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_orders_date ON orders(order_date);

-- Verify indexes
\di

\q

10.3 Set Up Automated Backups

The production database should already have automated backups (configured in Phase 2), but verify:

  1. Go to RDS Console → Databases
  2. Click on production-db
  3. Go to Maintenance & backups tab
  4. Verify:
    • Automated backups: Enabled
    • Backup retention period: 7 days
    • Backup window: Configured
  5. Manually create a snapshot after migration:
    • Actions → Take snapshot
    • Snapshot name: production-db-post-migration-2025-10-17
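
If you prefer the CLI, the same post-migration snapshot can be created with roughly:

# Sketch: take the post-migration snapshot from the CLI
aws rds create-db-snapshot \
  --db-instance-identifier production-db \
  --db-snapshot-identifier production-db-post-migration-2025-10-17 \
  --region ap-south-1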

Step 11: Performance Testing and Validation

11.1 Run Performance Tests

Compare query performance between source and target:

On Source Database:

psql --host=SOURCE_ENDPOINT --port=5432 --username=postgres --dbname=legacydb -c "\timing" -c "SELECT COUNT(*) FROM orders o JOIN users u ON o.user_id = u.id JOIN products p ON o.product_id = p.id;"

On Target Database:

psql --host=TARGET_ENDPOINT --port=5432 --username=postgres --dbname=productiondb -c "\timing" -c "SELECT COUNT(*) FROM orders o JOIN users u ON o.user_id = u.id JOIN products p ON o.product_id = p.id;"

Target should be equal or faster (especially with Multi-AZ).

11.2 Stress Test the Application

# Install Apache Bench on your local machine or bastion
# For Amazon Linux:
sudo yum install -y httpd-tools

# Run load test (100 requests, 10 concurrent)
ab -n 100 -c 10 http://YOUR_ALB_DNS_NAME/cgi-bin/db_app.py

Monitor:

  • Response times
  • Database connections in RDS console
  • CPU/memory usage

11.3 Validate Data Integrity

Run a complete data comparison (for critical migrations):

-- On both source and target, run:
SELECT
    'users' as table_name,
    COUNT(*) as row_count,
    SUM(id) as id_sum,
    MD5(string_agg(username::text, ',' ORDER BY id)) as username_hash
FROM users
UNION ALL
SELECT
    'products',
    COUNT(*),
    SUM(id),
    MD5(string_agg(product_name::text, ',' ORDER BY id))
FROM products
UNION ALL
SELECT
    'orders',
    COUNT(*),
    SUM(id),
    MD5(string_agg(quantity::text, ',' ORDER BY id))
FROM orders;

Compare the results between source and target - they should be identical.


Step 12: Clean Up DMS Resources

After successful migration, clean up DMS resources to stop charges.

12.1 Stop the Migration Task

  1. Go to DMS Console → Database migration tasks
  2. Select migrate-legacy-to-production
  3. Actions → Stop
  4. Confirm by clicking Stop
  5. Wait for status to change to “Stopped”

12.2 Delete the Migration Task

  1. Select the stopped task
  2. Actions → Delete
  3. Confirm deletion
  4. ☑ Delete CloudWatch logs (optional - check if you want to remove logs)
  5. Click Delete

12.3 Delete Endpoints

  1. Go to Endpoints
  2. Select legacy-db-source-endpoint
  3. Actions → Delete
  4. Repeat for production-db-target-endpoint

12.4 Delete Replication Instance

⚠️ This is the most important cost-saving step!

  1. Go to Replication instances
  2. Select legacy-to-production-replication
  3. Actions → Delete
  4. Confirm deletion
  5. The instance will be deleted in a few minutes

💰 After deletion, you stop paying for the DMS replication instance!
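
The whole cleanup can also be scripted. A hedged CLI sketch, using the ARNs you noted earlier, is below; run it in the same order as the console steps (task, then endpoints, then replication instance).

# Sketch: CLI equivalents of the DMS cleanup steps above
aws dms delete-replication-task --replication-task-arn YOUR_TASK_ARN --region ap-south-1
aws dms delete-endpoint --endpoint-arn YOUR_SOURCE_ENDPOINT_ARN --region ap-south-1
aws dms delete-endpoint --endpoint-arn YOUR_TARGET_ENDPOINT_ARN --region ap-south-1
aws dms delete-replication-instance \
  --replication-instance-arn YOUR_REPLICATION_INSTANCE_ARN --region ap-south-1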

12.5 (Optional) Keep Resources for Continuous Replication

If you plan to run continuous replication (CDC):

  • Keep the replication instance running
  • Keep the task in “Running” state
  • Set up monitoring and alerts
  • For production, use larger instance types (dms.t3.medium or larger)

Verification Checklist

  • DMS replication instance created and available
  • Source endpoint created and tested successfully
  • Target endpoint created and tested successfully
  • Migration task created with correct settings
  • Full load completed successfully
  • All tables migrated (users, products, orders)
  • Row counts match between source and target
  • Data validation passed in DMS console
  • Foreign key relationships intact
  • Primary keys and indexes migrated
  • Application updated to use production database
  • Application tested and working correctly
  • Performance is acceptable
  • DMS resources cleaned up (if one-time migration)
  • Migration documented in Migration Hub

Resource Inventory

📝 Important values for cleanup:

| Resource                 | Value                                | Status                            |
|--------------------------|--------------------------------------|-----------------------------------|
| DMS Replication Instance | legacy-to-production-replication     | DELETE after migration            |
| Source Endpoint          | legacy-db-source-endpoint            | DELETE after migration            |
| Target Endpoint          | production-db-target-endpoint        | DELETE after migration            |
| Migration Task           | migrate-legacy-to-production         | DELETE after migration            |
| Production Database      | production-db.xxx.rds.amazonaws.com  | KEEP - this is your production DB |

Troubleshooting

Issue: Endpoint test connection fails

Solution:

  • Check VPC peering: Ensure peering is active and routes are configured
  • Check security groups:
    • Source DB SG must allow PostgreSQL from 10.1.0.0/16
    • Target DB SG must allow PostgreSQL from DMS SG
  • Check database credentials: Verify username/password
  • Check SSL mode: Try changing from ‘require’ to ‘prefer’
  • Review CloudWatch logs: Check DMS task logs for specific errors

Issue: Migration task fails to start

Solution:

  • Verify both endpoints tested successfully
  • Check replication instance is “Available”
  • Review task settings - ensure table mappings are correct
  • Check source database has no locks on tables
  • Review CloudWatch logs for detailed error messages

Issue: Some tables not migrating

Solution:

  • Check table mapping rules - ensure schema name is correct
  • PostgreSQL default schema is “public” (case-sensitive)
  • Verify tables exist in source database: \dt in psql
  • Check if tables have unsupported data types
  • Review CloudWatch logs for “Table does not exist” errors

Issue: Data validation shows discrepancies

Solution:

  • Wait for validation to complete (can take time for large tables)
  • Check for ongoing writes to source database during migration
  • Review validation errors in CloudWatch logs
  • Re-run migration task if needed
  • For production, use Full Load + CDC to capture ongoing changes

Issue: Migration is very slow

Solution:

  • Increase replication instance size (dms.t3.medium or larger)
  • Check network bandwidth between VPCs
  • Disable validation during initial load (run it separately)
  • Reduce number of tables being migrated simultaneously
  • Check source database performance (CPU, IOPS)

Issue: Application can’t connect to production DB

Solution:

  • Verify migrated server is in correct subnet
  • Check security group allows traffic from web server SG
  • Verify endpoint address is correct in application config
  • Test connection manually with psql
  • Check if Multi-AZ failover occurred (check RDS console)

Best Practices for Production Migrations

For Zero-Downtime Migration:

  1. Use Full Load + CDC instead of just Full Load
  2. Let CDC catch up - monitor replication lag
  3. Schedule cutover during low-traffic period
  4. Use Connection Drain - gradually move traffic to new database
  5. Keep source as fallback for 24-48 hours

For Large Databases:

  1. Use larger replication instances (dms.c5.2xlarge or higher)
  2. Migrate in batches - split tables into multiple tasks
  3. Use parallel full load - set ParallelLoadThreads to 4-8
  4. Tune PostgreSQL - adjust shared_buffers, work_mem
  5. Monitor closely - set up CloudWatch alarms
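
One way to apply items 3 and 4 is through the task-settings JSON passed at task creation. A minimal sketch is below; the ParallelLoadThreads and ParallelLoadBufferSize keys do not apply to every target engine, so verify them against the current DMS task-settings reference before relying on them.

# Sketch: task-settings fragment for parallel full load (assumed keys; verify for your target engine)
cat > task-settings.json <<'EOF'
{
  "TargetMetadata": {
    "ParallelLoadThreads": 8,
    "ParallelLoadBufferSize": 500
  },
  "FullLoadSettings": {
    "MaxFullLoadSubTasks": 8
  }
}
EOF
# Applied via: aws dms create-replication-task ... --replication-task-settings file://task-settings.json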

For Critical Data:

  1. Test in non-production first - practice the migration
  2. Enable validation - verify data integrity
  3. Take snapshots before and after migration
  4. Document everything - runbook, rollback procedures
  5. Have DBA review - especially for complex schemas

Cost Optimization

Immediate Actions:

  1. ✅ Delete migration task (keeps logs but stops task)
  2. ✅ Delete replication instance (biggest cost saver)
  3. ✅ Delete endpoints (minimal cost but good practice)
  4. ✅ Review CloudWatch logs and export if needed

Ongoing:

  1. Stop source database (legacy-db) after verification period
  2. Delete source database after final verification
  3. Optimize production database instance size based on usage
  4. Set up auto-scaling for RDS storage
  5. Use Reserved Instances for long-term workloads

Next Steps

Phase 4 Complete!

Your database has been successfully migrated from the legacy single-AZ database to a production-grade Multi-AZ RDS instance!

What you’ve accomplished:

  • Created DMS replication instance
  • Set up source and target endpoints
  • Performed full load migration of all tables
  • Validated data integrity
  • Updated application to use production database
  • Verified migration in Migration Hub
  • Cleaned up DMS resources

Current state:

  • Web server is running in target VPC (from Phase 3)
  • Database is running in target VPC Multi-AZ (from Phase 4)
  • Application is fully functional using production resources
  • Both server and database migrations are complete

Proceed to: CLEANUP.md to safely remove all source resources and temporary infrastructure.


Quick Reference Commands

# Connect to production database
psql --host=PRODUCTION_ENDPOINT --port=5432 --username=postgres --dbname=productiondb

# Quick data verification
psql --host=PRODUCTION_ENDPOINT --port=5432 --username=postgres --dbname=productiondb -c "SELECT 'users' as tbl, COUNT(*) FROM users UNION ALL SELECT 'products', COUNT(*) FROM products UNION ALL SELECT 'orders', COUNT(*) FROM orders;"

# Test application
curl http://YOUR_ALB_DNS_NAME/cgi-bin/db_app.py

# Monitor RDS performance
# Go to RDS Console → production-db → Monitoring tab

# View DMS logs
# Go to CloudWatch → Log groups → /aws/dms/tasks/...

Conclusion

In this comprehensive guide, we’ve successfully migrated a production PostgreSQL database using AWS Database Migration Service (DMS), completing the final phase of our cloud migration journey. This hands-on approach demonstrates enterprise-grade database migration strategies and best practices.

What we’ve accomplished:

  • DMS Infrastructure Setup: Created replication instance and configured endpoints for source and target databases
  • Full Load Migration: Successfully migrated all tables and data with complete integrity
  • Data Validation: Verified data consistency and referential integrity
  • Application Integration: Updated the migrated web server to use the production database
  • Performance Optimization: Configured database statistics and indexes for optimal performance
  • Resource Cleanup: Properly cleaned up DMS resources to optimize costs

Key Learning Outcomes:

  • Understanding of AWS Database Migration Service (DMS) capabilities and architecture
  • Hands-on experience with database migration workflows and best practices
  • Data validation and integrity verification techniques
  • Production-grade database migration strategies
  • Cost optimization and resource management for migration services

This completes our comprehensive AWS migration series! You now have a fully migrated, production-grade application running in AWS with both server and database successfully migrated using modern AWS migration services.


This is Part 4 of a comprehensive AWS migration series. The migration itself is now complete! Continue to Part 5, where we’ll clean up the source environment and temporary infrastructure.
