Learn how to safely and completely tear down all AWS resources from your document processing pipeline. Comprehensive cleanup guide with automated scripts, manual steps, and verification procedures.
After successfully building and testing your secure document processing pipeline, it’s crucial to properly clean up all AWS resources to avoid unexpected charges. This comprehensive guide provides multiple approaches for safely tearing down your infrastructure while preserving any data you want to keep.
Before starting cleanup:
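If any of the pipeline's S3 buckets still hold documents you want to keep, copy them out now; everything below is destructive. A minimal sketch, assuming the bucket names match your deployment and you only care about the uploads and processed-output buckets:
# Back up anything worth keeping before the buckets are emptied
mkdir -p backups
for bucket in secure-doc-pipeline-uploads secure-doc-pipeline-processed-output; do
  aws s3 sync "s3://$bucket" "backups/$bucket" --region ap-south-1
done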
For fastest cleanup, use this automated script:
Create file: cleanup.sh (Linux/macOS/WSL)
#!/bin/bash
# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
PROJECT_NAME="secure-doc-pipeline"
REGION="ap-south-1"
echo -e "${YELLOW}================================================${NC}"
echo -e "${YELLOW} AWS Secure Document Pipeline Cleanup${NC}"
echo -e "${YELLOW}================================================${NC}"
echo ""
echo -e "${RED}WARNING: This will delete all project resources!${NC}"
echo -e "${RED}This action is IRREVERSIBLE!${NC}"
echo ""
read -p "Are you sure you want to continue? (type 'yes' to confirm): " CONFIRM
if [ "$CONFIRM" != "yes" ]; then
  echo "Cleanup cancelled."
  exit 0
fi
echo ""
echo -e "${GREEN}Starting cleanup process...${NC}"
echo ""
# Step 1: Remove Lambda triggers
echo "Step 1: Removing S3 bucket notifications..."
aws s3api put-bucket-notification-configuration \
--bucket ${PROJECT_NAME}-internal-processing \
--notification-configuration '{}' \
--region ${REGION} 2>/dev/null || echo "Notification already removed or bucket doesn't exist"
# Step 2: Disable S3 replication
echo "Step 2: Disabling S3 replication..."
aws s3api delete-bucket-replication \
--bucket ${PROJECT_NAME}-uploads \
--region ${REGION} 2>/dev/null || echo "Replication already disabled"
aws s3api delete-bucket-replication \
--bucket ${PROJECT_NAME}-processed-output \
--region ${REGION} 2>/dev/null || echo "Replication already disabled"
# Step 3: Empty all S3 buckets
echo "Step 3: Emptying S3 buckets..."
BUCKETS=(
"${PROJECT_NAME}-uploads"
"${PROJECT_NAME}-internal-processing"
"${PROJECT_NAME}-processed-output"
"${PROJECT_NAME}-delivery"
"${PROJECT_NAME}-compliance-logs"
"${PROJECT_NAME}-cloudtrail-logs"
)
for bucket in "${BUCKETS[@]}"; do
  echo " Emptying bucket: $bucket"
  # Delete all object versions
  aws s3api list-object-versions \
    --bucket "$bucket" \
    --region ${REGION} \
    --output json \
    --query '{Objects: Versions[].{Key:Key,VersionId:VersionId}}' 2>/dev/null | \
    jq -r '.Objects[]? | "\(.Key)\t\(.VersionId)"' | \
    while IFS=$'\t' read -r key versionId; do
      aws s3api delete-object --bucket "$bucket" --key "$key" --version-id "$versionId" --region ${REGION} 2>/dev/null
    done
  # Delete all delete markers
  aws s3api list-object-versions \
    --bucket "$bucket" \
    --region ${REGION} \
    --output json \
    --query '{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}' 2>/dev/null | \
    jq -r '.Objects[]? | "\(.Key)\t\(.VersionId)"' | \
    while IFS=$'\t' read -r key versionId; do
      aws s3api delete-object --bucket "$bucket" --key "$key" --version-id "$versionId" --region ${REGION} 2>/dev/null
    done
  # Delete remaining objects
  aws s3 rm s3://$bucket --recursive --region ${REGION} 2>/dev/null || echo " Bucket $bucket already empty or doesn't exist"
done
echo ""
echo "Step 4: Waiting 10 seconds for AWS to process deletions..."
sleep 10
# Step 5: Run Terraform destroy
echo "Step 5: Running Terraform destroy..."
cd terraform 2>/dev/null || cd ../terraform 2>/dev/null
if [ -f "main.tf" ]; then
  terraform destroy -auto-approve
  TERRAFORM_EXIT=$?
  if [ $TERRAFORM_EXIT -eq 0 ]; then
    echo -e "${GREEN}Terraform destroy completed successfully!${NC}"
  else
    echo -e "${RED}Terraform destroy encountered errors. See output above.${NC}"
  fi
else
  echo -e "${RED}Error: terraform directory not found!${NC}"
  exit 1
fi
echo ""
echo -e "${GREEN}================================================${NC}"
echo -e "${GREEN} Cleanup Process Completed!${NC}"
echo -e "${GREEN}================================================${NC}"
echo ""
echo "Next steps:"
echo "1. Run the verification script to confirm all resources are deleted"
echo "2. Check AWS Console for any remaining resources"
echo "3. Monitor billing for 24 hours to ensure no charges"
Make script executable:
chmod +x cleanup.sh
Run the script:
./cleanup.sh
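Note: the script relies on jq to parse the object-version listings, so make sure both the AWS CLI and jq are installed before running it:
# Each command should print a path; install whichever is missing
command -v aws
command -v jq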
If you prefer manual cleanup or the script fails, follow these steps:
Remove S3 event notifications to prevent Lambda from being invoked during cleanup.
# Remove S3 notification configuration
aws s3api put-bucket-notification-configuration \
--bucket secure-doc-pipeline-internal-processing \
--notification-configuration '{}' \
--region ap-south-1
Verification:
aws s3api get-bucket-notification-configuration \
--bucket secure-doc-pipeline-internal-processing \
--region ap-south-1
Expected: Empty configuration {}
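If you prefer a scripted check over eyeballing the output, this small sketch treats both an empty response and {} as clean:
# Succeeds when no notification configuration remains on the bucket
CONFIG=$(aws s3api get-bucket-notification-configuration \
  --bucket secure-doc-pipeline-internal-processing \
  --region ap-south-1 --output json)
if [ -z "$CONFIG" ] || [ "$CONFIG" = "{}" ]; then
  echo "Notifications removed"
else
  echo "Notifications still configured: $CONFIG"
fi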
Disable replication rules to prevent errors during bucket deletion.
# Disable replication from uploads bucket
aws s3api delete-bucket-replication \
--bucket secure-doc-pipeline-uploads \
--region ap-south-1
# Disable replication from processed-output bucket
aws s3api delete-bucket-replication \
--bucket secure-doc-pipeline-processed-output \
--region ap-south-1
Verification:
aws s3api get-bucket-replication \
--bucket secure-doc-pipeline-uploads \
--region ap-south-1
Expected: Error message “ReplicationConfigurationNotFoundError” (this is good!)
S3 buckets must be empty before they can be deleted. This includes all versions if versioning is enabled.
# Empty uploads bucket
aws s3 rm s3://secure-doc-pipeline-uploads --recursive
aws s3api delete-objects --bucket secure-doc-pipeline-uploads \
--delete "$(aws s3api list-object-versions --bucket secure-doc-pipeline-uploads \
--query '{Objects: Versions[].{Key:Key,VersionId:VersionId}}' --max-items 1000)"
# Empty internal-processing bucket
aws s3 rm s3://secure-doc-pipeline-internal-processing --recursive
aws s3api delete-objects --bucket secure-doc-pipeline-internal-processing \
--delete "$(aws s3api list-object-versions --bucket secure-doc-pipeline-internal-processing \
--query '{Objects: Versions[].{Key:Key,VersionId:VersionId}}' --max-items 1000)"
# Empty processed-output bucket
aws s3 rm s3://secure-doc-pipeline-processed-output --recursive
aws s3api delete-objects --bucket secure-doc-pipeline-processed-output \
--delete "$(aws s3api list-object-versions --bucket secure-doc-pipeline-processed-output \
--query '{Objects: Versions[].{Key:Key,VersionId:VersionId}}' --max-items 1000)"
# Empty delivery bucket
aws s3 rm s3://secure-doc-pipeline-delivery --recursive
aws s3api delete-objects --bucket secure-doc-pipeline-delivery \
--delete "$(aws s3api list-object-versions --bucket secure-doc-pipeline-delivery \
--query '{Objects: Versions[].{Key:Key,VersionId:VersionId}}' --max-items 1000)"
# Empty compliance-logs bucket
aws s3 rm s3://secure-doc-pipeline-compliance-logs --recursive
# Empty cloudtrail-logs bucket
aws s3 rm s3://secure-doc-pipeline-cloudtrail-logs --recursive
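One caveat: the commands above delete object versions but not delete markers, and a versioned bucket cannot be removed while delete markers remain. A sketch for clearing them, using the same pattern (shown for the uploads bucket; repeat for the others as needed):
# Remove delete markers left behind by versioned deletes
aws s3api delete-objects --bucket secure-doc-pipeline-uploads \
  --delete "$(aws s3api list-object-versions --bucket secure-doc-pipeline-uploads \
  --query '{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}' --max-items 1000)"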
Create file: empty_buckets.py
import boto3
from botocore.exceptions import ClientError
s3 = boto3.resource('s3', region_name='ap-south-1')
project_name = 'secure-doc-pipeline'
buckets = [
f'{project_name}-uploads',
f'{project_name}-internal-processing',
f'{project_name}-processed-output',
f'{project_name}-delivery',
f'{project_name}-compliance-logs',
f'{project_name}-cloudtrail-logs'
]
for bucket_name in buckets:
    try:
        bucket = s3.Bucket(bucket_name)
        print(f'Emptying bucket: {bucket_name}')
        # Delete all object versions
        bucket.object_versions.all().delete()
        # Delete all objects
        bucket.objects.all().delete()
        print(f' ✓ {bucket_name} emptied successfully')
    except ClientError as e:
        if e.response['Error']['Code'] == 'NoSuchBucket':
            print(f' ⊗ {bucket_name} does not exist')
        else:
            print(f' ✗ Error emptying {bucket_name}: {e}')
print('\nAll buckets processed!')
Run the script:
python empty_buckets.py
Verification:
# Check each bucket is empty
aws s3 ls s3://secure-doc-pipeline-uploads/
aws s3 ls s3://secure-doc-pipeline-internal-processing/
aws s3 ls s3://secure-doc-pipeline-processed-output/
aws s3 ls s3://secure-doc-pipeline-delivery/
aws s3 ls s3://secure-doc-pipeline-compliance-logs/
aws s3 ls s3://secure-doc-pipeline-cloudtrail-logs/
Expected: No output (empty buckets)
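If versioning is enabled on these buckets, an empty aws s3 ls listing alone doesn't prove they can be deleted; optionally confirm that no versions or delete markers remain (a sketch for one bucket):
# Both counts should be 0
aws s3api list-object-versions \
  --bucket secure-doc-pipeline-uploads \
  --region ap-south-1 \
  --query '{Versions: length(Versions || `[]`), DeleteMarkers: length(DeleteMarkers || `[]`)}'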
CloudTrail must be stopped and the trail deleted before its associated resources are removed.
# Stop logging
aws cloudtrail stop-logging \
--name secure-doc-pipeline-trail \
--region ap-south-1
# Delete the trail
aws cloudtrail delete-trail \
--name secure-doc-pipeline-trail \
--region ap-south-1
Verification:
aws cloudtrail list-trails --region ap-south-1
Expected: No trail named secure-doc-pipeline-trail
Remove alarms, dashboards, and log groups.
# Delete alarms
aws cloudwatch delete-alarms \
--alarm-names \
secure-doc-pipeline-lambda-errors \
secure-doc-pipeline-lambda-throttles \
secure-doc-pipeline-lambda-duration \
secure-doc-pipeline-replication-lag \
secure-doc-pipeline-s3-4xx-errors \
secure-doc-pipeline-s3-5xx-errors \
--region ap-south-1
# Delete dashboard
aws cloudwatch delete-dashboards \
--dashboard-names secure-doc-pipeline-dashboard \
--region ap-south-1
# Delete log groups
aws logs delete-log-group \
--log-group-name /aws/lambda/secure-doc-pipeline-document-processor \
--region ap-south-1
Verification:
aws cloudwatch describe-alarms --region ap-south-1 | grep secure-doc-pipeline
aws cloudwatch list-dashboards --region ap-south-1 | grep secure-doc-pipeline
aws logs describe-log-groups --region ap-south-1 | grep secure-doc-pipeline
Expected: No results
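The delete-log-group command above removes only the document processor's log group; if other Lambda functions in the project wrote logs, their groups will keep showing up in this check. A sketch that sweeps every log group under the project prefix (the prefix is an assumption; adjust it to your naming):
# Delete every log group whose name starts with the project prefix
for lg in $(aws logs describe-log-groups \
  --log-group-name-prefix /aws/lambda/secure-doc-pipeline \
  --region ap-south-1 \
  --query 'logGroups[].logGroupName' --output text); do
  echo "Deleting log group: $lg"
  aws logs delete-log-group --log-group-name "$lg" --region ap-south-1
done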
Remove email subscriptions before deleting the topic.
# List subscriptions
aws sns list-subscriptions --region ap-south-1 | grep secure-doc-pipeline
# Unsubscribe (replace SUBSCRIPTION_ARN with actual ARN from above)
aws sns unsubscribe \
--subscription-arn "arn:aws:sns:ap-south-1:ACCOUNT_ID:secure-doc-pipeline-alerts:SUBSCRIPTION_ID" \
--region ap-south-1
Note: the SNS topic itself will be deleted by Terraform in the next step.
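If you added more than one email subscription, a small loop saves copying ARNs by hand (a sketch; it skips subscriptions that are still PendingConfirmation, which disappear along with the topic):
# Unsubscribe every confirmed subscription on the project's alert topic
for sub in $(aws sns list-subscriptions --region ap-south-1 \
  --query "Subscriptions[?contains(TopicArn, 'secure-doc-pipeline-alerts')].SubscriptionArn" \
  --output text); do
  if [ "$sub" != "PendingConfirmation" ]; then
    aws sns unsubscribe --subscription-arn "$sub" --region ap-south-1
  fi
done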
Give AWS time to process all deletions.
echo "Waiting 30 seconds for AWS to process deletions..."
sleep 30
This prevents errors in the next step where Terraform might try to delete resources that are still being deleted.
Now use Terraform to delete all remaining infrastructure.
cd terraform
# Preview what will be destroyed
terraform plan -destroy
# Destroy all resources
terraform destroy -auto-approve
Expected output:
...
Destroy complete! Resources: XX destroyed.
If errors occur:
Run terraform destroy again.
Remove the access keys for the third-party user before the user is deleted (the user should already have been removed by Terraform, but verify).
# List access keys for third-party user
aws iam list-access-keys \
--user-name secure-doc-pipeline-third-party-user \
--region ap-south-1 2>/dev/null
# If any exist, delete them
aws iam delete-access-key \
--user-name secure-doc-pipeline-third-party-user \
--access-key-id ACCESS_KEY_ID \
--region ap-south-1
KMS keys cannot be deleted immediately. They must be scheduled for deletion (7-30 days).
# List all KMS keys for this project
aws kms list-aliases --region ap-south-1 | grep secure-doc-pipeline
# Schedule key deletion (replace KEY_ID with actual key IDs)
aws kms schedule-key-deletion \
--key-id "arn:aws:kms:ap-south-1:ACCOUNT_ID:key/KEY_ID" \
--pending-window-in-days 7 \
--region ap-south-1
Note: Terraform should have already scheduled deletion. This is just verification.
To cancel deletion (if needed):
aws kms cancel-key-deletion \
--key-id "arn:aws:kms:ap-south-1:ACCOUNT_ID:key/KEY_ID" \
--region ap-south-1
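To confirm a key really is pending deletion rather than still active, check its state (replace KEY_ID as above); the expected value is PendingDeletion:
# Expected output: PendingDeletion
aws kms describe-key \
  --key-id "arn:aws:kms:ap-south-1:ACCOUNT_ID:key/KEY_ID" \
  --region ap-south-1 \
  --query 'KeyMetadata.KeyState' --output text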
Run this comprehensive verification script:
Create file: verify_cleanup.sh
#!/bin/bash
PROJECT_NAME="secure-doc-pipeline"
REGION="ap-south-1"
ALL_CLEAN=true
echo "================================================"
echo " Cleanup Verification"
echo "================================================"
echo ""
# Check S3 Buckets
echo "Checking S3 Buckets..."
BUCKETS=$(aws s3 ls | grep $PROJECT_NAME | wc -l)
if [ "$BUCKETS" -eq 0 ]; then
echo " ✓ No S3 buckets found"
else
echo " ✗ Found $BUCKETS S3 bucket(s) still exist"
aws s3 ls | grep $PROJECT_NAME
ALL_CLEAN=false
fi
# Check Lambda Functions
echo "Checking Lambda Functions..."
LAMBDAS=$(aws lambda list-functions --region $REGION | grep -c $PROJECT_NAME)
if [ "$LAMBDAS" -eq 0 ]; then
echo " ✓ No Lambda functions found"
else
echo " ✗ Found $LAMBDAS Lambda function(s) still exist"
ALL_CLEAN=false
fi
# Check IAM Roles
echo "Checking IAM Roles..."
ROLES=$(aws iam list-roles | grep -c $PROJECT_NAME)
if [ "$ROLES" -eq 0 ]; then
echo " ✓ No IAM roles found"
else
echo " ✗ Found $ROLES IAM role(s) still exist"
ALL_CLEAN=false
fi
# Check IAM Users
echo "Checking IAM Users..."
USERS=$(aws iam list-users | grep -c $PROJECT_NAME)
if [ "$USERS" -eq 0 ]; then
echo " ✓ No IAM users found"
else
echo " ✗ Found $USERS IAM user(s) still exist"
ALL_CLEAN=false
fi
# Check CloudWatch Alarms
echo "Checking CloudWatch Alarms..."
ALARMS=$(aws cloudwatch describe-alarms --region $REGION | grep -c $PROJECT_NAME)
if [ "$ALARMS" -eq 0 ]; then
echo " ✓ No CloudWatch alarms found"
else
echo " ✗ Found $ALARMS CloudWatch alarm(s) still exist"
ALL_CLEAN=false
fi
# Check CloudWatch Log Groups
echo "Checking CloudWatch Log Groups..."
LOGS=$(aws logs describe-log-groups --region $REGION | grep -c $PROJECT_NAME)
if [ "$LOGS" -eq 0 ]; then
echo " ✓ No CloudWatch log groups found"
else
echo " ✗ Found $LOGS CloudWatch log group(s) still exist"
ALL_CLEAN=false
fi
# Check SNS Topics
echo "Checking SNS Topics..."
TOPICS=$(aws sns list-topics --region $REGION | grep -c $PROJECT_NAME)
if [ "$TOPICS" -eq 0 ]; then
echo " ✓ No SNS topics found"
else
echo " ✗ Found $TOPICS SNS topic(s) still exist"
ALL_CLEAN=false
fi
# Check CloudTrail Trails
echo "Checking CloudTrail Trails..."
TRAILS=$(aws cloudtrail list-trails --region $REGION | grep -c $PROJECT_NAME)
if [ "$TRAILS" -eq 0 ]; then
echo " ✓ No CloudTrail trails found"
else
echo " ✗ Found $TRAILS CloudTrail trail(s) still exist"
ALL_CLEAN=false
fi
# Check KMS Keys
echo "Checking KMS Keys..."
KEYS=$(aws kms list-aliases --region $REGION | grep -c $PROJECT_NAME)
if [ "$KEYS" -eq 0 ]; then
echo " ✓ No KMS keys found (or scheduled for deletion)"
else
echo " ! Found $KEYS KMS key alias(es) - These are scheduled for deletion"
echo " (This is normal - KMS keys take 7-30 days to fully delete)"
fi
echo ""
echo "================================================"
if [ "$ALL_CLEAN" = true ]; then
echo " ✓✓✓ CLEANUP SUCCESSFUL ✓✓✓"
echo "================================================"
echo ""
echo "All resources have been deleted!"
echo ""
echo "Next steps:"
echo "1. Check AWS Billing Console after 24 hours"
echo "2. Verify no unexpected charges"
echo "3. KMS keys will auto-delete in 7 days"
else
echo " ✗✗✗ CLEANUP INCOMPLETE ✗✗✗"
echo "================================================"
echo ""
echo "Some resources still exist. See above for details."
echo "You may need to manually delete these resources."
fi
Run verification:
chmod +x verify_cleanup.sh
./verify_cleanup.sh
Cause: S3 bucket still contains objects or versions
Solution:
# Force empty the bucket
aws s3 rm s3://BUCKET_NAME --recursive
# Delete all versions
aws s3api delete-objects --bucket BUCKET_NAME \
--delete "$(aws s3api list-object-versions --bucket BUCKET_NAME \
--query '{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"
# Try deletion again
aws s3 rb s3://BUCKET_NAME
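Alternatively, aws s3 rb --force empties and removes the bucket in one call; note that it only deletes current objects, so on a versioned bucket you may still need the version cleanup above first:
# One-shot removal (current objects only; versions must already be gone)
aws s3 rb s3://BUCKET_NAME --force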
Cause: Trying to delete replication that doesn’t exist
Solution: This is actually fine. It means replication is already disabled. Continue with next step.
Cause: CloudTrail is still logging
Solution:
# Stop logging first
aws cloudtrail stop-logging --name secure-doc-pipeline-trail
# Wait 10 seconds
sleep 10
# Then delete
aws cloudtrail delete-trail --name secure-doc-pipeline-trail
Cause: IAM user/role has attached policies
Solution:
# List attached policies
aws iam list-attached-user-policies --user-name secure-doc-pipeline-third-party-user
# Detach each policy
aws iam detach-user-policy \
--user-name secure-doc-pipeline-third-party-user \
--policy-arn POLICY_ARN
# Then delete user
aws iam delete-user --user-name secure-doc-pipeline-third-party-user
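If the user still won't delete, inline policies (as opposed to attached managed policies) can also block deletion; a sketch for clearing those as well:
# List inline policies on the user
aws iam list-user-policies --user-name secure-doc-pipeline-third-party-user
# Delete each inline policy by name
aws iam delete-user-policy \
  --user-name secure-doc-pipeline-third-party-user \
  --policy-name POLICY_NAME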
Cause: KMS keys require a waiting period
Solution: This is expected. KMS keys are scheduled for deletion (7-30 days) and will auto-delete. No action needed.
Look for these services: S3, Lambda, KMS, CloudWatch, CloudTrail, and SNS.
Expected after cleanup: $0.00, or minimal charges for the current partial month.
# Create billing alarm
# Note: billing metrics are only published in us-east-1 and require "Receive Billing Alerts"
# to be enabled in the account's Billing preferences
aws cloudwatch put-metric-alarm \
--alarm-name "BillingAlertAfterCleanup" \
--alarm-description 'Alert if AWS charges exceed $1 after cleanup' \
--metric-name EstimatedCharges \
--namespace AWS/Billing \
--statistic Maximum \
--period 21600 \
--threshold 1.0 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 1 \
--dimensions Name=Currency,Value=USD \
--region us-east-1
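As written, this alarm only changes state in the CloudWatch console; to actually receive an email you would attach an SNS action (the topic name below is an assumption, and it must live in us-east-1 alongside the billing metric):
# Create a topic for billing alerts and wire it to the alarm
TOPIC_ARN=$(aws sns create-topic --name billing-alerts --region us-east-1 \
  --query 'TopicArn' --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email \
  --notification-endpoint YOUR_EMAIL --region us-east-1
# Then re-run the put-metric-alarm command above with the extra option:
#   --alarm-actions "$TOPIC_ARN"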
After running all cleanup steps, verify:
Terraform state is empty (terraform state list returns nothing)
Clean up your local project directory:
# Navigate to project root
cd /path/to/secure-doc-pipeline
# Remove Terraform state files
cd terraform
rm -f terraform.tfstate terraform.tfstate.backup
rm -rf .terraform/
rm -f .terraform.lock.hcl
# Go back to project root
cd ..
# Optionally remove the entire project
# BE CAREFUL - This deletes everything!
# cd ..
# rm -rf secure-doc-pipeline/
If nothing else works and you need to delete everything immediately:
#!/bin/bash
# DANGER: This force-deletes everything with no confirmation
PROJECT_NAME="secure-doc-pipeline"
REGION="ap-south-1"
# Force empty and delete all S3 buckets
# Force empty and delete all S3 buckets
for bucket in $(aws s3 ls | grep $PROJECT_NAME | awk '{print $3}'); do
  echo "Force deleting $bucket..."
  aws s3 rb s3://$bucket --force
done
# Delete all Lambda functions
for func in $(aws lambda list-functions --region $REGION | jq -r '.Functions[].FunctionName' | grep $PROJECT_NAME); do
  echo "Deleting Lambda: $func"
  aws lambda delete-function --function-name $func --region $REGION
done
# Delete all IAM roles
for role in $(aws iam list-roles | jq -r '.Roles[].RoleName' | grep $PROJECT_NAME); do
  echo "Deleting role: $role"
  # Detach policies first
  for policy in $(aws iam list-attached-role-policies --role-name $role | jq -r '.AttachedPolicies[].PolicyArn'); do
    aws iam detach-role-policy --role-name $role --policy-arn $policy
  done
  aws iam delete-role --role-name $role
done
# Continue for other resources...
echo "Emergency cleanup complete. Run verify_cleanup.sh to check."
⚠️ Use this only as a last resort!
If you encounter persistent issues:
Check Terraform state:
cd terraform
terraform state list
terraform state show RESOURCE_NAME
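If terraform destroy keeps erroring on a resource you have already removed by hand, dropping it from state is a common escape hatch (the resource address below is a hypothetical example):
# Remove the stale entry from state, then retry the destroy
terraform state rm aws_s3_bucket.uploads
terraform destroy -auto-approve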
Check AWS Console manually for resources matching “secure-doc-pipeline”
AWS Support:
Cost concerns:
You’ve successfully cleaned up all resources! Here’s what was removed: all six S3 buckets, the Lambda function, the IAM roles and third-party user, the CloudWatch alarms, dashboard, and log groups, the SNS alert topic, the CloudTrail trail, and the KMS keys (scheduled for deletion).
Estimated time to $0 monthly cost: immediate (KMS keys pending deletion are not billed while they wait out the deletion window).
By completing this cleanup, you’ve learned:
Document Version: 1.0
Last Updated: December 2024
Region: ap-south-1 (Mumbai)
Congratulations! You’ve successfully completed the entire AWS Secure Document Pipeline project:
You now have the knowledge to build production-ready AWS infrastructure. Consider these next steps:
🎉 Project Complete! You’ve successfully built AND cleaned up a production-grade AWS infrastructure! 🎉