AWS DMS Migration Guide

Migrate from Supabase to AWS Aurora PostgreSQL using the AWS Console (GUI)

Setup time: 2-4 hours | No CLI required (GUI only) | Downtime during switch: < 5 minutes

Overview & What You'll Learn

What This Guide Covers

This guide will teach you how to migrate data from Supabase to Aurora using only the AWS Console (web browser interface). You'll learn:

  • How to navigate AWS Console
  • How to set up AWS DMS using GUI
  • How to configure all components visually
  • How to monitor migration through dashboards
  • How to verify data using Query Editor
  • How to do all of the above without a single terminal command

What is AWS DMS?

AWS Database Migration Service (DMS) is a managed cloud service that automates database migrations through a web interface. You can:

  • Set up everything in your browser
  • Monitor progress visually
  • No coding or command-line experience needed

Migration Strategy

We'll use Full Load + CDC (Change Data Capture):

  1. Full Load: copy all existing data
  2. CDC: capture ongoing changes
  3. Switch: move the app to Aurora
  4. Complete: stop replication

Benefits:

  • Minimal downtime (< 5 minutes during switch)
  • Safe to test before committing
  • Easy rollback if needed

AWS Console Preparation

Step 1: Access AWS Console

  1. Open your web browser
  2. Go to: https://console.aws.amazon.com
  3. Sign in with your AWS account credentials
    • Enter your email
    • Enter your password
    • Complete MFA if enabled

Step 2: Select Your Region

Important: Use the same region as your Aurora database!
  1. Look at top-right corner of AWS Console
  2. Click on the region dropdown (shows current region like "N. Virginia")
  3. Select the region where your Aurora database is located
    • Example: "US East (N. Virginia)" or "us-east-1"
  4. Remember this region - you'll use it for all DMS resources

Step 3: Verify Permissions

You need access to these AWS services:

  • DMS (Database Migration Service)
  • RDS (for Aurora)
  • VPC (networking)
  • IAM (permissions)
  • CloudWatch (monitoring)
To verify:
  1. Use the search bar at top of console
  2. Type: "DMS" and press Enter
  3. If you can open DMS console → You have access ✅
  4. If you see "Access Denied" → Contact your AWS admin

Pre-Migration Setup (GUI)

Phase 1: Gather Your Information

You'll need this information. Write it down or save in a secure note:

Supabase Database Details

Open a text file and fill in:

SUPABASE_HOST: db.yourproject.supabase.co
SUPABASE_PORT: 5432
SUPABASE_DATABASE: postgres
SUPABASE_USER: postgres
SUPABASE_PASSWORD: [your-password]

Tables to migrate:
- user_dagad_entries
- user_dagad_folders
- user_dagad_files
- user_dagad_embeddings
- user_dagad_usage_log
- user_dagad_addon_imports
To find your Supabase details:
  1. Log in to app.supabase.com
  2. Select your project
  3. Click "Settings" (gear icon in left sidebar)
  4. Click "Database"
  5. Scroll down to "Connection string"
  6. Copy the host, port, database, user from the connection string

Aurora Database Details

Method 1: Find via AWS Console

  1. In AWS Console search bar, type "RDS"
  2. Click "RDS" to open RDS console
  3. In left sidebar, click "Databases"
  4. Find your Aurora cluster (shows "aurora-postgresql")
  5. Click on the cluster name
  6. Scroll to "Connectivity & security" section
  7. Copy these values:
    • Endpoint: your-cluster.cluster-xxxxx.us-east-1.rds.amazonaws.com
    • Port: 5432
    • VPC ID: vpc-xxxxx (you'll need this!)
    • Security group: sg-xxxxx (you'll need this!)
AURORA_ENDPOINT: [your-cluster-endpoint]
AURORA_PORT: 5432
AURORA_DATABASE: helium_production
AURORA_USER: admin
AURORA_PASSWORD: [your-password]
AURORA_VPC_ID: vpc-xxxxx
AURORA_SECURITY_GROUP: sg-xxxxx

Phase 2: Check Database Size

We need to know how much data you're migrating.

Option 1: Using Supabase Dashboard

  1. Go to app.supabase.com
  2. Select your project
  3. Click "Database" in left sidebar
  4. Look for database size indicator (usually shown in dashboard)

Option 2: Using pgAdmin (Visual Tool)

  1. Open pgAdmin
  2. Right-click "Servers" → "Register" → "Server"
  3. Fill in Supabase details
  4. Click "Save"
  5. Expand server → Database → Schemas → Tables
  6. Right-click on a table → "Properties" to see size

Option 3: Using TablePlus (Visual Tool)

  1. Open TablePlus
  2. Click "Create a new connection"
  3. Select PostgreSQL
  4. Fill in connection details
  5. Click "Connect"
  6. View database statistics in UI
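
Option 4: Using SQL (any of the tools above)

If you'd rather query the size directly, a short query works in the Supabase SQL editor, pgAdmin, or TablePlus. The table-name pattern below assumes the user_dagad_ naming shown earlier:

-- Total database size
SELECT pg_size_pretty(pg_database_size(current_database())) AS database_size;

-- Size of each table being migrated (data + indexes)
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
WHERE relname LIKE 'user_dagad_%'
ORDER BY pg_total_relation_size(relid) DESC;

Use this total (plus a ~20% buffer) when choosing replication instance storage later.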

Phase 3: Create IAM Roles

AWS DMS needs permissions to work. Let's create the necessary roles.

Create DMS VPC Role

  1. In AWS Console search bar, type "IAM"
  2. Click "IAM" to open IAM console
  3. In left sidebar, click "Roles"
  4. Click orange "Create role" button
Step 1: Select trusted entity
  • Select: "AWS service"
  • Use case: Scroll down and select "DMS"
  • Click: "Next"
Step 2: Add permissions
  • Search for: "AmazonDMSVPCManagementRole"
  • Check the box next to it
  • Search for: "AmazonDMSCloudWatchLogsRole"
  • Check the box next to it
  • Click: "Next"
Step 3: Name, review, and create
  • Role name: dms-vpc-role
  • Description: "Role for DMS to manage VPC resources"
  • Scroll down, click "Create role"
You should see: "Role dms-vpc-role created successfully"

Step-by-Step: AWS DMS Setup

Step 1: Create Replication Instance

The replication instance is the "middleman" that moves your data.

1.1: Navigate to DMS Console

  1. In AWS Console search bar, type "DMS"
  2. Click "Database Migration Service"
  3. You'll see the DMS dashboard

1.2: Create Replication Instance

  1. In left sidebar, click "Replication instances"
  2. Click orange "Create replication instance" button

You'll see a form. Fill it in as follows:

Configuration Settings
Name and description:
  • Name: supabase-to-aurora-replication
  • Description: Replication instance for migrating Supabase AIM data to Aurora
Instance configuration:
  • Instance class:
    • For < 10GB data: Select dms.t3.micro
    • For 10-100GB data: Select dms.t3.medium (recommended)
    • For 100-500GB: Select dms.c5.large
  • Engine version: Leave default (latest version)
  • High Availability: Select "Single-AZ" (cheaper for testing)
  • Allocated storage (GB): Enter 100 (or your database size + 20% buffer)
Network and connectivity:
  • Virtual Private Cloud (VPC): Select the same VPC as your Aurora database
  • Replication subnet group: Select existing or create new
  • Publicly accessible:
    • Select "Yes" (if Supabase is external to AWS)
    • Select "No" (if using VPN/VPC peering)
Advanced settings:
  • VPC security group(s): Select your Aurora security group
  • KMS key: Select "(default) aws/dms"

1.3: Create the Instance

  1. Scroll to bottom
  2. Click orange "Create replication instance" button
  3. Wait - You'll see a banner: "Creating replication instance..."
Monitor Creation:
  • Status: Will show "Creating" with a spinner
  • Wait time: 5-10 minutes
  • When done: Status changes to "Available" with green checkmark ✅

1.4: Note the IP Address

  1. Click on your replication instance name
  2. Look for "Public IP address" or "Private IP address"
  3. Copy the IP address
  4. Save it - you'll need it for Supabase firewall (if applicable)
Cost Note: This instance costs ~$0.19/hour (~$140/month). You can stop it when not migrating!

Step 2: Configure Security Groups

We need to allow the replication instance to connect to Aurora.

2.1: Navigate to EC2 Security Groups

  1. In AWS Console search bar, type "EC2"
  2. Click "EC2" to open EC2 console
  3. In left sidebar, scroll down to "Network & Security"
  4. Click "Security Groups"

2.2: Find Aurora's Security Group

  1. In the search box, enter your Aurora security group ID
  2. Click on the security group name to select it

2.3: Add Inbound Rule for DMS

  1. Look at bottom tabs
  2. Click "Inbound rules" tab
  3. Click "Edit inbound rules" button (top right)
  4. Click "Add rule" button
Configure the new rule:
  • Type: Select "PostgreSQL" (auto-fills port 5432)
  • Protocol: TCP (auto-selected)
  • Port range: 5432 (auto-filled)
  • Source: Enter the security group of your DMS replication instance
  • Description: DMS Replication Instance Access
  1. Click "Save rules" button (orange, bottom right)
  2. You'll see: "Successfully modified security group rules"

Step 3: Create Source Endpoint (Supabase)

This tells DMS how to connect to Supabase.

3.1: Navigate to Endpoints

  1. In DMS Console, look at left sidebar
  2. Click "Endpoints"
  3. Click orange "Create endpoint" button

3.2: Configure Endpoint

Endpoint type:
  • Select: "Source endpoint"
Endpoint configuration:
  • Endpoint identifier: supabase-source-endpoint
  • Source engine: Select "PostgreSQL"
  • Access to endpoint database: Select "Provide access information manually"
Endpoint settings:
  • Server name: db.yourproject.supabase.co
  • Port: 5432
  • Database name: postgres
  • SSL mode: Select "require" ⭐ Important for Supabase!
  • User name: postgres
  • Password: [your-supabase-password]

3.3: Add Endpoint Settings (For CDC)

Scroll down to find "Endpoint settings":

  1. Click "Add new setting" button
  2. Enter this JSON:
{
  "PluginName": "pglogical",
  "HeartbeatEnable": true,
  "HeartbeatFrequency": 1
}
What this does:
  • Enables Change Data Capture (CDC)
  • Keeps connection alive
  • Monitors replication health
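
CDC only works if the source database allows logical decoding. Before moving on, you can sanity-check this in the Supabase SQL editor; these are standard PostgreSQL checks, not Supabase-specific:

-- Must return 'logical' for CDC to work
SHOW wal_level;

-- Check whether the pglogical extension is available and installed
SELECT name, installed_version
FROM pg_available_extensions
WHERE name = 'pglogical';

If wal_level is not 'logical', or pglogical is unavailable, the task may complete its full load but fail when CDC starts.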

3.4: Create Endpoint

  1. Scroll to bottom
  2. Click "Run test" button (you'll be asked to choose a replication instance to test through)
  3. Wait 30-60 seconds
  4. Look for: "Connection tested successfully" ✅
  5. Click orange "Create endpoint" button
Verify Creation: Status should show "Active" with green dot

Step 4: Create Target Endpoint (Aurora)

This tells DMS how to connect to Aurora.

4.1: Create Endpoint

  1. Still in Endpoints section, click "Create endpoint" button again

4.2: Configure Endpoint

Endpoint type:
  • Select: "Target endpoint"
Endpoint configuration:
  • Endpoint identifier: aurora-target-endpoint
  • Target engine: Select "PostgreSQL"
  • Access to endpoint database: Select "Provide access information manually"
Endpoint settings:
  • Server name: [your-aurora-endpoint]
  • Port: 5432
  • Database name: helium_production
  • SSL mode: Select "none" (if Aurora is in same VPC)
  • User name: admin
  • Password: [your-aurora-password]

4.3: Add Performance Settings (Optional but Recommended)

Scroll down to "Endpoint settings":

{
  "BatchApplyEnabled": true,
  "ParallelApplyThreads": 4,
  "ParallelApplyBufferSize": 100
}
What this does:
  • Speeds up data loading
  • Applies changes in parallel
  • Improves performance

4.4: Create Endpoint

  1. Click "Run test" to verify connection
  2. Should show: "Connection tested successfully" ✅
  3. Click "Create endpoint" button
  4. Verify: Status shows "Active"
You now have:
  • ✅ Source endpoint (Supabase)
  • ✅ Target endpoint (Aurora)
  • ✅ Replication instance
  • Next: Create the migration task!

Step 5: Create Database Migration Task

This is the actual job that migrates your data.

5.1: Navigate to Tasks

  1. In DMS Console left sidebar, click "Database migration tasks"
  2. Click orange "Create task" button

5.2: Configure Task Settings

Task configuration:
  • Task identifier: supabase-to-aurora-aim-migration
  • Replication instance: Select supabase-to-aurora-replication
  • Source database endpoint: Select supabase-source-endpoint
  • Target database endpoint: Select aurora-target-endpoint
Task settings:
  • Migration type: Select "Migrate existing data and replicate ongoing changes" ⭐ Recommended!
  • Start task on create: Leave checked (task starts automatically)
  • Target table preparation mode: Select "Do nothing" (this assumes the tables already exist in Aurora; create the schema there first, or choose "Drop tables on target" if you want DMS to create basic table structures for you)
  • Stop task after full load completes: Select "Don't stop"
  • Include LOB columns in replication: Select "Full LOB mode"
  • Enable validation: Check this box ✅ Very important!
  • Enable CloudWatch logs: Check this box ✅ For monitoring!

5.3: Configure Table Mappings

This tells DMS which tables to migrate.

Using Wizard Method:
  1. Click "Add new selection rule"
Selection rule 1:
  • Schema: Select or enter public
  • Table name: Select "Enter a table name pattern"
  • Enter: user_dagad_% (% is wildcard)
  • Action: Select "Include"

This includes all tables starting with "user_dagad_"

Using JSON Editor Method:
  1. Click "JSON editor" tab
  2. Replace the content with:
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-aim-tables",
      "object-locator": {
        "schema-name": "public",
        "table-name": "user_dagad_%"
      },
      "rule-action": "include"
    }
  ]
}
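
The same JSON shape handles exclusions. For example, if you wanted to skip the usage log table and migrate it separately (a hypothetical choice, shown only to illustrate the rule format), you would add a second rule with its own rule-id:

{
  "rule-type": "selection",
  "rule-id": "2",
  "rule-name": "exclude-usage-log",
  "object-locator": {
    "schema-name": "public",
    "table-name": "user_dagad_usage_log"
  },
  "rule-action": "exclude"
}

The explicit exclude should override the wildcard include above for that one table, while the rest of the user_dagad_% tables still migrate.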

5.4: Review and Create Task

  1. Scroll to bottom
  2. Review all settings
  3. Click orange "Create task" button
  4. You'll see: "Creating database migration task..."
Task will start automatically (if you left "Start task on create" checked)

Migration Execution

What Happens Now

Your task is running! Here's what's happening:

  1. Starting (30 seconds to 2 minutes): DMS prepares connections
  2. Full Load (1-12 hours, depending on data size): copies all existing data from Supabase to Aurora
  3. CDC (ongoing): captures and applies ongoing changes

Monitoring Migration (GUI)

View Task Status

Main Task Dashboard

  1. In DMS Console, click "Database migration tasks"
  2. Find your task: supabase-to-aurora-aim-migration
  3. Look at Status column:
    • "Starting": Task is initializing
    • "Running": Full load in progress
    • "Load complete": Full load done, CDC ongoing ✅
    • "Failed": Something went wrong (see logs)

Detailed Task View

  1. Click on your task name
  2. You'll see detailed information:
    • Status: Current state
    • % complete: Progress percentage
    • Tables loaded: Number of tables completed
    • Rows loaded: Total rows copied

Monitor Table Statistics

This shows per-table progress.

View Table Statistics

  1. In task details, look for tabs at top
  2. Click "Table statistics" tab
  3. You'll see a table with these columns:
    • Table name: Which table
    • Full load: Rows copied during initial load
    • Inserts: New rows added (CDC)
    • Updates: Rows modified (CDC)
    • Deletes: Rows removed (CDC)
    • Validation: Data validation status
What to look for:
  • ✅ "Full load" numbers increasing steadily
  • ✅ "Validation" status: "Validated" or "Pending"
  • ❌ Any errors in status column

View CloudWatch Logs

Logs show detailed information about what's happening.

Access Logs from DMS Console

  1. In task details, click "Monitoring" tab
  2. Scroll down to "Logs"
  3. Click "View CloudWatch Logs"

Access Logs from CloudWatch

  1. In AWS Console search bar, type "CloudWatch"
  2. Click "CloudWatch"
  3. In left sidebar, expand "Logs"
  4. Click "Log groups"
  5. Find and click: /aws/dms/tasks/[your-task-id]
  6. Click on a log stream (usually the most recent)
Good log messages (normal):
  • "Table loaded successfully"
  • "CDC load has started"
  • "Change processing has started"
  • "Task is running"
Warning messages (may be okay):
  • "Retrying after connection timeout" (temporary network issue)
  • "Large transaction in progress" (just info)
Error messages (need attention):
  • "Failed to connect to source endpoint"
  • "Table does not exist"
  • "Permission denied"
  • "Validation failed"

Monitor CloudWatch Metrics

Metrics show performance graphs.

View Metrics Dashboard

  1. In DMS task details, click "Monitoring" tab
  2. You'll see graphs for:
    • CPU utilization: Should stay under 80%
    • Free memory: Should not drop too low
    • Network receive throughput: Data coming from Supabase
    • Network transmit throughput: Data going to Aurora

Key Metrics to Watch:

1. CDCLatency (Critical!)
  • < 5 seconds: Excellent ✅
  • 5-30 seconds: Good 👍
  • 30-60 seconds: Monitor closely ⚠️
  • > 60 seconds: May have issues ❌
2. FullLoadThroughputRows
  • Shows how fast data is copying
  • Should be steady (not dropping to 0)
3. ValidationFailedRecords
  • Must be 0 ✅
  • > 0: Data integrity issue ❌

Data Verification (GUI)

Once full load completes, verify your data before cutover.

Method 1: Using AWS Query Editor

AWS provides a built-in SQL editor for RDS databases.

Access Query Editor

  1. In AWS Console, go to RDS
  2. In left sidebar, look for "Query Editor"
  3. Click "Query Editor"

Connect to Aurora

  1. Select "Aurora" tab
  2. Choose your Aurora cluster from dropdown
  3. Database name: helium_production
  4. Database username: admin
  5. Password: [your-aurora-password]
  6. Click "Connect to database"

Run Verification Queries

Query 1: Compare row counts
-- In Aurora
SELECT 'user_dagad_entries' as table_name, COUNT(*) as row_count
FROM user_dagad_entries
UNION ALL
SELECT 'user_dagad_folders', COUNT(*) FROM user_dagad_folders
UNION ALL
SELECT 'user_dagad_files', COUNT(*) FROM user_dagad_files
UNION ALL
SELECT 'user_dagad_embeddings', COUNT(*) FROM user_dagad_embeddings
UNION ALL
SELECT 'user_dagad_usage_log', COUNT(*) FROM user_dagad_usage_log
UNION ALL
SELECT 'user_dagad_addon_imports', COUNT(*) FROM user_dagad_addon_imports;

Compare with Supabase: Row counts should match! ✅

Query 2: Spot-check sample data
-- Get recent entries from Aurora
SELECT entry_id, user_id, title, created_at
FROM user_dagad_entries
ORDER BY created_at DESC
LIMIT 10;

Run in both Aurora and Supabase - results should be identical!

Query 3: Verify foreign key relationships
-- Check for orphaned entries
SELECT COUNT(*) as orphaned_entries
FROM user_dagad_entries e
LEFT JOIN user_dagad_folders f ON e.folder_id = f.folder_id
WHERE e.folder_id IS NOT NULL AND f.folder_id IS NULL;

Result should be: 0 (no orphaned records)

Query 4: Verify embeddings
-- Check if embeddings exist
SELECT
    COUNT(*) as total_embeddings,
    COUNT(CASE WHEN embedding IS NOT NULL THEN 1 END) as embeddings_with_data
FROM user_dagad_embeddings;

Check: Both counts should match (all embeddings have data)

Verification Checklist

  • All tables exist in Aurora
  • Row counts match between Supabase and Aurora
  • Sample data looks correct (compare 10-20 rows)
  • No orphaned records (foreign keys intact)
  • Embeddings have data
  • Created_at timestamps preserved
  • No NULL values where they shouldn't be
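
The last two checklist items can also be spot-checked with SQL. Run these in both databases and compare; column names assume the schema used in the earlier queries:

-- Timestamp range should match between Supabase and Aurora
SELECT MIN(created_at) AS earliest, MAX(created_at) AS latest
FROM user_dagad_entries;

-- Unexpected NULLs in a column that should always be set
SELECT COUNT(*) AS null_user_ids
FROM user_dagad_entries
WHERE user_id IS NULL;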

Cutover Process (GUI)

Once data is verified and CDC is stable, switch your application to Aurora.

Pre-Cutover Checklist

Before switching, confirm:

  • Full load completed (Status: "Load complete")
  • CDC running smoothly for 24+ hours
  • Row counts match (verified above)
  • Sample data verified
  • Application tested against Aurora (staging)
  • Team notified
  • Rollback plan ready
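
One item worth adding to this checklist: AWS DMS copies rows but does not advance PostgreSQL sequences, so auto-generated IDs in Aurora can collide with migrated rows after cutover. If any migrated table uses a serial or identity column, advance its sequence in the Aurora Query Editor before switching. The names below are illustrative and assume entry_id is sequence-backed; substitute your real table and column names:

-- Example only: adjust table/column names to your schema.
-- Moves the sequence past the highest migrated value so new inserts don't collide.
SELECT setval(
  pg_get_serial_sequence('user_dagad_entries', 'entry_id'),
  (SELECT COALESCE(MAX(entry_id), 1) FROM user_dagad_entries)
);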

Monitor CDC Status Before Cutover

Check that CDC is caught up:

Check CDCLatency Metric

  1. Go to CloudWatch → Metrics → DMS
  2. Select your task → CDCLatency
  3. View graph
  4. Verify: < 5 seconds ✅

Or:

  1. In DMS task details, click "Monitoring" tab
  2. Look at "CDC latency" graph
  3. Verify: Line is near zero
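
You can also check the lag from the Supabase side. DMS CDC holds a logical replication slot on the source, and this standard PostgreSQL query shows how far behind it is (slot names vary, so look for the active logical slot):

SELECT slot_name,
       active,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)
       ) AS replication_lag
FROM pg_replication_slots
WHERE slot_type = 'logical';

A lag of a few kilobytes to megabytes that keeps shrinking is healthy; steady growth mirrors a rising CDCLatency metric.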

Option A: Feature Flag Switch (Recommended)

If your application supports feature flags:

Step 1: Enable Feature Flag

Enable: aim_use_aurora = true

Step 2: Monitor Application

Watch for error rate, response times, database connections

Step 3: Verify Aurora is Active

Check for new activity in Aurora Query Editor

Step 4: Let CDC Run for 1-2 Hours

Keep CDC running as a safety net

Step 5: Stop CDC Replication
  1. Go to DMS Console → Database migration tasks
  2. Find your task
  3. Click "Actions" dropdown
  4. Select "Stop"
  5. Confirm
Step 6: Stop Replication Instance (Save $$)
  1. Go to DMS Console → Replication instances
  2. Find your instance
  3. Click "Actions" dropdown
  4. Select "Stop"
  5. Confirm
Savings: ~$140/month by stopping the instance!

Option B: Maintenance Window Switch

If you prefer a scheduled cutover during maintenance:

T-15 (15 minutes before the window):
  • Announce maintenance to users
  • Put application in maintenance mode
  • Check CDCLatency < 5 seconds
T+0 (window start):
  • Stop application or enable maintenance mode
  • Wait 2-3 minutes for CDC to apply final changes
  • Verify row counts match
T+5 (switch connection):
  • Enable feature flag or update config
  • Restart application
  • Verify application starts successfully
T+10 (test):
  • Run smoke tests
  • Check Aurora Query Editor for activity
  • Verify application functionality
T+15 (resume):
  • Remove maintenance mode
  • Announce completion
  • Monitor for issues
Total downtime: ~15 minutes

Troubleshooting (GUI)

Issue 1: Endpoint Test Failed

Symptom: When testing endpoint connection, you see "Failed"

Fix: Connection Timeout

Cause: Firewall blocking connection

  1. Go to EC2 → Security Groups
  2. Find Aurora security group
  3. Check inbound rules
  4. Verify port 5432 is open for DMS instance
  5. If not, add the rule

Fix: Permission Denied

Cause: Wrong credentials

  1. Double-check username and password
  2. Verify in Query Editor
  3. Update endpoint with correct password
  4. Re-test connection

Fix: SSL Error

For Supabase:

  1. Edit source endpoint
  2. Change SSL mode to "require"
  3. Save changes
  4. Re-test connection

Issue 2: Task Failed or Stopped

Symptom: Task status shows "Failed" or "Stopped unexpectedly"

Check Logs

  1. Click on task name
  2. Click "Monitoring" tab
  3. Click "View CloudWatch Logs"
  4. Look for error messages (usually in red)

Common Errors and Fixes

Error: "Table does not exist"

  • Verify tables exist in Aurora
  • If missing, create tables in Aurora first
  • Restart task

Issue 3: Migration Too Slow

Symptom: Full load taking hours for small database

Fix: Upgrade Instance

  1. Stop task
  2. Go to Replication instances
  3. Click on your instance
  4. Click "Modify"
  5. Change instance class to larger size
  6. Click "Modify"
  7. Wait for modification to complete
  8. Restart task

Issue 4: CDC Lag Increasing

Symptom: CDCLatency metric keeps growing

Possible Causes

  • Too many writes to Supabase
  • Replication instance too small
  • Aurora can't write fast enough
  • Network issues

Fix: Upgrade Resources

  1. Upgrade replication instance
  2. Scale up Aurora instance if CPU/memory high
  3. Add more parallel threads in task settings

Issue 5: Validation Failures

Symptom: ValidationFailedRecords > 0

Find Failed Records

SELECT * FROM awsdms_validation_failures_v1
LIMIT 10;

Common Causes

  • Data type mismatch: Float precision differences
  • Encoding issues: Special characters
  • NULL handling: NULL vs empty string

Post-Migration Cleanup (GUI)

After migration is successful and stable.

Phase 1: Monitor for 48 Hours

Keep everything running for 2 days:

Daily Checklist (Day 1 & 2):

  • Check application error logs
  • Monitor Aurora CPU/memory (RDS console)
  • Verify no user complaints
  • Test key functionality
  • Spot-check data integrity

Phase 2: Cleanup After 1 Week

Once confident Aurora is stable:

Stop DMS Task

  1. Go to DMS Console → Database migration tasks
  2. Find your task
  3. Select the checkbox next to it
  4. Click "Actions" dropdown
  5. Select "Stop"
  6. Confirm

Stop Replication Instance

  1. Go to DMS Console → Replication instances
  2. Find your instance
  3. Select the checkbox
  4. Click "Actions" dropdown
  5. Select "Stop"
  6. Confirm
Saves: ~$140/month
Don't delete yet - keep stopped for 1 more week

Phase 3: Delete Resources After 2 Weeks

Only if everything is stable:

Delete DMS Task

  1. Go to Database migration tasks
  2. Find your STOPPED task
  3. Select checkbox
  4. Click "Actions" → "Delete"
  5. Type "delete" to confirm
  6. Click "Delete"

Delete Replication Instance

  1. Go to Replication instances
  2. Find your STOPPED instance
  3. Select checkbox
  4. Click "Actions" → "Delete"
  5. Confirm deletion

Delete Endpoints (Optional)

  1. Go to Endpoints
  2. Select source endpoint
  3. Click "Actions" → "Delete"
  4. Confirm
  5. Repeat for target endpoint

Phase 4: Supabase Data (Optional)

Option A: Keep Supabase Data (Recommended)

Why:

  • Serves as permanent backup
  • Minimal cost (just storage)
  • Can restore if disaster occurs
  • No harm in keeping it

Do nothing - just keep paying for Supabase storage

Option B: Export and Delete (After 3+ Months)

Only if absolutely certain Aurora is stable. Export backup first, store safely, then delete Supabase tables.

Summary & Best Practices

What You Learned

Congratulations! You now know how to:

  • Set up AWS DMS using only the GUI
  • Create and configure replication instance
  • Set up source and target endpoints
  • Create and monitor migration tasks
  • Verify data using Query Editor
  • Perform safe cutover with rollback option
  • Monitor and troubleshoot using CloudWatch
  • Clean up resources to save costs

Best Practices Recap

  1. Always test endpoints before creating tasks
  2. Enable validation on migration tasks
  3. Monitor CloudWatch logs during migration
  4. Keep CDC running 24-48 hours before cutover
  5. Verify data thoroughly before switching
  6. Have rollback plan ready
  7. Stop resources when not needed (save $$$)
  8. Keep Supabase data as backup

Key Metrics to Remember

  • CDCLatency: Should be < 10 seconds (ideally < 5)
  • ValidationFailedRecords: Should be 0
  • CPU Utilization: Should be < 80%
  • Task Status: Should be "Running" or "Load complete"

Cost Optimization

  • Stop replication instance when not migrating: Saves ~$140/month
  • Delete resources after migration complete: No ongoing costs
  • Use t3.micro for small databases: Cheaper
  • Keep task and instance stopped (not deleted) for first week: Free rollback insurance

You're Ready!

You now have a complete, GUI-focused guide for migrating from Supabase to Aurora using AWS DMS!

Next Steps:

  1. Bookmark this guide
  2. Gather your credentials
  3. Start with Phase 1: AWS Console Preparation
  4. Follow each section step-by-step
  5. Take screenshots as you go (for documentation)

Good luck with your migration! 🚀