Cloud Migration | Database • Apr 12, 2026 • 8 min read

PostgreSQL Zero-Downtime Migration: On-Prem to AWS Cloud

By JoeyAlpha5

1. Introduction

This article demonstrates a professional, zero-downtime homogeneous database migration from a local "on-premise" PostgreSQL database to AWS RDS.

It reproduces the process that AWS DMS runs "under the hood" when migrating a homogeneous database. Instead of relying on a managed service, this architecture manually implements PostgreSQL native logical replication (Publication/Subscription) operating securely over a Site-to-Site IPsec VPN tunnel. A Python script simulates live order traffic to prove that data continuously syncs during the migration process.
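The live-traffic component can be as simple as a loop that inserts mock orders into the source database. A minimal sketch of such a generator (the column names and the `make_order`/`insert_order` helpers are illustrative, not the exact script from the repo):

```python
import random
from datetime import datetime, timezone

PRODUCTS = ["widget", "gadget", "gizmo"]

def make_order():
    # Build one mock e-commerce order row (illustrative schema).
    return {
        "product": random.choice(PRODUCTS),
        "quantity": random.randint(1, 5),
        "unit_price": round(random.uniform(5.0, 50.0), 2),
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

def insert_order(cursor, order):
    # In the real script this runs against the source_db container
    # via a PostgreSQL driver such as psycopg2 (connection not shown).
    cursor.execute(
        "INSERT INTO live_orders (product, quantity, unit_price, created_at) "
        "VALUES (%(product)s, %(quantity)s, %(unit_price)s, %(created_at)s)",
        order,
    )

if __name__ == "__main__":
    # The real generator loops forever with a short sleep between inserts.
    print(make_order())
```

In the real script, a `while True` loop calling `insert_order` every second or so is enough to keep the replication stream visibly active.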

You can download the full source code for this project on GitHub.


2. Architecture Overview

1. The On-Premise Environment

Simulated using Docker, this environment contains the source PostgreSQL database, a pgAdmin dashboard, and a live Python application generating mock e-commerce transactions.

Architectural diagram of the local on-premise Docker setup, highlighting the source database, live traffic script, and pgAdmin interface
Fig 1: Simulated on-premise environment using interconnected Docker containers.

2. The Cloud Environment

Provisioned via Terraform, this environment features a secure VPC, private subnets, an RDS PostgreSQL instance, and a Virtual Private Gateway (VGW) ready to receive the VPN connection.

Architectural diagram of the AWS cloud environment, showing a VPC, subnets, RDS instance, and a Virtual Private Gateway
Fig 2: Cloud infrastructure layout provisioned dynamically via Terraform.

3. The Combined Migration Architecture

The strongSwan container establishes an IPsec tunnel to the AWS VGW, allowing the AWS RDS instance to securely reach back into the local Docker network and subscribe to the live transaction feed.

Combined architecture diagram showing the Site-to-Site IPsec VPN tunnel securely connecting the on-premise Docker network to the AWS VPC
Fig 3: End-to-end logical replication architecture connected over a secure Site-to-Site VPN tunnel.

3. Prerequisites

  • Docker & Docker Compose: To run the local environment.
  • Terraform: To provision the AWS infrastructure.
  • AWS CLI: Configured with credentials that have permission to create VPCs, RDS instances, and VPNs.
  • psql: Installed locally (or accessible via your pgAdmin container) to execute the schema dump.

4. Step-by-Step Execution Guide

Phase 1: Start the On-Premise Infrastructure

  1. Navigate to the on-prem directory: cd on-prem
  2. Start the Docker containers: docker-compose up -d --build
  3. Verify the live traffic generator is working by checking the logs: docker logs -f fake_live_traffic
  4. Open your browser and navigate to http://localhost:5050 to access pgAdmin.
  5. Log in using admin@sunbirdanalytics.com and password sunbird_analytics.
  6. Add the local server using host source_db, username sunbird_analytics, and password sunbird_analytics. You will see the live_orders table actively populating.

Phase 2: Gather Required IP Addresses

Before provisioning the cloud, you need two critical IP addresses.

  1. Find your Public IP: Run curl ifconfig.me in your terminal. Save this for Terraform.
  2. Find your Source DB Docker IP: Run docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' source_db.
  3. Open scripts/main.md and replace the IP address in the CREATE SUBSCRIPTION connection string with this Docker IP.

Phase 3: Provision the AWS Infrastructure

  1. Navigate to the cloud setup directory: cd ../cloud-setup
  2. Initialize Terraform: terraform init
  3. Deploy the infrastructure (replace the placeholder with your actual public IP from Phase 2): terraform apply -var="my_ip=YOUR_PUBLIC_IP_HERE"
  4. Type yes to confirm. This will take roughly 5-10 minutes to provision the RDS instance and VPN.
  5. Once complete, retrieve your new RDS endpoint via the AWS Console or by running:
    aws rds describe-db-instances \
      --region us-west-1 \
      --db-instance-identifier target-cloud-db \
      --query "DBInstances[0].Endpoint.Address" \
      --output text

Phase 4: Map Your Infrastructure with Sunbird Insyte

Once terraform apply finishes and everything is successfully provisioned on AWS, you can visualize your new cloud environment.

Use Sunbird Insyte to automatically map all of your newly provisioned infrastructure. It will generate a clear overview of all your provisioned resources, giving you total visibility into your active cloud setup.

Dashboard view of Sunbird Insyte showing an interactive architectural map of the newly deployed AWS resources
Fig 4: Verifying the deployed AWS VPC and resources automatically with Sunbird Insyte.

Phase 5: Configure the Site-to-Site VPN Tunnel

AWS has generated the keys for your VPN tunnel, but your local Docker container needs them to establish the connection.

  1. Log into the AWS Management Console and navigate to the VPC Dashboard -> Site-to-Site VPN Connections.
  2. Select your newly created VPN and click Download configuration (Vendor: Generic).
  3. Open the downloaded file and find the IPSec Tunnel #1 section to locate your Pre-Shared Key (PSK) and the Virtual Private Gateway IP (<AWS_TUNNEL_IP>).
  4. Update vpn_config/ipsec.secrets in your project folder:
    <YOUR_PUBLIC_IP> <AWS_TUNNEL_IP> : PSK "YOUR_PRE_SHARED_KEY_HERE"
  5. Update vpn_config/ipsec.conf in your project folder:
    leftid=<YOUR_PUBLIC_IP>
    ...
    right=<AWS_TUNNEL_IP>
  6. Start the tunnel: docker restart aws_ipsec_vpn, then docker exec aws_ipsec_vpn ipsec up aws-tunnel-1.
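If you script this step, the PSK and tunnel IP can be pulled out of the downloaded file rather than copied by hand. A sketch, assuming labels roughly like those in the "Generic" vendor config (the exact labels in your file may differ slightly, so adjust the regexes to match what you downloaded):

```python
import re

# Illustrative excerpt of the downloaded config; not the exact AWS layout.
SAMPLE_CONFIG = """
IPSec Tunnel #1
  - Pre-Shared Key           : EXAMPLEPSK123
Outside IP Addresses:
  - Customer Gateway         : 203.0.113.10
  - Virtual Private Gateway  : 198.51.100.20
"""

def extract_tunnel_settings(text):
    # Grab the first Pre-Shared Key and the Virtual Private Gateway IP.
    psk = re.search(r"Pre-Shared Key\s*:\s*(\S+)", text).group(1)
    vgw = re.search(r"Virtual Private Gateway\s*:\s*(\S+)", text).group(1)
    return psk, vgw

psk, vgw = extract_tunnel_settings(SAMPLE_CONFIG)
# Render the ipsec.secrets line described in step 4 above.
print(f'<YOUR_PUBLIC_IP> {vgw} : PSK "{psk}"')
```

The printed line drops straight into vpn_config/ipsec.secrets once you substitute your real public IP.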

Phase 6: Inject Local Network Routes

Even with the tunnel "Up," the pgAdmin and Postgres containers need a route to reach AWS.

  1. Find VPN Container IP: docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' aws_ipsec_vpn
  2. Add routes to the pgAdmin and local DB containers so they send traffic destined for AWS through the VPN tunnel:
    docker exec -u root pgadmin_dashboard ip route add 10.0.0.0/16 via <VPN_IP>
    docker exec -u root source_db apt-get update && docker exec -u root source_db apt-get install -y iproute2
    docker exec -u root source_db ip route add 10.0.0.0/16 via <VPN_IP>
  3. Test Connectivity: From pgAdmin container, ping the AWS RDS endpoint to confirm connectivity:
    docker exec -it pgadmin_dashboard ping <YOUR_RDS_ENDPOINT>
  4. If the ping is successful, you have established a secure connection from your local Docker environment to AWS! You can now also connect to the AWS RDS instance from pgAdmin using <YOUR_RDS_ENDPOINT>, the username SunbirdAdmin, and the password sunbird_analytics that you set in Terraform.
pgAdmin interface screenshot displaying a successful database connection to the AWS RDS endpoint along with replicated live order data
Fig 5: Connecting directly to the AWS cloud server via the local pgAdmin dashboard.
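Note that security groups often block ICMP, so a failed ping does not always mean the tunnel is broken. A TCP check against the Postgres port is a more direct test of what actually matters. A minimal sketch (replace the placeholder endpoint with your real one):

```python
import socket

def can_reach(host, port, timeout=3.0):
    # Return True if a TCP connection to host:port succeeds within timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Replace with your actual RDS endpoint from Phase 3.
    print(can_reach("<YOUR_RDS_ENDPOINT>", 5432))
```

Run this from inside a container that has the VPN route injected; `True` means the tunnel and the RDS security group are both letting Postgres traffic through.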

Phase 7: Execute the Migration Using PostgreSQL Publication and Subscription

1. Schema Migration

Export the schema from your Docker Postgres:

docker exec source_db pg_dump -s -U sunbird_analytics -d sunbird_analytics > schema.sql

Import the schema into AWS RDS: open schema.sql and copy its entire contents. Then, in pgAdmin, open a Query Tool window connected to the sunbird_analytics database on the AWS RDS server, paste the schema.sql content, and execute it. This creates all the tables and structures needed on AWS.

2. On the Source DB (Local Docker)

Create the "Publisher." This tells your local DB to start publishing every change recorded in its write-ahead log.

-- Run this in the 'sunbird_analytics' database on Localhost
CREATE PUBLICATION sunbird_migration_pub FOR ALL TABLES;

3. On the Target DB (AWS RDS)

Get the IP address of your local Docker Postgres container. This is the "host" that AWS will connect to.

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' source_db

Create the "Subscriber." This tells AWS RDS to reach back through the VPN and start pulling in data.

-- Run this in the 'sunbird_analytics' database on AWS RDS
CREATE SUBSCRIPTION sunbird_migration_pub
CONNECTION 'host=172.20.0.2 port=5432 user=sunbird_analytics password=sunbird_analytics dbname=sunbird_analytics'
PUBLICATION sunbird_migration_pub;

4. Reset Sequences (CRITICAL)

Postgres logical replication copies row data but not sequence state, so the sequences behind your ID columns fall behind on the target. You must manually "bump" them on AWS so the next generated ID is correct.

Run this on AWS RDS:

-- Bring the live_orders id sequence up to date: the sequence is set to
-- max(id), so the next generated ID will be max(id) + 1
SELECT setval(pg_get_serial_sequence('live_orders', 'id'), coalesce(max(id), 1)) FROM live_orders;
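If your schema has more than one table with a serial id column, you can generate one reset statement per table instead of typing them by hand. A small sketch (the table list passed in is illustrative):

```python
def sequence_reset_sql(tables, id_col="id"):
    # Emit one setval() statement per table, mirroring the query above:
    # each sequence is set to max(id), so the next ID is max(id) + 1.
    stmts = []
    for t in tables:
        stmts.append(
            f"SELECT setval(pg_get_serial_sequence('{t}', '{id_col}'), "
            f"coalesce(max({id_col}), 1)) FROM {t};"
        )
    return "\n".join(stmts)

print(sequence_reset_sql(["live_orders"]))
```

Run the printed statements against the AWS RDS database right before you cut traffic over to the cloud.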

Need Help With Your Infrastructure?

Navigating complex database migrations and modernizing your cloud infrastructure doesn't have to be a solo mission.