Supabase Test Project Refresh

Project Goals 

The goal of this project was to create a repeatable process that refreshes a Supabase test project from a production backup.

The process needed to fully synchronize database schemas, data, sequences, and Supabase Storage buckets. The client required a consistent, low-risk way to perform test environment resets without manual intervention.

This process also allows the client to periodically validate external backups.

Work Summary 

I created a set of scripts to automate refreshing the test project using a backup of the production project. This process used PostgreSQL-native tools, SQL, and AWS CLI commands for S3 object synchronization. I also developed validations and safety mechanisms to ensure that the refresh process never impacts production systems.

Project Technologies

  • Supabase (managed PostgreSQL, Storage, Authentication)

  • PostgreSQL 15

  • AWS S3 storage and S3 API-compatible endpoints

  • Python (boto3), Bash scripting

  • PostgreSQL utilities: pg_dump, pg_restore, psql

  • Supabase CLI and platform-specific APIs

Challenges and Design Considerations

  • Protection of production data: The refresh process had to guarantee that no write operations or credential paths could impact the production Supabase instance. I built strict variable-based environment guards and isolated AWS profiles to prevent cross-project actions.
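The variable-based environment guard can be sketched as follows. This is an illustrative sketch, not the client's actual script: the project refs and the environment-variable names (`SUPABASE_TARGET_PROJECT_REF`, `REFRESH_CONFIRM`) are hypothetical.

```python
# Hypothetical set of production project refs; real values would live in config.
PRODUCTION_PROJECT_REFS = {"abcdprodref"}

def assert_safe_target(env: dict) -> str:
    """Return the target project ref, refusing to run if it looks like production.

    Requires the operator to repeat the target ref in a confirmation variable,
    so a stale shell environment cannot silently point at the wrong project.
    """
    ref = env.get("SUPABASE_TARGET_PROJECT_REF", "")
    if not ref:
        raise RuntimeError("SUPABASE_TARGET_PROJECT_REF is not set; refusing to run")
    if ref in PRODUCTION_PROJECT_REFS:
        raise RuntimeError(f"target {ref!r} is a production project; aborting")
    if env.get("REFRESH_CONFIRM") != ref:
        raise RuntimeError("REFRESH_CONFIRM must repeat the target project ref")
    return ref
```

A check like this would run at the top of every refresh script, before any connection is opened.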

  • Supabase-specific schemas: Schemas such as auth, storage, secrets, and internal metadata require careful handling. Some objects must be restored, others must persist, and some must be truncated and repopulated. The process respects Supabase’s schema requirements and restores only what is safe to overwrite.

  • Storage bucket synchronization: The client had multiple Storage buckets with thousands of objects. Full deletion and re-copy operations were slow. To address this, I used a manifest-driven approach to count, compare, and remove objects before copying in fresh data from the production export.
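The manifest-driven comparison can be sketched as a pure diff of two object manifests (object key mapped to ETag). The keys and ETags below are illustrative; in practice the manifests would be built from S3 listings of the production export and the test buckets.

```python
def diff_manifests(source: dict, target: dict):
    """Compare object manifests (key -> ETag) and classify the work.

    Returns (to_copy, to_delete): keys missing or changed in the target,
    and keys present in the target but absent from the source. Unchanged
    objects are skipped entirely, avoiding a full delete-and-recopy.
    """
    to_copy = sorted(k for k, etag in source.items() if target.get(k) != etag)
    to_delete = sorted(k for k in target if k not in source)
    return to_copy, to_delete

prod = {"avatars/a.png": "e1", "docs/r.pdf": "e2"}
test = {"avatars/a.png": "e1", "docs/r.pdf": "stale", "tmp/x": "e9"}
to_copy, to_delete = diff_manifests(prod, test)
# to_copy == ["docs/r.pdf"], to_delete == ["tmp/x"]
```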

  • Performance: Restoring large tables efficiently required appropriate pg_restore options, such as parallel restore jobs, along with database tuning to keep refresh times acceptable.

Development Tasks

I developed the process to clear existing data and structures in the test project and then reload it from the production backup.

I used the PostgreSQL pg_restore utility and SQL routines to truncate and reload individual application schemas.
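The truncate step can be sketched as SQL generation over a schema's tables. The schema and table names below are hypothetical examples; the generated statement uses standard PostgreSQL TRUNCATE behavior.

```python
def truncate_statement(schema: str, tables: list) -> str:
    """Build a single TRUNCATE covering all tables in a schema.

    RESTART IDENTITY resets owned sequences; CASCADE clears tables that
    reference these via foreign keys, so the reload starts from a clean slate.
    """
    qualified = ", ".join(f'"{schema}"."{t}"' for t in tables)
    return f"TRUNCATE TABLE {qualified} RESTART IDENTITY CASCADE;"

sql = truncate_statement("public", ["orders", "order_items"])
# 'TRUNCATE TABLE "public"."orders", "public"."order_items" RESTART IDENTITY CASCADE;'
```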

I developed the S3-based Storage refresh, which lists, counts, and copies bucket objects.
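The counting step can be sketched over the JSON payload returned by an S3 listing call such as `aws s3api list-objects-v2` (a top-level `Contents` array of objects with `Key` fields). The sample keys are illustrative.

```python
import json
from collections import Counter

def count_by_top_prefix(listing_json: str) -> Counter:
    """Count objects per top-level prefix in a list-objects-v2 style payload.

    In Supabase Storage exports, the top-level prefix typically corresponds
    to a bucket or folder, so these counts support before/after comparisons.
    """
    contents = json.loads(listing_json).get("Contents", [])
    return Counter(obj["Key"].split("/", 1)[0] for obj in contents)

sample = json.dumps({"Contents": [
    {"Key": "avatars/a.png"},
    {"Key": "avatars/b.png"},
    {"Key": "docs/r.pdf"},
]})
counts = count_by_top_prefix(sample)
# counts == Counter({"avatars": 2, "docs": 1})
```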

Validation Process

I created a series of validation scripts to confirm that the refresh process was correct by comparing a copy of the production project to the refreshed test project. Some of the key checks included:

  • Schema-level comparisons for required Supabase and application schemas.

  • Row count comparisons.

  • Row-level comparisons.

  • Sequence last values.
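The row-count and sequence checks above can be sketched as a comparison of two snapshots. The snapshot dicts here are illustrative; in practice they would be populated from `SELECT count(*)` queries and sequence `last_value` reads against each project.

```python
def compare_snapshots(prod: dict, test: dict) -> list:
    """Report names (tables or sequences) whose values differ between projects.

    Works for any name -> number snapshot, so the same check covers
    row counts and sequence last values. Missing names surface as None.
    """
    problems = []
    for name in sorted(set(prod) | set(test)):
        a, b = prod.get(name), test.get(name)
        if a != b:
            problems.append(f"{name}: production={a} test={b}")
    return problems

prod = {"public.orders": 1200, "public.orders_id_seq": 1200}
test = {"public.orders": 1200, "public.orders_id_seq": 1187}
mismatches = compare_snapshots(prod, test)
# mismatches == ["public.orders_id_seq: production=1200 test=1187"]
```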