Summary
This postmortem analyzes a failed manual backup of a Supabase‑hosted PostgreSQL database using pg_dump on macOS. The core issue was malformed connection strings combined with use of the connection pooler endpoint instead of the direct database host, which prevented pg_dump from connecting. The result was repeated hostname resolution errors and authentication failures.
Root Cause
The failure stemmed from a combination of factors:
- Using the pooled connection endpoint, which does not support pg_dump or pg_restore
- Incorrectly formatted connection strings, including:
- Missing password encoding
- Misplaced username segments
- Typos in the hostname
- Not using the “direct database connection” host provided by Supabase
- Local PostgreSQL client unable to resolve the incorrect hostname
The key takeaway: Supabase’s pooled connection URL cannot be used for backups. You must use the direct connection string from the project settings.
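The distinction can be made mechanical before any dump is attempted. Below is a minimal shell sketch; the hostname patterns (`db.<project-ref>.supabase.co` for direct, `*.pooler.supabase.com` for pooled) are assumptions based on common Supabase projects and may differ for yours:

```shell
# Hypothetical pre-flight check: refuse to run pg_dump against a pooled URL.
# Assumed patterns: direct = db.<project-ref>.supabase.co
#                   pooled = *.pooler.supabase.com
DB_URL="postgresql://postgres:secret@db.abcd1234.supabase.co:5432/postgres"
case "$DB_URL" in
  *pooler.supabase.com*) echo "pooled URL: do not use with pg_dump" ;;
  *)                     echo "direct URL: safe for pg_dump" ;;
esac
# → direct URL: safe for pg_dump
```

A check like this is cheap to drop into a backup script and fails fast, instead of surfacing as a cryptic connection error mid-backup.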
Why This Happens in Real Systems
These issues are extremely common in cloud‑hosted PostgreSQL environments:
- Connection poolers strip session‑level features, making them incompatible with tools like pg_dump
- Developers copy the wrong connection string, especially when multiple URLs look similar
- Passwords containing special characters break URLs unless properly URL‑encoded
- DNS resolution failures occur when the hostname is malformed by even one character
- Supabase uses different hostnames for pooled vs. direct connections, which is easy to overlook
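The URL‑encoding pitfall in particular is easy to reproduce. A minimal sketch, using a made‑up password with reserved characters (`safe=''` matters: without it, Python's quote leaves `/` unescaped, which still breaks the URL):

```shell
# Encode a password for use inside a postgresql:// URL.
# "p@ss/word!" is a hypothetical example password.
ENCODED=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=''))" 'p@ss/word!')
echo "$ENCODED"   # → p%40ss%2Fword%21
```

The encoded value can then be substituted directly into the connection string's password segment.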
Real-World Impact
When this happens in production environments, the consequences can be significant:
- Backups silently fail, leaving teams without disaster‑recovery options
- RLS‑protected data cannot be exported, blocking migrations
- Restores fail due to missing roles or permissions
- Downtime increases because engineers must manually debug connection issues
- Security risks emerge if engineers disable RLS or modify roles to “make it work”
Example
Below is a correct pg_dump invocation using a direct Supabase connection string.
(Replace values with your actual project credentials.)
pg_dump "postgresql://postgres:YOUR_URL_ENCODED_PASSWORD@db.YOUR-PROJECT.supabase.co:5432/postgres" \
--format=custom \
--no-owner \
--file=database-dump.dump
To restore:
pg_restore --dbname="postgresql://postgres:YOUR_URL_ENCODED_PASSWORD@db.YOUR-PROJECT.supabase.co:5432/postgres" \
--clean \
--if-exists \
database-dump.dump
How Senior Engineers Fix It
Experienced engineers approach this systematically:
- Verify the correct host from Supabase Dashboard → Project Settings → Database → Connection Info
- Ensure the password is URL‑encoded (e.g., using python3 -c "import urllib.parse; print(urllib.parse.quote('yourpass', safe=''))")
- Avoid the pooled endpoint (pooler.supabase.com)
- Test connectivity with psql before running pg_dump
- Use --format=custom for safer restores
- Confirm RLS policies do not block the dump, or temporarily use the postgres role
Why Juniors Miss It
Less‑experienced engineers often struggle with this class of issue because:
- They assume all connection strings are interchangeable
- They don’t realize poolers break session‑level operations
- They overlook URL‑encoding requirements
- They trust that copy‑pasted commands should “just work”
- They misinterpret DNS errors as PostgreSQL errors
- They don’t yet know that Supabase exposes multiple hostnames with different purposes