
We Didn’t Deploy Any Code - But Google Login Broke Anyway 🤡
This is one of those bugs that makes you stare at your screen and question your life choices.
The Symptom
“Continue with Google” works locally
Redirects correctly in production
…but the user never logs in
No errors. Just vibes.
The Plot Twist
👉 We didn’t push any code.
Not a single commit.
What Changed?
We migrated one of our sites from AWS (a single server) to Vercel (serverless).
That’s it.
No logic changes.
No auth refactor.
No “oops” deployment.
AWS Era: The Forgiving Parent
Single, persistent server
Session detection could be a bit slow
Wildcard redirect URLs like /** were tolerated
Even if auth was slightly delayed, the server eventually “figured it out”
Translation:
“Fine, fine, come on in.” - AWS
This setup was inherited from a previous developer.
I don’t actually know the guy personally - but whoever you are, sir…
I genuinely hope you mildly choke on your coffee one morning ☕😌
(non-fatal, just enough to reflect on your architectural decisions)
Because this setup only worked thanks to AWS being nice.
Vercel Era: The Strict Teacher
Serverless, on-demand execution
During Google OAuth, there’s a short moment where the session is still null
SessionWatcher.tsx saw that and went: “Oh, you don’t have a session? You’re logged out.”
So the user was logged out mid-OAuth flow.
No crash.
No error.
Just sadness.
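The race can be sketched in a few lines of TypeScript. This is a hypothetical illustration, not the real SessionWatcher.tsx - the function names and the `/auth/callback` + `?code=` check are assumptions about how such a watcher could detect an in-flight OAuth exchange:

```typescript
// During the OAuth redirect, the session is briefly null even though the
// user is mid-login. A naive watcher treats any null session as "logged out".
function naiveShouldLogOut(session: object | null): boolean {
  return session === null; // fires during the OAuth handshake too
}

// A safer check: never log out while the callback exchange is in flight,
// e.g. while the browser sits on /auth/callback with a ?code= param.
function safeShouldLogOut(
  session: object | null,
  pathname: string,
  search: string
): boolean {
  const oauthCallbackInFlight =
    pathname.startsWith("/auth/callback") && search.includes("code=");
  return session === null && !oauthCallbackInFlight;
}

// Mid-OAuth: session is still null, but the user must NOT be logged out yet.
console.log(naiveShouldLogOut(null)); // true  -> user kicked out mid-flow
console.log(safeShouldLogOut(null, "/auth/callback", "?code=abc123")); // false
```

On a persistent AWS server this window was short enough to get away with; on serverless the naive branch fires reliably.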
The Inherited Problem (a.k.a. Why This Was Extra Annoying)
This auth flow and session architecture was not originally written by us.
It was inherited.
No documentation.
No explanation.
Loose assumptions everywhere.
The code assumed:
Sessions are always immediately available
Redirects can be wildcards
Infrastructure will “just handle it”
That kind of setup survives on AWS.
It does not survive serverless.
And yes - the architecture was… all over the place.
The Real Fix (No Code, Just Common Sense)
We fixed the Supabase Redirect URLs:
https://VERCEL_DOMAIN/auth/callback
https://CUSTOM_DOMAIN/auth/callback
No wildcards.
No guessing.
No “come what may”.
And suddenly… it worked.
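The difference between the old wildcard setup and the fix boils down to exact matching. A minimal sketch, assuming placeholder domains (the function names are illustrative, not Supabase internals):

```typescript
// The two exact URLs we registered in the Supabase dashboard (placeholders).
const ALLOWED_EXACT = [
  "https://VERCEL_DOMAIN/auth/callback",
  "https://CUSTOM_DOMAIN/auth/callback",
];

// The fix: a redirect target must match one registered URL exactly.
function isAllowedExact(url: string): boolean {
  return ALLOWED_EXACT.includes(url);
}

// Roughly the old /** behavior: anything under the domain passes.
function isAllowedWildcard(url: string): boolean {
  return url.startsWith("https://CUSTOM_DOMAIN/");
}

const attacker = "https://CUSTOM_DOMAIN/evil/steal-token";
console.log(isAllowedWildcard(attacker)); // true  -> redirect hijack risk
console.log(isAllowedExact(attacker));    // false -> locked down
```

Exact matching is why the fix also shrinks the attack surface, as the security bonus below notes.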
Why Did This Work Before?
Because AWS is forgiving and Vercel is strict.
AWS lets loose setups survive.
Vercel forces you to be correct.
This migration exposed a setup that:
Worked by accident
Relied on timing
Relied on a persistent server
Was never actually correct
Bonus: We Accidentally Got More Secure 🔒
Our old wildcard redirect setup was potentially vulnerable to redirect hijacking.
By locking redirects down:
OAuth flow is predictable
Session exchange is correct
Attack surface is smaller
So yes - stupid bug, but good outcome.
Lessons Learned
Serverless exposes lazy assumptions
If auth breaks after infra changes - check redirect URLs first
Wildcard redirects are a red flag
Just because it worked before doesn’t mean it was right
Inherited code can hide landmines for years
Final Take
This wasn’t a Supabase bug.
This wasn’t a Vercel bug.
This wasn’t even a code bug.
This was a “we inherited it and AWS was covering for it” bug.
And honestly?
Those are the worst -
and funniest -
ones to debug. 😅