Over the past few weeks adding new cities to LocalMusicX.com, I ran into a scaling issue that felt very familiar — one of those “I’ve seen this movie before” moments. It took a few hours of back and forth with the Agent, but we got to the root cause.
At small scale, everything worked great. We were loading event data and filtering it client-side by city. Fast, simple, no issues.
Then we started adding more cities and events.
And things… slowed down. Then degraded. Then basically hit a wall.
The root cause was classic:

- the backend was returning all events for all cities
- the browser was doing the filtering
So even if a user just wanted 20 events for “Bellingham tonight,” we were sending the entire dataset of thousands of events over the wire.
It worked beautifully… until it didn’t.
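In code terms, the original approach looked roughly like this. A minimal sketch only: the `Event` shape, function names, and the 20-item page size are my assumptions, not LocalMusicX’s actual code.

```typescript
// Hypothetical event record, for illustration.
type Event = { id: number; city: string; startsAt: string };

// The anti-pattern: the server has already sent EVERY event for EVERY city,
// and the browser narrows the full dataset down after the fact.
function filterEventsClientSide(
  allEvents: Event[],
  city: string,
  limit = 20
): Event[] {
  return allEvents.filter((e) => e.city === city).slice(0, limit);
}
```

The function itself is fine; the problem is its input. `allEvents` grows with every city added, so the payload crossing the wire grows too, even though the user only ever sees `limit` rows.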
What’s interesting is how subtle this failure mode is:
- performance degrades gradually
- nothing obviously “breaks” at first
- easy to miss until you hit a tipping point
Then suddenly you’re debugging under pressure while a live site struggles.
The fix, of course, was straightforward once identified:

- move filtering to the server
- return only relevant data per request
Now the architecture scales properly — performance stays flat as we add more cities.
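The server-side version can be sketched like this. The endpoint shape (`GET /events?city=...&limit=...`), the page-size cap, and all names here are assumptions for illustration; the real backend presumably pushes the filter down into a database query.

```typescript
// Hypothetical event record, for illustration.
type Event = { id: number; city: string; startsAt: string };

// Stand-in for the full datastore (in production this would be a DB query
// with a WHERE clause, not an in-memory array).
const EVENTS: Event[] = [
  { id: 1, city: "Bellingham", startsAt: "2025-01-01T20:00" },
  { id: 2, city: "Seattle", startsAt: "2025-01-01T21:00" },
  { id: 3, city: "Bellingham", startsAt: "2025-01-02T19:00" },
];

// Handles the equivalent of GET /events?city=Bellingham&limit=20:
// the filter runs on the server, so only matching rows cross the wire.
function handleEventsRequest(params: URLSearchParams): Event[] {
  const city = params.get("city") ?? "";
  // Cap the page size so a client can't request the whole dataset anyway.
  const limit = Math.min(Number(params.get("limit") ?? 20), 100);
  return EVENTS.filter((e) => e.city === city).slice(0, limit);
}
```

With this shape, the response size is bounded by `limit` no matter how many cities exist, which is why performance stays flat as the dataset grows.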
What struck me most is how timeless this pattern is. I’ve hit this same issue multiple times over decades of development, just with different tech stacks.
Curious how others here approach this tradeoff:
- Do you default to server-side filtering early?
- Any heuristics for knowing when client-side filtering is “safe”?
- Have you run into similar slow-burn scaling failures?
Would love to hear war stories.