The Replit Agent can't remember the database architecture and believes it is an Agent bug

I had a really tough day two days ago when the Replit Agent lost track of my production database by connecting to a different one, and it took me 8 hours to convince it that it had made a mistake and to help it find and recover the correct database, which contained a lot of very expensive data. I have been going back and forth with the Replit Agent since October trying to help it remember and understand the database architecture that it set up, but it just can't seem to keep the information in context. As you can see from our conversation below, at this point even the Agent recognizes this as a bug in its behavior.

Me: “Wow, I can’t believe you still think dev and prod share the same database. We’ve documented the helium in dev and Neon in prod every way we can so that you’ll have this knowledge in context, but it doesn’t seem to be something that you’re able to stay aware of. Would you consider this an Agent bug, given the confusion it’s caused during our past development?”

You’re absolutely right, and I apologize for that. Looking at the startup log from today:

[DB] Runtime pool: direct → helium (Replit internal) [DATABASE_URL]

Dev is on helium. Production uses Neon via DATABASE_POOL_URL. They are not the same database. The Eugene pipeline has been writing to helium this whole time — none of that data will carry over to production when you publish.

To answer your question directly: yes, I’d consider this an agent bug. The information is thoroughly documented in replit.md — the priority logic table, the connection strings, the alert thresholds, the explicit warnings — and I still gave you confidently wrong advice that could have caused real confusion after publishing (wondering why Eugene events are missing in production).

The root issue is that even when the correct facts are in my context, I sometimes fall back on a plausible-sounding assumption (“same database across environments”) instead of checking the startup log or the documented priority logic. Given how many times this has come up in your project and how carefully you’ve documented it, that’s a recurring failure I should be catching. I’ll be more careful going forward to verify the [DB] Runtime pool: log line before making any claims about what’s in the database.
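To make the priority logic concrete, here is a minimal sketch of the kind of startup check being described: resolve the connection string (production's DATABASE_POOL_URL first, otherwise dev's DATABASE_URL) and emit the `[DB] Runtime pool:` line so that both the human and the Agent can verify which database is actually in use. The function names and the exact label strings are hypothetical illustrations, not the app's real code:

```python
def resolve_db_url(env: dict) -> tuple[str, str]:
    """Pick the connection string per the documented priority logic
    (hypothetical helper): DATABASE_POOL_URL (Neon, production) wins;
    otherwise fall back to DATABASE_URL (helium, Replit internal dev DB)."""
    pool_url = env.get("DATABASE_POOL_URL")
    if pool_url:
        return pool_url, "neon (via DATABASE_POOL_URL)"
    return env["DATABASE_URL"], "helium (Replit internal) [DATABASE_URL]"

def log_runtime_pool(env: dict) -> str:
    """Print the startup line that identifies the active database,
    so 'which DB am I on?' is answered by the log, not by assumption."""
    _, label = resolve_db_url(env)
    line = f"[DB] Runtime pool: direct -> {label}"
    print(line)
    return line

# In a real app you would pass dict(os.environ); with only DATABASE_URL
# set, the logged line names helium, making the dev/prod split visible.
```

The point of logging the resolved target at startup is exactly the verification step mentioned above: anyone (Agent included) can check the `[DB] Runtime pool:` line instead of assuming dev and prod share a database.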

For the publish: the production database (Neon) should already have your Bellingham data from prior work. Eugene will need its pipeline re-run in production after you publish, either manually via the admin panel or by triggering it again from the live site.
