I've been considering a code review tool to look over my apps. www.coderabbit.ai is one I've heard of, but I haven't used it yet. Has anyone tried any of these tools, or got any recommendations?
I know there is a built-in tool in the Replit workspace, but it seems extremely basic.
Pretty sure my old company used Sentry, though I think for something different (their AI Code Review looks new). A lot of people love DataDog, and there's OWASP tooling too. I'm at an inflection point for making a decision on this soon, so I'll keep you posted if I like whatever I move forward with.
Hi Steve,
Might be a slight tangent but do you have any suggestions for learning system architecture fundamentals?
I'll keep an eye on this thread for code review tools, but I'm worried the issues in my rather complex apps stem from a lack of system architecture knowledge rather than from individual bugs.
I've heard coding agents are great on a micro scale, but that there are serious limitations if the user (like me) has no coding knowledge.
As someone who's been in tech all my life, starting out with a degree in computation that taught me all those fundamentals, architecture is just built into everything I live and breathe. The same goes for every other techie who is turning to AI dev tools as part of their process.
So I guess we've never had to think about where we'd go to learn it from scratch. But you are right to identify this. 99% of the issues with apps built by non-tech vibe coders come down to not planning the app properly and not creating some sort of solid design and architecture first.
That doesn't have to be as detailed as it was back in my early IT consultancy days, when documentation ran to hundreds of pages even for the smallest system.
But just rocking up to do the fun vibe coding part is always going to end in disaster, and it's why many vibed apps only get to 80% complete before they fall apart and people give up on them.
So, as with everything these days, I'd simply start with ChatGPT or Claude. Tell it you are working with Replit to give it context, and then get it to help plan your app properly before you start building. Explain that you'd like it to act as a guide and teacher as well as an expert systems architect, and to explain the principles along the way. Learn as you go.
Btw, this is why "vibe engineering" is possibly a better description for the whole vibe movement, covering the whole software lifecycle. Or maybe "vibe architect".
One thing I'm noticing: licensing is a bit nuanced. DataDog charges per "Host", and Assistant is telling me you'd be charged for both the application host and APM, since it would need to be implemented in both places (and you could have multiple APM hosts). Then log volume comes in as a secondary cost. Pricing could escalate quickly, which is something I've always heard about them.
There is also a code review tool from Google. That might be an important consideration, since Google Gemini 3 is likely to be part of it, and what I appreciate about where Google has got to is their support for a one-million-token context window: enough to meaningfully hold a full repo and evaluate and review the code.
Thanks @JMS. More options than I realised. I think I'll try CodeRabbit first, in GitHub, since my Replit app is synced to GitHub.
My preference is for it to flag issues, and then I'll ask the Replit agent to fix them one by one with my own tailored prompts. I have tech experience, so combining CodeRabbit with my own understanding makes sense. But for non-techies, I imagine you'd want a done-for-you fixing service where CodeRabbit and/or the Replit agent just get on and make the fixes.
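For anyone else going the GitHub route: CodeRabbit can be tuned with an optional `.coderabbit.yaml` in the repo root. Here's a minimal sketch of the kind of starter config I'm planning to try — the key names below are from my reading of their docs and may have changed, so double-check against CodeRabbit's current configuration reference before copying:

```yaml
# .coderabbit.yaml — hypothetical starter config; verify key names against CodeRabbit's docs
language: "en-US"
reviews:
  profile: "assertive"   # stricter feedback than the default "chill" profile
  auto_review:
    enabled: true        # have it review each new pull request automatically
  path_filters:
    - "!dist/**"         # skip generated build output
```

With auto-review on, it comments on every PR my Replit sync pushes up, and I can then feed each finding back to the Replit agent as its own fix prompt.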
@rajharrykissoon I already have an audit prompt I use to ask the Replit agent to find issues. The purpose of using CodeRabbit is to take that to another level and make the app really solid.
@Gipity-Steve I have a project that may be launch-ready. It's a multi-tenant SaaS. I need a final review by an appropriately skilled human in the loop to audit the vibe-coded project, debug it, and optimize the code for stability and efficiency.
My colleagues and I use various AIs as oppositional AIs to deliver critique in a useful way, a sort of "Board of Experts" model that reviews things. We do this for co-created written work, and it makes a great deal of sense for code development in particular, arguably for the same reasons that organizations field teams of testers alongside their teams of developers. I'm on the same path as you, so I'll share what I find as I test different testers as well, @Gipity-Steve!
The difference with all code testers is that they have been organized and trained to focus on code testing from some other human's point of view rather than from the Replit team's point of view. As with all things, as soon as you introduce a different team's goals into the mix, the methods and point of view change. There is no one-size-fits-all approach to code testing per se.