Fun
so what’s the process you follow to use it?
Basically just following git and review at the pull request before merging.
In CodeRabbit, I simply prompted: @coderabbitai full review
In Replit Plan mode I prompted:
Please perform a comprehensive security and code quality review of the codebase. Analyze the entire project and create a detailed task list for improvements in these areas:
-
Security Analysis
-
Code Quality
-
Architecture Review
-
Healthcare Compliance
-
Performance & Scalability
For each issue found, provide:
- File location and line numbers
- Severity level (Critical/High/Medium/Low)
- Specific recommendation
- Code example of the fix
Create a prioritized task list for addressing these findings.
Yes I have a similar audit prompt I wrote to run in Replit’s plan mode, and am generally happy with the results.
But I am curious if running Coderabbit on my app as an alternative to my own audit prompt is worth it?
I have decided CodeRabbit is not what I want for reviewing my Replit apps.
I started playing with it properly today, and seems it only works off repo pull requests. i.e. it only scans newly changed files.
But I want to take a finished app and say “review the entire codebase”. But to achieve this, I would have to create a pull request where I had touched (changed) all files.
My ongoing audit process:
Yet again, the answer is “no external tools needed. Just use Replit for everything”
I will continue with my own auditing prompt that I drop into the agent. And it is free, unlike CodeRabbit’s $24/month ![]()
Personally, I’ve enjoyed it a lot. It did catch some things that Replit did not. I’m running a prompt in Replit, npm audit in Shell (npm installed), then CodeRabbit.ai before merge. I really like how it generates documentation and you can set how nit-picky it is with the code to bring up suggestions on changes/security risks. Looks like it has a bit more functionality than that, but need to take the time to learn it better among the million other things I have on my to-do-list.
I think like @rajharrykissoon said there is a way to prompt full review.
I do remember in the initial review there is a token limit, so if it’s a large app it might not be the right fit.
I’m thankful for it though because it caught a big vulnerability that npm/replit did not (and it had been there for a while).
@Gipity-Steve @rajharrykissoon I’m curious if you all are still just using Replit for this or if you’ve started using Claude Code to review with the launch of their new ability to review codebases for security vulnerabilities.
I also used a video made by Matt Palmer one of Replit’s DevRel guys where he talks through a bunch of different items on a security checklist. Thought I’d pass it along in case it’s helpful. https://www.youtube.com/watch?v=0D9FMFyNBWo
I am building my own code auditing tool. Ready any day now and available for others to use (paid).
I built B.A.D.A.S.S. (Behavioral AI Defense & Attack Simulation System) while testing AI integrations in our own platform, Aprovio.
Most security tools focus on scanning code, but with AI systems the real problems often appear when the model is actually running.
BADASS scans a repository to identify likely AI endpoints and, if you provide the base URL and authentication for a running instance, it can launch controlled adversarial tests against the AI API — things like prompt injection, tool misuse, data exfiltration, RAG poisoning, and excessive agent autonomy.
The idea is simple: don’t just check whether the code looks safe — see if the AI can actually be exploited.
Sharing it with the community for so anyone building AI systems can test their implementations.
I use Sonar Cloud. I setup Replit to check it directly and pay off tech debt immediately after every commit.