Agent Thinking - But doing nothing

I keep getting x seconds of "work" performed - and being charged approx. 25 cents each time - but literally no work is happening, and the agent is clearly halting mid-operation, mid-sentence even.

Anyone else?


YES Me too! Same here.

There’s a major problem with Replit.
The Agent is making corrections that were never prompted. I also frequently notice that the Agent says “problem fixed,” but nothing has actually been done. It sometimes reports an issue on line 245 with “x,” but when I check, there isn’t even a line 245.

Files are often not saved correctly, and the Agent regularly works in the wrong file.
We’re about to go live — in the middle of testing and fine-tuning — and now the Agent has completely stopped working.

It’s highly frustrating. I honestly don’t know what to do right now.
Is anyone else on the platform experiencing this?

In separate posts I've mentioned several of these issues.

I just posted in the main issues thread about how I gave the agent feedback on how to properly clear a frontend field. The agent decided on an action but did not implement anything. It created a checkpoint and charged my account. The charge is likely for the effort it put into diagnosing the issue and deciding on a solution. I don't object to being charged for work the agent does, but the fact that it stops mid-work is annoying and inefficient.

I've experienced the agent stopping and failing to continue regardless of prompting. I'm not sure of the cause, but I've had success with starting a new chat. It's inefficient, but it has allowed development to continue. It helps to make your first prompt in the new chat a quick summary of the changes leading up to the failure, plus the prompt the agent failed to respond to, so the new chat can reclaim some of the context quickly.

I’ve noticed the agent has issues with line references and counting, although I haven’t experienced them with v3 yet. I used to use large planning documents for massive refactors and experienced this a lot during that process. I’ve asked the agent about this and the fault typically lies in context window management. I can understand this being difficult to manage even though it seems simple. The agent is focused on the code, not line numbers or counts.

I've experienced the agent working on incorrect files, but I've been able to mitigate this to nearly zero with research, planning, and documentation before asking the agent to implement changes. Plan Mode has been a great tool for me. Since I'm not a developer, I spend a lot on planning changes. I reference planning files so the agent knows what I'm referring to, and I have it include file/line/code references in the plan.

A common phrase I use when asking the agent to create a plan, and again before I have it implement one, is: "Ensure the plan is based on the existing implementation, avoids assumptions, and reuses existing code where possible." I've distilled many prompts and responses down to this single sentence to keep the agent from going rogue and creating entire new systems within the app that could have been small refactors. This was a huge problem before I started including it in prompts.

To me, this seems like the natural approach, but I don't think any agentic coding implementation works like this yet.