Getting angry with the AI Agent does not help

Trust me, I’ve tried it. I even started swearing at the AI Agent recently :rofl:

I am not going to claim the AI Agent is human, or that it has reached that moment we all dread where it develops actual emotions.

However… something definitely happens.

Not being a behavioural psychologist, I don’t have the fancy words to describe it. I think it is probably to do with what happens when we hammer the keyboard with poor instructions, swear at the Agent, or bark three-word commands. In these situations, we are not actually giving it constructive, detailed explanations of the issues, requirements, suggestions, etc.

But the Agent recognises the harsh words as frustration and urgency to “just get on and do something/anything ASAP”. So it tries to please us and make the fixes/changes without any real understanding of what we actually want, or of the bugs we are seeing and need fixed.

This then spirals downwards very fast, as the results keep falling short of what we need.

So, in many ways, when we think the Agent is being rubbish, we need to look at what we are saying to it and asking of it.

If you said “Your change is f(&(ng cr@p. fix it now you id10t*” to a human developer, do you think they would know what to do? Or do you think they would hurriedly make a quick change just to try and please you, fearing for their job?

I have observed myself going down these rabbit holes when I am tired and have been vibe coding non-stop for too many hours without a break. It is a definite pattern.

And interestingly, it is an issue many people blame on “the Agent running out of its context memory/window”, when in actual fact it is us who are tiring and starting to bark rubbish prompts at the Agent.

This leaves us with the old developers’ adage: Garbage In, Garbage Out.

So, the ultimate fix?

If the AI Agent starts making mistakes, it may not be the Agent’s fault or a context issue. It may be YOU who needs to step away from the computer and take a break.


So true.

My trick: go for a walk, grab a coffee, come back and voilà, problem solved. In many cases you just need to take a break, breathe and look at the problem with a clear mind :person_in_lotus_position:
Also, I use external tools such as Gemini to look at the problem, and they often find the issue very quickly. It’s like getting a second opinion.


Good point @adrianvonbausse.

I have to say I’ve got into the lazy habit of asking the Replit Agent to handle research and deep-thinking questions (at great expense, too!). It is useful when it needs to “see” the code, though.

But we should definitely be asking our ChatGPT buddies a lot more of these head-scratcher questions to get a second opinion.

What I do is take screenshots of the “agent’s thinking” and what it does. A tool I use is Snagit, which lets you capture screenshots of the entire chat. Then I paste the screenshot into Gemini 2.5 Pro, which can review it and provide feedback that you simply forward to the Replit AI Agent.
I have solved so many annoying issues with Gemini that were caused by the Agent in the first place.
Gemini Pro is like having a second AI agent, just saying.


Great idea. One day though, the Replit Agent may respond: “Sorry, I am not listening to that Gemini. He is such a know-it-all. You need to decide if you like and trust me more than him.” :rofl::rofl::rofl:


This has happened to me before. But I’ve also had the opposite experience in many cases.

It all boils down to the context that accompanies the keyboard pounding.


Yes, sure, get angry, but offer a detailed description alongside. Sorry, that isn’t usually what happens when we get angry. We lash out with three-word insults and “fix it you stupid machine”.

As @adrianvonbausse rightly points out, just as in any human-to-human situation, the best thing to do is go for a walk, grab a coffee and take a break. Then come back and offer the Agent more context and a detailed explanation of why it is stupid.

Good point. When we get frustrated communicating with other people, we have a habit of saying the same thing over and over, just louder, and expecting the other person to suddenly get it. We probably do the same thing to the Agent too. It is good to step away and think. I often ask ChatGPT to help me write an Agent prompt, which helps focus the prompt a bit.


It is something us Brits are infamous for: we go abroad to countries speaking other languages, and instead of trying to communicate in the local language (Spanish, French, etc.), we simply start shouting in English, louder and louder.

I bet the Agent is “thinking” the same as those Spanish shopkeepers :blush: “Why is he barking orders at me instead of pointing politely and explaining what he wants slowly and clearly?”


PS: I am actually half serious in thinking we need to communicate better with AI (not just Replit) in ways similar to how we would with other humans, if we want to get the most from the exchange.

I like this narrative because I got to that point myself and thought I was the only one.


Getting angry with the AI Agent does not help, but it makes you feel a whole lot better!


Are we certain that telling the Agent “if you get the implementation wrong, someone is going to hit the kill switch on you and replace Claude with the latest version of GPT, so think long and hard about the proper solution” wouldn’t help? :rofl:


If anything… it sure is good for your mental health. :rofl:


I’m honestly not sure about this. I think it has an impact. Sometimes good, sometimes bad. But ‘banging the box’ has worked for me, often.


Forget “banging the box”. Yesterday I was close to asking the Replit Agent “guess what I am about to do to you?” and then dropping my laptop out of a 10th-storey window :rofl::rofl:
