

I think it really depends on the user and how you communicate with the AI. People are different, and we communicate differently. But if you're precise and you tell it what you want and what your expected result should be, it's pretty good at filling in the blanks.
I can pull really useful code out of Claude, but ask me to think up a prompt to feed into Gemini for video creation and the results look like shit.







The longer the project, the more stupid Claude gets. I've seen it both in chat and in Claude Code, and Claude explains the situation quite well:
Increased cognitive load: Longer projects have more state to track - more files, more interconnected components, more conventions established earlier. Each decision I make needs to consider all of this, and the probability of overlooking something increases with complexity.
Git specifically: For git operations, the problem is even worse because git state is highly sequential - each operation depends on the exact current state of the repository. If I lose track of what branch we’re on, what’s been committed, or what files exist, I’ll give incorrect commands.
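
What works for me instead is pasting a quick snapshot of that state into the chat at the start, so it doesn't have to guess. Rough sketch in Python (the `git` wrapper is just my own helper; the underlying commands are standard git):

```python
import subprocess

def git(*args: str) -> str:
    # Run a read-only git command and capture its output.
    return subprocess.run(
        ["git", *args], capture_output=True, text=True, check=True
    ).stdout.strip()

# The sequential state it loses track of: current branch,
# latest commit, and any uncommitted changes.
snapshot = "\n".join([
    f"branch: {git('branch', '--show-current')}",
    f"HEAD:   {git('log', '-1', '--oneline')}",
    "status:",
    git("status", "--short") or "(clean)",
])
print(snapshot)
```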
Anything I do with Claude, I split into different chats. I won't give it access to git, but I will provide it an updated copy of the repository via Repomix. I get much better results because of that.
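The Repomix step is basically one command; I wrap it roughly like this (`pack_repo` is my own helper, and I'm going from memory on the `--style`/`--output` flags, so check `npx repomix --help`):

```python
import subprocess
from pathlib import Path

def pack_repo(output: str = "repo-context.md") -> str:
    # Repomix packs the whole repo into a single file suitable for
    # pasting into a fresh chat. Run via npx, so Node.js is assumed.
    subprocess.run(
        ["npx", "repomix", "--style", "markdown", "--output", output],
        check=True,
    )
    return Path(output).read_text(encoding="utf-8")

context = pack_repo()
print(f"packed {len(context):,} characters of repo context")
```

Then each new chat starts from a fresh, accurate view of the repo instead of whatever stale picture the old conversation had built up.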