I feel like it's a bad habit to treat these kinds of tasks as "Claude"-specific when really most of them can be handled by any LLM.
Analemma_•1mo ago
My experience thus far has been that for coding work, the model is at most 50% of the magic sauce, quite possibly less. It’s the harness and tooling doing a lot of the heavy lifting, and for the moment Claude Code remains on top. I think Gemini 3.0 is smarter than Opus 4.5 overall, but Claude Code still handily outperforms Gemini CLI, which all comes down to the quality of the tooling and system prompts.
sam1522•1mo ago
Hmm, I would rather say to use Codex instead of Claude Code, because Codex would solve my problems in one shot whereas I would need to prompt 2-3 times with Claude.
seesawtron•1mo ago
Sure, it makes a few things easier to execute when you have an agent running locally. But many people fear what such an agent might do to your local system under a wrong or misunderstood prompt. There is also mistrust of how it may access your data locally, compared to a more controlled scenario where you specifically choose which files it has access to. So, no, "everyone" "should" not necessarily be using more CC. It depends on the tasks at hand and the risks associated with them.
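For what it's worth, Claude Code does expose a permission system for the "choose which files it has access to" scenario. A minimal sketch, assuming the documented `permissions.allow` / `permissions.deny` rules in a project-level `.claude/settings.json` (the specific patterns below are illustrative, not exhaustive):

```shell
# Sketch: restrict what the agent may read or run in this project
# (assumes Claude Code's documented permissions.allow / permissions.deny
# settings; the patterns are illustrative examples).
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "allow": ["Read(src/**)", "Bash(npm test:*)"],
    "deny": ["Read(.env)", "Bash(rm:*)"]
  }
}
EOF
```

This doesn't address the other concern, though: even with tight file permissions, whatever the tools do read is still sent off-machine for inference.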
johann8384•1mo ago
But it isn't running locally. It uses tools locally, but the LLM is still shipping the output of those tools to Anthropic and running the inference there.
A partial compromise would be Bedrock, but that still ships the data to Amazon to execute the LLM on their processors.
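Concretely, routing Claude Code through Bedrock is just a matter of environment configuration. A minimal sketch, assuming the documented `CLAUDE_CODE_USE_BEDROCK` switch and standard AWS credentials already set up in your environment:

```shell
# Sketch: point Claude Code at Amazon Bedrock instead of Anthropic's API
# (assumes the documented CLAUDE_CODE_USE_BEDROCK switch and that AWS
# credentials/region are already configured for a Bedrock-enabled account).
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION=us-east-1   # a region where Anthropic models are enabled
# claude   # then launch as usual; inference now runs on AWS, not Anthropic
```

Your prompts and tool output still leave the machine either way; you're only choosing which cloud processes them.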