I know nothing about AI code generation (or about AI in general), but I wonder if you could include in your prompt a request that the AI describe the reasons for its choices and actually include those reasons as comments in the code.
Really? I find that Claude really likes to write "why nots" in comments when iterating on implementations and fixing bugs, to the extent that the comments grow into spot-logs of overly-specific documentation of what was tried and why it was scrapped.
---
Me: I've noticed that Claude is deleting my comments and tends not to insert many of its own. I don't want to go too crazy, but can we modify CLAUDE.md to dial up the comments a little? Like, try not to delete comments that are still relevant, and add a few explanatory comments.
Claude: [... proposal for modifying CLAUDE.md...]
Claude: Want me to apply this, tweak the wording, or place it differently? Note that your global ~/.claude/CLAUDE.md-style instructions (visible to me as system guidance) currently say "default to writing no comments" — the project-level rule above will override that for this repo, which is what you want.
Me: Where is this rule that says "default to writing no comments"? That seems crazy?
Claude: That text is part of the system prompt I'm given by Claude Code itself — it's not in any file in your project or in your ~/.claude/ config. It's baked into the harness's default instructions to the model.
Claude: Specifically, two lines from my system prompt:
Claude: Under "Doing tasks": "Default to writing no comments. Only add one when the WHY is non-obvious..."
Claude: Under "Text output": "In code: default to writing no comments. Never write multi-paragraph docstrings or multi-line comment blocks — one short line max."
This is not the case any more. I never thought LLMs write bad solutions, but when you let them think for you, you lose something important: understanding. And when something breaks, some people consider this proof that the LLM is the problem. And in a sense it is. But you are also to blame. And that's exactly the issue with this whole industry: in order to move faster you don't need to "type" faster. A person bashing his hands on the keyboard randomly at top velocity can get a pretty high WPM. In order to move faster you need to _think_ faster, be _snappier_ and _sharper_, and most people aren't.
I like how one of my colleagues phrased it in a company meeting. He asked one of the C-levels: "For LLMs to 10x me, I need to let them make all decisions and dictate intentions. By percentage, how much of the code do you want me to _know_?"
I think Claude is just behaving like all those programmers who make a VERY BIG DEAL about how much they HATE HATE HATE comments that might reiterate what the code does, and so go to extreme measures to avoid them.
Writing a skill / set of rules around what makes a good commit message would encourage the LLM to record its reasoning (however much we truly consider it to be "reasoning").
After using AI for months (Claude, Gemini, ChatGPT), I find it extremely rare for their code to work 'as is' on the first shot; it almost always requires several iterations and cleaning up edge cases.
When it does work 'first shot' it's usually when it's transferring existing working code to a new project which is slightly different.
I believe that increases the chances of one-shot code working, though it's also possible that it did that against Opus 4.5 and isn't necessary against Opus 4.7 but I haven't spotted the difference yet.
If you get them involved in the design process, they feel heard. Feeling heard is one surefire way to have a person feel involved. Feeling involved fosters a sense of ownership and pride which in turn helps keep a person engaged.
> When code production gets cheap, the cost doesn't disappear. It migrates.
> It was true then. It is unavoidably true now.
I find the constant critique of punchy style a bit tiring. It would be more productive for the grandparent to think about the content and state an opinion.
Where are you keeping your design? If it's a WYSIWYG doc like Word, what would it take for you to switch to a more text-driven world?
I don't disagree, but I've been thinking about this a bit: a lot of _human_ written code was/is less-than-fine. And a lot of human devs didn't understand the context when they wrote it.
I'm not advocating that we fire devs, or evangelizing that LLMs are awesome. But I do wish there was a slightly more honest take on the pre-LLM world: it's not just about cost reduction, it's about solving some long-term structural deficiencies of the industry.
andybak•1h ago
And then I make a decision based on that.
I guess I'm wondering if the article is missing half the picture. Yes, AI is wrong some of the time (and that percentage varies based on a host of variables). But it can read code as well as write it. And that does matter, because it changes the trade-offs this article is weighing up.
kuboble•15m ago
Once I even asked him to create a manim video about it for me.
rectang•4m ago
Claude in its default configuration has untapped potential for explaining and documenting code because it defaults to writing no comments. I added this to my global CLAUDE.md and so far so good:
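The actual snippet isn't shown above; as a purely illustrative sketch (my wording, not the commenter's), a project-level comment-policy override in CLAUDE.md might read something like:

```markdown
## Comments

- Do not delete existing comments unless they are clearly stale or wrong.
- When changing code, update nearby comments rather than removing them.
- Add a brief comment wherever the WHY of a change is non-obvious.
- Keep comments short: one line is usually enough.
```

Since CLAUDE.md is plain markdown instructions, a project-level rule like this sits alongside (and, per the exchange above, overrides) the harness's "default to writing no comments" system guidance for that repo.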
y3ahd0g•8m ago
With a new code base, my first goal is to understand how it CRUDs state. What structures and what operations?
I'm less concerned at the outset whether it's sorting carrots or processing orders for shirts.
AI seems to work way better for me when I tell it to ignore the use case and focus on surfacing runtime and mathematical operations embedded in the code.
Constraints on new abstractions, and telling it to stick to math terms, types, and objects, also seem to help avoid hallucinations and layers of useless indirection.