- Looking at compiler errors and fixing them. Looking at program output and fixing errors.
- Looking for documentation on the internet. This used to be a skill in itself: Do I need the reference work (language spec), a Stack Overflow answer, or an article?
- Typing out changes quickly. This goes a little deeper than just typing or using traditional "change all instances of this name" tools, but its essence is that to edit a program, you often have to make a bunch of changes to different documents that preserve referential integrity.
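To make the referential-integrity point concrete, here is a toy sketch (file names and contents are invented for illustration) of why renaming one function forces coordinated edits across every file that references it:

```python
import re

# Two hypothetical source files: one defines a function, one imports
# and calls it. Renaming the function in only one of them would break
# the program.
files = {
    "billing.py": "def compute_total(items):\n    return sum(items)\n",
    "report.py": "from billing import compute_total\nprint(compute_total([1, 2]))\n",
}

def rename_symbol(files, old, new):
    """Apply the rename to every file at once, preserving referential
    integrity (a crude whole-word text substitution, not a real
    refactoring engine)."""
    pattern = re.compile(rf"\b{re.escape(old)}\b")
    return {path: pattern.sub(new, text) for path, text in files.items()}

renamed = rename_symbol(files, "compute_total", "calculate_total")

# Every reference moved together; no file still mentions the old name.
assert all("compute_total" not in text for text in renamed.values())
```

A real tool would parse the code rather than pattern-match text, but the shape of the task is the same: one logical change, many physically separate edits that must land together.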
All of these can get amazingly faster because the agent is able to mix the three legs.
However, it doesn't save you from knowing what needs to be done. If you couldn't in principle type out the whole thing yourself, AI will not help you much. It's very good at confidently suggesting the wrong path and taking you there. It also makes bad choices that you can spot as it is writing out changes, and it's useful to just tell it "hey why'd you do that?" as it writes things. If you don't keep it in line, it veers off.
The benefit for me is the level of thinking it allows me. If I'm working on a high-level change and I write a low-level bug, I have to spend my attention figuring that out before coming back to my original context. The window of time during the day when I can attempt a series of low-level edits that satisfy a high-level objective is narrow. With AI, I can steer the AI while I'm doing other things. I can do it late at night, or when I'm on a call. I'm also not stuck "between save points," since I can always make the AI finish off whatever it was doing.
This is how I use AI coding tools, but I've internally described it to myself as, "Use the tool to write code only when I am certain of what the expected output should be."
If there is something that needs to be done and some reasoning is required, I just do it myself.
onion2k•47m ago
You definitely can where someone has just vibe coded a thing in a weekend. When someone has actually taken a lot of care to use AI to build something well, using many iterations of small steps to create code that's basically what they'd have written themselves and to integrate good UX driven by industry-standard libraries (e.g. shadcn, daisy), then it looks pretty much exactly like any other MVP app... because that's what it is.
jcims•42m ago
Also, generated comments tend to explain how/what rather than why, and why is usually what I want to know.
onion2k•38m ago
It does if you let the AI generate lots of code at once. If you take small steps and build iteratively, telling it what to do (following a plan that the AI generated, if you want), then it doesn't.
This isn't revelatory though. It's exactly the same as with a developer - if you give a person a vague idea of what they should make and just leave them to get on with it, they'll come back with something that does things you didn't want, too.
observationist•24m ago
AI can even get there, if guided by someone who knows what they're doing. We need more tutorials on how to guide AI, just like tutorials for photoshop used to walk amateurs through producing excellent logos, designs, and graphics. A whole generation of photoshop users learned to master the tools by following and sharing cookie cutter level instructions, then learning to generalize.
We should see the same thing happen with AI, except I think the capabilities will exceed the relevance of instructions too fast for any domain skills to matter.
If AI coding stagnates and hits a plateau for a couple of years, then we'll see human skill differentiate uses of the tool in a pragmatic way, and a profusion of those types of tutorials. We're still seeing an acceleration of capabilities, though, with faster, better models with more capabilities appearing more frequently, roughly every 3-4 months.
At some point there will be a practical limit to release schedules, with resource constraints on both human and compute sides, and there will be more "incremental" updates, comparable to what Grok is already doing with multiple weekly incremental updates on the backend, and 4-5 major updates throughout the year.
Heck, maybe at some point we'll have a reasonable way of calibrating these capabilities improvements and understanding what progress means relative to human skills.
Anyway - the vast majority of AI code feels very "cheap Ikea" at this point, with only a few standouts from people who already knew what they were doing.