shakna•1h ago
... Function names compose much of the API.
The API is the structure of the codebase.
This isn't some triviality you can throw aside as unimportant; it is the shape the code has today, and it limits and controls what it will have tomorrow.
It's how you make things intuitive, and it is equally how you ensure people follow a correct flow and don't trap themselves in a security bug.
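As a rough sketch of that last point (TypeScript, with names invented for the example), the API can make the safe flow the only flow:

    // A branded type so raw strings can't reach the renderer unescaped.
    type SafeHtml = { readonly __brand: "SafeHtml"; value: string };

    function escapeHtml(raw: string): SafeHtml {
      const value = raw
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;");
      return { __brand: "SafeHtml", value };
    }

    // render only accepts SafeHtml, so forgetting to escape is a compile error.
    function render(fragment: SafeHtml): string {
      return `<div>${fragment.value}</div>`;
    }

    render(escapeHtml("<script>alert(1)</script>")); // ok
    // render("<script>alert(1)</script>");          // rejected by the type checker

The naming does as much of the work as the types: escapeHtml and SafeHtml tell you the intended order of operations before you ever read a signature.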
glimshe•1h ago
I think I'd actually have a use for an AI that could receive my empty public APIs (such as a C++ header file) as input and produce a first rough implementation. Maybe this exists already; I don't know, because I haven't done any serious vibe coding.
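Something like this as the input, to make it concrete (a made-up TypeScript stand-in for the header idea):

    // Made-up example of the kind of input I mean: declarations only, no bodies.
    export interface KeyValueStore {
      get(key: string): Promise<string | null>;
      set(key: string, value: string, ttlSeconds?: number): Promise<void>;
      delete(key: string): Promise<boolean>;
    }

    // The tool's job would be to hand back a first rough class implementing this.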
stuaxo•59m ago
Yeah it can, though rough is definitely the word.
And sometimes the LLM just won't go in the direction you want, but that's OK - you just have to go write those bits of code.
It can be surprising where it works and where it doesn't.
Just go with those first suggestions, though, and the code will end up rough.
jeroenhd•34m ago
As long as you're reinventing the wheel (implementing some common pattern because you don't want to pull in an entire dependency), that kind of AI generation works quite well. Especially if you also have the AI generate tests for its code, so you can force it to iterate on itself while it gets things wrong the first couple of tries. It's slow and resource intensive, but it'll generate something mostly complete most of the time.
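Even a couple of throwaway generated tests give it something concrete to loop against; a hypothetical example (vitest, with a made-up slugify module under test):

    // Hypothetical generated tests: the point is just to give the model a failing signal.
    import { describe, expect, it } from "vitest";
    import { slugify } from "./slugify"; // made-up function under test

    describe("slugify", () => {
      it("lowercases and replaces spaces with dashes", () => {
        expect(slugify("Hello World")).toBe("hello-world");
      });

      it("trims leading and trailing whitespace", () => {
        expect(slugify("  hello  ")).toBe("hello");
      });
    });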
I'm not sure if you're saving any time there, though. Perhaps if you give an LLM a task before ending the work day so it can churn away for a while unattended, it may generate a decent implementation. There's a good chance you need to throw out the work too; you can't rely on it, but it can be a nice bonus if you're lucky.
I've found that this only works on expensive models with large context windows and limited API calls, though. The amount of energy wasted on shit code that gets reverted must be tremendous.
I hope the AI industry makes good on its promise that it'll solve the whole inefficiency problem, because the way things are going now, the industry isn't sustainable.
IanCal•22m ago
You can do this already. The most useful things to help with it are writing tests (or having it write them) and telling it how to compile and see the error messages, so you can let it loop.
AirMax98•55m ago
I really disagree with this too, especially given the article's next line:
> ...You’ll be forever tweaking individual lines of code, asking for a .reduce instead of a .map.filter, bikeshedding function names, and so on. At the same time, you’ll miss the opportunity to guide the AI away from architectural dead ends.
I think a good review will often do both, and understand that code happens at the line level and also the structural level. It implies a philosophy of coding that I have seen be incredibly destructive firsthand — committing a bunch of shit that no one on a team understands and no one knows how to reuse.
tossandthrow•14m ago
> for a .reduce instead of a .map.filter...
This is distinctly not the API, but an implementation detail.
Personally, I can ask colleagues to change function names, rework hierarchies, etc. But I'd leave this exact example be, as it does not make any material difference - regardless of my personal preference.
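To make the article's example concrete (made-up data; the two forms are interchangeable, which is exactly why I treat the choice as an implementation detail):

    const orders = [
      { id: 1, total: 40, paid: true },
      { id: 2, total: 25, paid: false },
      { id: 3, total: 60, paid: true },
    ];

    // .filter + .map
    const paidTotals1 = orders.filter(o => o.paid).map(o => o.total);

    // .reduce doing the same thing in one pass
    const paidTotals2 = orders.reduce<number[]>(
      (acc, o) => (o.paid ? [...acc, o.total] : acc),
      [],
    );

    console.log(paidTotals1, paidTotals2); // [ 40, 60 ] [ 40, 60 ]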
lapcat•3m ago
If you are good at code review, you will also be good at not using AI agents.