- An LLM is basically useless without explicit intent in your prompt.
- An LLM fails to correct itself. If it generates bullshit, it's an infinite loop of generating more bullshit.
The question is: could an LLM leverage all the best practices to provide maintainable code without me explicitly instructing it to, at least?
ben_w•13h ago
> - An LLM is basically useless without explicit intent in your prompt.
You can say the same about every dev I've worked with, including myself. This is literally why humans have meetings rather than all of us diving in to whatever we're self-motivated to do.
What does differ is the time-scale of the feedback loop with management:
Human meetings are daily to weekly.
According to recent research*, state-of-the-art models are only 50% accurate at tasks that would take a human expert an hour, and only 80% accurate at tasks that would take a human expert 10 minutes.
Even if the currently observed trend of increasing time horizons holds, we're 21 months from having an AI where every other daily standup is "ugh, no, you got it wrong", and just over 5 years from them being able to manage a 2-week sprint with an 80% chance of success (in the absence of continuous feedback).
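Back-of-the-envelope, here's roughly where those numbers come from, as a Python sketch; the ~7-month doubling time for the time horizon is my assumption from the reported trend, not a figure stated above:

```python
import math

# Assumed: the 50%/80% time horizons double roughly every 7 months
# (my reading of the reported trend; not a figure given above).
DOUBLING_MONTHS = 7

def months_until(current_min: float, target_min: float) -> float:
    """Months until a time horizon grows from current_min to target_min
    at the assumed doubling rate."""
    return math.log2(target_min / current_min) * DOUBLING_MONTHS

WORKDAY_MIN = 8 * 60           # one working day in minutes
SPRINT_MIN = 10 * WORKDAY_MIN  # a 2-week sprint = 10 working days

# 50%-success horizon is ~1 hour today -> a full workday at 50% success:
print(months_until(60, WORKDAY_MIN))       # 21.0 months
# 80%-success horizon is ~10 minutes today -> a 2-week sprint at 80% success:
print(months_until(10, SPRINT_MIN) / 12)   # ~5.2 years
```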
Even that isn't really enough for them to properly "leverage all the best practices to provide maintainable code", as architecture and maintainability are longer-horizon tasks than 2-week sprints.
* https://youtu.be/evSFeqTZdqs?si=QIzIjB6hotJ0FgHm
revskill•12h ago
LLMs fail at the most basic things related to maintainable code. Their code is basically a hacky mess without any structure at all.
My expectation is that, at the very least, some kind of maintainable code would be generated from what it's learnt.
ben_w•12h ago
> My expectation is that, at the very least, some kind of maintainable code would be generated from what it's learnt.
And your observation:
> LLMs fail at the most basic things related to maintainable code. Their code is basically a hacky mess without any structure at all.
QED, *your expectations* are way too high.
They can't do that yet.