Doing the verification after the execution tends to lead to "yeah, this is good" when it really isn't. Tools like Copilot annoyingly love to change the tests so they pass, rather than changing the implementation to make the tests pass. I wonder whether their platform prevents that kind of thing.
"Developers are still needed in age of AI" is not about managing unreliable compilers.
Management mistakes in the form of overdelegation and underdelegation are not about managing unreliable compilers.
Software process design with explicit checkpoints is not about managing unreliable compilers.
"Dear developer, it's time to turn yourself into a manager" is not about managing unreliable compilers.
Finally, a shameless plug from the AI toolkit company responsible for this post is not about managing unreliable compilers either!
Okay, LLMs being unreliable and plentiful is almost about managing unreliable compilers, but only if you believe the "many have analogized LLMs with compilers" opening statement. And even if you believe it, this post contains no practical examples of unreliability or how that unreliability is managed; the whole post is generic and lacks any connection to software development practice, to the point where it seems LLM-generated as a whole.
pwdisswordfishs•5h ago
> But they’re fast, and there are effectively infinitely many of them.
That's like saying you can hire effectively infinitely many human workers because there are 8+ billion people on Earth.
Even though they're cheaper than humans ($200 gets you a month of passable work, rather than a single day), USD is still one of the limiting reagents that determine how much you can get out of them.