This is going to be a big problem. How do people using Claude-like code generation systems do this? What artifacts other than the generated code are left behind for reuse when modifications are needed? Comments in the code? The entire history of the inputs and outputs to the LLM? Is there any record of the design?
I have prompting in AGENTS.md that instructs the agent to update the relevant parts of the project documentation for a given change. The project has a spec, and as features get added or reworked the spec gets updated. If you commit after each session then the git history of the spec captures how the design evolves. I do read the spec, and the errors I've seen so far are pretty minor.
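For the curious, a rough sketch of that kind of instruction is below; the SPEC.md filename and the exact wording are illustrative, not my literal file:

```markdown
## Documentation upkeep

After completing any change:

1. Update the affected sections of SPEC.md so the spec matches the
   new or reworked behavior, not just the code.
2. If a design decision was reversed or replaced, briefly note what
   changed and why.
3. Include the SPEC.md update in the same commit as the code change.
```

Since the spec lands in the same commits as the code, `git log -p SPEC.md` replays the design evolution alongside the implementation.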
To be fair - humans also fail at that. Just look at the GTK documentation as an example. Point it out, and ebassi may ignore you because criticism is unwanted, so the documentation never improves - which suggests they don't actually want new developers.
There is a difference in qualia between "it happens to work" and "it was made for a purpose."
Business logic will tend more toward "it happens to work" as good enough.
I think the concern is not that "people don't know how everything works" - people never needed to know how to "make their own food" by understanding all the cellular mechanisms and all the intricacies of the chemistry & physics involved in cooking. BUT when you stop understanding the basics - when you no longer know how to fry an egg because you just get it already prepared from the shop or from delivery - that's a whole different level of ignorance, and a much more dangerous one.
Yes, it may be fine & completely non-concerning if agricultural corporations produce your wheat and your meat; but if the corporation starts producing standardized cooked food for everyone, is it really the same - is it a good evolution, or not? That's the debate here.
I fail to see how this isn't a problem. Grid failures happen, and so do wars and natural disasters, which can cause grids and supply chains to fail.
This new arrangement would be perfectly fine if they weren't responsible when it breaks.
CPU instructions, caches, memory access, etc. are debated, tested, hardened, and documented to a degree that's orders of magnitude greater than the LLM-generated code we're deploying these days. Those fundamental computing abstractions aren't nearly as leaky, nor nearly as likely to need refactoring tomorrow.
I am no CS major, nor do I fully understand the inner workings of a computer beyond "we tricked a rock into thinking by shocking it."
I'd love to better understand it, and I hope that through my journey of working with computers I'll learn more about these underlying concepts: registers, buses, memory, assembly, etc.
Practically, however, I write scripts that solve real-world problems, whether that's automating the coffee machine or managing infrastructure at scale.
I'm not waiting to pick up a book on x86 assembly before I write some Python, however. (I wish it were that easy.)
To the greybeards who do have a grasp of these concepts, though: it's your responsibility to share that wealth of knowledge. It's a big ask, I know.
I'll hold up my end of the bargain by doing the same when I get to your position and everywhere in between.
One can continue to perfect and exercise their craft the old school way, and that’s totally fine, but don’t count on that to put food on the table. Some genius probably can, but I certainly am not one.
Humans aren't without flaws; prior to coding assistants, I'd lost count of the times my PM told me to rush things at the expense of engineering rigor. We validate or falsify the need for a feature sooner and move on to other things. Sometimes it works, sometimes a bug blows up in our faces, but things still chug along.
This point will become increasingly moot as AI gets better at generating good code, and faster, too.
Adam Jacob:
"It's not slop. It's not forgetting first principles. It's a shift in how the craft works, and it's already happened."
This post just doubled down without presenting any kind of argument.
Bruce Perens:
"Do not underestimate the degree to which mostly-competent programmers are unaware of what goes on inside the compiler and the hardware."
Now take the median dev, compress his lack of knowledge into a lossy model, and rent that out as everyone's new source of truth.

So, nothing new under the sun: often the practices come first, and only then can some theory emerge, from which point it can be leveraged to go further than present practice, and so on. Sometimes practice and theory are more entangled, created together on the go, obviously.
For example, why is the HP-12C still the dominant business calculator? Because other calculators were nondeterministically wrong for certain financial calculations. The HP-12C may not even have been strictly "correct", but it was deterministic in the ways it wasn't.
Financial people didn't know or care about guard digits or numerical instability. They very much did care that their financial calculations were consistent and predictable.
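"Deterministic in the ways it's wrong" is easy to demonstrate: floating-point arithmetic isn't associative, so summing the same numbers in a different order can give a different answer. A minimal Python sketch (the values are illustrative, not a real financial calculation):

```python
# Floating-point addition is not associative: grouping changes the result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6

print(left == right)  # False

# A calculator that always evaluates in one fixed, documented order is
# "wrong" in the same way every time. Two tools that pick different
# orders will disagree with each other, and that inconsistency is what
# financial users actually notice and refuse to tolerate.
```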
The question is: Who will build the HP-12C of AI?
Does anyone on the planet actually know all of the subtleties and idiosyncrasies of the entire tax code? Perhaps the one inhabitant of Sealand, or the Sentinelese, but no one in any Western society.
The issue with frameworks is not the magic. We feel like it's magic because the interfaces are not stable. If the interfaces were stable, we'd consider them just a real component of whatever we're building.
You don't need to know anything about hardware to properly use a CPU ISA.
The difference is that the CPU ISA is documented, well tested, and stable. As an industry, we can build systems that offer stability and are formally verified; we just choose not to.
"It's not slop. It's not forgetting first principles. It's a shift in how the craft work, and it's already happened."
It actually really is slop. He may wish to ignore that, but it does not change anything. AI comes with slop - that is undeniable. You only need to look at the content generated via AI.
He may wish to focus merely on "AI for use in software engineering", but even there he is wrong, since AI makes mistakes too and not everything it creates is great. People often have no clue how the AI reaches any decision, so they also lose the ability to reason about the code or code changes. I think people have a hard time trying to sell AI as "only good things, the craft will become better." It seems everyone is on the AI hype train; eventually it'll either crash or slow down massively.
https://youtu.be/36myc8wQhLo (USENIX ATC '21 / OSDI '21 Joint Keynote Address: It's Time for Operating Systems to Rediscover Hardware)
The lack of comprehensive, practical, multi-disciplinary knowledge creates a DEEP DEPENDENCY on the few multinational companies and countries that UNDERSTAND things and can BUILD things. If you don't understand it, if you can't build it, they OWN you.
youarentrightjr•2h ago
True.
But in all systems up to now, for each part of the system, somebody knew how it worked.
That paradigm is slowly eroding. Maybe that's ok, maybe not, hard to say.
redrove•2h ago
If the project is legacy or the people just left the company, that's just not true.
youarentrightjr•1h ago
Yeah, that's why I said "knew" instead of "knows".