I don't know the answer to that. But an interesting point buried in the article is that companies like to shortchange that part of the process, and it's that part of the process that matters most for getting good code out of an LLM. I suppose part of the problem with using LLMs is that the providers have a vested interest in collecting fees that are barely less than the fully-loaded cost of the development staff.
So it'll be interesting to see if some companies find themselves ratcheting up the documentation, and then revisiting the need for the LLM once LLM pricing rises to the maximum the market will bear.
Often it feels more efficient to take shorter steps: try something in code, see how it looks, update docs, show a demo, etc. That was true before LLMs, and I think LLMs make it even more true.
Having spent some of my career as an SRE, I would argue that what distinguishes production code from it-worked-on-my-machine code has very little to do with the things at the start of that list and almost everything to do with the "bolts and screws needed to make the software production-ready", particularly the 12 factors [0]. In my anecdata, I've had a much more productive time maintaining, in production, systems written by junior coders who only started using that particular language a few months ago but are eager to take direction, than systems written by experienced developers with profound knowledge and strong opinions held tightly.
With this in mind, I've been quite productive doing "vibe engineering" [1], rigidly controlling the code from the outside as mostly a black box, with extensive precommit testing, focusing my code reviews on preventing weird "going off the rails" issues and adding new tests against them, while not worrying at all about code style.
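Concretely, the black-box checks look something like this (the file name, flags, and expected output here are invented, just to show the shape of it):

    # pytest-style black-box checks run as a pre-commit gate.
    # "app.py" and its CLI contract are hypothetical stand-ins.
    import subprocess
    import sys

    def run_app(*args: str) -> subprocess.CompletedProcess:
        # Run the program exactly as a user would, capturing its output.
        return subprocess.run(
            [sys.executable, "app.py", *args],
            capture_output=True, text=True, timeout=30,
        )

    def test_happy_path():
        result = run_app("--input", "examples/small.csv")
        assert result.returncode == 0
        assert "rows processed" in result.stdout

    def test_rejects_missing_file():
        # The kind of regression test I add after a "going off the rails"
        # incident: fail loudly instead of silently emitting an empty report.
        result = run_app("--input", "does_not_exist.csv")
        assert result.returncode != 0
        assert result.stderr.strip() != ""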
> I mean, for at least the last 3 or 4 decades, one of the biggest impediments we had regarding fast and reliable software development was poor requirements and poor architectural design. Human software developers had the same problems with poorly designed and documented requirements and architecture for decades, but nobody cared. Instead, the only complaint was that writing software would take too long.
This is not in any way similar to what humans do.
ajkjk•5h ago
Programs as "text that you run on a computer" are probably, in the long term, not how anything is going to be done. After all, what is a prompt but a (lossy, error-prone, inexact) specification for a program, or at least part of a program, before you go in and modify it by hand? The code itself is just an interchange format, no different from JSON. Can we formalize that abstraction so that the prompt is an exact specification, just at a super high level? AI text generation makes it faster to write text, but no amount of text-generation gets around the fact that maybe text generation is... not... what we should be doing, actually. And the LLMs are going to be better at working at that level, too.
I really wish the people geeking out over LLMs would be geeking out over radical new foundational ideas instead. Picture Bret Victor-style re-imaginings of the whole programming experience. (I have loads of ideas myself which I've been trying to find some angle of attack for.) Hard work at improving the world looks like finding radically new approaches to problems, and there are loads of ways to make the world a better place that we're being distracted from by the short-term focus on working entirely within the existing paradigm.
davemp•4h ago
Maybe (likely) you could come up with a more convenient set of operations, but I don't really see how expressing that as a plain-text AST is really holding things back.
ajkjk•3h ago
In particular, the syntax tree is also Just Another Representation of the functionality... But it's still way overspecified, compared to your intent, since it has lots of implementation details encoded in it. Actually it is the tests that get closer to an exact representation of what you intend (but still, not very close). (This is also why I love React and declarative programming: because it lets me code in a way which is closer to the model of what I intend that I hold in my head. Although still not that close).
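A trivial Python stand-in for the same idea (not React, just the analogous contrast): the intent is "show the completed items, newest first", and one version buries it in mutation and ordering steps while the other reads nearly like the intent itself.

    # Imperative: the intent is scattered across mutation and ordering steps.
    def render_imperative(items):
        visible = []
        for item in items:
            if item["done"]:
                visible.append(item)
        visible.sort(key=lambda i: i["created"], reverse=True)
        lines = []
        for item in visible:
            lines.append("[x] " + item["title"])
        return "\n".join(lines)

    # Declarative-ish: the result is described as a function of the state,
    # which sits closer to the model held in your head.
    def render_declarative(items):
        done = sorted(
            (i for i in items if i["done"]),
            key=lambda i: i["created"],
            reverse=True,
        )
        return "\n".join("[x] " + i["title"] for i in done)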
So programming seems similar to the mesh data for a model, to me. The more you can get a representation which is faithful to the programming intent, the more powerful you are. LLMs demonstrate that natural language sorta does this... but not really, or at least not when the 'compiler' is a stochastic parrot. On the flip side, it gets you part of the way, and then you can iterate from there by other methods.
ajkjk•3h ago
(a) a mathematical model like a group is a representation of a physical concept, not the concept itself
(b) this process of representing things by mathematical models has some properties that are inescapable, for instance the model must factor over the ways you can decompose the system into parts
(c) in particular there is some intrinsic coordinate-freedom to your choice of model. In physics, this could be the choice of say coordinate frame or a choice of algebraic system (matrices vs complex numbers vs whatever); in programming the choice of programming language or implementation detail or whatever else
(d) the coordinate-freedom is forced to align at interfaces between isolated systems. In physics this corresponds to the concept of particles (particularly gauge bosons like photons; less sure about fermions...); in programming it corresponds to APIs and calling conventions and user interfaces: you can have all the freedom you want in the details, but the boundaries are fixed by how they interop with each other (a toy sketch below tries to pin this down).
all very hand-wavey since I understand neither side well... but I like to imagine that someday there will be a "representation theory of software" [1] class in the curriculum (which would not be dissimilar from the formal-language concepts of denotational/operational semantics, but maybe the overlaps with physics could be exploited somehow to share some language?)... it seems to me like things mathematically kinda have to go in something like this direction.
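To pin down what I mean by (d), here's a toy Python sketch, entirely made up: the internals are free to vary, but the boundary the rest of the system calls against is fixed.

    from typing import Optional, Protocol

    class Store(Protocol):
        # The boundary: the only thing other systems are allowed to see.
        def get(self, key: str) -> Optional[str]: ...
        def put(self, key: str, value: str) -> None: ...

    class DictStore:
        # One "choice of coordinates": a plain in-memory dict.
        def __init__(self) -> None:
            self._data: dict[str, str] = {}
        def get(self, key: str) -> Optional[str]:
            return self._data.get(key)
        def put(self, key: str, value: str) -> None:
            self._data[key] = value

    class AppendLogStore:
        # A different internal representation; same observable boundary.
        def __init__(self) -> None:
            self._log: list[tuple[str, str]] = []
        def get(self, key: str) -> Optional[str]:
            for k, v in reversed(self._log):
                if k == key:
                    return v
            return None
        def put(self, key: str, value: str) -> None:
            self._log.append((key, value))

    def caller(store: Store) -> Optional[str]:
        # The caller only sees the interface, so either implementation interops.
        store.put("greeting", "hi")
        return store.get("greeting")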
[1] https://en.wikipedia.org/wiki/Representation_theory
xnorswap•4h ago
Because, what does a well-specified formalisation of a problem solution look like? It looks like a programming language.
Since COBOL, the dream has been a language which is formalised enough for computers while still being understandable to, and writable by, "business users".
We've been promised this future by COBOL, Visual Basic, SQL, and many others.
And what does the reality look like? It looks like the business users being upset by the fussiness that formalisation adds.
That's why Excel is still king.
Does adding better visual descriptors of program execution really help communicate solutions?
LLMs are actually a great bridge between "Here's an idea" and "Here's the idea formalised as a set of problem and solution statements".
They're really good at it. Claude Sonnet 4.5 will output a dozen pages of formalised steps for solving a problem, which act as a good bridge between the domain expert and the programmer.
It makes mistakes. It misunderstands things sometimes. Sometimes it understands things better than the programmer or the domain user, such as when it recently corrected my understanding of the OAuth 2.0 spec, because I was using a non-standard parameter that Cisco Meraki had mistakenly added to their documentation.
daxfohl•2h ago
LLMs are great (sometimes) for conversational editing where there's a fast, iterative back and forth between description, code, clarification and touch-ups, etc. But trying to avoid code entirely eventually makes everything harder, not easier.
zahlman•15m ago
You can get a lot of APL / LISP feeling out of carefully written Python or JavaScript. Not the metaprogramming stuff, sure. But a lot of that in LISP depends on homoiconicity, which is one of the biggest things making the language "not usable by average developers".
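For example, something like this (a throwaway calculation, nothing special about it) already has most of that expression-oriented, whole-collection feel without any homoiconicity:

    from functools import reduce
    from operator import mul

    def product(xs):
        # Fold the sequence with multiplication, starting from 1.
        return reduce(mul, xs, 1)

    # "product of the squares of the even numbers from 1 to 9",
    # written as one composed expression rather than an explicit loop.
    result = product(x * x for x in range(1, 10) if x % 2 == 0)
    print(result)  # 2*2 * 4*4 * 6*6 * 8*8 = 147456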
zahlman•22m ago
... And you want a representation of that abstraction to persist on disk?
... But it shouldn't be "text"?
Why not?
And how will you communicate it? You want to prompt in lossy, error-prone, inexact text, and then trust that an opaque binary blob correctly represents the formalization of what you meant? Or go through feedback cycles of trying to correct it with more prompting, but without the ability to edit manually?
> but no amount of text-generation gets around the fact that maybe text generation is... not... what we should be doing, actually.
Well, sure. But that isn't a problem with text; it's a problem with boilerplate in our designs.