Their main idea is to version control the reasoning, which, OK, cool. They want to graph the reasoning and requirements, which sounds nice, but there are already graph languages that fit conveniently into git to achieve this...
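For illustration, even plain sorted JSON in the repo gets you a diffable reasoning graph (the node names here are made up, not anything from the article):

```python
import json

# Hypothetical intent graph: requirement/decision nodes plus "because" edges.
graph = {
    "nodes": {
        "REQ-001": "Accept RFC 5322 email addresses",
        "DEC-004": "Validate with a parser, not a regex",
    },
    "edges": [
        ["DEC-004", "REQ-001"],  # DEC-004 was made because of REQ-001
    ],
}

# Sorted keys + indentation => stable, line-oriented, human-reviewable git diffs.
with open("intent-graph.json", "w") as f:
    json.dump(graph, f, indent=2, sort_keys=True)
```

Graphviz DOT or Turtle would work just as well; the point is it's plain text and git already handles it.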
I also fundamentally disagree with the notion that the code is "just an artifact". The idea of specifying a model is cute, but these are nondeterministic systems that don't produce reliable output. A compiler may have bugs, yes, but generally speaking the same code will always produce the same machine instructions, a guarantee the proposed scheme does not offer...
A higher-order reasoning language is not unreasonable; however, the imagined system does not yet exist...
CONGRATULATIONS: you have just 'invented' documentation, specifically a CHANGE_LOG.
> By regenerable, I mean: if you delete a component, you can recreate it from stored intent (requirements, constraints, and decisions) with the same behavior and integration guarantees.
That statement just isn't true. And, as such, you need to keep track of the end result... _what_ was generated. The why is also important, but not sufficient.
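To make "keep track of the end result" concrete, a minimal sketch: pin the generated artifact with a content hash next to the intent that produced it (all names here are hypothetical):

```python
import hashlib
import json

def record_generation(intent: str, model: str, seed: int, output: str) -> dict:
    """Pin down the _what_ alongside the _why_."""
    return {
        "intent": intent,
        "model": model,   # exact model version used
        "seed": seed,     # sampling seed, if the runtime honors one
        "sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

entry = record_generation(
    intent="Parse email addresses per RFC 5322",
    model="local-llm-v1.2",  # hypothetical
    seed=42,
    output="def parse_email(s): ...",
)
print(json.dumps(entry, indent=2))
```

Without the hash (or the output itself), "regenerate from intent" is unfalsifiable: you can't even tell whether you got the same component back.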
Also, unrelated to the above: the "reject whitespace" part bothered me. Whitespace is perfectly acceptable in an email address (in a quoted local part).
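Quick demonstration that a blanket whitespace check rejects an RFC-valid address:

```python
# '"john smith"@example.com' is syntactically valid: the local part is a
# quoted-string, which RFC 5322 explicitly allows to contain spaces.
addr = '"john smith"@example.com'

def naive_validate(s: str) -> bool:
    # The kind of rule being criticized: any whitespace => reject.
    return not any(c.isspace() for c in s)

print(naive_validate(addr))  # False: a valid address gets rejected
```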
How different the output is each time you generate something from an LLM is a property called 'prompt adherence'. It's not really a big deal in coding LLMs, but in image generation some of the newer models (Z Image Turbo, for example) give virtually the same output every time if the prompt doesn't change, to the point that some users claim it's actually a problem, because most of the time you want some variety in image gen. It should be possible to tune a coding LLM to give the same response every time.
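The knob behind that tuning is mostly sampling temperature: as it goes to zero, sampling collapses to argmax and the output stops varying. A toy illustration:

```python
import math, random

logits = {"foo": 2.1, "bar": 2.0, "baz": 0.5}  # made-up next-token scores

def sample(logits, temperature, rng):
    if temperature == 0:                       # greedy: fully deterministic
        return max(logits, key=logits.get)
    # Softmax with temperature, then draw proportionally to the weights.
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return max(logits, key=logits.get)  # float-rounding fallback

rng = random.Random()  # unseeded: different draws on each run
print([sample(logits, 0.8, rng) for _ in range(5)])  # varies run to run
print([sample(logits, 0.0, rng) for _ in range(5)])  # always 'foo'
```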
Note: when Fabrice Bellard made his LLM-based text compressor, he had to make sure it was deterministic. It would be terrible if it slightly corrupted files in different ways each time it decompressed.
The only practical obstacle is:
> Non-deterministic generators may produce different code from identical intent graphs.
This would not be an obstacle if you restricted yourself to a single version of a local LLM, turned off all nondeterminism, and recorded the initial seed. But for now, the kinds of frontier LLMs that are useful as coding agents run on Someone Else's box, meaning they can produce different outcomes each time you run them -- and even if the provider promises not to change the model, I can see no way to verify that promise.
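A sketch of that local setup using Hugging Face transformers (the model name is a placeholder; greedy decoding removes the sampling nondeterminism, though bitwise identity across GPUs and kernel versions is a separate fight):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "some/local-coding-model"  # placeholder: pin an exact model revision

torch.manual_seed(0)                      # record the seed with the output
torch.use_deterministic_algorithms(True)  # error out on nondeterministic kernels

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

inputs = tok("Write an email validator.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)  # greedy
print(tok.decode(out[0], skip_special_tokens=True))
```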
I’m sorry, but it feels like I got hit in the head when I read this; it’s so bad. For decades, people have been dreaming of making software where you can just write the specification and don’t have to actually get your hands dirty with implementation.
1. AI doesn’t solve that problem.
2. If it did, then the specification would be the code.
Diffs of pure code never really represented decisions and reasoning of humans very well in the first place. We always had human programmers who would check in code that just did stuff without really explaining what the code was supposed to do, what properties it was supposed to have, why the author chose to write it that way, etc.
AI doesn’t change that. It just introduces new systems which can, like humans, write unexplained, shitty code. Your review process is supposed to catch this. You just need more review now than you did before.
You capture decisions and specifications in the comments, test cases, documentation, etc. Yeah, it can be a bit messy because your specifications aren’t captured nice and neat as the only thing in your code base. But this is because that futuristic, Star Trek dream of just giving the computer broad, high-level directives is still a dream. The AI does not reliably reimplement specifications, so we check in the output.
The compiler does reliably regenerate functionally identical assembly, which is why we don’t check in the assembly output of compilers. Compilers are getting higher and higher level, and we’re getting a broader range of compiler tools to work with, but AI is just a different category of tool and we work with it differently.
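On the earlier point about capturing decisions in test cases, here's roughly what that looks like in practice (the parser is a toy stand-in, not a real RFC 5322 implementation):

```python
import re

def parse_email(s: str):
    """Toy stand-in for a real RFC 5322 parser."""
    return s if re.fullmatch(r'("[^"]*"|[^@\s]+)@[^@\s]+', s) else None

def test_quoted_local_part_with_space_is_accepted():
    """Decision: validate per RFC 5322, not with a 'no whitespace' rule.

    Why: a blanket whitespace check rejects valid quoted local parts.
    The docstring carries the reasoning; the assertion enforces it.
    """
    assert parse_email('"john smith"@example.com') is not None
```

Messy compared to a pristine intent graph, sure, but the spec is executable and lives next to the code it constrains.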
Except you can't run English on your computer. Also, the specification can be spread across various parts of the code base or internal wikis. The beauty of AI is that it is connected to all of this data, so it can figure out the best way to implement something right now, whereas regular code requires constant maintenance to stay current.
At least for the purposes I need it for, I have found it reliable enough to generate correct code each time.
And before you say "that's indirect!", it genuinely does not matter how indirect the execution is or how many "translation layers" there are. Python, for example, goes through at least three translation layers: raw .py -> Python bytecode -> bytecode interpreter -> machine code. Adding one more automated translation layer does not suddenly make it "not code."
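You can even watch the first of those layers directly with the stdlib dis module:

```python
import dis

def add(a, b):
    return a + b

# Show the bytecode CPython compiles add() to before the interpreter runs it:
# LOAD_FAST a / LOAD_FAST b / BINARY_ADD (BINARY_OP on 3.11+) / RETURN_VALUE.
dis.dis(add)
```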
So what did you say about version control?
Would love people's thoughts on this: https://0xmmo.notion.site/Preventing-agent-doom-loops-with-p...
This would not be unfamiliar to mechanical engineers who work with CAD. The ‘histories’ (successive line-by-line drawing operations: align to a spline of such-and-such dimensions, put a bevel here, put a hole there) in many CAD tools are known to reflect design intent more so than the final 3D model that the operations ultimately produce.
What the author actually wants is ADRs: https://github.com/joelparkerhenderson/architecture-decision...
That’s a way to version control requirements.
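For reference, a Nygard-style ADR is just one short file per decision, something like (contents invented for illustration):

```markdown
# ADR-007: Validate email addresses with an RFC 5322 parser

## Status
Accepted

## Context
A naive "reject whitespace" rule rejected valid quoted local parts.

## Decision
Use a parser-based validator instead of an ad-hoc regex.

## Consequences
Slightly slower validation; no false rejections of RFC-valid addresses.
```

One file per decision, committed alongside the code, and git gives you the history of the reasoning for free.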