I think the translation from L0 to L1 is going to become more and more important. There have been a lot of discussions here on HN about how natural language specs “aren’t code” and how LLMs provide no formal relationship between their inputs and outputs.
One way to sidestep this is to have the LLM translate the NL into a formal language and then persuade the human that the formal language captures their intent. This reduces the burden because the user only has to read and understand the formal-language spec, rather than all the code the LLM produced.
Also, once a formal spec is obtained, you can do lots of interesting computation on it. Property-based testing comes to mind. Or even full-blown verification of the formal spec. LLMs might also be good at recognizing ambiguity: an LLM could generate two formal specs, use an SMT solver to find an input where the specs differ, and help the user turn that diff into clarifying questions that resolve the ambiguity.
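To make the ambiguity-finding loop concrete, here is a toy sketch: two candidate formalizations of the same NL requirement ("round x to the nearest integer") are encoded as Python predicates, and a brute-force search over a small domain stands in for the SMT query (a real implementation would hand both specs to a solver like Z3). All names here are illustrative, not from the repo.

```python
import math

# Two candidate formalizations of "round x to the nearest integer".
# They agree almost everywhere but differ on ties (x.5 values).

def spec_a(x: float, y: int) -> bool:
    # Reading 1: "round half up" -- 1.5 -> 2, 2.5 -> 3
    return y == math.floor(x + 0.5)

def spec_b(x: float, y: int) -> bool:
    # Reading 2: Python's built-in round() -- banker's rounding,
    # ties go to the even neighbor: 1.5 -> 2, 2.5 -> 2
    return y == round(x)

def find_divergence(domain):
    """Stand-in for an SMT query: search for an (input, output)
    pair that one spec accepts and the other rejects."""
    for x in domain:
        for y in range(-10, 11):
            if spec_a(x, y) != spec_b(x, y):
                return x, y
    return None

# Half-integer ties are exactly where the two readings disagree.
witness = find_divergence([i / 2 for i in range(-10, 11)])
print(witness)  # a tie input where the specs diverge
```

The witness is the "diff" an LLM could surface to the user: "when the input is exactly halfway between two integers, which way should it round?"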
One comment I have is that layers L1 and L2 _might_ be reinventing the wheel slightly. Your ensure statements remind me of Dafny or Verus, for instance, which have a lot of tooling behind them.
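For readers who haven't seen them, Dafny/Verus-style `ensures` clauses attach a postcondition to a function that the verifier must prove holds on every return. The rough flavor can be mimicked as a runtime check in Python; this decorator is purely illustrative, not the `len`, Dafny, or Verus tooling:

```python
import functools

def ensures(post):
    """Attach a Dafny-style postcondition: `post` receives the call's
    arguments plus the result, and is asserted on every return."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapped(*args, **kwargs):
            result = fn(*args, **kwargs)
            assert post(*args, result=result), \
                f"postcondition failed for {fn.__name__}{args}"
            return result
        return wrapped
    return deco

# Hypothetical example: integer square root with its defining contract.
@ensures(lambda x, result: result >= 0
         and result * result <= x < (result + 1) ** 2)
def isqrt(x: int) -> int:
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

print(isqrt(10))  # 3, with the postcondition checked on return
```

The difference, of course, is that Dafny and Verus discharge these conditions statically with a solver rather than checking them at runtime, which is exactly the tooling investment that might be worth reusing rather than reinventing.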
divingstar•1h ago
Repo: https://github.com/ewiger/len
`len` is an experiment in making specifications more central than code itself. The rough model is:
* write intent and examples in natural language
* capture the core structure with types, relations, and contracts
* describe how that model should map to concrete generated code
In the repo I currently call those layers L0, L1, and L2.
This is very much an early project. The current tooling is a small Go CLI that validates the structure of `.l1` files, and the language design is still evolving. So this is not a polished “look what I built” launch.
I’m sharing it because I’d like honest feedback on whether the core idea is interesting, confused, redundant with existing tools, or maybe useful in some narrower niche.
Comments, criticism, and pointers to similar work would be very welcome.