I've been thinking about why AI agents struggle with code generation, and I don't think it's the models; it's the languages. Most compilers were designed to output human-readable errors first, so agents end up regex-scraping them, which breaks constantly.
So I built Valea: a minimal systems language where the compiler speaks JSON by default.
    $ valea check program.va --json
    [{"code":"E201","message":"Function 'foo' returns bool but declared int","span":{"start":4,"end":7}}]
Stable error codes (E201, E202, ...) mean an agent can branch on the code instead of doing fragile string matching. A canonical formatter means two agents generating the same program always produce byte-identical source. The compiler targets C.
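To make the branching point concrete, here's a sketch of what the agent side could look like. This is hypothetical consumer code, not part of Valea: `next_action` and the action names are made up, and the diagnostic payload is just the example from above.

```python
import json

# Sample diagnostics, copied from the `valea check --json` output above.
DIAGNOSTICS = '''[{"code":"E201","message":"Function 'foo' returns bool but declared int","span":{"start":4,"end":7}}]'''

def next_action(diagnostics_json: str) -> str:
    """Decide the agent's next step by branching on stable error codes,
    never by pattern-matching the human-readable message text."""
    for diag in json.loads(diagnostics_json):
        if diag["code"] == "E201":
            return "fix-return-type"
        if diag["code"] == "E001":
            return "rewrite-negative-literal"
    return "emit-c"  # no diagnostics: proceed to codegen

print(next_action(DIAGNOSTICS))  # fix-return-type
```

The point is that the `message` string can be reworded freely in later compiler versions without breaking any agent, as long as the codes stay stable.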
The demo shows Claude autonomously:
- writing a program → hitting E001 (negative literals not supported) → inventing a workaround within the language's constraints → emitting valid C
- building Fibonacci through pure function chaining (no loops, no recursion)
- all without human intervention
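I won't guess at Valea syntax here, so this is the function-chaining pattern sketched in Python instead: every `fibN` is a zero-arg pure function whose body is a single addition of calls to the two functions defined before it, which is roughly all the MVP feature set (zero-arg calls plus addition) permits.

```python
# Fibonacci with no loops and no recursion: each function only
# calls the two previously defined ones.
def fib0() -> int:
    return 0

def fib1() -> int:
    return 1

def fib2() -> int:
    return fib1() + fib0()

def fib3() -> int:
    return fib2() + fib1()

def fib4() -> int:
    return fib3() + fib2()

def fib5() -> int:
    return fib4() + fib3()

print(fib5())  # 5
```

The depth of the chain is fixed at compile time, so "computing fib(n)" really means generating n function definitions, which is exactly the kind of mechanical expansion an agent is good at.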
https://asciinema.org/a/834560
Repo: https://github.com/hvoetsch/valea
Still an MVP: only functions, int/bool, addition, and zero-arg calls. But the compiler infrastructure (lexer, parser, type checker, C emitter, JSON diagnostics) is solid. Looking for feedback on the design direction before I expand the language surface.
What would you add first: parameters, variables, or conditionals?
hvoetsch•2h ago