If AI is going to write more of the first draft, what should the source look like for the human reviewer?
Aver is an experimental statically typed language and toolchain for AI-written, human-reviewed code.
My bet is that source code should carry more than implementation. In most projects, implementation survives in the code, but intent lives in docs, decisions live in ADRs or tickets, and expected behavior lives in tests that may or may not stay close to what they describe.
Aver tries to make those parts first-class:

- explicit function-level effects (`!`) in signatures
- `?` strings for machine-readable function intent
- `decision` blocks for design choices and tradeoffs
- colocated `verify` blocks for pure functions
- deterministic record/replay for effectful flows
- `aver context` for a compact, contract-level summary of a module's exports
- `aver compile` to Rust
- `aver proof` to Lean 4, for mechanical proof checking of the pure subset
A small pure example:
    fn charge(account: String, amount: Int) -> Result<String, String>
      ? "Pure charge validation and transaction-id creation."
      match amount
        0 -> Result.Err("Cannot charge zero")
        _ -> Result.Ok("txn-{account}-{amount}")

    verify charge
      charge("alice", 100) => Result.Ok("txn-alice-100")
      charge("bob", 0) => Result.Err("Cannot charge zero")
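On the Rust side, the same function comes out roughly like this (an illustrative hand-written equivalent, not the exact `aver compile` output):

```rust
// Illustrative Rust equivalent of `charge`; the real codegen may differ
// in naming and types.
fn charge(account: &str, amount: i64) -> Result<String, String> {
    match amount {
        0 => Err("Cannot charge zero".to_string()),
        _ => Ok(format!("txn-{}-{}", account, amount)),
    }
}

fn main() {
    // The verify examples translate directly into assertions:
    assert_eq!(charge("alice", 100), Ok("txn-alice-100".to_string()));
    assert_eq!(charge("bob", 0), Err("Cannot charge zero".to_string()));
}
```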
A reviewer can see at a glance what the function is for (`?`) and a machine-checkable example of its expected behavior (`verify`).

An effectful wrapper looks like this:
    fn chargeAndPrint(account: String, amount: Int) -> Result<Unit, String>
      ? "Effectful wrapper around charge."
        "Prints the transaction id on success."
      ! [Console.print]
      result = charge(account, amount)
      match result
        Result.Ok(txn) ->
          Console.print(txn)
          Result.Ok(Unit)
        Result.Err(err) ->
          Result.Err(err)
The `!` makes side effects part of the signature rather than hidden inside the implementation.

Aver is intentionally opinionated: no exceptions, no null, no `if`/`else`, no loops, no closures. Branching goes through `match`, failure through `Result`, absence through `Option`, and side effects are explicit.
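To give a feel for the `aver proof` side, here is a hypothetical sketch of how `charge` and one of its verify examples could be encoded in Lean 4 (the names and encoding are illustrative, not the actual exporter output):

```lean
-- Hypothetical sketch of a Lean 4 encoding of `charge`;
-- the real `aver proof` export will differ in shape.
def charge (account : String) (amount : Int) : Except String String :=
  match amount with
  | 0 => .error "Cannot charge zero"
  | _ => .ok s!"txn-{account}-{amount}"

-- The zero case from the verify block, generalized over any account:
theorem charge_zero (account : String) :
    charge account 0 = .error "Cannot charge zero" := rfl
```

The point is that verify examples can graduate from spot checks to small theorems over the pure subset.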
The repo includes small examples, but also `projects/workflow_engine`, which is my attempt at a medium-sized auditable application core with app/domain/infra split, replayable effectful flows, and verify-driven pure logic.
This is still early. I’m not claiming everyone should replace mainstream languages with Aver.
The narrower question I’m testing is whether making intent, decisions, checks, and effect boundaries machine-readable inside the source makes AI-produced code easier to review, constrain, and trust.
I’d especially like feedback on whether this feels like a language worth existing, or whether the same idea should just be conventions and tooling on top of an existing language.
jasisz•1h ago
Aver is an experimental statically typed language for AI-written, human-reviewed code.
What’s different is that intent (`?`), explicit effects (`!`), design decisions (`decision`), and behavior checks (`verify`) are part of the source itself rather than split across code, comments, docs, and tests.
If you want to evaluate it quickly, I’d suggest these places:
- medium-sized example: https://github.com/jasisz/aver/tree/main/projects/workflow_e...
- Lean proof export for the pure subset: https://github.com/jasisz/aver/tree/main/docs/lean.md
- examples of effectful programs / replay: https://github.com/jasisz/aver/tree/main/examples/services
The main question I’m testing is whether this deserves to be a language, or whether the same idea should just be tooling and conventions on top of an existing language.