Over the last few years, we’ve seen an explosion of Python and TypeScript frameworks trying to wrangle LLMs. The problem is that they are bolting non-deterministic, probabilistic compute onto deterministic, sequential languages. You end up with 500 lines of Pydantic models, JSON-parsing retry loops, and async spaghetti just to coordinate two agents.
I built Turn (https://turn-lang.dev) to solve this at the language level. It is a statically-typed, compiled language with a custom Rust bytecode VM, designed specifically to treat LLMs as native ALUs.
The three core primitives of Turn:

Cognitive Type Safety (infer Struct): You define a standard struct and call infer. The VM enforces the schema at the inference-provider level via constrained decoding, so there is no manual parsing and no regex hacking. If you ask for a TradeDecision { action: Str, size: Num }, the VM guarantees you get a strongly-typed TradeDecision struct back.
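To make that concrete, here's a sketch of what an infer call looks like, built from the fragments in this post; the exact surface syntax (the `from` clause, string interpolation) is my shorthand and may differ from the real grammar:

```turn
// Syntax sketch; `infer` and the struct shape come from the post,
// the `from`-prompt form is an assumption.
struct TradeDecision {
  action: Str,
  size: Num,
}

fn decide(signal: Str) -> TradeDecision {
  // The VM constrains the provider's output to the struct schema,
  // so this always yields a well-typed TradeDecision -- no JSON
  // parsing, no retry loop.
  let decision = infer TradeDecision from "Given signal {signal}, propose a trade"
  return decision
}
```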
Probabilistic Routing (confidence): Because LLMs hallucinate, confidence is a native operator. You can write control flow like: if confidence decision < 0.85 { return Fallback }. The VM tracks the uncertainty of variables alongside their values.
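In context, that guard composes with infer like this; again a sketch, where `Fallback` and `execute` are hypothetical names standing in for whatever your program does on the low- and high-certainty paths:

```turn
let decision = infer TradeDecision from market_summary

// `confidence` reads the uncertainty the VM attached to `decision`
// at inference time.
if confidence decision < 0.85 {
  // Below threshold: don't act on a low-certainty inference.
  return Fallback
}

execute(decision)
```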
Erlang-style Actors (spawn_link & receive): Multi-agent orchestration in Python is a race-condition nightmare. Turn uses an actor model instead: agents run in isolated VM threads (spawn_link) and communicate strictly via zero-shared-state mailboxes (receive).

Turn currently supports Anthropic, Azure OpenAI, standard OpenAI, Google Gemini, xAI Grok, and Ollama out of the box via a single environment variable, with no SDKs required.
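The actor primitives end up reading much like Erlang. A sketch: spawn_link and receive are the primitives named above, while `send`, the message structs, and the pattern-match arms are my assumptions about the surrounding syntax:

```turn
// Spawn an agent in its own isolated VM thread; if it crashes,
// the link propagates the failure back to us (Erlang semantics).
let analyst = spawn_link analyst_loop()

// No shared state: the only way in is the agent's mailbox.
send analyst, Analyze { ticker: "ACME" }

// Block on our own mailbox and pattern-match the reply.
receive {
  Report { ticker, decision } => execute(ticker, decision),
  Failed { reason }           => log(reason),
}
```

Because mailboxes are the only communication channel, two agents can never race on shared memory, which is the whole point of reaching for the actor model here.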
Here is an example of a multi-agent quantitative hedge fund written natively in Turn: https://turn-lang.dev/docs/quant-syndicate
The Rust VM source is open here: https://github.com/ekizito96/Turn
You can test the language live in the browser using your own API keys in our sandboxed playground. Would love to hear your thoughts on treating LLMs as native compute targets at the language level rather than just API wrappers.