Introducing trama — agents don't need frameworks. They need a runtime.
You say what you want. trama writes the orchestration as a complete, executable program — then runs it, auto-repairs when it breaks, and versions every change. Because the orchestration is code, not config, the agent can write it and rewrite it — and so can you. git clone && trama run.
What makes this different: trama programs can generate other trama programs. A parent program decomposes a task, spawns sub-programs, runs them, synthesizes the results. The orchestration is not configured — it's written by the agent, and the agent can rewrite it on demand.
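The parent/child pattern above can be sketched roughly like this. This is a hypothetical illustration, not trama's actual API: `decompose`, `write_program`, `run_program`, and `synthesize` stand in for agent and runtime calls, stubbed here so the control flow is runnable.

```python
# Hypothetical sketch of a parent program decomposing a task,
# spawning sub-programs, and synthesizing their results.
# None of these names come from trama itself.

def decompose(task):
    # Stand-in: an agent would split the task; here we split on " and ".
    return task.split(" and ")

def write_program(subtask):
    # Stand-in: an agent would write a complete, executable sub-program.
    return f"program for {subtask!r}"

def run_program(program):
    # Stand-in: the runtime would execute the generated program.
    return f"result of {program}"

def synthesize(results):
    # Stand-in: an agent would merge sub-results into one answer.
    return "; ".join(results)

def solve(task):
    subtasks = decompose(task)
    programs = [write_program(s) for s in subtasks]
    results = [run_program(p) for p in programs]
    return synthesize(results)
```

The key property the post describes is that every layer here is ordinary code the agent can rewrite, not a fixed configuration schema.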
~1000 lines of runtime. No ceiling — as LLMs get better at writing code, trama gets more powerful without a single framework change.
Built on @badlogicgames's pi as the intelligence substrate. The autonomous optimization loop is inspired by @karpathy's autoresearch — propose, eval, keep or discard, repeat. trama just makes the loop — and the program itself — agent-written.
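The propose, eval, keep-or-discard loop can be sketched as follows. This is a minimal illustration under stated assumptions: `propose` and `evaluate` are hypothetical stand-ins for the agent rewriting a program and the runtime scoring a run; the real loop operates on programs, not integers.

```python
import random

def propose(program):
    # Stand-in: the agent would rewrite the program; here, a random tweak.
    return program + random.choice([-1, 1])

def evaluate(program):
    # Stand-in: run the program and score the result; here, distance to 10.
    return -abs(program - 10)

def optimize(program, steps=100):
    # Propose a candidate, evaluate it, keep it if it scores better,
    # otherwise discard it and repeat.
    best, best_score = program, evaluate(program)
    for _ in range(steps):
        candidate = propose(best)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score  # keep
        # else: discard and try again
    return best
```

Because only improvements are kept, the returned program never scores worse than the starting one; what trama adds, per the post, is that the proposal step itself is code written by the agent.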