> "The hottest new programming language is English"
I've built a Natural Language Programming stack for building AI Agents. I think it is the first true Software 3.0 stack.
The core idea: use LLMs as CPUs! You can finally step-debug through your prompts and get reliable, verifiable execution. The stack includes a new language, a compiler, and developer tooling such as a VSCode extension.
Programs are written as markdown. H1 headings define agents; H2 headings define playbooks (i.e., functions), which can be natural language or Python. All playbooks in an agent run on the same call stack, and natural language and Python playbooks can call each other.
# Country facts agent
This agent prints interesting facts about nearby countries
## Main
### Triggers
- At the beginning
### Steps
- Ask user what $country they are from
- If user did not provide a country, engage in a conversation and gently nudge them to provide a country
- List 5 $countries near $country
- Tell the user the nearby $countries
- Inform the user that you will now tell them some interesting facts about each of the countries
- process_countries($countries)
- End program
```python
from typing import List

@playbook
async def process_countries(countries: List[str]):
    for country in countries:
        # Calls the natural language playbook 'GetCountryFact' for each country
        fact = await GetCountryFact(country)
        await Say("user", f"{country}: {fact}")
```
## GetCountryFact($country)
### Steps
- Return an unusual historical fact about $country
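As an illustration of the general pattern behind the `@playbook` decorator above (my sketch, not the project's actual implementation), a decorator like it can register Python functions in a registry that the runtime consults when a step names a function; `greet` and `call_playbook` here are hypothetical stand-ins:

```python
import asyncio
from typing import Any, Callable, Dict

# Hypothetical registry: maps playbook names to callables so that a
# natural language step like "process_countries($countries)" can be
# resolved to the Python function of the same name.
PLAYBOOKS: Dict[str, Callable[..., Any]] = {}

def playbook(func: Callable[..., Any]) -> Callable[..., Any]:
    """Register an async function as a callable playbook."""
    PLAYBOOKS[func.__name__] = func
    return func

@playbook
async def greet(name: str) -> str:
    return f"Hello, {name}!"

async def call_playbook(name: str, *args: Any) -> Any:
    # The runtime would look up the name emitted by the LLM and await it.
    return await PLAYBOOKS[name](*args)

print(asyncio.run(call_playbook("greet", "world")))  # Hello, world!
```

The same registry can hold entries for natural language playbooks, which is one way the two kinds end up callable from one another on the same call stack.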
There are a bunch of very interesting capabilities. A quick sample:
- "Queue calls to extract table of contents for each candidate file" - effortless calling of MCP tools, multi-threading, artifact management, and context management
- "Ask Accountant what the tax rate would be" is how you communicate with other agents
- You can mix procedural natural language playbooks, ReAct playbooks, raw prompt playbooks, Python playbooks, and external playbooks like MCP tools seamlessly on the same call stack
- "Have a meeting with Chef, Marketing expert and the user to design a new menu" is how you can spawn multi-agent workflows, where each agent follows their own playbook for the meeting
- Coming soon: Observer agents (agents observing other agents - automated memory storage, verify/certify execution, steer observed agents), dynamic playbook generation for procedural memory, etc.
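The "queue calls" capability above maps naturally onto async fan-out. A minimal sketch of the underlying pattern, illustrative only; `extract_toc` is a stand-in for a real MCP tool call:

```python
import asyncio
from typing import Dict, List

async def extract_toc(path: str) -> List[str]:
    # Stand-in for an MCP tool call; a real tool would do I/O here.
    await asyncio.sleep(0)
    return [f"{path}: chapter 1", f"{path}: chapter 2"]

async def queue_toc_extraction(paths: List[str]) -> Dict[str, List[str]]:
    # Fan out one task per file and gather the results concurrently.
    results = await asyncio.gather(*(extract_toc(p) for p in paths))
    return dict(zip(paths, results))

tocs = asyncio.run(queue_toc_extraction(["a.md", "b.md"]))
```

In the stack itself, a single natural language step triggers this kind of fan-out; the sketch only shows the concurrency shape, not the tool plumbing.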
I hope this changes how we build AI agents for the better. Looking forward to discussion! I'll be in the comments.
amthewiz•6h ago
It has puzzled me why someone hasn't already done this yet, given LLMs are so good at language now.
Probably the short answer is that it is hard to get this to actually work. There were many open questions that one has to tackle simultaneously:
- What is the right balance between relying on LLMs to do the right thing vs the runtime around LLMs? For example, I went back and forth a few times on getting LLMs to manage the call stack as one playbook calls another. I finally decided that it is most reliable to let the runtime take care of that.
- Context engineering: what to put in the prompt, in what order, how to represent state, how to handle artifacts, how to use the LLM cache optimally as context grows, how to "unwind" context as playbook calls return, how to compact specific types of information, how to make sure important context isn't lost, etc.
- LLMs today have vastly different capabilities than those of 2 years ago. I have had to rewrite the whole stack from scratch four times to adjust. It wasn't fun, but it had to be done.
- Language(s): How to represent the pseudocode so that it is both fluid natural language and a capable programming language? How to transform it so that it can be executed reliably through LLMs? How to NOT lose the flexibility and fluidity in the process (e.g. it is easy to convert to a graph like LangGraph, but then you are stuck with fixed control flow)? How to create a semantic compiler for that transition, and what primitives to use for the compiled language, which I call Playbooks Assembly Language [1]?
- Agents and multi-agent system considerations: how to represent agents, and how they should communicate. Agents are classes, and they expose public playbooks that other agents can call. Agents can send natural language messages to each other and engage in conversations, and they can call multi-party meetings. How can behavior across all these interaction patterns be defined so it remains intuitive? For example, a meeting's lifetime is tied to its host's "meeting: true" playbook: the host simply returns from that playbook to exit and end the meeting.
- Which LLMs to support? Go for "bring your own LLM" or restrict the set? Which one? LLM selection impacts how all the internal prompts are implemented, so prompt building had to happen alongside LLM selection.
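The "unwinding" point can be made concrete with a toy frame model (my sketch, not the project's actual data structures): each playbook call pushes a frame carrying its local context, and on return the frame is popped, so only a compact summary survives in the caller's context.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    playbook: str
    context: List[str] = field(default_factory=list)  # lines sent to the LLM

@dataclass
class CallStack:
    frames: List[Frame] = field(default_factory=list)

    def push(self, playbook: str) -> Frame:
        frame = Frame(playbook)
        self.frames.append(frame)
        return frame

    def pop(self, summary: str) -> None:
        # Unwind: drop the callee's detailed context, keep one summary
        # line in the caller's frame so the important result isn't lost.
        self.frames.pop()
        if self.frames:
            self.frames[-1].context.append(summary)

stack = CallStack()
stack.push("Main")
stack.push("GetCountryFact")
stack.frames[-1].context += ["prompt...", "LLM output..."]
stack.pop("GetCountryFact -> 'fact about France'")
# Only the one-line summary remains in Main's context.
```

A real runtime also has to decide what the summary contains and when to compact further, which is where most of the context engineering effort goes.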
It felt like playing an N-dimensional game of chess!
amthewiz•7h ago
Quick intro video: https://www.youtube.com/watch?v=ZX2L453km6s
Github: https://github.com/playbooks-ai/playbooks (MIT license)
Documentation: https://playbooks-ai.github.io/playbooks-docs/getting-starte...
Project website: runplaybooks.ai
[1] https://playbooks-ai.github.io/playbooks-docs/reference/play...