It looks like Axe works the same way: fire off a request and later look at the results.
How would you say this compares to similar tools like Google's Dotprompt? https://google.github.io/dotprompt/getting-started/
Dotprompt is a prompt template format that lives inside app code to standardize how we write prompts.
Axe is an execution runtime you run from the shell. There's no code to write (unless you want the LLM to run a script). You define the agent in TOML and run it with `axe run <agent name>`, piping data into it.
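As a rough sketch of that shape (the field names here are illustrative, not Axe's actual schema):

```toml
# reviewer.toml — illustrative single-purpose agent definition
name = "reviewer"
model = "claude-sonnet"   # whichever provider/model you've configured
prompt = "You review git diffs for bugs and style issues. Be terse."
```

which you'd then invoke as `git diff | axe run reviewer`.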
That said, composability introduces its own attack surface. When agents chain together via pipes or tool calls, each handoff is a trust boundary. A compromised upstream output becomes a prompt injection vector for the next agent in the chain.
This is one of the patterns we stress-test at audn.ai (https://audn.ai) — we do adversarial testing of AI agents and MCP tool chains. The depth-limited sub-agent delegation you mention is exactly the kind of structure where multi-step drift and argument injection can cause real damage. A malicious intermediate output can poison a downstream agent's context in ways that are really hard to audit after the fact.
The small binary / minimal deps approach is great for reducing supply chain risk. Have you thought about trust boundaries between agents when piping? Would be curious whether there's a signing or validation layer planned between agent handoffs.
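Even a minimal integrity check between steps would help. A stopgap sketch using only openssl (file names and the shared key are made up): each producing step writes a keyed digest alongside its artifact, and the consuming step verifies it before loading the file into its context.

```shell
# Producer step writes the artifact plus a keyed digest (key is a made-up example):
printf 'agent one output\n' > draft.md
openssl dgst -sha256 -hmac 'pipeline-secret' draft.md > draft.md.sig

# Consumer step recomputes and compares before trusting the file:
if [ "$(openssl dgst -sha256 -hmac 'pipeline-secret' draft.md)" = "$(cat draft.md.sig)" ]; then
  echo "artifact verified"
else
  echo "artifact tampered with" >&2
  exit 1
fi
```

Of course this only proves the bytes weren't altered between handoffs; it does nothing about a compromised upstream agent emitting malicious but validly signed output, which is the harder injection problem.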
However, this does not help if a person gives access to something like Google Calendar and a prompt tells the LLM to be destructive against that account.
One thing I’ve noticed when experimenting with agent pipelines is that the “single-purpose agent” model tends to make both cost control and reasoning easier. Each agent only gets the context it actually needs, which keeps prompts small and behavior easier to predict.
Where it gets interesting is when the pipeline starts producing artifacts instead of just text — reports, logs, generated files, etc. At that point the workflow starts looking less like a chat session and more like a series of composable steps producing intermediate outputs.
That’s where the Unix analogy feels particularly strong: small tools, small contexts, and explicit data flowing between steps.
Curious if you’ve experimented with workflows where agents produce artifacts (files, reports, etc.) rather than just returning text.
Yes! I run a ghost blog (a blog that does not use my name) and have axe produce artifacts. The flow: I send the first agent a text file of my brain dump (normally spoken), which then searches my note system for related notes and saves them to a file, then passes everything to agent 2, which turns that dump into a blog draft and saves it to a file. Agent 3 then takes that draft, cleans it up to match how I like it, and saves it. From there I read it, make edits myself, and publish.
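Sketching the shape of it (agent names and file paths invented; a pass-through stub stands in for the real binary so the lines run anywhere):

```shell
# Stub so this sketch runs without the real binary — it just echoes stdin:
axe() { cat; }

printf 'brain dump\n' > braindump.txt
axe run note-collector < braindump.txt > research.md   # agent 1: pull related notes
axe run draft-writer   < research.md   > draft.md      # agent 2: turn it into a draft
axe run style-cleaner  < draft.md      > final.md      # agent 3: clean up to taste
```

Each intermediate file survives the run, so any step can be inspected or re-run on its own.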
One thing I’ve noticed when experimenting with similar workflows is that once artifacts start accumulating (drafts, logs, intermediate reports, etc.), you start running into small infrastructure questions pretty quickly:
- where intermediate artifacts live
- how later agents reference them
- how long they should persist
- whether they're part of the workflow state or just temporary outputs
For small pipelines the filesystem works great, but as the number of steps grows it starts to look more like a little dataflow system than just a sequence of prompts.
Do you usually just keep everything as local files, or have you experimented with something like object storage or a shared artifact layer between agents?
OP, what have you used this on in practice, with success?
jrswab•2h ago
Most frameworks want a long-lived session with a massive context window doing everything at once. That's expensive, slow, and fragile. Good software is small, focused, and composable... AI agents should be too.
Axe treats LLM agents like Unix programs. Each agent is a TOML config with a focused job: code reviewer, log analyzer, commit message writer. You can run them from the CLI, pipe data in, get results out. You can chain them together with pipes, or trigger them from cron, git hooks, or CI.
What Axe is:
- 12MB binary, two dependencies. No framework, no Python, no Docker (unless you want it)
- Stdin piping, something like `git diff | axe run reviewer` just works
- Sub-agent delegation. Where agents call other agents via tool use, depth-limited
- Persistent memory. If you want, agents can remember across runs without you managing state
- MCP support. Axe can connect any MCP server to your agents
- Built-in tools. Such as web_search and url_fetch out of the box
- Multi-provider. Bring what you love to use: Anthropic, OpenAI, Ollama, or anything in models.dev format
- Path-sandboxed file ops. Keeps agents locked to a working directory
Written in Go. No daemon, no GUI.
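The composition story above is ordinary shell plumbing. A sketch (agent names invented; a pass-through stub stands in for the real binary so the lines run anywhere):

```shell
# Stand-in stub so these lines run without the binary; the real thing calls the model:
axe() { cat; }

# Chain two single-purpose agents, Unix style:
printf 'fix: correct typo in README\n' | axe run reviewer | axe run summarizer

# Or fire one on a schedule; a crontab entry might look like:
# 0 7 * * *  axe run log-analyzer < /var/log/app.log > "$HOME/reports/daily.txt"
```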
What would you automate first?
punkpeye•1h ago
jrswab•37m ago
1. I have a flow where I pass in a YouTube video; the first agent calls an API to get the transcript, the second converts that transcript into a blog-like post, and the third uploads that post to Instapaper.
2. Blog post drafting: I talk into my phone's notes app, which gets synced via Syncthing. The first agent takes that text and searches my note system for related information, then passes my raw text and notes to the next agent to draft a blog post. A third agent takes out all the em dashes because I'm tired of taking them out. Once that's all done, I read and edit it until it's exactly what I want.
bensyverson•1h ago
The first question that comes to mind is: how do you think about cost control? Putting a ton in a giant context window is expensive, but unintentionally fanning out 10 agents with a slightly smaller context window is even more expensive. The answer might be "well, don't do that," and that certainly maps to the UNIX analogy, where you're given powerful and possibly destructive tools, and it's up to you to construct the workflow carefully. But I'm curious how you would approach budget when using Axe.
jrswab•1h ago
Great question, and it's something I've not dug into yet. But I see no problem adding a way to limit agents by tokens or something similar to keep the cost for the user within reason.
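Off the top of my head, a per-agent cap could be as small as one more config field (purely hypothetical, not an existing Axe option):

```toml
# reviewer.toml (hypothetical budget fields, not real Axe schema)
max_tokens_per_run = 4000   # hard stop on generated tokens per invocation
max_runs_per_day = 50       # cap fan-out from cron or sub-agent delegation
```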
zrail•49m ago
Tiny note: there's a typo in your repo description.
hamandcheese•4m ago
I'm a bit skeptical of this approach, at least for building general purpose coding agents. If the agents were humans, it would be absolutely insane to assign such fine-grained responsibilities to multiple people and ask them to collaborate.