What it does:
- Runs against local providers (LM Studio / llama.cpp server / Ollama)
- Tool calling with explicit hard gates (--allow-shell, --allow-write; write tools are opt-in)
- Trust layer: policy rules, approval workflows, audit trail
- Replayable run artifacts + verification
- MCP stdio tool support (including Playwright MCP)
- Eval harness for deterministic coding/browser task packs
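To make the trust-layer idea concrete, here is a minimal sketch (all names hypothetical, not the actual localagent API) of how a policy check over tool calls might look: each call is matched against ordered rules that allow it, deny it, or route it through an approval workflow.

```rust
// Hypothetical sketch of a trust-layer policy check.
// A tool call is matched against ordered rules; the first match wins,
// and anything unmatched falls back to requiring approval.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Decision {
    Allow, // run the tool without asking
    Deny,  // block the call outright
    Ask,   // pause and route through the approval workflow
}

struct Rule {
    tool_prefix: &'static str,
    decision: Decision,
}

fn evaluate(rules: &[Rule], tool: &str) -> Decision {
    rules
        .iter()
        .find(|r| tool.starts_with(r.tool_prefix))
        .map(|r| r.decision)
        // Safe default: unknown tools require explicit approval.
        .unwrap_or(Decision::Ask)
}

fn main() {
    let rules = [
        Rule { tool_prefix: "read_", decision: Decision::Allow },
        Rule { tool_prefix: "shell", decision: Decision::Deny },
    ];
    println!("{:?}", evaluate(&rules, "read_file"));  // Allow
    println!("{:?}", evaluate(&rules, "shell_exec")); // Deny
    println!("{:?}", evaluate(&rules, "write_file")); // Ask (no rule matched)
}
```

Each decision here could also be appended to an audit trail, which is what makes runs replayable and verifiable after the fact.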
Why I built it:
I originally tried wrapping trust controls around existing agent CLIs, but tool execution is too deeply built into those products to reliably enforce policy from the outside. So I built a runtime where tool calling and trust controls are first-class.
Defaults stay safe:
- trust off
- shell/write disabled
- write tools hidden unless enabled
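The "write tools hidden unless enabled" default can be sketched as a gated tool registry (names below are illustrative, not the actual localagent internals): tools that need shell or write access are simply not registered unless the corresponding gate flag was passed.

```rust
// Illustrative sketch: write/shell tools never even appear in the
// model-visible tool list unless their gate is explicitly opened.
#[derive(Default)]
struct Gates {
    allow_shell: bool, // set by --allow-shell
    allow_write: bool, // set by --allow-write
}

struct Tool {
    name: &'static str,
    needs_write: bool,
    needs_shell: bool,
}

fn visible_tools(gates: &Gates) -> Vec<&'static str> {
    let all = [
        Tool { name: "read_file", needs_write: false, needs_shell: false },
        Tool { name: "write_file", needs_write: true, needs_shell: false },
        Tool { name: "run_shell", needs_write: false, needs_shell: true },
    ];
    all.iter()
        .filter(|t| (!t.needs_write || gates.allow_write)
            && (!t.needs_shell || gates.allow_shell))
        .map(|t| t.name)
        .collect()
}

fn main() {
    // Defaults: both gates off, so only read-only tools are exposed.
    println!("{:?}", visible_tools(&Gates::default())); // ["read_file"]
}
```

Hiding gated tools (rather than exposing them and rejecting calls) means the model never sees a capability it is not allowed to use.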
Quickstart:
cargo install --path .
localagent init
localagent doctor --provider lmstudio
localagent --provider lmstudio --model <model> chat --tui true
I’d appreciate feedback on:
1. Trust/policy model ergonomics
2. Eval design for local model benchmarking
3. TUI workflow for day-to-day coding use