Repo: https://github.com/llm-use/llm-use
OpenClaw-style agents are powerful but get expensive if every step runs on a single high-end model. llm-use helps by:
• using a strong model only for planning and final synthesis
• running most steps on cheaper or local models
• mixing local and cloud models in the same workflow
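The routing idea above can be sketched in a few lines of Python. This is an illustrative sketch only, not the llm-use API: the function `route` and the step names are hypothetical, and the model identifiers are just the ones from the example below.

```python
# Hypothetical sketch of orchestrator/worker routing -- not llm-use's actual API.
# Planning and final synthesis go to the strong model; everything else goes
# to the cheap/local worker.

STRONG = "anthropic:claude-4-5-sonnet"   # orchestrator (cloud)
CHEAP = "ollama:llama3.1:8b"             # worker (local)

def route(step: str) -> str:
    """Pick a model for a step: strong for plan/synthesize, cheap otherwise."""
    return STRONG if step in ("plan", "synthesize") else CHEAP

steps = ["plan", "fetch", "extract", "summarize", "synthesize"]
assignments = {s: route(s) for s in steps}
# Only the two endpoint steps hit the expensive model; the bulk of the
# work (fetch, extract, summarize) stays on the local 8B model.
```

The point of the sketch is the cost shape: for an N-step run, the expensive model is called a constant number of times while the per-step cost stays local.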
Example:
python3 cli.py exec \
  --orchestrator anthropic:claude-4-5-sonnet \
  --worker ollama:llama3.1:8b \
  --task "Monitor sources and produce a daily summary"
This setup keeps long-running agents predictable in cost, since the expensive model is invoked only at the planning and synthesis endpoints, while quality is preserved where it matters most.
Feedback welcome.