Peen is a small Node.js CLI built around the way local models actually behave: the model outputs one-line JSON tool calls, and the CLI executes them. It streams responses, chains tool calls, handles multi-step TODO plans, and has repair logic that nudges the model when it outputs malformed JSON. It works with Ollama, and I just started adding support for other OpenAI-compatible servers like LM Studio and llama.cpp.
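The core loop is roughly: parse each model output line as JSON, dispatch to a tool, and on malformed JSON send a corrective message back instead of crashing. Here's a minimal sketch of that idea (the tool names and helper functions are hypothetical, not Peen's actual API):

```javascript
// Hypothetical sketch of a one-line JSON tool-call dispatcher with
// repair logic. Not Peen's actual implementation.
const tools = {
  // Example tool: echo its argument back (hypothetical).
  echo: (args) => args.text,
};

// Parse one line of model output. Returns { result } on success, or
// { repair } with a message to feed back to the model on failure.
function handleLine(line) {
  let call;
  try {
    call = JSON.parse(line);
  } catch {
    return {
      repair:
        'Invalid JSON. Reply with exactly one JSON object per line, ' +
        'e.g. {"tool":"echo","args":{"text":"hi"}}',
    };
  }
  const fn = tools[call.tool];
  if (!fn) return { repair: `Unknown tool: ${call.tool}` };
  return { result: fn(call.args) };
}

console.log(handleLine('{"tool":"echo","args":{"text":"hi"}}'));
console.log(handleLine("not json"));
```

The nice property of one JSON object per line is that it composes with streaming: you can buffer the stream, split on newlines, and dispatch each complete line as it arrives.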
The whole thing is about 800 lines across a few files. No build step, no dependencies, and it self-updates from GitHub on startup. It's experimental, but it's starting to become useful for small coding tasks with models like qwen2.5-coder:7b, and it runs on a MacBook Air with 16GB of RAM.
GitHub: https://github.com/codazoda/peen
codazoda•1h ago