I built Nedster, a highly autonomous, CLI-based AI software engineer designed to run entirely locally. It’s optimized specifically for consumer hardware (tested extensively on an 8GB RTX 3060 Ti).
The biggest problem I faced with running local agents (especially mid-size models like Qwen) is what I call "Execution Theater": the model confidently reports that it created or edited a file when nothing was actually written to disk.
To solve this, Nedster uses a heavily customized Ollama Modelfile (aria-qwen) and an aggressive Python orchestrator:
1. Anti-Execution-Theater: Physically verifies that files exist on disk, and checks their sizes, after every write.
2. OpenClaw-Style Precision: Bypasses XML for exact-string replacement.
3. Emergency Context Reset: Auto-summarizes at 85% context usage to preserve tool instructions.
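The anti-execution-theater check (point 1) boils down to never trusting the model's claim that a write happened. A minimal sketch of the idea, with hypothetical names (not Nedster's actual code):

```python
import os

def verified_write(path: str, content: str) -> bool:
    """Write content, then physically confirm it landed on disk.

    Hypothetical sketch of an anti-execution-theater check: after the
    write, stat the file and compare its on-disk size to what we sent,
    instead of trusting the model's own report of success.
    """
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    # Physical check: the file must exist and its size must match.
    return os.path.exists(path) and \
        os.path.getsize(path) == len(content.encode("utf-8"))
```

The orchestrator would only report a tool call as successful to the model if this check passes; otherwise the failure is fed back so the model retries instead of narrating a phantom success.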
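Exact-string replacement (point 2) means the model supplies the literal text to replace rather than an XML or diff payload that a parser could silently mangle. A sketch of how such an edit primitive might look (hypothetical, not Nedster's actual API):

```python
def exact_replace(source: str, old: str, new: str) -> str:
    """Apply an edit only if `old` occurs exactly once in `source`.

    Hypothetical sketch of exact-string replacement: requiring a unique
    match makes the edit unambiguous, and a miscount fails loudly
    instead of patching the wrong location.
    """
    count = source.count(old)
    if count != 1:
        raise ValueError(f"expected exactly one match for edit, found {count}")
    return source.replace(old, new, 1)
```

Rejecting zero or multiple matches is the key property: a small local model that slightly misquotes the file gets an explicit error rather than a corrupted edit.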
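The emergency context reset (point 3) can be sketched as a threshold check in the chat loop: once estimated usage crosses 85%, the history is collapsed into a summary while the system/tool instructions are re-injected verbatim. Hypothetical names throughout; `summarize` stands in for whatever history-to-string step the orchestrator uses:

```python
def maybe_reset_context(messages, token_count, max_tokens,
                        summarize, system_prompt):
    """Collapse history once context usage crosses 85%.

    Hypothetical sketch: the system prompt (which carries the tool
    instructions) is restored verbatim, so summarization can never
    degrade the model's knowledge of its tools.
    """
    if token_count / max_tokens < 0.85:
        return messages  # still under the threshold; leave history alone
    summary = summarize(messages)
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user",
         "content": f"Conversation summary so far:\n{summary}"},
    ]
```

Re-injecting the system prompt rather than summarizing it is the point: tool-calling instructions are the one part of the context a local agent cannot afford to lose.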
Repo: https://github.com/unrealumanga/Nedster