Every token your filesystem tools consume is context the model cannot use for reasoning. Most MCP file servers are O(file size) on every operation: reads return the whole file, edits rewrite the whole file. The context window fills up before the agent gets anything meaningful done, and the problem compounds silently as your files grow.
Chisel makes edits O(diff) and reads O(match). The agent sends a unified diff instead of a full rewrite, and queries with grep or sed instead of reading entire files. On a 500-line file this is nearly two orders of magnitude less context per operation. The savings scale linearly with file size, so the larger your codebase, the more it matters.
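To make the savings concrete, here is a back-of-the-envelope sketch (not Chisel code) comparing the context cost of a whole-file rewrite against a small unified-diff edit. The per-line token count and hunk overhead are assumptions for illustration, not measured figures:

```rust
// Rough context-cost comparison: full-file rewrite vs. unified diff.
// All constants are illustrative assumptions, not Chisel internals.
fn main() {
    let file_lines = 500; // size of the target file
    let changed_lines = 5; // lines the edit actually touches
    let hunk_overhead = 6; // assumed @@ header plus context lines
    let tokens_per_line = 10; // assumed rough average

    let full_rewrite = file_lines * tokens_per_line; // O(file size)
    let diff_edit = (changed_lines + hunk_overhead) * tokens_per_line; // O(diff)

    println!("full rewrite: ~{} tokens", full_rewrite); // ~5000
    println!("diff edit:    ~{} tokens", diff_edit); // ~110
    println!("ratio:        ~{}x", full_rewrite / diff_edit); // ~45x
}
```

Under these assumptions a routine edit costs roughly 45x less context, and since the rewrite cost grows with the file while the diff cost grows only with the change, the ratio keeps widening on larger files.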
The other thing I cared about was security. Path confinement is enforced at the kernel level via cap-std and openat with O_NOFOLLOW, not a userspace prefix check. Shell commands run via direct execve against a fixed compile-time whitelist — no sh -c wrapper, no arbitrary execution. Atomic writes use a write-to-temp-then-rename pattern, so a failed patch never corrupts the target file.
It ships three ways: as a standalone MCP server; as chisel-core, a plain synchronous Rust library you can embed in your own MCP server; and as a WASM build for Node.js and Python runtimes, so you can bring your own MCP layer and build on top of it.
Would love feedback, especially from anyone building agents that do heavy file manipulation.