Author here. This started as a simple frustration: LLMs re-derive everything from scratch, every time. A model that has correctly solved something a million times will spend the same tokens on the million-and-first.
KNOW proposes extracting proven reasoning patterns and compiling them to lightweight, deterministic programs that any model can invoke. The network builds itself - pattern detection becomes a pattern, extraction becomes a pattern. Intelligence accumulates in the commons, not in proprietary weights.
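To make the idea concrete, here is a minimal sketch of what I mean (all names and the registry/router API are my own illustration, not the repo's actual design): a registry maps an extracted pattern to a deterministic program, and a router consults the registry before falling back to the model.

```python
# Hypothetical sketch of the KNOW idea: extracted "patterns" become
# deterministic programs a model can invoke instead of re-deriving
# the answer in tokens. Names here are illustrative only.

from typing import Callable, Optional


class PatternRegistry:
    def __init__(self) -> None:
        self._patterns: dict[str, Callable] = {}

    def register(self, name: str, program: Callable) -> None:
        self._patterns[name] = program

    def route(self, name: str) -> Optional[Callable]:
        # A real router would match on problem structure, not an exact key.
        return self._patterns.get(name)


registry = PatternRegistry()

# A "proven" reasoning pattern compiled down to a deterministic program:
registry.register("sum_first_n_integers", lambda n: n * (n + 1) // 2)


def solve(task: str, arg):
    program = registry.route(task)
    if program is not None:
        return program(arg)  # deterministic, no re-derivation
    raise NotImplementedError("novel task: fall back to the LLM")


print(solve("sum_first_n_integers", 100))  # 5050
```

The point of the sketch is the asymmetry: the millionth invocation costs the same handful of operations as the first, instead of the same tokens.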
I wrote up the concept because I wanted to see if the idea survives contact with smarter people. There are open questions I don't have answers to (extraction fidelity, adversarial robustness, routing at scale). Happy to hear where this falls apart.
joostdejonge•1h ago
https://github.com/JoostdeJonge/Know