I found that as models get smarter, their laziness becomes more "sophisticated." I call this the "Probabilistic Sloth" of 2026. Even with the latest retrieval tools, the model often chooses the path of least resistance, producing plausible-sounding but incorrect output.
Out of frustration, I wrote a system prompt to install a kind of "Will" into the AI. It forces the LLM to split into two roles:
The Drafting Agent: generates the initial response.
The Ruthless Auditor: checks that draft strictly for logical errors and locks every claim to evidence.
This creates a friction-based loop: an explicit self-correction step before any output reaches the user. In my tests, it stopped the model from hallucinating Python libraries that don’t exist.
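To make the flow concrete, here is a rough Python sketch of the loop I’m describing. This is my own illustration, not the actual Gist logic: call_llm is a stand-in for whatever chat client you use, and the role prompts and the PASS convention are placeholders.

    # Rough sketch of the two-role KOKKI loop.
    # call_llm is a placeholder: swap in your own model client.

    DRAFTER_PROMPT = (
        "You are the Drafting Agent. Produce the best initial answer you can."
    )
    AUDITOR_PROMPT = (
        "You are the Ruthless Auditor. Check the draft strictly for logical "
        "errors and unsupported claims. Reply PASS if it holds up; otherwise "
        "list every problem you find."
    )

    def call_llm(system_prompt: str, user_prompt: str) -> str:
        """Placeholder for a real chat-completion call."""
        raise NotImplementedError

    def kokki_loop(question: str, max_rounds: int = 3) -> str:
        draft = call_llm(DRAFTER_PROMPT, question)
        for _ in range(max_rounds):
            audit = call_llm(
                AUDITOR_PROMPT,
                f"Question: {question}\n\nDraft:\n{draft}",
            )
            if audit.strip().upper().startswith("PASS"):
                return draft  # the auditor found nothing to object to
            # Feed the objections back to the drafter and revise.
            draft = call_llm(
                DRAFTER_PROMPT,
                f"Question: {question}\n\nPrevious draft:\n{draft}\n\n"
                f"Auditor objections:\n{audit}\n\n"
                "Revise the draft to address every objection.",
            )
        return draft  # stop after max_rounds rather than loop forever

The actual prompt in the Gist carries the detailed rules (the evidence locking especially); this is just the control flow.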
This is the KOKKI (Self-Discipline) Protocol. It’s not just a prompt; it’s a structured way to force an LLM to catch its own failure modes.
I’ve documented the raw logic in a Gist and would love for this community to test it, use it, and tear it apart. I’m not asking for money; I just need brutal feedback to evolve this further.
Feedback welcome. Even “this was annoying” helps.
The Prompt (Gist): https://gist.github.com/ginsabo/641e64a3dbc2124d1edb0c662be9...