Recently, I’ve increasingly come to believe that intelligence is no longer AI’s bottleneck. The systems we build around it are.
Input Paradox (1)
The first issue is the input paradox. When you give an AI a highly detailed prompt, the model tends to overfit to your framing and assumptions. When the prompt is too concise, the model lacks the context needed to produce anything truly useful.
This creates a paradox: to preserve the model’s independent reasoning, you should say less — but to make the answer specific to your situation, you must say more.
Information Asymmetry (2)
In economics, information asymmetry describes a situation where one party has access to critical information that the other does not.
This is exactly what happens when we interact with LLMs. The user holds the high-resolution, real-world data — revenue numbers, funding status, team structure, individual capabilities, product details, operational constraints. The model sees only what fits inside a prompt.
Imagine asking an NBA coach how to become a better basketball player, but the coach knows nothing about your goals, training history, strengths, or weaknesses. The advice will naturally sound broad — “practice more,” “improve fundamentals.” That does not mean the coach lacks expertise. It means the coach lacks information.
The Hidden Cost of “Smart” Tools (3)
Systems like OpenClaw and Claude Code are impressive, but if you inspect their logs, even simple tasks often rely on massive preloaded system prompts and large context windows. A trivial request can consume tens of thousands of tokens.
This makes advanced agent systems expensive and sometimes inefficient. It raises a deeper question: are we actually building smarter systems — or just wrapping enormous static prompts around powerful models and branding them as innovation?
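The overhead is easy to see with a back-of-the-envelope calculation. The sketch below is purely illustrative: the roughly-four-characters-per-token heuristic is a common rule of thumb, and the preamble size and sample request are made-up numbers, not measurements taken from OpenClaw, Claude Code, or any real system.

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate using the ~4 characters/token rule of thumb.
    Real tokenizers vary; this is only for order-of-magnitude intuition."""
    return max(1, len(text) // 4)

# Hypothetical static preamble: tool schemas, safety rules, formatting guides.
# The repetition count is invented to simulate a large system prompt.
system_prompt = "You are an agent with access to these tools...\n" * 2000
user_request = "Rename file a.txt to b.txt"

overhead = estimate_tokens(system_prompt)
request_cost = estimate_tokens(user_request)

print(f"static preamble: ~{overhead} tokens")
print(f"actual request:  ~{request_cost} tokens")
print(f"overhead ratio:  ~{overhead / request_cost:.0f}x")
```

Under these invented numbers, the static preamble dwarfs the request itself by three orders of magnitude, which is the shape of the cost problem even if the exact figures differ per system.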
Some Personal Thoughts on the Future (+)
We have seen rapid advances in model capability, but the dominant interaction paradigm remains the same: text chat. We know AI is powerful, but we don’t experience it as something tangible.
The future of AI agents will not be a single assistant. It will be many of them living inside your computer: securely accessing your data, continuously active, and continuously updating. Instead of hiding behind a chat window, they will exist within a more transparent interface, one where you can clearly see them, live with them, and work with them directly.
P.S. AI companies should seriously consider collaborating with game companies. The next interface breakthrough may come from interactive worlds.