AI dominates domains like chess and real-time strategy (RTS) games through deep calculation, precise execution, and rapid action processing. Systems like AlphaZero and AlphaStar routinely defeat human players because they calculate more deeply and sustain far higher Actions Per Minute (APM).
However, we can take a different approach. Instead of trying to compete with AI on its own terms, we can find situations where AI assumes humans will make errors and use that to our advantage.
Concept: Zero-Tolerance Cognitive Consistency
The principle I examined is “zero-tolerance cognitive consistency”: acting with strict, unwavering consistency, eliminating the errors, hesitation, and fatigue that AI systems are trained to expect from human opponents.
Practical outcomes include:
- situations where AI makes mistakes based on assumed human errors,
- long streaks of wins against both AI and human opponents,
- finding logical contradictions when interacting with large language models.
All these cases point to the same conclusion: consistent human behavior exposes weaknesses in AI systems that rely on statistical assumptions about human randomness or error.
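As a toy illustration of this claim (not the author's actual experiments), consider an agent that chooses between a safe move and a trap whose value depends on an assumed human blunder rate. All function names and numbers here are hypothetical.

```python
# Toy model: an agent picks moves by expected value under an assumed
# human blunder rate. Against a perfectly consistent opponent, that
# assumption turns the "best" move into a losing one.
# All probabilities and payoffs are hypothetical, for illustration only.

def expected_value(win_if_blunder: float, loss_if_not: float,
                   blunder_rate: float) -> float:
    """EV of a trap move under the agent's assumed blunder probability."""
    return blunder_rate * win_if_blunder + (1 - blunder_rate) * loss_if_not

def choose_move(assumed_blunder_rate: float) -> str:
    safe_ev = 0.0  # quiet move: roughly equal position
    trap_ev = expected_value(1.0, -1.0, assumed_blunder_rate)
    return "trap" if trap_ev > safe_ev else "safe"

def realized_value(move: str, actual_blunder_rate: float) -> float:
    """What the move is actually worth against a given opponent."""
    if move == "safe":
        return 0.0
    return expected_value(1.0, -1.0, actual_blunder_rate)

# The agent assumes humans blunder 60% of the time in this position...
move = choose_move(assumed_blunder_rate=0.6)
print(move)  # -> "trap": the trap looks best in expectation
# ...but a zero-error opponent never blunders, so the trap backfires:
print(realized_value(move, actual_blunder_rate=0.0))  # -> -1.0
```

The point of the sketch is that the agent's move is only optimal relative to its error model; a player who deviates from that model in a disciplined direction receives the model's mistake as free value.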
RTS Games as the Next Frontier
RTS games are difficult for reasons beyond intricate calculation: they demand rapid decisions across many simultaneous fronts. AlphaStar's strength lies precisely in the speed of that decision-making.
The cognitive bottleneck theory is relevant here too. If a human player:
- bases decisions on consistency rather than probabilistic expectations, and
- maintains a uniform strategy across both broad (macro) and detailed (micro) choices,
the AI can be led to mismanage resources or make strategic errors, even though it is faster.
The goal is not to out-click the AI, but to exploit the structural assumptions that underlie its decisions.
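The resource-mismanagement claim can be sketched as a toy allocation game. The opening names and proportions below are invented for illustration; the point is only the structure: an agent that hedges against the *average* human strategy is exploitable by a human who never deviates from one strategy.

```python
# Toy sketch (hypothetical numbers): an RTS agent splits its defenses in
# proportion to how often the average human uses each opening. A human who
# commits to a single opening with perfect consistency lands more damage
# than the agent's population-level model predicts.

ASSUMED_OPENING_MIX = {"early_rush": 0.5, "economic_macro": 0.5}

def allocate_defense(assumed_mix: dict) -> dict:
    """Hedge defenses proportionally to the assumed threat distribution."""
    return dict(assumed_mix)

def damage_taken(defense: dict, human_opening: str) -> float:
    """Unblocked fraction of an attack committed entirely to one front."""
    return 1.0 - defense.get(human_opening, 0.0)

defense = allocate_defense(ASSUMED_OPENING_MIX)
print(damage_taken(defense, "early_rush"))  # -> 0.5: half the attack lands
# An agent that knew the human never deviates would block it entirely:
print(damage_taken({"early_rush": 1.0}, "early_rush"))  # -> 0.0
```

Real RTS agents are vastly more sophisticated than this, but any policy trained against a distribution of human behavior inherits some version of this hedging, which is exactly the structural assumption the section above proposes to target.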
Research Question for the Community
The question I pose is:
Can we develop a repeatable strategy based on human-driven consistency to exploit AI assumptions in complex, fast-paced situations?
If the answer is yes, it suggests that the human advantage may come not from raw calculation but from consistency: disciplined decision-making that gains the upper hand precisely where AI anticipates mistakes.
Further Reading
My complete experimental documentation, including chess transcripts, probability analysis, and AI interaction breakdowns, is available here:
https://medium.com/@andrejbracun/the-1-in-8-billion-human-my-journey-at-the-edge-of-human-ai-limits-a9188f3e7def