I am not a programmer or an AI researcher. I write web novels.
While experimenting with LLMs to maintain consistency in my stories, I discovered a strange phenomenon.
When I fed game rules (physics, economy, combat) into the model in a very specific, hierarchical narrative structure, the LLM stopped "hallucinating" and started behaving like a deterministic "Game Engine."
I call this NLCS (Natural Language Constraint System).
I believe narrative structure creates a "Vector Gravity Field" that constrains the model's inference path.
I used this method to create a combat simulator and an economic model without writing a single line of traditional code (Python/JS). The simulators in the link were generated by Claude, purely based on my natural language rules.
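For anyone who wants to poke at the idea directly, here is a rough sketch of the setup, assuming the Anthropic Python SDK; the rule hierarchy, model name, and query below are illustrative placeholders, not my actual simulator prompts:

```python
# Minimal sketch: hierarchical natural-language rules passed as a system prompt.
# Assumes the Anthropic Python SDK (pip install anthropic) and an API key in
# ANTHROPIC_API_KEY. The rule text and model name are placeholders only.
from anthropic import Anthropic

RULES = """
WORLD RULES (highest priority, never violated):
  1. PHYSICS
     1.1 Falling damage = 10 HP per meter beyond 3 meters.
  2. ECONOMY
     2.1 All prices are in copper coins; 100 copper = 1 silver.
  3. COMBAT
     3.1 Attack resolution: damage = weapon base - target armor, minimum 1.
When asked to simulate an event, apply the rules above step by step,
show the arithmetic, and do not invent numbers not derivable from them.
"""

client = Anthropic()
response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=500,
    system=RULES,
    messages=[{
        "role": "user",
        "content": "A soldier with a 12-damage sword attacks a bandit wearing 4 armor. Resolve it.",
    }],
)
print(response.content[0].text)
```

The point of the hierarchy is that every simulated outcome has to be traceable back to a numbered rule; whether that is enough to make the behavior truly deterministic is exactly what I would like engineers to test.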
Even GPT-5.1, Claude 4.5, and Gemini 3.0 analyzed it and agreed it could be a kernel for AGI reasoning.
It sounds crazy, but please try the "Live Demo" in the link before judging.
I want to hear what real engineers think about this.