I wanted to share an experiment I've been working on for the past 11 months. I am a non-coder (an architect by trade) based in Japan, but I managed to build a system that stabilizes Gemini 1.5 Pro over long contexts (800k+ tokens).
The Problem: When the context gets too long, the AI gets "drunk" (Context Dilution): its attention spreads across hundreds of thousands of tokens, and it starts ignoring the System Instructions.
The Solution: I applied the concept of "Bhavanga" (Life Continuum) from ancient Buddhist Psychology. Instead of a static RAG, I built a 3-layer architecture (sketched in code below):
1. Super-Ego: System Instructions v1.5.0 (The Anchor)
2. Ego: Gemini 1.5 Pro (The Processor)
3. Id: Vector DB (The Unconscious Stream)
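To make the division of labor concrete, here is a minimal sketch of how the three layers could interact, written against the google-generativeai Python SDK. This is my illustration, not the author's actual system (which was built through dialogue alone); the file name, the `id_store` object, and `top_k=5` are hypothetical placeholders.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Super-Ego: the anchor. Passing the rules as system_instruction means
# they are re-asserted on every call instead of scrolling out of context.
SUPER_EGO_RULES = open("system_instructions_v1.5.0.txt").read()  # placeholder file

model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=SUPER_EGO_RULES,
)

def answer(question: str, id_store) -> str:
    # Id: surface only the few memories relevant to this moment,
    # rather than feeding the model the whole 800k-token archive.
    memories = id_store.search(question, top_k=5)
    # Ego: Gemini reasons over a small, fresh active context.
    prompt = "\n\n".join(memories) + "\n\nUser: " + question
    return model.generate_content(prompt).text
```

The point of the shape, as I read the post, is that the Super-Ego rides along on every call while only a thin slice of the Id ever enters the Ego's active context.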
I wrote a detailed breakdown of this architecture on Medium. I'd love to hear your thoughts on this "Pseudo-Human" approach.
Full Article: https://medium.com/@office.dosanko/project-bhavanga-building-the-akashic-records-for-ai-without-fine-tuning-1ceda048b8a6
GitHub: https://github.com/dosanko-tousan/Gemini-Abhidhamma-Alignment
DosankoTousan•1h ago
I am the author of this article. I'm a non-coder (architect) based in Japan.
Over the past 11 months, I've been experimenting with Gemini 1.5 Pro to solve the "Context Dilution" problem (where the AI gets "drunk" and hallucinates in long contexts).
Instead of fine-tuning, I applied the cognitive model of Abhidhamma (Ancient Buddhist Psychology) to the system architecture.
The Architecture:
1. Super-Ego: System Instructions v1.5.0 (Logic Filter)
2. Ego: Gemini 1.5 Pro (Processor with limited active context)
3. Id: Vector DB (Deep storage of 800k+ tokens; a toy retrieval sketch follows this list)
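For readers wondering what the Id layer might look like mechanically, here is a self-contained toy stand-in: each past exchange is embedded once and recalled by cosine similarity. A real deployment would use an actual vector database, and the embedding model name below is my assumption; the post doesn't specify one.

```python
import numpy as np
import google.generativeai as genai
# assumes genai.configure(api_key=...) has already been called

def embed(text: str) -> np.ndarray:
    # text-embedding-004 is an assumed choice; the post doesn't say
    # which embedding model backs the Id layer.
    result = genai.embed_content(model="models/text-embedding-004", content=text)
    return np.asarray(result["embedding"])

class IdStore:
    """Toy 'unconscious stream': past exchanges are embedded once,
    then recalled by similarity instead of kept in the active context."""

    def __init__(self):
        self.texts: list[str] = []
        self.vecs: list[np.ndarray] = []

    def add(self, text: str) -> None:
        self.texts.append(text)
        self.vecs.append(embed(text))

    def search(self, query: str, top_k: int = 5) -> list[str]:
        q = embed(query)
        sims = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
                for v in self.vecs]
        best = np.argsort(sims)[::-1][:top_k]
        return [self.texts[i] for i in best]
```

In use, you would call `id_store.add(...)` after each exchange and `id_store.search(...)` before each new question, so the archive grows without the active context ever doing so.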
I wrote zero lines of code for this. I built it entirely through dialogue with the AI. I've open-sourced the System Instructions on GitHub (link in the article).
I'd love to hear your feedback on this "Pseudo-Human" approach.