The idea: Drop the relational DB. Everything is a Markdown file
Leads, contacts, email threads, client tech specs, and even system configs are stored as .md files (likely with YAML frontmatter for metadata). Redis is used purely as an indexing layer to map and search these files quickly.
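A record might look like the sketch below (the file layout and field names are hypothetical, not from the post), and a few lines of Python are enough to split the frontmatter from the body for indexing without a full YAML parser:

```python
# Hypothetical lead record as a Markdown file with YAML frontmatter.
import re

LEAD_MD = """\
---
id: lead-0042
type: lead
company: Acme Corp
status: qualified
---
# Acme Corp

Spoke with their CTO about the pilot. Wants pricing by Friday.
"""

def parse_frontmatter(text):
    """Split a Markdown document into (metadata dict, body).

    Handles only flat `key: value` pairs -- enough to feed an index.
    """
    match = re.match(r"---\n(.*?)\n---\n(.*)", text, re.DOTALL)
    if not match:
        return {}, text
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, match.group(2)

meta, body = parse_frontmatter(LEAD_MD)
print(meta["company"])  # -> Acme Corp
print(meta["status"])   # -> qualified
```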
Why do this? Because the primary consumer of this system isn't a human interacting with a complex GUI; it's an autonomous LLM agent.
LLM Native: Markdown is the most digestible format for LLMs. Instead of forcing the agent to write complex SQL queries to understand a client's history, it just reads a directory of plain text files.
Easy Replication: Scaling or backing up the "database" is as simple as rsync-ing a directory.
The "Alive" System: Inside this architecture lives a background LLM agent. When the system is idle, the agent reads these files, updates summaries, categorizes clients, schedules follow-ups, and builds a "memory" text file.
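An idle pass could be as simple as the following sketch, with the actual LLM call stubbed out (all function names and paths here are hypothetical, not from the post):

```python
# Sketch of an idle-time maintenance pass: scan the Markdown files,
# summarize each (the LLM call is stubbed), and rebuild a "memory" file.
import tempfile
from pathlib import Path

def summarize(text):
    # Stand-in for the LLM call; a real agent would send `text` to a model.
    return text.splitlines()[0][:60]

def idle_pass(data_dir, memory_file):
    notes = []
    for md in sorted(Path(data_dir).glob("*.md")):
        notes.append(f"- {md.name}: {summarize(md.read_text())}")
    Path(memory_file).write_text("\n".join(notes) + "\n")

# Demo on a throwaway directory.
tmp = tempfile.mkdtemp()
Path(tmp, "acme.md").write_text("# Acme Corp\nPilot discussion.\n")
idle_pass(tmp, Path(tmp) / "memory.md")
print((Path(tmp) / "memory.md").read_text())
```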
The Architecture:
Storage Layer: Local file system with Markdown files
Index Layer: Redis (updates its index when a file is modified)
Brain: LLM Agent that reads/writes files and acts as the system OS
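One plausible shape for the index layer is Redis sets keyed by frontmatter field:value, re-populated whenever a file changes. A dict of sets stands in for Redis below so the sketch runs without a server; with redis-py the equivalent calls would be r.sadd / r.srem for indexing and r.sinter for queries (the key scheme is my assumption, not the author's):

```python
# Index sketch: "field:value" -> set of file paths holding that value.
from collections import defaultdict

index = defaultdict(set)

def reindex(path, meta):
    """Update the index for one file after it is modified."""
    # Drop stale entries for this path first, then re-add current fields.
    for members in index.values():
        members.discard(path)
    for field, value in meta.items():
        index[f"{field}:{value}"].add(path)

reindex("leads/acme.md", {"type": "lead", "status": "qualified"})
reindex("leads/beta.md", {"type": "lead", "status": "cold"})

# Query: which files are qualified leads? (Redis: SINTER on the two keys.)
print(index["type:lead"] & index["status:qualified"])  # -> {'leads/acme.md'}
```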
The obvious red flags: I know there are immediate issues here: file locking when the agent and a human try to edit the same file, concurrency bottlenecks, inode exhaustion at scale, and no ACID guarantees at all.
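Advisory locks mitigate (but don't solve) the first problem; a minimal sketch, assuming a Unix host and that every writer cooperates, since flock does nothing against a process that skips it:

```python
# Advisory exclusive lock around an append, so two writers can't interleave.
# Unix-only (fcntl); the path and appended line are hypothetical.
import fcntl
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "acme.md")

with open(path, "a+") as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # block until we hold the exclusive lock
    f.write("- 2024-07-01: agent scheduled follow-up\n")
    f.flush()
    fcntl.flock(f, fcntl.LOCK_UN)   # release so the other editor can write
```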
But for an AI-first company where the agent is the backend, does a text-first architecture make more sense than a traditional RDBMS?
noemit•1h ago
My take is that AI-native apps with longevity will prioritize token efficiency, and I believe queries do that.