I've been building Chorus, a multi-agent system that takes a different approach from typical role-based agent frameworks (AutoGen, CrewAI, etc.).
The core idea: instead of giving agents "roles" (researcher, critic, writer), each agent reasons through an epistemological framework – a set of rules about what counts as valid knowledge, what questions to ask, and what reasoning moves are allowed/forbidden.
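To make that concrete, here's a rough sketch of how a framework could be represented as data and rendered into a system prompt. This is illustrative only (hypothetical field names, not Chorus's actual schema):

```js
// Hypothetical sketch: an epistemological framework as plain data that the
// backend could inject into an agent's system prompt each debate turn.
const metricFramework = {
  name: "Metric",
  validityTest: "A claim counts as knowledge only if it can be quantified and measured.",
  requiredQuestions: [
    "What is the measurable outcome?",
    "What baseline are we comparing against?",
  ],
  allowedMoves: ["cite a measurement", "propose an experiment"],
  forbiddenMoves: ["appeal to anecdote", "argue from intuition"],
};

// Render the framework into a system prompt for one debate turn.
function toSystemPrompt(framework) {
  return [
    `You reason strictly through the "${framework.name}" framework.`,
    `Validity test: ${framework.validityTest}`,
    `Always ask: ${framework.requiredQuestions.join(" ")}`,
    `Allowed moves: ${framework.allowedMoves.join(", ")}.`,
    `Forbidden moves: ${framework.forbiddenMoves.join(", ")}.`,
  ].join("\n");
}
```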
When you run a debate, frameworks with incompatible validity tests are forced to collide. A "Metric" agent (everything must be quantifiable) arguing with a "Storyteller" agent (context and lived experience matter) creates productive tension that surfaces trade-offs a single perspective would miss.
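A minimal sketch of what that collision loop might look like, reusing `toSystemPrompt` from the sketch above and assuming a generic `callModel` wrapper over whichever provider is in use (again illustrative, not the actual implementation):

```js
// Hypothetical debate loop: agents with incompatible frameworks take
// alternating turns over a shared transcript, so each response must engage
// with claims the other framework would reject.
async function runDebate(question, frameworks, callModel, rounds = 3) {
  const transcript = [];
  for (let round = 0; round < rounds; round++) {
    for (const framework of frameworks) {
      const reply = await callModel({
        system: toSystemPrompt(framework),
        user:
          `Question: ${question}\n\n` +
          `Debate so far:\n${transcript.join("\n")}\n\n` +
          `Respond from your framework and challenge any claim that fails your validity test.`,
      });
      transcript.push(`[${framework.name}] ${reply}`);
    }
  }
  return transcript;
}
```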
The interesting part: the system can detect when agents synthesize something that doesn't fit any existing framework – and extract it as a new "emergent framework." I've got 33 of these now, discovered through debates, not designed by me. Whether these are genuinely novel epistemologies or sophisticated pattern matching is an open question I'm still investigating.
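One plausible way to detect that kind of emergence is sketched below; `detectEmergentFramework`, the auditor prompt, and the JSON shape are assumptions for illustration, not the real extraction logic:

```js
// Rough sketch of emergence detection: ask a model to classify the debate's
// synthesis against the catalog of known frameworks, and extract a candidate
// new framework when nothing fits.
async function detectEmergentFramework(transcript, knownFrameworks, callModel) {
  const answer = await callModel({
    system: "You audit debate transcripts for their underlying epistemological assumptions.",
    user:
      `Known frameworks: ${knownFrameworks.map((f) => f.name).join(", ")}.\n` +
      `Transcript:\n${transcript.join("\n")}\n\n` +
      `Does the synthesis rely on a validity test not captured by any known framework? ` +
      `Reply with JSON only: {"fits": true|false, "newFramework": {"name": ..., "validityTest": ..., "allowedMoves": [...], "forbiddenMoves": [...]} | null}`,
  });
  const parsed = JSON.parse(answer);
  return parsed.fits ? null : parsed.newFramework;
}
```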
What it's not: consensus-seeking, voting, or "let's all agree." The goal is structured disagreement that produces insights.
Built with: Node.js backend, vanilla JS frontend, multiple LLM providers (Claude, GPT-4, Gemini, Mistral).
Live for waitlist signup at: https://chorusai.replit.app/
I'll send a beta code to the email used to sign up.
Feedback wanted: Is "epistemological frameworks" meaningfully different from good prompt engineering? Would love HN's honest take on whether this is genuine innovation or dressed-up multi-agent chat.