I built Prism AI because I was frustrated with the "wall of text" output typical of most AI research tools. While current LLMs are great at synthesis and citations, they often fail to communicate complex structural or numerical data effectively.
Prism AI is an open-source attempt to solve this by making the research process inherently visual.
Key Technical Details:
Orchestration: I'm using a "Plan-and-Execute" pattern powered by LangGraph. This allows the system to maintain a persistent state and perform recursive "gap analysis" on its own research.
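A framework-free sketch of that control flow (in LangGraph these would be graph nodes with a conditional edge; the names plan, execute_step, and find_gaps here are illustrative, not Prism AI's actual API):

```python
# Minimal sketch of a Plan-and-Execute loop with recursive gap analysis.
# All function names and state keys are illustrative stand-ins.

def plan(question):
    """Break a research question into concrete steps (stub planner)."""
    return [f"research: {question}", f"summarize: {question}"]

def execute_step(step, notes):
    """Run one step and append its findings to the shared state."""
    notes.append(f"done {step}")

def find_gaps(notes, rounds_left):
    """Gap analysis: decide whether the notes still miss anything.
    Stub: ask for one follow-up round, then stop."""
    return ["follow-up question"] if rounds_left > 0 else []

def plan_and_execute(question, max_rounds=1):
    state = {"notes": [], "todo": plan(question)}
    rounds = max_rounds
    while state["todo"]:
        step = state["todo"].pop(0)
        execute_step(step, state["notes"])
        if not state["todo"]:  # plan exhausted: re-check for gaps
            gaps = find_gaps(state["notes"], rounds)
            if gaps:
                rounds -= 1
                state["todo"] = [f"research: {g}" for g in gaps]
    return state["notes"]
```

The key point is the persistent state dict threaded through every step, and the loop that feeds gap-analysis output back into the plan.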
Concurrency: The research nodes are built with Python’s asyncio, allowing the system to scrape, crawl, and synthesize multiple sections of a report in parallel.
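The fan-out pattern is roughly this (research_section is a stand-in for the actual scrape/crawl/synthesize node, not code from the repo):

```python
import asyncio

async def research_section(title: str) -> str:
    """Stand-in for one research node: scrape, crawl, synthesize."""
    await asyncio.sleep(0.01)  # simulate network I/O
    return f"{title}: findings"

async def research_report(sections: list[str]) -> list[str]:
    # All sections run concurrently; gather preserves input order,
    # so the report assembles in the planned sequence.
    return await asyncio.gather(*(research_section(s) for s in sections))

results = asyncio.run(research_report(["Intro", "Methods", "Results"]))
```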
Visualization Engine: Rather than just generating Markdown, the agents are equipped with tools to generate 2D/3D illustrations, interactive animations, and dynamic charts. The system determines when a concept is better explained visually and generates the corresponding code on the fly.
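The "when to go visual" decision can be sketched as a routing step; in practice an LLM makes the call, and this stub just keys off the shape of the content (choose_renderer and the concept fields are hypothetical, not Prism AI's API):

```python
# Illustrative router: decide which rendering tool the agent should call.
# A real system would ask the LLM; this stub inspects the content shape.

def choose_renderer(concept: dict) -> str:
    """Pick a tool for a concept: chart, 3D illustration, or plain prose."""
    if concept.get("series"):    # numeric data -> dynamic chart
        return "chart"
    if concept.get("spatial"):   # structure/geometry -> 2D/3D illustration
        return "illustration"
    return "markdown"            # default: prose

def render(concept: dict) -> str:
    tool = choose_renderer(concept)
    if tool == "chart":
        # Here the agent would generate charting code on the fly;
        # we only record the routing decision.
        return f"<chart of {len(concept['series'])} points>"
    return concept.get("text", "")
```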
Self-Hostable: Fully Dockerized and runs with a Next.js frontend.
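A self-hosted setup along these lines would look roughly like the compose fragment below; the service names, paths, ports, and env vars are assumptions for illustration, not the repo's actual file:

```yaml
# Hypothetical compose layout -- service names, ports, and env vars
# are assumptions, not taken from the repository.
services:
  frontend:
    build: ./frontend        # Next.js app
    ports:
      - "3000:3000"
  backend:
    build: ./backend         # Python research agents
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
```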
I’m particularly interested in hearing how others are handling the "context drift" that happens in high-concurrency multi-agent systems. The code is MIT licensed.
GitHub: https://github.com/precious112/prism-ai-deep-research
Fh_•2h ago
Are user inputs supported in the planning phase? This would help avoid implementation drift.
PreciousH•2h ago
Not yet, but it's a feature that could be added to give the planning phase a human in the loop. Currently the planning agent handles all the planning on its own. Will definitely look into this, thanks.