Hi HN,
I built this because I’ve been using deep-search AI agents for research, but I always found the "wall of text" output exhausting to parse. When we learn from textbooks, we rely on diagrams, charts, and 2D/3D illustrations to build a mental model. I wanted an AI researcher that does the same.
The agent doesn't just write a report; it identifies concepts that are better explained visually and generates the code to render them inline.
The Tech Stack:
Frontend: Next.js for the UI and report rendering.
Real-time Server: Go. I chose Go to handle the high-concurrency needs of streaming agentic thoughts and managing WebSocket connections during long-running research tasks.
AI Background Worker: Python. This handles the agentic logic, using various LLMs to orchestrate the research, search, and code generation for the visuals.
Visualization Engine (rough sketch of the spec shape after this list):
3D: Three.js for interactive models and animations.
2D: p5.js for generative illustrations.
Charts: D3.js for data-heavy visualizations.
Diagrams: Mermaid.js for quick flowcharts and architecture.
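Roughly, each visual comes through as a small tagged payload like this (a simplified sketch; the names and fields here are illustrative, not the exact schema in the repo):

    // Illustrative only: a tagged union for the visuals the worker can emit.
    // Field names are made up for this example, not the repo's actual schema.
    type VisualizationSpec =
      | { kind: "three";   title: string; code: string }                  // Three.js scene code
      | { kind: "p5";      title: string; code: string }                  // p5.js sketch code
      | { kind: "d3";      title: string; code: string; data: unknown[] } // D3 chart code plus its data
      | { kind: "mermaid"; title: string; source: string };               // Mermaid diagram text

In this shape, the streaming layer can treat each visual as an opaque JSON frame sent alongside the agent's text.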
How it works:
The Python worker isn't just generating an image; it’s generating the actual code/schema for these libraries based on the research context. The Next.js client then sanitizes and renders these components dynamically within the report.
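To make the rendering side concrete, here's a simplified sketch of what the Mermaid path could look like on the client. It shows the general sanitize-then-inject idea, not the exact component in the repo:

    // Simplified sketch: render a worker-generated Mermaid diagram safely.
    // Assumes mermaid v10+ (promise-based render) and DOMPurify for sanitizing.
    import { useEffect, useState } from "react";
    import mermaid from "mermaid";
    import DOMPurify from "dompurify";

    mermaid.initialize({ startOnLoad: false });

    export function MermaidBlock({ source }: { source: string }) {
      const [svg, setSvg] = useState("");

      useEffect(() => {
        let cancelled = false;
        const id = "viz-" + Math.random().toString(36).slice(2); // unique element id for mermaid
        mermaid
          .render(id, source)
          .then((result) => {
            // Sanitize the generated SVG before injecting it into the report.
            if (!cancelled) setSvg(DOMPurify.sanitize(result.svg));
          })
          .catch(() => {
            // The model occasionally emits invalid Mermaid; fail quietly instead of crashing the report.
            if (!cancelled) setSvg("");
          });
        return () => { cancelled = true; };
      }, [source]);

      return <div dangerouslySetInnerHTML={{ __html: svg }} />;
    }

The Three.js and p5.js paths need more care than this, since those payloads are executable code rather than markup.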
I’m really looking for feedback on the "agentic" part of the visualization: specifically, how to better prompt the model to choose the right type of diagram for the data it finds.
GitHub:
https://github.com/precious112/prism_ai
I’ll be around to answer any questions about the architecture or the 3D generation logic!