What it does:
- Ingests canvas data and mirrors it into a graph-vector database (Helix)
- Performs semantic, relational, and spatial clustering of canvas elements
- Lets you query your diagrams with natural language via LLM-powered analysis
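To make the clustering idea a bit more concrete, here is a minimal sketch of one plausible way to score how related two canvas elements are by blending spatial proximity with embedding similarity. This is my own illustration of the concept, not Treyspace's actual scoring logic; the `CanvasElement` shape and the weights are made up.

```typescript
// Illustrative only: one plausible spatial + semantic relatedness score,
// not Treyspace's actual clustering logic. Shapes and weights are made up.
interface CanvasElement {
  id: string;
  x: number;           // canvas position
  y: number;
  embedding: number[]; // text embedding of the element's label/content
}

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Blend semantic similarity with spatial proximity (closer elements score higher).
function relatedness(a: CanvasElement, b: CanvasElement, maxDist = 1000): number {
  const semantic = cosine(a.embedding, b.embedding);   // roughly [-1, 1]
  const dist = Math.hypot(a.x - b.x, a.y - b.y);
  const spatial = Math.max(0, 1 - dist / maxDist);     // [0, 1]
  return 0.7 * semantic + 0.3 * spatial;               // arbitrary weights
}
```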
Why I built it: I found myself creating complex diagrams in Excalidraw but struggling to extract insights from them later. Traditional search doesn't understand spatial relationships or semantic connections between elements. Treyspace bridges that gap by treating your canvas as a knowledge graph.
Demo: https://app.treyspace.app/ (no API key required)
Key features:
- Works in in-memory mode by default (no DB setup needed)
- Optional Helix DB backend for production use
- OpenAI-compatible responses API
- SSE streaming for real-time analysis
- Usable as a library or standalone server
Example use case: Load an architecture diagram, ask "What are the security vulnerabilities in this design?" and get context-aware answers based on spatial proximity, element relationships, and semantic understanding.
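As an illustration of the standalone-server path and the features above, here's a minimal sketch of how a client might stream that question using the official `openai` Node SDK against an OpenAI-compatible responses endpoint with SSE. The base URL, model name, and the assumption that the server exposes the Responses API at `/v1` are placeholders on my part, not Treyspace's documented values; check the repo for the actual endpoint and auth.

```typescript
// Minimal sketch: stream a question about an ingested canvas through an
// OpenAI-compatible responses endpoint. The baseURL and model below are
// assumptions for illustration, not Treyspace's documented values.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "http://localhost:3000/v1", // hypothetical local Treyspace server
});

async function main() {
  const stream = await client.responses.create({
    model: "gpt-4o-mini", // placeholder model name
    input: "What are the security vulnerabilities in this design?",
    stream: true, // server-sent events, token by token
  });

  // Print answer text as delta events arrive over SSE.
  for await (const event of stream) {
    if (event.type === "response.output_text.delta") {
      process.stdout.write(event.delta);
    }
  }
  process.stdout.write("\n");
}

main().catch(console.error);
```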
The SDK and source code are MIT licensed and designed to be hacked on. I’ve tried to make setup as simple as possible (all you should need is an OpenAI API key).
Repo: https://github.com/L-Forster/treyspace-sdk
Would love feedback on the approach and to hear how you might use canvas-based RAG!