Previously I built Allyce AI (https://allyce.ai/), which answers questions, qualifies leads, and handles complaints on websites (AI for sales/SDR + customer support). When I started as an independent builder, the first thing I built was the GraphRAG pipeline that powers Allyce. But Allyce's customers also needed to analyze sales and ad performance data, something RAG isn't suited for - that's what led me to build Tabwise as a separate product.
The key insight was that general AI tools fail at data analysis because they skip three critical steps (a sketch of the pipeline follows the list):

1. Pre-processing: cleaning data, inferring a schema, and structuring context specifically for analytical queries

2. Context engineering: crafting prompts that understand data relationships, business context, and expected output formats

3. Post-processing: converting raw LLM outputs into properly formatted charts, executive summaries, and actionable insights
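To make those steps concrete, here's a minimal TypeScript sketch of such a pipeline. Everything in it (the types, the inferType heuristic, the JSON output shape) is illustrative, not Tabwise's actual internals:

    interface Dataset { columns: string[]; rows: Record<string, unknown>[] }

    type ColumnType = 'number' | 'string' | 'date';
    interface Column { name: string; type: ColumnType; nullable: boolean }

    // 1. Pre-processing: drop empty rows and infer a schema the LLM can reason about.
    function preprocess(raw: Dataset): { data: Dataset; schema: Column[] } {
      const rows = raw.rows.filter(r => Object.values(r).some(v => v != null && v !== ''));
      const schema = raw.columns.map(name => ({
        name,
        type: inferType(rows.map(r => r[name])),
        nullable: rows.some(r => r[name] == null),
      }));
      return { data: { columns: raw.columns, rows }, schema };
    }

    // Crude type inference heuristic; real code would be far more careful.
    function inferType(values: unknown[]): ColumnType {
      const present = values.filter(v => v != null);
      if (present.every(v => typeof v === 'number')) return 'number';
      if (present.every(v => !Number.isNaN(Date.parse(String(v))))) return 'date';
      return 'string';
    }

    // 2. Context engineering: fold the schema and expected output shape into the prompt.
    function buildPrompt(question: string, schema: Column[]): string {
      return [
        'You are a data analyst. Use only the columns listed below.',
        ...schema.map(c => `- ${c.name} (${c.type}${c.nullable ? ', nullable' : ''})`),
        `Question: ${question}`,
        'Respond as JSON: {"summary": string, "chart": {"type": string, "x": string, "y": string}}',
      ].join('\n');
    }

    // 3. Post-processing: turn raw LLM text into a chart spec plus summary.
    function postprocess(raw: string): { summary: string; chart: { type: string; x: string; y: string } } {
      return JSON.parse(raw); // real code would validate against a schema here
    }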
This pipeline is why Tabwise consistently outperforms ChatGPT (GPT-5 Thinking) and Claude (Sonnet 4) on data analysis tasks - it's not just the model, it's the entire system.
Tech stack: Next.js, Vercel AI SDK, and E2B. I automatically route each task to the best model based on its complexity (powered by Claude Sonnet 4 and OSS models via Fireworks AI).
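For the curious, the routing conceptually looks like this (a hedged sketch on top of the Vercel AI SDK; the isComplex heuristic and the model IDs are stand-ins, not the production logic):

    import { generateText } from 'ai';
    import { anthropic } from '@ai-sdk/anthropic';
    import { fireworks } from '@ai-sdk/fireworks';

    // Stand-in complexity check: multi-step analytical questions and wide
    // tables go to the stronger model; simple lookups go to a cheaper OSS model.
    function isComplex(question: string, columnCount: number): boolean {
      return /forecast|cohort|correlat|regression|compare|why/i.test(question) || columnCount > 20;
    }

    export async function analyze(prompt: string, question: string, columnCount: number) {
      const model = isComplex(question, columnCount)
        ? anthropic('claude-sonnet-4-20250514') // exact model ID may differ
        : fireworks('accounts/fireworks/models/llama-v3p1-70b-instruct'); // illustrative OSS pick
      const { text } = await generateText({ model, prompt });
      return text;
    }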
Right now I'm focused on nailing spreadsheets (.xlsx, .csv), but have more data source integrations planned based on what users actually need.
Would love your thoughts.