I’ve just pivoted *AISheeter* from a simple formula generator into a full AI Agent. Last year, it was just auto-complete; recently, with the help of Claude Opus, I rewrote it to handle multi-step workflows. Think of it as *Cursor, but for spreadsheets.*
The problem I often face: most existing tools (including Gemini in Sheets) treat every query as an isolated, one-off task. If you want to run a complex operation on your data, you have to prompt every single step manually. It’s tedious and stateless.
The solution: an agent that persists context per spreadsheet (across multiple sheets) and supports chained workflows (e.g., "Analyze raw data → Extract signals → Score priority"), where the AI remembers the output of step 1 when performing step 2.
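As a rough sketch of what "chained" means here (hypothetical names, with toy stand-ins for the actual model calls): each step's output is appended to a shared context array, so step 2 can see what step 1 produced.

```typescript
// Hypothetical sketch of a chained workflow: each step's output is
// appended to a shared context, so later steps can reference it.
type Step = { name: string; run: (context: string[]) => string };

function runWorkflow(steps: Step[]): string[] {
  const context: string[] = [];
  for (const step of steps) {
    // The agent sees everything produced so far, so step 1's
    // output is available when step 2 runs.
    context.push(step.run(context));
  }
  return context;
}

// Toy stand-ins for the real AI calls:
const outputs = runWorkflow([
  { name: "analyze", run: () => "raw data analyzed" },
  { name: "extract", run: (ctx) => `signals from: ${ctx[0]}` },
  { name: "score", run: (ctx) => `priority based on: ${ctx[1]}` },
]);
```

In the real app the `run` functions would be model calls via the AI SDK, but the shape of the loop is the same.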
*The Stack:*
- *Frontend/Backend:* Next.js 16, Vercel AI SDK
- *Database:* Supabase (for context persistence)
- *Integration:* Google Apps Script
- *Models:* BYOK (OpenAI, Anthropic, Gemini, Groq)
*The Challenge:* The hardest part was managing the conversation memory without blowing up token costs. We implemented a system that maps conversation threads to specific Spreadsheet IDs, allowing the agent to "recall" previous context without needing to re-ingest the entire sheet history for every request.
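A minimal in-memory sketch of that mapping (the real app persists it in Supabase; names here are hypothetical): instead of re-sending the full sheet history, each spreadsheet ID keeps a sliding window of recent turns plus a running summary of older ones, which caps tokens per request.

```typescript
// Hypothetical sketch: map spreadsheet IDs to compact conversation
// context instead of re-ingesting the entire history every request.
type ThreadContext = { summary: string; lastMessages: string[] };

const threads = new Map<string, ThreadContext>();

function recordTurn(spreadsheetId: string, message: string, maxMessages = 5) {
  const ctx = threads.get(spreadsheetId) ?? { summary: "", lastMessages: [] };
  ctx.lastMessages.push(message);
  // Keep only a sliding window of recent turns; older turns are
  // folded into the summary rather than resent verbatim.
  if (ctx.lastMessages.length > maxMessages) {
    ctx.summary += ctx.lastMessages.shift() + "; ";
  }
  threads.set(spreadsheetId, ctx);
}

function recall(spreadsheetId: string): ThreadContext | undefined {
  return threads.get(spreadsheetId);
}
```

The summary step here is just string concatenation for illustration; in practice you'd summarize with a cheap model call.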
*The Good Finding:* I do a lot of context engineering in the backend to manage tokens carefully, and I found that smaller models like *gpt-5-mini* and *Claude Haiku* handle this level of complexity surprisingly well when the context is structured correctly.
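To give a concrete (hypothetical) flavor of "structured correctly": rather than dumping raw sheet contents, the prompt can be assembled from clearly labeled sections, e.g. column headers, a few sample rows, the previous step's output, and the task itself.

```typescript
// Hypothetical sketch of structured context assembly: labeled
// sections instead of one undifferentiated blob of sheet data.
function buildPrompt(opts: {
  schema: string[];        // column headers only, not the full data
  sampleRows: string[][];  // a few representative rows
  priorStep?: string;      // output carried over from the previous step
  task: string;
}): string {
  const parts = [
    `## Columns\n${opts.schema.join(", ")}`,
    `## Sample rows\n${opts.sampleRows.map((r) => r.join(" | ")).join("\n")}`,
  ];
  if (opts.priorStep) parts.push(`## Previous step output\n${opts.priorStep}`);
  parts.push(`## Task\n${opts.task}`);
  return parts.join("\n\n");
}
```

Sending the schema plus a sample instead of the whole sheet is what keeps token counts small enough for the cheaper models.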
Link to repo: https://github.com/Ai-Quill/ai-sheeter
Download the extension: https://aisheeter.com/
I’d love feedback on the architecture—specifically how we’re handling the context window management.
Thanks