I kept hitting the same wall at work every time we needed to ship an AI feature. What looked like a week of work turned into picking a model, setting up a vector DB, managing embeddings, wiring up chat history, handling retries — none of it was the actual feature.
So I built Modular. You register a function that returns your app's data, then call ai.run() for one-shot features or ai.chat() for stateful conversation. Everything else — context management, embeddings, session history, model routing, retries — is handled.
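To make the shape of that concrete: since there's no SDK yet, here is a purely hypothetical sketch of what the described two-call interface could look like. Every name here (`Modular`, `register_data`, `ai.run`, `ai.chat`) is invented for illustration, and the model call is stubbed out; it only shows the division of labor the post describes, where the app supplies one data function and the library owns context and session state.

```python
# Hypothetical sketch only. The Modular SDK does not exist yet;
# all names and signatures here are invented for illustration.

class Modular:
    def __init__(self):
        self._data_fn = None   # app-supplied data source
        self._history = []     # session history for chat()

    def register_data(self, fn):
        # The app registers one function that returns its data.
        # In the described design, embeddings/retrieval would
        # happen behind this boundary.
        self._data_fn = fn
        return fn

    def run(self, prompt):
        # One-shot feature: assemble context, call the model
        # (stubbed here as a formatted string).
        context = self._data_fn() if self._data_fn else []
        return f"answer({prompt!r}, ctx={len(context)} items)"

    def chat(self, message):
        # Stateful conversation: history is kept internally,
        # so the caller never manages it.
        self._history.append(("user", message))
        reply = self.run(message)
        self._history.append(("assistant", reply))
        return reply

ai = Modular()

@ai.register_data
def get_docs():
    return ["pricing.md", "faq.md"]

print(ai.run("summarize our pricing"))
print(ai.chat("what changed last week?"))
```

The point of the sketch is the surface area, not the internals: the app writes one function and two calls, and everything listed above (context, embeddings, history, routing, retries) lives inside the library.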
MCP-native from day one. Works with Claude, GPT-4o, and Gemini.
Still early — collecting feedback before building the full SDK. Would love to hear if others have hit this same wall, or if you think I'm solving the wrong problem.
Comments
modular_dev•1h ago
I'm a lead engineer and we've been trying to add AI features to our product for the past few months. Every single time it turned into the same rabbit hole — vector DBs, embeddings, context management, session history. None of it was the actual feature we were trying to ship.
Modular is my attempt to hide all of that behind two function calls. You tell it how to get your data; it handles everything else.
Still very early — no SDK yet. Mainly want to know if other devs have hit this same wall or if I'm the only one who found this painful.