I built QWED – a verification layer that sits between your LLM and production.
The idea: Don't fix hallucinations, verify them. If AI output can't be mathematically proven, it doesn't ship.
11 specialized engines, including the five below (illustrative sketches follow the list):
- Math (SymPy) – verify calculations
- Logic (Z3 SMT) – formal proofs
- SQL (SQLGlot) – detect injection/dangerous queries
- Code (AST) – security analysis + taint tracking
- Facts (KB) – entity verification without calling an LLM
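Roughly the shape of the Math and Logic checks, as a minimal sketch using SymPy and Z3 directly; `verify_math` and `verify_logic` are made-up names for illustration, not QWED's actual API:

```python
# Illustrative sketch, not QWED's real code. SymPy checks that an LLM's
# algebra simplifies to the claimed result; Z3 proves an implication by
# showing its negation has no counterexample.
import sympy as sp
from z3 import And, Implies, Ints, Not, Solver, unsat

def verify_math(expression: str, claimed: str) -> bool:
    """True iff `expression` and `claimed` are symbolically equal."""
    return sp.simplify(sp.sympify(expression) - sp.sympify(claimed)) == 0

def verify_logic() -> bool:
    """Prove: x > 2 and y > 3 implies x + y > 5."""
    x, y = Ints("x y")
    solver = Solver()
    solver.add(Not(Implies(And(x > 2, y > 3), x + y > 5)))
    return solver.check() == unsat  # no counterexample => theorem holds

print(verify_math("(x + 1)**2", "x**2 + 2*x + 1"))  # True
print(verify_logic())                               # True
```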
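And a similarly simplified sketch of the SQL and Code checks; the real engines do far more than this, and `check_sql`/`check_code` are hypothetical names:

```python
# Illustrative sketch, not QWED's real code. SQLGlot parses SQL into an AST
# so destructive statements can be flagged; Python's ast module flags risky
# calls in generated code.
import ast
import sqlglot
from sqlglot import expressions as exp

def check_sql(query: str) -> list[str]:
    """Flag DROP/DELETE statements anywhere in the parsed query."""
    tree = sqlglot.parse_one(query)
    return [type(node).__name__ for node in tree.find_all(exp.Drop, exp.Delete)]

RISKY_CALLS = {"eval", "exec", "system"}

def check_code(source: str) -> list[str]:
    """Flag calls like eval(), exec(), os.system() in generated Python."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
            if name in RISKY_CALLS:
                hits.append(name)
    return hits

print(check_sql("DROP TABLE users"))                   # ['Drop']
print(check_code("import os\nos.system('rm -rf /')"))  # ['system']
```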
Works with ANY LLM – OpenAI, Claude, Gemini, or local models via Ollama ($0).
Model-agnostic: Your LLM choice, our verification.
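To make the gating idea concrete, here's a toy end-to-end flow. `call_llm` is a placeholder for whichever provider you wire in, and the whole thing is a sketch of the concept rather than the actual interface:

```python
# Toy gate: the answer ships only if it survives a deterministic check.
# `call_llm` is a stub; swap in OpenAI, Claude, Gemini, or Ollama calls.
import sympy as sp

def call_llm(prompt: str) -> str:
    return "x**2 + 2*x + 1"  # pretend this came back from the model

def ship_if_verified(prompt: str, reference: str) -> str:
    answer = call_llm(prompt)
    # Answer must be symbolically equal to `reference`, or it doesn't ship.
    if sp.simplify(sp.sympify(answer) - sp.sympify(reference)) != 0:
        raise ValueError("unverified output; refusing to ship")
    return answer

print(ship_if_verified("Expand (x + 1)**2", "(x + 1)**2"))
```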
Happy to answer questions about deterministic AI verification!
ninadpathak•1h ago
Good product, but "100% deterministic" is marketing; the claim breaks down at the Fact engine (TF-IDF for fact verification), the Reasoning engine, and Image verification. The Math and Logic layers built on SymPy/Z3 are genuinely deterministic. But TF-IDF is just keyword-document similarity: it will miss context, evolving information, and non-obvious logical chains, so it can't deterministically verify facts. What you're really building is a strong gate for verifiable domains (math, structured logic, code security) layered on top of weaker heuristics for fuzzy domains. That's honest, useful work. The positioning should match: "deterministic where computable, high-confidence verification elsewhere." As it stands, teams will trust the whole output with 100% confidence when only parts of it earn that.
rahuldass•1h ago
Thanks for your input. It's deterministic in the sense that there are no learned embeddings: TF-IDF weights are exact corpus statistics, so the same input always produces the same score; nothing relies on vibes. Still figuring out how to make the Fact engine better. If you can suggest improvements, that would be great.
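For context, the Fact check is conceptually close to this (heavily simplified, not the real engine):

```python
# The determinism claim, illustrated: TF-IDF weights are exact corpus
# statistics, so the same KB and query always yield the same scores
# (no learned embeddings, no sampling). Sketch only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

kb = ["Paris is the capital of France.", "Berlin is the capital of Germany."]
claim = ["The capital of France is Paris."]

vec = TfidfVectorizer().fit(kb)
scores = cosine_similarity(vec.transform(claim), vec.transform(kb))[0]
print(scores)  # identical on every run: deterministic, though still only
               # keyword overlap, which is the limitation raised above
```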