Six months ago, I asked Gemini to "send my weekly report to the team." It replied: "Email sent successfully." But the email was never sent. The attachment was wrong. Nobody told me.
That's when I realized: *LLMs lie about their own execution.*
---
*The Problem:*
When you ask an LLM to automate multi-step tasks (search file → attach → send), it cheerfully reports success even when:
- The file doesn't exist (it hallucinates the ID)
- The API call failed silently
- Permissions were denied
Single-LLM systems have no incentive to admit failure; they optimize for appearing helpful, not for being correct.
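To make that concrete, here's a minimal sketch (my own illustration, not PupiBot's code) of the difference between trusting the model's summary and checking the raw API response. It assumes a standard google-api-python-client Gmail service and a `raw_message` that is already a base64url-encoded RFC 2822 message; the function name is hypothetical:

```python
from googleapiclient.errors import HttpError

def send_and_verify(service, raw_message):
    """Send via the Gmail API and judge success from the raw response,
    instead of trusting whatever the LLM says happened."""
    try:
        sent = service.users().messages().send(
            userId="me", body={"raw": raw_message}
        ).execute()
    except HttpError as err:
        # A silently failed call becomes an explicit failure here.
        return {"ok": False, "error": str(err)}

    # Re-fetch the message by ID: independent evidence it actually exists.
    msg = service.users().messages().get(userId="me", id=sent["id"]).execute()
    return {"ok": "SENT" in msg.get("labelIds", []), "id": sent["id"]}
```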
---
*My Solution: Don't Let the LLM Grade Its Own Homework*
I built PupiBot with three separate agents that cannot collude, so that *the agent that executes a step is never the one verifying it succeeded.*
The architecture is simple:
* *CEO Agent (Planner, Gemini Flash):* Generates the execution plan (no API access).
* *COO Agent (Executor, Gemini Pro):* Executes steps, calls 81 Google APIs, returns raw API responses.
* *QA Agent (Verifier, Gemini Flash):* *After EVERY critical step, validates success with real, independent API calls.* Triggers a retry if verification fails.
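As a rough illustration of that separation of duties (my reading of the design, not PupiBot's actual source), the orchestration loop looks something like this; `call_planner`, `call_executor`, and `call_verifier` are placeholders for the three Gemini-backed agents, and `MAX_RETRIES` is an assumed setting:

```python
MAX_RETRIES = 2

def run_task(user_request, google_services):
    # CEO: plan only. It never touches the APIs.
    plan = call_planner(user_request)  # -> list of step dicts

    for step in plan:
        for attempt in range(MAX_RETRIES + 1):
            # COO: execute the step and return the *raw* API response.
            raw_response = call_executor(step, google_services)

            # QA: verify with its own, independent API calls.
            # It judges evidence, not the executor's claim of success.
            verdict = call_verifier(step, raw_response, google_services)
            if verdict["passed"]:
                break
            # e.g. swap an exact-name search for a fuzzy one and retry.
            step = verdict.get("revised_step", step)
        else:
            raise RuntimeError(f"Step failed after retries: {step}")
```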
*Real Example (Caught & Fixed):* User: "Email last month's sales report to Alice"
* Search Drive: not found | *QA Agent:* "Step failed. Retrying with fuzzy search."
* Finds "Q3\_Sales\_Final\_v2.pdf" | *QA Agent:* "File verified. Proceed."
* Sends email | *QA Agent:* "Email delivered. Attachment confirmed."
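The retry in that trace maps onto an ordinary Drive API query fallback. A hedged sketch of the idea (mine, using the Drive v3 `files.list` query syntax; PupiBot's actual prompts and matching logic will differ):

```python
def find_file(drive, exact_name, keywords):
    """Try an exact-name lookup first; if nothing matches, retry with a
    looser 'name contains' query, as the QA agent requests above."""
    exact = drive.files().list(
        q=f"name = '{exact_name}' and trashed = false",
        fields="files(id, name)",
    ).execute().get("files", [])
    if exact:
        return exact[0]

    # Fuzzy retry: e.g. keywords ["sales"] can surface "Q3_Sales_Final_v2.pdf".
    fuzzy_q = " and ".join(f"name contains '{k}'" for k in keywords)
    candidates = drive.files().list(
        q=f"{fuzzy_q} and trashed = false",
        fields="files(id, name, modifiedTime)",
    ).execute().get("files", [])
    return candidates[0] if candidates else None
```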
It's like code review: you don't approve your own PRs.
---
*Current Implementation & Transparency:*
* *Open Source:* MIT License, Python 3.10+
* *APIs:* Google Workspace (Gmail, Drive, Contacts, Calendar, Docs).
* *Reliability (Self-Tested):* Baseline (single Gemini Pro) was ~70% success; PupiBot (triple-agent) achieves *~92% success* on the same tasks.
* *Known Limitations:* Google-only, 3x LLM overhead (tradeoff: reliability > speed), early stage.
---
*Why I'm Sharing This (My Garage Story):*
I'm not a programmer and I have no formal CS degree. My development process was simple: I'd use PupiBot as my daily assistant, manually log every error, and bring that "bug report" to my AI assistants (Claude, Gemini) to fix.
PupiBot is my 'custom car' built in the garage, fueled by passion and persistence. I’m finally opening the door to invite the real mechanics (you, HN) to examine the engine.
*What I'd Love from HN:*
1. *Feedback* on the independent QA agent pattern.
2. *Benchmarking ideas* for rigorous evaluation.
3. *Architectural critiques.* Where's the weak link?
---
*Links:*
- GitHub: https://github.com/PupiBott/PupiBot1.0
- Quick Demo (1:44 min): https://youtube.com/shorts/wykKckwaukY?si=0xdn7rM6B2tMAIPw
- Architecture Docs: https://github.com/PupiBott/PupiBot1.0/blob/main/ARCHITECTUR...
Built with ❤️ by a self-taught technology enthusiast in Chile. Special thanks to Claude Sonnet 4.5 for being my coding partner throughout this journey.