Except it was obvious to my cofounder I wasn't really validating anything. LLMs are incredibly good at agreeing with you in subtle ways, especially when you feed them context that already reflects your thoughts. I'd ask "Does this make sense?" and get a beautifully worded essay about why yes, obviously, this is the best thing ever.
I was using AI as an echo chamber without noticing. It was like a game: you bias the model a little, it biases you a little back.
So my cofounder hacked together a tiny internal tool over the weekend to break that loop. Instead of one model validating my thinking, it puts multiple personas with different expertise in the same conversation. They naturally push back on each other.
It's basically a rubber duck that argues with itself.
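For the curious, the core loop is simple. Here's a rough sketch of the idea (persona names, prompts, and the `ask_model` stub are illustrative placeholders, not our actual code):

```python
# Minimal sketch of a multi-persona "roundtable" loop. The key trick:
# every persona sees the FULL transcript, including the other personas'
# replies, so disagreements compound instead of echoing the founder.

PERSONAS = {
    "Skeptic": "You are a skeptical engineer. Challenge weak assumptions.",
    "Advocate": "You are an optimistic product thinker. Build on strong ideas.",
    "Analyst": "You are a data-driven analyst. Ask for evidence.",
}

def ask_model(system_prompt: str, transcript: list[str]) -> str:
    """Placeholder for a real chat-completion call (OpenAI, Anthropic, etc.)."""
    return f"[reply shaped by: {system_prompt.split('.')[0]}]"

def roundtable(idea: str, rounds: int = 2) -> list[str]:
    transcript = [f"Founder: {idea}"]
    for _ in range(rounds):
        for name, prompt in PERSONAS.items():
            reply = ask_model(prompt, transcript)
            transcript.append(f"{name}: {reply}")
    return transcript

log = roundtable("Should we pivot to an AI debate tool?")
```

With a real model behind `ask_model`, each persona's system prompt pulls it toward a different failure mode to probe, which is what keeps any single one from just agreeing with you.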
We gave it some UI and called it Roundtable: https://roundtable.ovlo.ai
Would love to hear what y'all think!