Your transition from rigidly engineered workflows to systems embracing raw intelligence feels a lot like the shift from handcrafted speech models to deep neural nets.
I wonder how much more human intuition you have to scrape out before you're future-proofed.
That said, there's still a subtle tension here: human workflows encode intention in a way chess doesn't, so purely scaling compute might underestimate how much structured intent matters. Perhaps the final answer isn't "more computation" alone, but more computation guided by a minimal yet essential scaffolding of human intent.
notanaiagent•2h ago
After 2 years building virtual humans, here are our 3 biggest mistakes:
1. We underestimated how fast intelligence scales. We spent months building scaffolding so our agents couldn’t break things. A full week just to stop one from deleting a user’s inbox.
Then intelligence started doubling every ~6 months. The guardrails became the bottleneck. Now we spend more time removing constraints than adding them.
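To make the guardrail point concrete, here's a minimal sketch of the kind of scaffolding we kept writing (hypothetical names; `DESTRUCTIVE` and `guarded` are illustrative, not our actual code): a hard blocklist wrapped around agent tool calls, which eventually costs more than it saves.

```python
# Hypothetical guardrail: block a fixed set of destructive tool
# names outright, pass everything else through to the real tool.
DESTRUCTIVE = {"delete_inbox", "drop_table"}

def guarded(tool_name, fn, *args, **kwargs):
    """Run a tool call only if it isn't on the destructive blocklist."""
    if tool_name in DESTRUCTIVE:
        raise PermissionError(f"blocked destructive tool: {tool_name}")
    return fn(*args, **kwargs)
```

The problem with this style is exactly what the post describes: every new capability needs a new entry, review, and test, so the list itself becomes the bottleneck as the model gets more capable.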
2. We treated memory like an engineering problem. Vector databases solved recall. But human memory isn’t just input → output.
It's semantic search plus graph traversal, backwards and forwards. We tried every chunking strategy and embedding model.
Nothing worked until we rebuilt memory to mirror how humans actually think.
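A minimal sketch of what "semantic search plus graph traversal" can look like (toy code, not their production system; `recall`, the memory dict, and the edge dict are all hypothetical): embedding similarity picks seed memories, then a short walk over explicit links pulls in related memories a pure vector lookup would miss.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recall(query_vec, memories, edges, hops=1, top_k=2):
    """memories: id -> embedding; edges: id -> linked ids (both directions).

    Seed with the top_k most similar memories, then expand along
    graph edges for `hops` steps to gather associated context.
    """
    ranked = sorted(memories, key=lambda m: cosine(query_vec, memories[m]),
                    reverse=True)
    seeds = ranked[:top_k]
    seen = set(seeds)
    frontier = list(seeds)
    for _ in range(hops):
        frontier = [n for m in frontier
                    for n in edges.get(m, []) if n not in seen]
        seen.update(frontier)
    return seen
```

The design choice this illustrates: recall quality stops being only about the embedding model once links between memories carry part of the signal.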
3. We tried to encode intent into workflows. Zapier-style automation looks clean and reliable.
Until it isn’t. A “simple” daily brief became 6+ nodes. One failure downstream and I got a confident company summary… about a Montana farm.
Workflows don’t understand why they’re doing something. So they can’t catch their own mistakes.
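The failure mode above can be shown in a few lines (a toy illustration with hypothetical nodes, not the actual pipeline): each node trusts its input, so a wrong match upstream flows straight into a confident summary downstream, and nothing in the chain knows what the user actually asked for.

```python
def lookup_company(domain):
    # Imagine this node silently matching the wrong record,
    # e.g. a Montana farm instead of the requested startup.
    return {"name": "Big Sky Farm", "state": "Montana"}

def summarize(record):
    # This node has no notion of the original intent, so it
    # happily produces a fluent summary of whatever it was given.
    return f"{record['name']} is a promising company in {record['state']}."

def daily_brief(domain):
    # The chain succeeds mechanically while failing semantically.
    return summarize(lookup_company(domain))
```

Nothing here errors out; every node "works", which is exactly why the mistake survives to the final output.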
The lesson: treat the model as the system, not a component inside it.
Still unlearning this.