What breaks first when you try to build real-world AI agents

1•raghavchamadiya•1d ago
I’ve been working on AI agents outside of demos and toy tasks, and a pattern keeps repeating: what breaks first is rarely model quality.

A few failure modes showed up almost immediately.

The biggest one was memory. Long-term memory sounds clean on paper, but in practice it drifts. Old assumptions leak into new tasks, context gets overweighted, and agents become confidently wrong in ways that are hard to debug. Resetting memory often improved results more than adding more of it.
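To make that concrete, here is a toy sketch of what I mean by resettable memory: a small durable store that survives across tasks, plus a scratch store that gets wiped before every new task. The names (ScopedMemory, remember, recall) are made up for illustration, not a real library.

    # Toy sketch: task-scoped memory that is dropped between tasks,
    # so stale assumptions from one task cannot leak into the next.
    # Names here are illustrative, not from any real framework.
    from dataclasses import dataclass, field

    @dataclass
    class ScopedMemory:
        # long-lived facts the agent should always keep (e.g. user preferences)
        durable: dict = field(default_factory=dict)
        # working notes for the current task only
        scratch: dict = field(default_factory=dict)

        def remember(self, key: str, value: str, durable: bool = False) -> None:
            (self.durable if durable else self.scratch)[key] = value

        def recall(self) -> dict:
            # scratch overrides durable within a task, never the other way around
            return {**self.durable, **self.scratch}

        def reset_for_new_task(self) -> None:
            # the "reset memory" lever: wipe working notes, keep durable facts
            self.scratch.clear()

    if __name__ == "__main__":
        mem = ScopedMemory()
        mem.remember("user_timezone", "UTC+5:30", durable=True)
        mem.remember("assumed_api_version", "v2")  # task-local assumption
        print(mem.recall())
        mem.reset_for_new_task()                   # next task starts clean
        print(mem.recall())                        # only durable facts survive

The point is less the data structure and more the discipline: deciding up front which facts deserve to outlive a task, and defaulting everything else to disposable.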

Tools were the second problem. Most agent architectures assume tools are deterministic and cheap. They aren’t. APIs fail, return partial data, change formats, or time out. Agents don’t just need tools; they need strategies for tool failure, retries, and graceful degradation.
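Roughly the pattern I mean: retries with backoff, a bounded number of attempts, and an explicit degraded path when the tool stays down. A toy sketch, where the flaky tool and the fallback answer are placeholders rather than any specific API:

    # Toy sketch: calling a flaky tool with retries, exponential backoff,
    # and graceful degradation to an explicit "reduced confidence" answer.
    import random
    import time

    class ToolError(Exception):
        pass

    def flaky_search_tool(query: str) -> str:
        # stand-in for a real API that fails or returns partial data
        if random.random() < 0.5:
            raise ToolError("upstream timeout")
        return f"results for {query!r}"

    def call_with_retries(fn, *args, retries: int = 3, base_delay: float = 0.5):
        for attempt in range(retries):
            try:
                return fn(*args)
            except ToolError:
                if attempt == retries - 1:
                    raise
                time.sleep(base_delay * 2 ** attempt)  # exponential backoff

    def answer(query: str) -> str:
        try:
            return call_with_retries(flaky_search_tool, query)
        except ToolError:
            # graceful degradation: say out loud that the tool was unavailable
            return f"Could not reach the search tool; answering {query!r} from prior context only."

    if __name__ == "__main__":
        print(answer("latest release notes"))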

Evaluation broke next. Benchmarks didn’t help much once tasks became multi-step and open-ended. We tried success heuristics, human review, and partial credit scoring. None were satisfying. Measuring “did this agent actually help” turned out to be far harder than measuring accuracy.
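By partial credit scoring I mean something like weighting the checkpoints within a task and scoring the fraction a run actually hit. A toy version below; the checkpoint names and weights are invented for illustration, not from any benchmark we used:

    # Toy sketch of partial-credit scoring for a multi-step task:
    # each checkpoint gets a weight, and a run scores the weighted
    # fraction of checkpoints it satisfied.
    from dataclasses import dataclass

    @dataclass
    class Checkpoint:
        name: str
        weight: float
        passed: bool

    def partial_credit(checkpoints: list[Checkpoint]) -> float:
        total = sum(c.weight for c in checkpoints)
        earned = sum(c.weight for c in checkpoints if c.passed)
        return earned / total if total else 0.0

    if __name__ == "__main__":
        run = [
            Checkpoint("found the right ticket", 0.2, True),
            Checkpoint("drafted a correct reply", 0.5, True),
            Checkpoint("updated the record", 0.3, False),
        ]
        print(f"partial credit: {partial_credit(run):.2f}")  # 0.70

It produces a number, but the number still doesn’t answer “did this actually help the user,” which is why none of these felt satisfying.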

Cost and latency quietly limited everything. An agent that feels smart at 10 dollars per task or 30 seconds per response is unusable in most real systems. Optimizing prompts and models mattered less than reducing unnecessary reasoning steps.
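One concrete way to frame that is a hard per-task budget on steps, dollars, and seconds, with the loop stopping as soon as any limit is hit. A toy sketch; the step cost and limits are placeholder numbers, and the real per-step work is elided:

    # Toy sketch: cap reasoning steps, cost, and wall-clock time per task,
    # and exit the loop as soon as any budget is exhausted.
    import time
    from dataclasses import dataclass

    @dataclass
    class Budget:
        max_steps: int = 6
        max_cost_usd: float = 0.50
        max_seconds: float = 15.0

    def run_agent(task: str, budget: Budget) -> str:
        start = time.monotonic()
        spent = 0.0
        for step_num in range(budget.max_steps):
            spent += 0.05  # placeholder per-step cost; in practice, token-based
            if spent > budget.max_cost_usd:
                return f"stopped on cost after {step_num + 1} steps"
            if time.monotonic() - start > budget.max_seconds:
                return f"stopped on latency after {step_num + 1} steps"
            # ... one reasoning/tool step would go here; return early if done ...
        return f"stopped on step limit ({budget.max_steps} steps)"

    if __name__ == "__main__":
        print(run_agent("summarize the incident channel", Budget()))

Making the budget explicit forced us to ask which reasoning steps were actually buying anything, which helped more than prompt tweaks.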

Finally, trust degraded faster than expected. Once an agent makes a confident but wrong decision, users mentally downgrade it. Recovering that trust is much harder than preventing the failure in the first place.

The main lesson so far is that building useful agents feels more like distributed systems work than model tuning. Failure handling, observability, and clear contracts matter more than clever prompting.
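By “clear contracts” I mean something like: every tool call returns a typed result that says whether it worked, why it failed, and how long it took, and that same object is what gets logged. A rough sketch, with field names that are illustrative rather than from any particular framework:

    # Toy sketch of a tool-call contract plus one structured log line per call,
    # so the agent loop and the logs see the same information.
    import json
    import time
    from dataclasses import dataclass, asdict
    from typing import Optional

    @dataclass
    class ToolResult:
        tool: str
        ok: bool
        value: Optional[str] = None
        error_code: Optional[str] = None  # e.g. "timeout", "partial_data"
        latency_ms: float = 0.0

    def call_tool(name: str, fn, *args) -> ToolResult:
        start = time.monotonic()
        try:
            value = fn(*args)
            result = ToolResult(tool=name, ok=True, value=value)
        except TimeoutError:
            result = ToolResult(tool=name, ok=False, error_code="timeout")
        result.latency_ms = (time.monotonic() - start) * 1000
        print(json.dumps(asdict(result)))  # one structured log line per call
        return result

    if __name__ == "__main__":
        call_tool("echo", lambda s: s.upper(), "hello")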

Curious how others are handling these tradeoffs, especially evaluation and memory management.