
A simple heuristic for agents: human-led vs. human-in-the-loop vs. agent-led

1•fletchervmiles•10mo ago
tl;dr - the more agency your agent has, the simpler your use case needs to be

Most, if not all, successful production use cases today are either human-led or human-in-the-loop. Agent-led is possible, but it requires a very simple use case.

---

Human-led:

An obvious example is ChatGPT. One input, one output. The model might suggest a follow-up or use a tool, but ultimately you're the one in command.

---

Human-in-the-loop:

The best example of this is Cursor (and other coding tools). Coding tools can do 99% of the coding for you, use dozens of tools, and are incredibly capable. But ultimately the human still gives the requirements, hits "accept" or "reject", and gives feedback on each interaction turn.

The last point is important, as it's a live recalibration.

Sometimes, though, this isn't enough. An example is the rollout of Sonnet 3.7 in Cursor: the mix of feedback loop vs model agency was off. Too much agency, not enough recalibration from the human. So users switched!

---

Agent-led:

This is where the agent leads the task end-to-end, and the user is just a participant. It's difficult because there's less recalibration, so the probability of something going wrong compounds with each turn… It's cumulative.

P(all good) = pⁿ

p = probability the agent works correctly on a single turn
n = number of turns / interactions

Ok… I'm going to use my product as an example. Not to promote it; I'm just very familiar with how it works.

It's a chat agent that runs short customer interviews. My customers can configure it based on what they want to learn (e.g. why a customer churned) and send it to their customers.

It's agent-led because:

→ as soon as the respondent opens the link, they're guided from there
→ at each turn the agent (not the human) is deciding what to do next

That means doing the right thing over 10 to 30 conversation turns (depending on config), i.e. correctly:

→ deciding whether to expand the conversation vs dive deeper
→ reflecting on current progress + context
→ traversing a bunch of objectives and asking questions that draw out insight (per the current objective)

Let's apply the formula above with:

→ n = 20 (number of conversation turns)
→ p = .99 (the agent does the right thing 99% of the time)

That equals P(all good) = 0.99²⁰ ≈ 0.82

So if I ran 100 such 20‑turn conversations, I'd expect roughly 82 to complete as per instructions and about 18 to stumble at least once.

Let's change p to 95%...

→ n = 20
→ p = .95

P(all good) = 0.95²⁰ ≈ 0.358

I.e. if I ran 100 such 20‑turn conversations, I’d expect roughly 36 to finish without a hitch and about 64 to go off‑track at least once.
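
A minimal sketch of that arithmetic in Python (nothing product-specific; "p_all_good" is just an illustrative name):

    # P(all good) = p ** n: the chance that every one of n turns goes right,
    # assuming each turn independently succeeds with probability p.
    def p_all_good(p: float, n: int) -> float:
        return p ** n

    for p in (0.99, 0.95):
        ok = p_all_good(p, n=20)
        print(f"p={p}: P(all good) = {ok:.3f} -> "
              f"~{round(ok * 100)} of 100 clean, ~{round((1 - ok) * 100)} stumble")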

My p score is high. I had to strip out a bunch of tools and simplify, but I got there. And for my use case, a failure is just a slightly irrelevant response, so it's manageable.

---

Conclusion:

Getting an agent to do the correct thing 99% of the time is not trivial.
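
The bar is even higher than it looks. This inversion isn't in the post, but it follows from the same formula: the per-turn accuracy needed for a target end-to-end success rate is target^(1/n).

    # Invert P(all good) = p ** n: solve for the per-turn accuracy p
    # required to hit a target end-to-end success rate over n turns.
    def required_p(target: float, n: int) -> float:
        return target ** (1 / n)

    # Even a modest 95% end-to-end goal over 20 turns demands ~99.7% per turn.
    print(f"{required_p(0.95, 20):.5f}")  # 0.99744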

You basically can't have a super complicated workflow. Yes, you can mitigate this by introducing other agents to check the work, but that adds latency.

There's always a tradeoff!

Know which category you're building in, and if you're going for agent-led, narrow your use case as much as possible.

Floyd is an enterprise-level world model

https://www.loom.com/share/7b3ba36113e446548f3a79cf5fc1e42c
1•tjarzu•1m ago•0 comments

Walk me through this "Safety Third" thing

https://mikerowe.com/2020/03/walk-me-through-this-safety-third-thing/
1•andsoitis•1m ago•0 comments

Perplexity Computer Is Groundbreaking

https://karozieminski.substack.com/p/perplexity-computer-review-examples-guide
2•Lunaboo•4m ago•0 comments

Jack Dorsey Blamed AI for Block's Layoffs. Skeptics Aren't Buying It

https://www.wsj.com/business/jack-dorseys-latest-far-out-bet-an-ai-future-with-fewer-employees-25...
1•nradov•4m ago•0 comments

A new 'uncertainty relation' for quantum measurement errors

https://phys.org/news/2026-03-uncertainty-quantum-errors.html
1•bikenaga•4m ago•0 comments

Building an Elite AI Engineering Culture in 2026

https://www.cjroth.com/blog/2026-02-18-building-an-elite-engineering-culture
1•mooreds•4m ago•0 comments

Idaho considers an 'apocalyptic' choice for disabled people and families

https://19thnews.org/2026/03/idaho-medicaid-budget-cuts-disability-programs/
1•mooreds•5m ago•0 comments

Where AI Agents Are Heading: What We Learned from Recent YC Startups

https://e2b.dev/blog/yc-companies-ai-agents
1•tizkovatereza•8m ago•2 comments

Show HN: AgentCost – Track, control, and optimize your AI spending (MIT)

https://github.com/agentcostin/agentcost
2•agentcostin•9m ago•0 comments

Spectre I prevents smart devices and AI recorders from picking up your voice

https://www.deveillance.com/
1•tnorthcutt•10m ago•1 comments

Show HN: VideoEvaluator, a Video Comparison Tool

https://www.videoevaluator.com/
1•ekinertac•10m ago•0 comments

Show HN: AI tool that brutally roasts your AI agent ideas

https://whycantwehaveanagentforthis.com
1•Sattyamjjain•12m ago•0 comments

Toyota Once Used a Fake Dining Room Set to Teach Execs How Big Americans Are

https://www.thedrive.com/news/toyota-once-used-a-fake-dining-room-set-to-teach-executives-how-big...
2•coloneltcb•12m ago•0 comments

Show HN: Agent Action Protocol (AAP) – MCP got us started, but is insufficient

https://github.com/agentactionprotocol/aap/
2•hank2000•13m ago•0 comments

Deveillance Spectre I blocks smart devices and AI recorders

https://twitter.com/aidaxbaradari/status/2028864606568067491
3•geekfactor•14m ago•1 comments

The Attention Tax

https://www.afox.dev/posts/the-attention-tax
1•wtfox•15m ago•0 comments

Attacks on GPS Spike Amid US and Israeli War on Iran

https://www.wired.com/story/gps-attacks-on-ships-spike-amid-the-us-and-israeli-war-on-iran/
1•speckx•16m ago•0 comments

Show HN: The Nova: Evolution for Evolution's Sake

https://fuchsia-broad-flamingo-157.mypinata.cloud/ipfs/bafybeihc6mom4oowr6afzofxi7gzpnrsi3smaruur...
1•Novaga•16m ago•0 comments

Justice Department Seeks to Reverse Course and Defend Law Firm Sanctions

https://www.wsj.com/us-news/law/justice-department-seeks-to-reverse-course-and-defend-law-firm-sa...
3•JumpCrisscross•17m ago•0 comments

Why MAGA suddenly loves solar power

https://www.washingtonpost.com/business/2026/03/02/katie-miller-solar-power-trump/
1•standeven•19m ago•0 comments

Show HN: RUOK – Self-hosted personal OKR system with AI-powered analytics

https://github.com/zli117/RUOK/
3•lzl1234•19m ago•0 comments

Show HN: VibePod CLI – Run AI agents with isolation and better observability

https://vibepod.dev/
2•nezhar•20m ago•0 comments

Block's Jack Dorsey thinks AI can do 40% of his job

https://www.theguardian.com/technology/2026/mar/03/jack-dorsey-block-ai-worker-jobs
2•skor•22m ago•0 comments

Show HN: A runtime authorization layer for AI agents

2•rkka•23m ago•0 comments

Bash Is Not Enough: Why Large-Scale CI Needs an Orchestrator

https://www.iankduncan.com/engineering/2026-02-06-bash-is-not-enough/
2•PaulHoule•25m ago•0 comments

Why Your Company's Digital Sovereignty Is a House of Cards

https://medium.com/@gastonbehar/why-your-companys-digital-sovereignty-is-a-house-of-cards-556b31c...
2•gastonbehar•26m ago•1 comments

Why Test Environments Fail and What Top Teams Do to Avoid the Chaos

https://sdtimes.com/test/why-test-environments-fail-and-what-top-teams-do-to-avoid-the-chaos/
2•mikece•26m ago•0 comments

Easterlin Paradox

https://en.wikipedia.org/wiki/Easterlin_paradox
2•gessha•26m ago•0 comments

Claude is an Electron App because we've lost native

https://tonsky.me/blog/fall-of-native/
3•todsacerdoti•26m ago•0 comments

Show HN: OculOS – Any desktop app as a JSON API via OS accessibility tree

https://github.com/huseyinstif/oculos
2•stif1337•27m ago•0 comments