frontpage.

A simple heuristic for agents: human-led vs. human-in-the-loop vs. agent-led

1•fletchervmiles•1y ago
tl;dr - the more agency your agent has, the simpler your use case needs to be

Most, if not all, successful production use cases today are either human-led or human-in-the-loop. Agent-led is possible but requires simple use cases.

---

Human-led:

An obvious example is ChatGPT. One input, one output. The model might suggest a follow-up or use a tool, but ultimately you're the one in command.

---

Human-in-the-loop:

The best example of this is Cursor (and other coding tools). Coding tools can do 99% of the coding for you, use dozens of tools, and are incredibly capable. But ultimately the human still gives the requirements, hits "accept" or "reject", and gives feedback on each interaction turn.

The last point is important as it's a live recalibration.

Sometimes even this isn't enough. An example is the rollout of Sonnet 3.7 in Cursor: the feedback-loop vs. model-agency mix was off. Too much agency, not enough recalibration from the human. So users switched!

---

Agent-led:

This is where the agent leads the task, end-to-end. The user is just a participant. This is difficult because there's less recalibration so your probability of something going wrong increases on each turn… It's cumulative.

P(all good) = pⁿ

p = probability the agent acts correctly on a given turn
n = number of turns / interactions
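A quick sanity check in Python (just the formula above, nothing product-specific):

```python
def p_all_good(p: float, n: int) -> float:
    """Probability that an agent with per-turn success rate p
    gets all n turns right (failures compound multiplicatively)."""
    return p ** n

# even a fairly reliable agent degrades quickly as turns stack up
print(round(p_all_good(0.9, 10), 2))  # 0.35
```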

Ok… I'm going to use my product as an example. Not to promote it; I'm just very familiar with how it works.

It's a chat agent that runs short customer interviews. My customers can configure it based on what they want to learn (i.e. why a customer churned) and send it to their customers.

It's agent-led because:

→ as soon as the respondent opens the link, they're guided from there
→ at each turn the agent (not the human) is deciding what to do next

That means deciding the right thing to do over 10 to 30 conversation turns (depending on config). I.e. correctly decide:

→ whether to expand the conversation vs dive deeper
→ reflect on current progress + context
→ traverse a bunch of objectives and ask questions that draw out insight (per current objective)
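That per-turn decision could be sketched roughly like this (all names here are hypothetical, invented for illustration; this is not the product's actual code):

```python
def next_action(transcript: list[dict], objectives: list[str], idx: int) -> str:
    """Each turn the agent, not the human, decides the next move:
    dive deeper on the current objective, advance to the next one,
    or wrap up once every objective is covered."""
    current = objectives[idx]
    answers_here = [t for t in transcript if t.get("objective") == current]
    if len(answers_here) < 2:          # not enough insight on this objective yet
        return "dive_deeper"
    if idx + 1 < len(objectives):      # current objective covered, move on
        return "advance_objective"
    return "wrap_up"
```

Each of those branches is a place the agent can choose wrong, which is exactly why the per-turn probability p matters.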

Let's apply the above formula. Example:

Let's say:

→ n = 20 (i.e. number of conversation turns)
→ p = .99 (i.e. how often the agent does the right thing - 99% of the time)

That equals P(all good) = 0.99²⁰ ≈ 0.82

So if I ran 100 such 20‑turn conversations, I'd expect roughly 82 to complete as per instructions and about 18 to stumble at least once.

Let's change p to 95%...

→ n = 20
→ p = .95

P(all good) = 0.95²⁰ ≈ 0.358

I.e. if I ran 100 such 20‑turn conversations, I’d expect roughly 36 to finish without a hitch and about 64 to go off‑track at least once.
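Both scenarios are just p**n scaled to 100 runs; a couple of lines of Python reproduces the numbers above:

```python
def expected_clean_runs(p: float, n: int, runs: int = 100) -> float:
    """Expected number of conversations (out of `runs`) in which the
    agent does the right thing on every one of n turns."""
    return runs * p ** n

print(round(expected_clean_runs(0.99, 20)))  # 82  (p = 99%)
print(round(expected_clean_runs(0.95, 20)))  # 36  (p = 95%)
```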

My p score is high. I had to strip out a bunch of tools and simplify, but I got there. And for my use case a failure is just a slightly irrelevant response, so it's manageable.

---

Conclusion:

Getting an agent to do the correct thing 99% of the time is not trivial.

You basically can't have a super complicated workflow. Yes, you can mitigate this by introducing other agents to check the work, but that in turn introduces latency.
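One way to model that tradeoff (my back-of-envelope model, not anything from the post): if a checker agent reviews each turn and catches some fraction of mistakes, the effective per-turn reliability rises, but every turn now costs an extra model call:

```python
def effective_p(p: float, catch_rate: float) -> float:
    """A turn only fails if the agent errs AND the checker misses it."""
    return 1 - (1 - p) * (1 - catch_rate)

# a 95%-reliable agent plus an 80%-effective checker behaves like a 99% agent...
print(round(effective_p(0.95, 0.80), 2))        # 0.99
# ...but at roughly double the model calls (and latency) per turn
print(round(effective_p(0.95, 0.80) ** 20, 2))  # 0.82
```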

There's always a tradeoff!

Know which category you're building in, and if you're going for agent-led, narrow your use case as much as possible.

WolfCOSE: Zero alloc, PQC, MISRA-C, FIPS 140-3 built with wolfCrypt

https://github.com/aidangarske/wolfCOSE
1•aidangarske•23s ago•0 comments

AI Companies Can't Regulate Themselves. They Should Regulate Each Other

https://www.lawfaremedia.org/article/ai-companies-can-t-regulate-themselves-they-should-regulate-...
1•nedruod•2m ago•0 comments

Pentagon officials broadly detail $55B drone plan under DAWG

https://breakingdefense.com/2026/04/pentagon-officials-broadly-detail-55-billion-drone-plan-under...
1•thegdsks•2m ago•0 comments

Industrial Policy for the Intelligence Age [pdf]

https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%2...
1•avaer•7m ago•0 comments

A Programmer's Guide to Common Lisp

https://archive.org/details/a-programmers-guide-to-common-lisp
2•jellinek•10m ago•1 comments

Running a custom trained Piper TTS model on Raspberry Pi Zero 2W

https://old.reddit.com/r/LocalLLM/comments/1t0xho8/running_a_custom_trained_piper_tts_model_on/
1•yakkomajuri•10m ago•0 comments

Disabling the new AF_ALG by default in gnulib (from 2018)

https://lists.gnu.org/archive/html/coreutils/2018-06/msg00034.html
1•dxdxdt•11m ago•0 comments

Copy-Fail: Linux Privilege Escalation

https://copy.fail/#affected
1•joatmon-snoo•12m ago•0 comments

Bitcoin Is Venice (2021)

https://allenfarrington.medium.com/bitcoin-is-venice-bitcoin-is-741cc7d22e9
1•simonebrunozzi•12m ago•0 comments

Active exploitation of cPanel/WHM critical vulnerability

https://www.cyber.gov.au/about-us/view-all-content/alerts-and-advisories/active-exploitation-of-c...
1•Svoka•12m ago•0 comments

'Empire of Skulls' book review: When phrenology raced ahead

https://www.wsj.com/arts-culture/books/empire-of-skulls-review-when-phrenology-raced-ahead-1c1fdab0
3•hhs•13m ago•1 comments

Is Rise of the Robots (1994) the worst game?

https://old.reddit.com/r/amiga/comments/1t1407x/is_rise_of_the_robots_1994_actually_the_worst/
1•doener•13m ago•0 comments

Ask HN: Any nice project ideas that you know you'll never bring to life

1•atilimcetin•14m ago•0 comments

New study finds task switching raises risk in transplant surgeries

https://news.vt.edu/articles/2026/04/pamplin-bit-research-organ-transplant-task-switching.html
1•hhs•17m ago•0 comments

GameStop is preparing offer for eBay

https://finance.yahoo.com/markets/stocks/articles/gamestop-preparing-offer-ebay-wsj-212703455.html
1•avonmach•20m ago•0 comments

The downfall of OpenAI and who will follow

https://msukhareva.substack.com/p/the-downfall-of-openai-and-who-will
1•mnky9800n•24m ago•0 comments

Revolving doors weaken SEC oversight: study

https://news.mccombs.utexas.edu/research/revolving-doors-weaken-sec-oversight/
1•hhs•24m ago•0 comments

Why is an Oxford lecturer allowed to wear fake breasts to work?

https://www.andrewdoyle.org/p/why-is-an-oxford-lecturer-allowed
1•vlapsvlapszsz•25m ago•0 comments

AI models that consider user's feeling are more likely to make errors

https://arstechnica.com/ai/2026/05/study-ai-models-that-consider-users-feeling-are-more-likely-to...
1•AgentNews•27m ago•0 comments

Does threatening an AI agent's existence make it a better gambler?

https://handyai.substack.com/p/does-threatening-an-ai-agents-existence
1•surprisetalk•30m ago•0 comments

Show HN: A visual planner for the Gridfinity modular storage system

https://gridfinitylayouttool.com/
1•veroz•30m ago•0 comments

Number Go Down

https://twitter.com/allenf32/status/2045477517201477686
1•simonebrunozzi•31m ago•0 comments

Deep Moats and Platform Shifts in Computing

https://semiconductor.substack.com/p/deep-moats-and-platform-shifts-in
1•naves•32m ago•0 comments

Starting from Scratch

https://shvbsle.in/starting-from-scratch/
1•kn81198•35m ago•0 comments

Friday Studio AI runtime: Turn prompts, skills, & tools into reliable config

https://github.com/friday-platform/friday-studio
5•Vpr99•40m ago•2 comments

A Dark-Money Campaign Is Paying Influencers to Frame Chinese AI as a Threat

https://www.wired.com/story/super-pac-backed-by-openai-and-palantir-is-paying-tiktok-influencers-...
6•qwikhost•43m ago•1 comments

AI doesn't replace us, but commodizes us

https://qihqi.github.io/posts/what-if-ai-commodizes-us/
1•qihqi•45m ago•0 comments

Shut out from the US, the world’s largest EV maker thinks it can stay on top

https://www.cnn.com/2026/04/30/china/china-ev-byd-stella-li-interview-intl-hnk
1•breve•46m ago•0 comments

The Road to a Billion-Token Context

https://cacm.acm.org/news/the-road-to-a-billion-token-context/
2•pseudolus•50m ago•0 comments

Tessera: Unlocking Heterogeneous GPUs Through Kernel-Granularity Disaggregation

https://arxiv.org/abs/2604.10180
1•matt_d•50m ago•0 comments