
A simple heuristic for agents: human-led vs. human-in-the-loop vs. agent-led

1•fletchervmiles•11mo ago
tl;dr - the more agency your agent has, the simpler your use case needs to be

Most, if not all, successful production use cases today are either human-led or human-in-the-loop. Agent-led is possible, but only for simple use cases.

---

Human-led:

An obvious example is ChatGPT. One input, one output. The model might suggest a follow-up or use a tool, but ultimately you're in command.

---

Human-in-the-loop:

The best example of this is Cursor (and other coding tools). Coding tools can do 99% of the coding for you, use dozens of tools, and are incredibly capable. But ultimately the human still gives the requirements, hits "accept" or "reject", AND gives feedback on each interaction turn.

The last point is important as it's a live recalibration.

Sometimes even this isn't enough, though. An example is the rollout of Sonnet 3.7 in Cursor: the feedback-loop vs. model-agency mix was off. Too much agency, not enough recalibration from the human. So users switched!

---

Agent-led:

This is where the agent leads the task, end-to-end. The user is just a participant. This is difficult because there's less recalibration, so the probability of something going wrong increases with each turn… it's cumulative.

P(all good) = pⁿ

p = probability the agent does the right thing on a given turn
n = number of turns / interactions
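The compounding is easy to sanity-check directly (a minimal sketch; the function name is mine):

```python
def p_all_good(p: float, n: int) -> float:
    """Probability that an agent with per-turn success rate p
    gets all n independent turns right."""
    return p ** n

# Even a very reliable agent degrades quickly as turns stack up:
print(round(p_all_good(0.99, 20), 2))  # ≈ 0.82
print(round(p_all_good(0.95, 20), 2))  # ≈ 0.36
```

The independence assumption is optimistic; in practice one bad turn often makes the next one more likely to go wrong too.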

Ok… I'm going to use my product as an example (not to promote it; I'm just very familiar with how it works).

It's a chat agent that runs short customer interviews. My customers can configure it based on what they want to learn (i.e. why a customer churned) and send it to their customers.

It's agent-led because:

→ as soon as the respondent opens the link, they're guided from there
→ at each turn, the agent (not the human) decides what to do next

That means deciding the right thing to do over 10 to 30 conversation turns (depending on config). I.e. correctly decide:

→ whether to expand the conversation vs. dive deeper
→ reflect on current progress + context
→ traverse a bunch of objectives and ask questions that draw out insight (per current objective)

Let's apply the above formula. Example:

Let's say:

→ n = 20 (number of conversation turns)
→ p = .99 (the agent does the right thing 99% of the time)

That equals P(all good) = 0.99²⁰ ≈ 0.82

So if I ran 100 such 20‑turn conversations, I'd expect roughly 82 to complete as per instructions and about 18 to stumble at least once.

Let's change p to 95%...

→ n = 20
→ p = .95

P(all good) = 0.95²⁰ ≈ 0.358

I.e. if I ran 100 such 20‑turn conversations, I’d expect roughly 36 to finish without a hitch and about 64 to go off‑track at least once.
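To see how quickly reliability decays, here's a small sweep over per-turn accuracy and conversation length, using the same p-to-the-n formula (the specific grid of values is mine):

```python
# End-to-end reliability p**n for a few per-turn accuracies
# and conversation lengths.
for p in (0.90, 0.95, 0.99):
    cells = "  ".join(f"n={n}: {p**n:.2f}" for n in (10, 20, 30))
    print(f"p={p:.2f} -> {cells}")
# e.g. p=0.99 -> n=10: 0.90  n=20: 0.82  n=30: 0.74
```

Note how even 99% per-turn accuracy only keeps three-quarters of 30-turn conversations clean, while 90% is effectively unusable at that length.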

My p score is high. I had to strip out a bunch of tools and simplify, but I got there. And for my use case, a failure is just a slightly irrelevant response, so it's manageable.

---

Conclusion:

Getting an agent to do the correct thing 99% of the time is not trivial.

You basically can't have a super-complicated workflow. Yes, you can mitigate this by introducing other agents to check the work, but that introduces latency.
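One hedged way to quantify that checker tradeoff (my own toy model, not from the post): if a second agent catches a failed turn with probability c and the turn is retried once, the effective per-turn success rate becomes p + (1−p)·c·p, at the cost of an extra model call per checked turn.

```python
def effective_p(p: float, c: float) -> float:
    """Per-turn success when a checker catches a failure with
    probability c and the failed turn is retried once."""
    return p + (1 - p) * c * p

p, c, n = 0.95, 0.9, 20
print(f"{p**n:.2f} -> {effective_p(p, c)**n:.2f}")  # prints 0.36 -> 0.86
```

Under these made-up numbers, a good checker lifts a 20-turn conversation from ~36% to ~86% end-to-end reliability, which is exactly the latency-for-reliability trade described above.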

There's always a tradeoff!

Know which category you're building in and if you're going for agent-led, narrow your use-case as much as possible.
