
A simple heuristic for agents: human-led vs. human-in-the-loop vs. agent-led

1•fletchervmiles•9mo ago
tl;dr - the more agency your agent has, the simpler your use case needs to be

Most, if not all, successful production use cases today are either human-led or human-in-the-loop. Agent-led is possible, but only for simple use cases.

---

Human-led:

An obvious example is ChatGPT. One input, one output. The model might suggest a follow-up or use a tool, but ultimately, you're in command.

---

Human-in-the-loop:

The best example of this is Cursor (and other coding tools). Coding tools can do 99% of the coding for you, use dozens of tools, and are incredibly capable. But ultimately the human still gives the requirements, hits "accept" or "reject", AND gives feedback on each interaction turn.

The last point is important as it's a live recalibration.

Sometimes even this isn't enough, though. An example is the rollout of Sonnet 3.7 in Cursor. The feedback-loop vs. model-agency mix was off: too much agency, not enough recalibration from the human. So users switched!

---

Agent-led:

This is where the agent leads the task, end-to-end. The user is just a participant. This is difficult because there's less recalibration, so your probability of something going wrong increases with each turn… It's cumulative.

P(all good) = pⁿ

p = probability the agent does the right thing on a single turn
n = number of turns / interactions

Ok… I'm going to use my product as an example, not to promote it, but because I'm very familiar with how it works.

It's a chat agent that runs short customer interviews. My customers can configure it based on what they want to learn (e.g. why a customer churned) and send it to their customers.

It's agent-led because:

→ as soon as the respondent opens the link, they're guided from there
→ at each turn the agent (not the human) is deciding what to do next

That means deciding the right thing to do over 10 to 30 conversation turns (depending on config), i.e. correctly decide:

→ whether to expand the conversation vs. dive deeper
→ reflect on current progress + context
→ traverse a bunch of objectives and ask questions that draw out insight (per current objective)

Let's apply the above formula. Let's say:

→ n = 20 (i.e. number of conversation turns)
→ p = 0.99 (i.e. how often the agent does the right thing: 99% of the time)

That equals P(all good) = 0.99²⁰ ≈ 0.82

So if I ran 100 such 20‑turn conversations, I'd expect roughly 82 to complete as per instructions and about 18 to stumble at least once.

Let's change p to 95%...

→ n = 20
→ p = 0.95

P(all good) = 0.95²⁰ ≈ 0.358

I.e. if I ran 100 such 20‑turn conversations, I’d expect roughly 36 to finish without a hitch and about 64 to go off‑track at least once.
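Both worked examples can be reproduced in a couple of lines of Python (a minimal sketch; `p_all_good` is just an illustrative name, and it bakes in the same assumption as P(all good) = pⁿ: that each turn succeeds independently):

```python
def p_all_good(p: float, n: int) -> float:
    """Chance the agent does the right thing on all n turns,
    assuming turns succeed independently with per-turn probability p."""
    return p ** n

print(p_all_good(0.99, 20))  # ≈ 0.82: roughly 82 of 100 conversations finish clean
print(p_all_good(0.95, 20))  # ≈ 0.36: only roughly 36 of 100 finish clean
```

Note how steeply the curve falls: a 4-point drop in per-turn reliability roughly halves the end-to-end success rate at 20 turns.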

My p score is high. I had to strip out a bunch of tools and simplify, but I got there. And for my use case, a failure is just a slightly irrelevant response, so it's manageable.

---

Conclusion:

Getting an agent to do the correct thing 99% of the time is not trivial.

You basically can't have a super complicated workflow. Yes, you can mitigate this by introducing other agents to check the work, but that introduces latency.
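One way to see that tradeoff in the p^n terms from above: suppose, hypothetically, a checker catches and repairs some fraction of the base agent's per-turn failures. The effective per-turn success rate rises, but every turn now costs an extra call (`effective_p` and the 80% catch rate are my own illustrative assumptions, not a claim about any real system):

```python
def effective_p(p: float, catch_rate: float) -> float:
    """Hypothetical model: a checker repairs a fraction `catch_rate`
    of the base agent's per-turn failures, one extra call per turn."""
    return p + (1 - p) * catch_rate

# With p = 0.95 and a checker that fixes 80% of failures:
p2 = effective_p(0.95, 0.8)  # ≈ 0.99
print(p2 ** 20)              # ≈ 0.82 over 20 turns, but at ~2x the calls
```

Under these (made-up) numbers, the checker buys back the reliability of the 0.99 example at the price of doubling latency per turn.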

There's always a tradeoff!

Know which category you're building in, and if you're going for agent-led, narrow your use case as much as possible.

Kagi Small Web

https://kagi.com/smallweb
1•susam•4m ago•0 comments

Intel Underestimates Error Bounds by 1.3 quintillion (2014)

https://randomascii.wordpress.com/2014/10/09/intel-underestimates-error-bounds-by-1-3-quintillion/
1•antonly•4m ago•0 comments

Show HN: Whisper Money – a zero-knowledge personal finance app (E2E encrypted)

https://github.com/whisper-money/whisper-money
1•falcon_•5m ago•1 comments

Prompt Repetition Improves Non-Reasoning LLMs

https://arxiv.org/abs/2512.14982
1•UntitledNo4•8m ago•0 comments

Writing an LLM from scratch, part 31 – the models are now on Hugging Face

https://www.gilesthomas.com/2026/01/llm-from-scratch-31-models-on-hugging-face
1•gpjt•8m ago•0 comments

Histomat of F/OSS: We should reclaim LLMs, not reject them

https://writings.hongminhee.org/2026/01/histomat-foss-llm/
1•birdculture•11m ago•0 comments

Book Review: Ping by Andrew Brodsky

https://www.mattrutherford.co.uk/book-ping-by-andrew-brodsky/
1•walterbell•12m ago•0 comments

Private LLM Inference on Consumer Blackwell GPUs

https://arxiv.org/abs/2601.09527
1•Teever•12m ago•0 comments

I made a cursor clone just for taking notes

https://galileo.sh/
1•zaais•13m ago•3 comments

A Brief Genealogy of Anti-Modernity

https://thewaxingcrescent.substack.com/p/a-brief-genealogy-of-anti-modernity
1•XzetaU8•14m ago•0 comments

Signature Reduction

https://www.newsweek.com/exclusive-inside-militarys-secret-undercover-army-1591881
1•barrister•15m ago•0 comments

List of Common Misconceptions

https://en.wikipedia.org/wiki/List_of_common_misconceptions
2•xthe•16m ago•1 comments

Yaël D. Eisenstat

https://en.wikipedia.org/wiki/Yael_Eisenstat
1•barrister•16m ago•0 comments

Mandiant releases rainbow table that cracks weak admin password in 12 hours

https://arstechnica.com/security/2026/01/mandiant-releases-rainbow-table-that-cracks-weak-admin-p...
1•mannykannot•19m ago•0 comments

Show HN: Hydra – Capture and share AI Playbooks across your stack

https://hydra.opiusai.com/
1•Bharath_Koneti•20m ago•0 comments

The Bitter Lesson of Agent Frameworks

https://twitter.com/gregpr07/status/2012052139384979773
2•arbayi•22m ago•0 comments

Revisiting the Joys and Woes of the Craft in 2026

https://www.paritybits.me/joys-and-woes-2026/
1•NiloCK•22m ago•1 comments

Show HN: I built a Go TUI to clean dev caches on macOS

https://github.com/2ykwang/mac-cleanup-go
2•immutable000•24m ago•1 comments

Show HN: UAIP Protocol – Secure settlement layer for autonomous AI agents

https://github.com/jahanzaibahmad112-dotcom/UAIP-Protocol
2•Jahanzaib687•24m ago•0 comments

ClickHouse Launches Managed PostgreSQL

https://clickhouse.com/cloud/postgres
2•thenaturalist•25m ago•0 comments

How to make LLMs and Agents work on large amounts of data

https://blog.datatune.ai/how-to-make-llms-work-on-large-amounts-of-data
1•abhijithneil•25m ago•0 comments

Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API)

https://github.com/whispem/minikv
13•whispem•27m ago•5 comments

Ben Affleck and Matt Damon on the Limits of AI in Movie Making [video]

https://www.youtube.com/watch?v=O-2OsvVJC0s
2•thunderbong•28m ago•0 comments

Vinted Sells Children

https://morsdei.uk/vinted-sells-children/
2•NoGimmies•29m ago•0 comments

Meta has discontinued its metaverse for work, too

https://www.theverge.com/tech/863209/meta-has-discontinued-its-metaverse-for-work-too
10•malshe•31m ago•1 comments

Hair Ice

https://en.wikipedia.org/wiki/Hair_ice
2•cl3misch•31m ago•0 comments

Pastable Signatures

https://pastable-sig.site/
1•andyvtn•35m ago•0 comments

OpenAI to test ads in ChatGPT as it burns through billions

https://arstechnica.com/information-technology/2026/01/openai-to-test-ads-in-chatgpt-as-it-burns-...
5•Terretta•35m ago•0 comments

Why Water Is the Real Achilles Heel of the Chip Market

https://macronotes.substack.com/p/why-water-is-the-real-achilles-heel
1•rochansinha•36m ago•0 comments

Canada's deal with China signals it is serious about shift from US

https://www.bbc.com/news/articles/cm24k6kk1rko
33•breve•38m ago•5 comments