
A simple heuristic for agents: human-led vs. human-in-the-loop vs. agent-led

1•fletchervmiles•11mo ago
tl;dr - the more agency your agent has, the simpler your use case needs to be

Most, if not all, successful production use cases today are either human-led or human-in-the-loop. Agent-led is possible, but only for narrow, simple use cases.

---

Human-led:

An obvious example is ChatGPT. One input, one output. The model might suggest a follow-up or use a tool, but ultimately you're in command.

---

Human-in-the-loop:

The best example of this is Cursor (and other coding tools). Coding tools can do 99% of the coding for you, use dozens of tools, and are incredibly capable. But ultimately the human still gives the requirements, hits "accept" or "reject", AND gives feedback on each interaction turn.

The last point is important as it's a live recalibration.

This isn't always enough, though. An example is the rollout of Sonnet 3.7 in Cursor. The feedback-loop vs. model-agency mix was off: too much agency, not enough recalibration from the human. So users switched!

---

Agent-led:

This is where the agent leads the task, end-to-end. The user is just a participant. This is difficult because there's less recalibration so your probability of something going wrong increases on each turn… It's cumulative.

P(all good) = pⁿ

p = probability the agent does the right thing on a single turn
n = number of turns / interactions

Ok… I'm going to use my product as an example, not to promote, I'm just very familiar with how it works.

It's a chat agent that runs short customer interviews. My customers can configure it based on what they want to learn (i.e. why a customer churned) and send it to their customers.

It's agent-led because

→ as soon as the respondent opens the link, they're guided from there
→ at each turn the agent (not the human) is deciding what to do next

That means deciding the right thing to do over 10 to 30 conversation turns (depending on config). I.e. correctly decide:

→ whether to expand the conversation vs. dive deeper
→ reflect on current progress + context
→ traverse a bunch of objectives and ask questions that draw out insight (per current objective)
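Purely as an illustration (this is not the product's actual code, and `decide_next_action`, the objective list, and the expand/deepen policy are all hypothetical stand-ins), the per-turn loop above might be sketched like this:

```python
# Hypothetical sketch of an agent-led interview loop: the agent,
# not the human, decides what to do on every turn.

OBJECTIVES = ["reason for churn", "missed expectations", "what would win them back"]

def decide_next_action(progress, turn):
    """Stand-in for the model call that picks the next move each turn."""
    if progress["depth"] < 2:
        return "dive_deeper"  # probe the current objective further
    return "expand"           # move on to the next objective

def run_interview(max_turns=20):
    progress = {"objective": 0, "depth": 0}
    transcript = []
    for turn in range(max_turns):
        action = decide_next_action(progress, turn)
        if action == "dive_deeper":
            progress["depth"] += 1
        else:  # expand: advance to the next objective
            progress["objective"] += 1
            progress["depth"] = 0
            if progress["objective"] >= len(OBJECTIVES):
                break  # all objectives covered
        transcript.append((turn, action, progress["objective"]))
    return transcript
```

In the real product each `decide_next_action` is a model call, which is exactly why per-turn reliability (the p in the formula above) dominates the outcome.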

Let's apply the above formula. Example:

Let's say:

→ n = 20 (number of conversation turns)
→ p = .99 (the agent does the right thing 99% of the time)

That equals P(all good) = 0.99²⁰ ≈ 0.82

So if I ran 100 such 20‑turn conversations, I'd expect roughly 82 to complete as per instructions and about 18 to stumble at least once.

Let's change p to 95%...

→ n = 20
→ p = .95

P(all good) = 0.95²⁰ ≈ 0.358

I.e. if I ran 100 such 20‑turn conversations, I’d expect roughly 36 to finish without a hitch and about 64 to go off‑track at least once.
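Both worked examples are just p^n. A quick check in Python, plus a small Monte Carlo simulation to confirm the closed form:

```python
import random

def p_all_good(p, n):
    """Probability that every one of n independent turns goes right."""
    return p ** n

print(round(p_all_good(0.99, 20), 3))  # 0.818
print(round(p_all_good(0.95, 20), 3))  # 0.358

# Monte Carlo sanity check: simulate many 20-turn conversations and
# count how often all turns succeed.
def simulate(p, n, trials=100_000, seed=42):
    rng = random.Random(seed)
    good = sum(all(rng.random() < p for _ in range(n)) for _ in range(trials))
    return good / trials

print(simulate(0.95, 20))  # lands near 0.358
```

The simulation just re-derives the independence assumption baked into p^n; if turns aren't independent (e.g. one bad turn derails the rest), the real failure rate is worse than the formula suggests.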

My p score is high. I had to strip out a bunch of tools and simplify but I got there. And for my use case, a failure is just a slightly irrelevant response so it's manageable.

---

Conclusion:

Getting an agent to do the correct thing 99% of the time is not trivial.

You basically can't have a super complicated workflow. Yes, you can mitigate this by introducing other agents to check the work, but that introduces latency.

There's always a tradeoff!

Know which category you're building in and if you're going for agent-led, narrow your use-case as much as possible.