
A simple heuristic for agents: human-led vs. human-in-the-loop vs. agent-led

1•fletchervmiles•11mo ago
tl;dr - the more agency your agent has, the simpler your use case needs to be

Most, if not all, successful production use cases today are either human-led or human-in-the-loop. Agent-led is possible, but only for simple use cases.

---

Human-led:

An obvious example is ChatGPT. One input, one output. The model might suggest a follow-up or use a tool but ultimately, you're the master in command.

---

Human-in-the-loop:

The best example of this is Cursor (and other coding tools). Coding tools can do 99% of the coding for you, use dozens of tools, and are incredibly capable. But ultimately the human still gives the requirements, hits "accept" or "reject", and gives feedback on each interaction turn.

The last point is important because it acts as a live recalibration.

Sometimes even this isn't enough. An example is the rollout of Sonnet 3.7 in Cursor. The mix of feedback loop vs. model agency was off: too much agency, not enough recalibration from the human. So users switched!

---

Agent-led:

This is where the agent leads the task end-to-end; the user is just a participant. This is difficult because there's less recalibration, so the probability of something going wrong compounds with each turn… it's cumulative.

P(all good) = pⁿ

where:

→ p = probability the agent does the right thing on a single turn
→ n = number of turns / interactions
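The compounding effect above is easy to sanity-check in a few lines. A minimal sketch (the function name is mine, not from the post), assuming turns fail independently with the same per-turn success probability p:

```python
def p_all_good(p: float, n: int) -> float:
    """Probability that every one of n independent turns succeeds: p**n."""
    return p ** n

# The post's two scenarios, 20-turn conversations:
print(round(p_all_good(0.99, 20), 2))  # 0.82
print(round(p_all_good(0.95, 20), 2))  # 0.36
```

Note how a seemingly small drop in per-turn reliability (99% → 95%) more than halves the end-to-end success rate.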

Ok… I'm going to use my product as an example, not to promote, I'm just very familiar with how it works.

It's a chat agent that runs short customer interviews. My customers can configure it based on what they want to learn (i.e. why a customer churned) and send it to their customers.

It's agent-led because

→ as soon as the respondent opens the link, they're guided from there
→ at each turn the agent (not the human) decides what to do next

That means deciding the right thing to do over 10 to 30 conversation turns (depending on config). I.e. correctly decide:

→ whether to expand the conversation vs. dive deeper
→ reflect on current progress + context
→ traverse a bunch of objectives and ask questions that draw out insight (per current objective)

Let's apply the formula above. Let's say:

→ n = 20 (number of conversation turns)
→ p = .99 (the agent does the right thing 99% of the time)

That equals P(all good) = 0.99²⁰ ≈ 0.82

So if I ran 100 such 20‑turn conversations, I'd expect roughly 82 to complete as per instructions and about 18 to stumble at least once.

Let's change p to 95%...

→ n = 20
→ p = .95

P(all good) = 0.95²⁰ ≈ 0.358

I.e. if I ran 100 such 20‑turn conversations, I’d expect roughly 36 to finish without a hitch and about 64 to go off‑track at least once.
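You can also run the formula in reverse: given a per-turn reliability and a target end-to-end success rate, how many turns can you afford? A hedged sketch (this inversion is my derivation from P(all good) = pⁿ, not from the post):

```python
import math

def max_turns(p: float, target: float) -> int:
    """Largest n such that p**n >= target, i.e. floor(log(target) / log(p)).

    Assumes 0 < p < 1 and 0 < target < 1."""
    return math.floor(math.log(target) / math.log(p))

# To keep 90% of conversations fully on-track:
print(max_turns(0.99, 0.9))  # 10 turns
print(max_turns(0.95, 0.9))  # 2 turns
```

This is why the post's "narrow your use case" advice bites: at p = .95 you only get a couple of turns before the odds of a clean run drop below 90%.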

My p score is high. I had to strip out a bunch of tools and simplify, but I got there. And for my use case, a failure is just a slightly irrelevant response, so it's manageable.

---

Conclusion:

Getting an agent to do the correct thing 99% of the time is not trivial.

You basically can't have a super complicated workflow. Yes, you can mitigate this by introducing other agents to check the work, but that introduces latency.
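One way to see what a checker agent buys you, under a deliberately simple model of my own (not from the post): assume the checker catches and corrects a fixed fraction c of the agent's per-turn mistakes, so the effective per-turn success rate becomes p + (1 - p) * c.

```python
def p_with_checker(p: float, c: float) -> float:
    # Simplifying assumption: failures are independent, and a checker
    # catches and corrects a fraction c of them. Effective per-turn
    # success rate rises from p to p + (1 - p) * c.
    return p + (1 - p) * c

# A checker that catches 80% of mistakes lifts p = .95 to an
# effective .99 -- back to the ~0.82 end-to-end rate over 20 turns.
p_eff = p_with_checker(0.95, 0.8)
print(round(p_eff, 2))          # 0.99
print(round(p_eff ** 20, 2))    # 0.82
```

The catch, as noted above, is that every check adds latency to every turn, so the tradeoff doesn't disappear; it just moves.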

There's always a tradeoff!

Know which category you're building in, and if you're going for agent-led, narrow your use case as much as possible.