A simple heuristic for agents: human-led vs. human-in-the-loop vs. agent-led

1•fletchervmiles•10mo ago
tl;dr - the more agency your agent has, the simpler your use case needs to be

Most, if not all, successful production use cases today are either human-led or human-in-the-loop. Agent-led is possible, but only for narrow, simple use cases.

---

Human-led:

An obvious example is ChatGPT. One input, one output. The model might suggest a follow-up or use a tool, but ultimately you're the master in command.

---

Human-in-the-loop:

The best example of this is Cursor (and other coding tools). Coding tools can do 99% of the coding for you, use dozens of tools, and are incredibly capable. But ultimately the human still gives the requirements, hits "accept" or "reject", AND gives feedback on each interaction turn.

The last point is important as it's a live recalibration.

Sometimes even this isn't enough, though. An example is the rollout of Sonnet 3.7 in Cursor. The feedback-loop vs. model-agency mix was off: too much agency, not enough recalibration from the human. So users switched!

---

Agent-led:

This is where the agent leads the task, end-to-end. The user is just a participant. This is difficult because there's less recalibration so your probability of something going wrong increases on each turn… It's cumulative.

P(all good) = pⁿ

where:

→ p = probability the agent acts correctly on a single turn
→ n = number of turns / interactions
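Under the simplifying assumption that each turn succeeds independently with the same probability p, the formula is just exponentiation. A minimal sketch in Python (the function name is mine, not from the post):

```python
def p_all_good(p: float, n: int) -> float:
    """Probability that all n independent turns succeed,
    given per-turn success probability p."""
    return p ** n

# 20 turns at 99% per-turn reliability
print(round(p_all_good(0.99, 20), 2))  # 0.82
```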

Ok… I'm going to use my product as an example, not to promote it, but because I'm very familiar with how it works.

It's a chat agent that runs short customer interviews. My customers can configure it based on what they want to learn (i.e. why a customer churned) and send it to their customers.

It's agent-led because:

→ as soon as the respondent opens the link, they're guided from there
→ at each turn, the agent (not the human) is deciding what to do next

That means deciding the right thing to do over 10 to 30 conversation turns (depending on config). I.e. correctly deciding:

→ whether to expand the conversation vs. dive deeper
→ how to reflect on current progress + context
→ how to traverse a bunch of objectives and ask questions that draw out insight (per the current objective)

Let's apply the above formula. Say:

→ n = 20 (number of conversation turns)
→ p = .99 (the agent does the right thing 99% of the time)

That equals P(all good) = 0.99²⁰ ≈ 0.82

So if I ran 100 such 20‑turn conversations, I'd expect roughly 82 to complete as per instructions and about 18 to stumble at least once.

Let's change p to 95%...

→ n = 20 → p = .95

P(all good) = 0.95²⁰ ≈ 0.358

I.e. if I ran 100 such 20‑turn conversations, I’d expect roughly 36 to finish without a hitch and about 64 to go off‑track at least once.
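As a sanity check on those numbers, here's a quick Monte Carlo sketch. It assumes independent per-turn failures, same as the formula; the function and trial count are illustrative, not from the post:

```python
import random

def simulate(p: float, n: int, trials: int = 100_000) -> float:
    """Fraction of simulated n-turn conversations in which
    every turn succeeds (each turn succeeds with probability p)."""
    ok = sum(all(random.random() < p for _ in range(n)) for _ in range(trials))
    return ok / trials

random.seed(0)
print(simulate(0.99, 20))  # close to 0.82
print(simulate(0.95, 20))  # close to 0.36
```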

My p score is high. I had to strip out a bunch of tools and simplify, but I got there. And for my use case, a failure is just a slightly irrelevant response, so it's manageable.

---

Conclusion:

Getting an agent to do the correct thing 99% of the time is not trivial.

You basically can't have a super-complicated workflow. Yes, you can mitigate this by introducing other agents to check the work, but that introduces latency.
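As a rough sketch of that tradeoff, assume a hypothetical checker that reliably catches a bad turn and triggers one retry. The per-turn reliability improves, but each checked turn can cost extra model calls (i.e. latency):

```python
def with_retry(p: float, retries: int = 1) -> float:
    """Effective per-turn success probability when a (perfect)
    checker retries any failed turn up to `retries` times."""
    return 1 - (1 - p) ** (retries + 1)

p, n = 0.95, 20
print(round(with_retry(p) ** n, 2))  # 0.95 overall, vs. 0.36 without the checker
# Cost: up to (retries + 1) model calls per turn, plus the checker's own calls.
```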

There's always a tradeoff!

Know which category you're building in and if you're going for agent-led, narrow your use-case as much as possible.