
Show HN: Ask-human-mcp – zero-config human-in-loop hatch to stop hallucinations

https://masonyarbrough.com/blog/ask-human
104•echollama•17h ago
While building my startup I kept running into the issue where AI agents in Cursor create endpoints or code that shouldn't exist, hallucinate strings, or just don't understand the code.

ask-human-mcp pauses your agent whenever it’s stuck, logs a question into ask_human.md in your root directory with answer: PENDING, and then resumes as soon as you fill in the correct answer.

the pain:

your agent screams out an endpoint that never existed, makes confident assumptions, and you spend hours debugging false leads

the fix:

ask-human-mcp gives your agent an escape hatch. when it’s unsure, it calls ask_human(), writes a question into ask_human.md, and waits. you swap answer: PENDING for the real answer and it keeps going.

some features:

- zero config: pip install ask-human-mcp + one line in .cursor/mcp.json → boom, you're live
- cross-platform: works on macOS, Linux, and Windows. no extra servers or webhooks
- markdown Q&A: agent calls await ask_human(), question lands in ask_human.md with answer: PENDING. you write the answer, agent picks back up
- file locking & rotation: prevents corrupt files, limits pending questions, auto-rotates when ask_human.md hits ~50 MB

the quickstart

pip install ask-human-mcp
ask-human-mcp --help

add to .cursor/mcp.json and restart:

{
  "mcpServers": {
    "ask-human": {
      "command": "ask-human-mcp"
    }
  }
}

now any call like:

answer = await ask_human(
    "which auth endpoint do we use?",
    "building login form in auth.js"
)

creates:

### Q8c4f1e2a
ts: 2025-01-15 14:30
q: which auth endpoint do we use?
ctx: building login form in auth.js
answer: PENDING

just replace answer: PENDING with the real endpoint (e.g., `POST /api/v2/auth/login`) and your agent continues.

link:

github -> https://github.com/Masony817/ask-human-mcp

feedback:

I'm Mason, a 19-year-old solo founder at Kallro. Happy to hear about any bugs, feature requests, or weird edge cases you uncover - drop a comment or open an issue! buy me a coffee -> coff.ee/masonyarbrough

Comments

throwaway314155•16h ago
Not certain that your definition of hallucination matches mine precisely. Having said that, this is so simple yet kinda brilliant. Surprised it's not a more popular concept already.
loloquwowndueo•16h ago
- someone sets up an “ask human as a service” mcp
- demand quickly outstrips the supply of humans willing to help bots
- someone else hooks up AI to the “ask human saas”
- we now have a full loop of machines asking machines
TZubiri•15h ago
This is pretty much already possible in any economy, but quite a waste.

Not much is stopping you from buying products from a retailer and selling them at a wholesaler, but you'd lose money in doing so.

olalonde•11h ago
I built this - but mostly as a joke / proof-of-concept: https://github.com/olalonde/mcp-human
aziaziazi•9h ago
Cool project! Naive question: does Mechanical Turk use LLMs now?
lordmauve•10h ago
Finally, the "AI" turns out to be 700 Indians. We now have the full loop of humans asking machines asking humans pretending to be machines. Civilisation collapses
franky47•4h ago
AI stands for Actual Indians.
kajkojednojajko•3h ago
please do the promptful
conception•16h ago
What sort of prompt are you using for this?
kordlessagain•3h ago
The prompt is (mostly) built from the tools loaded in the MCP server. In Python, the @mcp.tool() decorators provide each tool's context to the prompt, which is then submitted (I believe) with each call to the LLM.
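the decorator mechanism described above can be illustrated with a toy registry (plain Python, not the real MCP SDK; the actual SDK also derives JSON schemas from type hints):

```python
import inspect

TOOLS = {}

def tool(fn):
    """Toy stand-in for @mcp.tool(): record the metadata the model will see."""
    sig = inspect.signature(fn)
    TOOLS[fn.__name__] = {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "params": list(sig.parameters),
    }
    return fn

@tool
def ask_human(question: str, context: str) -> str:
    """Ask the human operator a question and wait for their answer."""
    ...
```

the name, docstring, and parameter list collected here are what end up in the tool description the model sees on each call.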
rgbrenner•16h ago
Sounds similar to `ask_followup_question` in Roo
kjhughes•16h ago
Cool conceptually, but how exactly does the agent know when it's unsure or stuck?
Groxx•16h ago
The same way it knows anything else.

So not at all, but that doesn't mean it's not useful.

TZubiri•15h ago
So we are just pushing the issue to another, less debuggable layer. Cool.
kjhughes•15h ago
I'll try to give you credit for more than dismissing my question off-hand...

Yes, it may not need to know with perfect certainty when it's unsure or stuck, but even to meet a lower bar of usefulness, it'll need at least an approximate means of determining that its knowledge is inadequate. To purport to help with the hallucination problem requires no less.

To make the issue a bit more clear, here are some candidate components to a stuck() predicate:

- possibilities considered

- time taken

- tokens consumed/generated (vs expected? vs static limit? vs dynamic limit?)

If the unsure/stuck determination is defined via more qualitative prompting, what's the prompt? How well has it worked?

Groxx•15h ago
I don't believe[1] any of those are part of the MCP protocol - it's essentially "the LLM decided to call it, with X arguments, and will interpret the results however it likes". It's an escape hatch for the LLM to use to do stuff like read a file, not a monitoring system that acts independently and has control over the LLM itself.

(But you could build one that does this, and ask the LLM to call it and give your MCP that data... when it feels like it)

So you'd be using this by telling the LLM to run it when it thinks it's stuck. Or needs human input.

1: I am not anything even approaching deeply knowledgeable about MCP, so please, someone correct me if I'm wrong! There do seem to be some bi-directional messaging abilities, e.g. notification, but to figure out thinking time / token use / etc you would need to have access to the infrastructure running the LLM, e.g. Cursor itself or something.

threeseed•13h ago
You are trying to control a system that is inherently chaotic.

You can probably get somewhere by indeed running a task 1000 times and looking for outliers in the execution time or token count. But that is of minimal use, and anything more advanced than that is akin to water divining.

kordlessagain•3h ago
The system is only nondeterministic (and a model of nondeterminism at that) when it's emitting tokens. It (the system) becomes completely deterministic when it calls a tool and a result is returned from the tool.

This is little different than how I wrote this. Now it is deterministic, when I hit reply.

echollama•12h ago
the reasoning aspect of most llms these days knows when it's unsure or stuck; you can see that in the thinking tokens. it will see this mcp and call it when it's in that state. it could benefit from a rules file encouraging its use, though cursor doesn't quite follow ask-for-help rules, hence making this.
kjhughes•12h ago
Does all thinking end up getting replaced by calls to Ask-human-mcp then? Or only thinking that exceeds some limit (and how do you express that limit)?
aziaziazi•9h ago
I had the same question reading your post:

> (problem description) your agent […] makes confident assumptions

> (solution description) when it’s unsure

I read this as a contradiction: in one sentence you describe the problem as an agent being confident while hallucinating and in the next phrase the solution is that the agent can ask you if it’s unsure.

You tool is interesting but you may consider rephrasing that part.

mgraczyk•16h ago
If you are answering these questions yourself, why not just add something like this to your cursor rules?

"If you don't know the answer to a question and need the answer to continue, ask me before continuing"

Will you have some other person answer the question?

deadbabe•15h ago
Having another person answer the question is pretty much the obvious route this will go.
mgraczyk•15h ago
But then that means they are editing a markdown file on your computer? How is that meant to work?

I like the idea but would rather it use Slack or something if it's meant to ask anyone.

echollama•12h ago
this is mainly meant as a way to conversate with the model while you are programming with it. it's not meant to pull questions to a team, more to pair program. a markdown file is best for syntax in an llm prompt and also just easiest to have open and answer questions in. if I had more time I would build an extension into cursor.
mgraczyk•12h ago
Why not have the model ask in the chat? It's a lot easier to just talk to it than open a file. The article mentions cursor so it sounds like you're already using cursor?
echollama•10h ago
would probably work better, this is just how i threw it together as an internal tool a long time ago. i just improved it and shipped it to opensource it.
multjoy•1h ago
Conversate is not a word.
echollama•31m ago
yes it is
bckr•15h ago
I’ve tried putting “stop and ask for help” in prompts/rules and it seems like Cursor + Claude, up to 3.7, is highly aligned against asking for help.
ramesh31•13h ago
>If you are answering these questions yourself, why not just add something like this to your cursor rules?

What you are asking for is AGI. We still need human in the loop for now.

mgraczyk•12h ago
What I'm describing is a human in the loop. It's just a different UX, one that is easier to use and closer to what the model is trained to use.
ramesh31•2h ago
Human in the loop means despite your best efforts at initial prompting (which is what rules are), there will always be the need to say "no, that's wrong, now do this instead". Expecting to be able to write enough rules for the model to work fully autonomously through your problem is indeed wishing for AGI.
mgraczyk•40m ago
In my example, the human would be in the loop in exactly the same way as the technique in the article. The human can tell the model that it's wrong and what to do instead.

Tools like th one in the article are also "rules".

superb_dev•15h ago
This site is impossible to read on my phone. Part of the left side of the screen is cut off and I can’t scroll it into view
lobsterthief•14h ago
Same here
tyzoid•14h ago
Completely blank for me on mobile (javascript disabled)
banner520•13h ago
I also have this problem on my phone
rfl890•13h ago
Switching to desktop mode fixed it for me
kbouck•12h ago
Rotate phone to landscape
multjoy•1h ago
lol, no
threeseed•13h ago
> an mcp server that lets the agent raise its hand instead of hallucinating

a) It doesn't know when it's hallucinating.

b) It can't provide you with any accurate confidence score for any answer.

c) Your library is still useful but any claim that you can make solutions more robust is a lie. Probably good enough to get into YC / raise VC though.

echollama•12h ago
reasoning models know when they are close to hallucinating because they are lacking context or understanding and know that they could solve this with a question.

this is a streamlined implementation of an internally scrapped-together tool that I decided to open-source for people to either use or build off of.

geraneum•11h ago
> reasoning models know when they are close to hallucinating because they are lacking context or understanding and know that they could solve this with a question.

I’m interested. Where can I read more about this?

threeseed•10h ago
> reasoning models know when they are close to hallucinating because they are lacking context or understanding and know that they could solve this with a question

You've just described AGI.

If this were possible you could create an MCP server that has a continually updated list of FAQ of everything that the model doesn't know.

Over time it would learn everything.

xeonmc•2h ago
Unless there is as yet insufficient data for meaningful answer.
exclipy•11h ago
Would be great if it pinged me on slack or whatsapp. I wouldn't notice if it simply paused waiting for the MCP call to return
spacecadet•5h ago
Easy enough to do with smolagents and fastmcp; it's 20 lines of code.
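the notification half of that idea can be sketched with nothing but the standard library (the webhook URL below is a placeholder you'd create yourself in Slack; the smolagents/fastmcp wiring is omitted):

```python
import json
import urllib.request

# placeholder: create your own Slack incoming webhook and paste its URL here
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def build_payload(question: str, context: str) -> dict:
    """Slack message body announcing a pending agent question."""
    return {"text": f"agent is stuck\nq: {question}\nctx: {context}"}

def notify(question: str, context: str) -> None:
    """POST the payload to the webhook, which fires a Slack message."""
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(build_payload(question, context)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

calling notify() right after a question lands in ask_human.md would get you the ping without sitting and watching the file.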
atoav•9h ago
I am running an electronics/media lab at a university, and the number of fires that bad electronics advice from LLMs has already caused is probably non-zero.

It is amazing how bad LLMs are at reasoning about simple dynamics within trivial electronic circuits, and how eager they are to insist that the opposite of how things work in the real world is settled truth.

spacecadet•5h ago
If the model responds with an obviously incorrect answer or hallucination, start over. Rephrase your input. Consider what output you are actually after... Adding to the original shit output won't help you.
ddalex•4h ago
Why wouldn't a RAG-enabled AI be faster and better than humans at answering these documentation-grounded questions?
kordlessagain•3h ago
The same technique can be had by creating a "universal MCP tool" for the LLM to use if it thinks the existing tools aren't up to the job. The MCP language calls these "proxies".