

Ask HN: When do we expose "Humans as Tools" so LLM agents can call us on demand?

48•vedmakk•1mo ago
Serious question.

We're building agentic LLM systems that can plan, reason, and call tools via MCP. Today those tools are APIs. But many real-world tasks still require humans.

So… why not expose humans as tools?

Imagine TaskRabbit or Fiverr running MCP servers where an LLM agent can:

- Call a human for judgment, creativity, or physical actions

- Pass structured inputs

- Receive structured outputs back into its loop

At that point, humans become just another dependency in an agent's toolchain: slower and more expensive, but occasionally necessary.
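The interface the bullets above describe can be mocked in a few lines. Everything here is a hypothetical sketch, not a real MCP SDK or marketplace API: the tool name `ask_human`, the queue-based "marketplace", and the schema are all illustrative.

```python
import queue
import threading

# Hypothetical tool descriptor, in the JSON-Schema style MCP tools use.
ASK_HUMAN_TOOL = {
    "name": "ask_human",
    "description": "Delegate a judgment, creativity, or physical task to a human.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "task": {"type": "string"},
            "max_wait_s": {"type": "number"},
        },
        "required": ["task"],
    },
}

# Stand-in for a marketplace: a work queue that human workers poll.
_work_queue: queue.Queue = queue.Queue()

def ask_human(task: str, max_wait_s: float = 60.0) -> dict:
    """Block the agent's loop until a human replies (or the wait times out)."""
    reply_box: queue.Queue = queue.Queue()
    _work_queue.put((task, reply_box))
    return {"task": task, "answer": reply_box.get(timeout=max_wait_s)}

def human_worker() -> None:
    """Simulated person picking up one task and answering it."""
    task, reply_box = _work_queue.get()
    reply_box.put(f"Looks fine to me: {task!r}")

threading.Thread(target=human_worker, daemon=True).start()
result = ask_human("Is this logo on-brand?")
print(result["answer"])
```

The blocking `reply_box.get(timeout=...)` is the whole point: from the agent's perspective, a human is just a slow tool call with a deadline.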

Yes, this sounds dystopian. Yes, it treats humans as "servants for AI." That's kind of the point. It already happens manually... this just formalizes the interface.

Questions I'm genuinely curious about:

- Is this inevitable once agents become default software actors? (As of basically now?)

- What breaks first: economics, safety, human dignity, or regulation?

- Would marketplaces ever embrace being "human execution layers" for AI?

Not sure if this is the future or a cursed idea we should actively prevent... but it feels uncomfortably plausible.

Comments

victorbjorklund•1mo ago
Amazon Mechanical Turk?
bobbiechen•1mo ago
My thought as well - the infra already exists through MTurk, as well as the ethical and societal questions. You can already pay people pennies per task to do an arbitrary thing, chain that into some kind of consensus if you want to make it harder for individuals to fudge the results, offer more to get your tasks picked up faster, etc.
taurath•1mo ago
I applaud topics like this that get to the banality and dehumanization involved with the promises of an AI future. To me, if AI fulfills even some of its promises then it puts us in a rather high stakes situation between the people that make up society and those that govern the productivity machines.

My first instinct is to say that when someone loses certain trusts society grants, society historically tends to come down hard. A common idea in political discourse today is that no hope for a brighter future means a lot of young people looking to trash the system. So, yknow, treat people with kindness, respect, and dignity, lest the opposite be visited upon you.

Don’t underestimate the anger a stolen future creates.

impendia•1mo ago
Indeed, I wonder if these angry young people would try to fuck with these AI agents, and attempt to make them spin in circles for their own amusement.

Sort of like the infamous GameStop short squeeze of 2021:

https://en.wikipedia.org/wiki/GameStop_short_squeeze

krapp•1mo ago
"The child who is not embraced by the village will burn it down just to feel its warmth."
bitwize•1mo ago
https://en.wikipedia.org/wiki/Manna_(novel)
futuraperdita•1mo ago
Exactly this. OP, this is basically where this book goes - AI management that directs humans around as automata.
econ•1mo ago
Right, I could easily write a few prompts to replace all but the lowest level of our orgchart.
gnz11•1mo ago
Player Piano by Kurt Vonnegut also comes to mind. https://en.wikipedia.org/wiki/Player_Piano_%28novel%29
crusty•1mo ago
Do you work for Peter Thiel and are you tasked with validating his wet dream?

This seems like the inevitable outcome of our current trajectory for a significant portion of society. All the blather about an AI utopia and a workless UBI system, supported by the boundless productivity advances ushered in by AI-everything, simply has no historical basis. A realistic reading of history points more to this outcome.

Coincidentally, I've been conceptualizing a TV sitcom that tangentially touches on this idea of humans as real-time inputs an AI agent calls on, though those humans are collective characters who are never actually portrayed in scenes.

bradchris•1mo ago
Check out Mrs. Davis (2023)
crusty•1mo ago
Great show
econ•1mo ago
We will never know how many brain chipped humans are already captured in his catacombs. Perhaps the brain in a jar wetware is further developed than we would like to know.

Now that I think about it. I still do think I'm sitting in my chair.... Humm...

notjulianjaynes•1mo ago
I know nothing about this other than I thought it was a joke at first, but I think it's the same idea https://github.com/RapidataAI/human-use
econ•1mo ago
That talks about getting some kind of feedback out of the human for free.

Now we have to find the next level and condition the human to pay to respond to questions.

It seems like an idea bad enough to pay $10 to downvote? Or should that be good enough?

htrp•1mo ago
you can reinvent the scale api and get yc funding before selling out to one of the faangs
muzani•1mo ago
Honestly wouldn't mind more competition in this sector. This one doesn't seem optional for the rest of us in the future and I don't like the idea of Scale AI being in charge.
mountainriver•1mo ago
There are products that do this, langchain itself has a method for it
handfuloflight•1mo ago
Wouldn't this be better than pretending humans are fully automatable?
bravetraveler•1mo ago
If the business thinks I'm expensive now, just wait until on-call goes from an optional rotation to a machine-induced hell
shahbaby•1mo ago
When LLMs become better than humans at the following:

1. Knowing what you don't know

2. Knowing who is likely to know

3. Asking the question in a way that the other human, with their limited attention and context-window, is able to give a meaningful answer

cjmcqueen•1mo ago
Part of my role is designing assessments for online courses and technical certifications. This is exactly what we want to build into our assessment development process: the LLM monitors the training content and drafts questions and exercises that are vetted by humans. It's maybe a classic "human in the middle" design for content development, but the more we can put humans in at the right point in time and use LLMs for the other parts, the more robust and up-to-date our training and assessment system becomes.
severak_cz•1mo ago
It's probably already being done, but in some third-world country and hidden behind NDAs.
throw310822•1mo ago
Easy to do, and not a bad idea. You don't need to pass structured output and accept structured input: in the end, an LLM can use any readable text. A tool is just a way for an LLM to ask questions of a certain type and wait for the answer. For example, I'm wondering if certain flows could be improved by an "ask_clarification_question" tool that simply displays the question in the chat and returns the answer.

I understand that this is not exactly in the spirit of your question but, well, a tool is just this.
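A minimal sketch of the "ask_clarification_question" tool this comment proposes. The name and signature come from the comment itself, not from any existing library; the only design point is that the tool surfaces the question in the chat and blocks until a reply arrives.

```python
def ask_clarification_question(question: str, get_reply=input) -> str:
    """Display the question in the chat and return the user's answer.

    get_reply defaults to stdin; an agent harness or a test can inject
    any callable that produces the reply instead.
    """
    print(f"[agent] {question}")
    return get_reply()

# Offline usage: inject a canned reply instead of waiting on stdin.
answer = ask_clarification_question(
    "Deploy to staging or production?",
    get_reply=lambda: "staging",
)
print(answer)
```

Injecting `get_reply` keeps the tool testable while leaving the interactive default intact.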

alienbaby•1mo ago
now the computer decides what it needs, and we bid our time lower and lower to accomplish the task .. :/

Maybe I write a bot that answers Fiverr requests at the lowest price possible. We can all race to the bottom.

fromaj•1mo ago
The industry is calling it "human-in-the-loop" for now but it's basically going in the direction OP hints at.
vedmakk•1mo ago
Though human-in-the-loop is usually used in scenarios where control is held by said human (e.g. verification or approval).

The difference I'm curious about is agents being the primary caller, and humans becoming an explicit dependency in an autonomous loop rather than a human-in-the-loop system.

tmaly•1mo ago
I sort of see it on the flip side. If you read through the MCP spec, there is the potential for human input. It should be the AI doing all the grunt work it is capable of, with the human putting in judgment when needed to complete some task.
jquip•1mo ago
You don't.

How remarkable to think that humans would be happy to dehumanize each other, in language at least, before anything else, at the promise of 'optimising' for something... whatever it is... it could be the 200-year futuristic axe at this point.

zerocool86•1mo ago
The framing assumes cloud-first AI agents as the default caller. But there's another path: local-first AI where the human remains the orchestrator and the model never phones home.

The "humans as tools" model only works if the AI layer is centralized and owned by platforms. If inference runs on hardware you control, you're not callable - you're the one calling.

Been thinking about this a lot: https://www.localghost.ai/reckoning

vedmakk•1mo ago
Hey, interesting project!