frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
475•klaussilveira•7h ago•116 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
813•xnx•12h ago•487 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
33•matheusalmeida•1d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
157•isitcontent•7h ago•17 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
156•dmpetrov•7h ago•67 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
92•jnord•3d ago•12 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
50•quibono•4d ago•6 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
260•vecti•9h ago•123 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
207•eljojo•10h ago•134 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
328•aktau•13h ago•158 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
327•ostacke•13h ago•86 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
411•todsacerdoti•15h ago•219 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
23•kmm•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
337•lstoll•13h ago•242 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
52•phreda4•6h ago•9 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
4•romes•4d ago•0 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
195•i5heu•10h ago•145 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
115•vmatsiiako•12h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
152•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
245•surprisetalk•3d ago•32 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
996•cdrnsf•16h ago•420 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
26•gfortaine•5h ago•3 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
46•rescrv•15h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
67•ray__•3h ago•30 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
38•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
30•betamark•14h ago•28 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
41•nwparker•1d ago•11 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
7•gmays•2h ago•2 comments

Evolution of car door handles over the decades

https://newatlas.com/automotive/evolution-car-door-handle/
41•andsoitis•3d ago•62 comments

MIT study finds AI can replace 11.7% of U.S. workforce

https://www.cnbc.com/2025/11/26/mit-study-finds-ai-can-already-replace-11point7percent-of-us-workforce.html
61•tiahura•2mo ago

Comments

ChrisArchitect•2mo ago
Source: https://iceberg.mit.edu/report.pdf / https://arxiv.org/abs/2510.25137
dinkblam•2mo ago
My study finds AI can replace 96.83% of U.S. study makers
xhkkffbf•2mo ago
I love that it's not 11% but 11.7% even though it's all just guesses. Somehow they have that much precision.
cinntaile•2mo ago
They should give us a span that they believe in and then we check in a few years how accurate their guess was.
lo_zamoyski•2mo ago
By then, they will have received their promotions and salary bumps and it won't matter.
pydry•2mo ago
There was a previous study that said 47% by 2033: https://fortune.com/2015/04/22/robots-white-collar-ai/

It predates LLMs so they were predicting that poets and artists would be the last jobs to be automated. Which is kinda funny.

Economists' predictions about investors' wet dreams have always been a little bit whimsical.

fHr•2mo ago
haha real
Der_Einzige•2mo ago
This but unironically:

https://arxiv.org/abs/2403.20252

zkmon•2mo ago
This is so true.
syngrog66•2mo ago
I bet it's 98.251% (+/- 0.00032%)

clowns, all of them

paxys•2mo ago
I wonder if these researchers include their own jobs in the analysis. Because AI can very easily spit out random numbers and a lengthy explanation to make them seem believable.
ghkbrew•2mo ago
This title is clickbait.

From the abstract: "The Index captures technical exposure, where AI can perform occupational tasks, not displacement outcomes or adoption timelines." (emphasis mine)

The 11.7% figure is the modeled reduction in "wage value", which appears to be the market value of (human) work.

a-posteriori•2mo ago
This is the same group (Ayush Chopra & Ramesh Raskar) that previously published the highly circulated (clickbait) article saying that 95% of AI pilots were failing based on extremely weak study design and questions that didn't even support the takeaways.

Anything coming from Ayush and Ramesh should be highly scrutinized. Ramesh should stick to studying Camera Culture in the Media Lab.

I will believe a study from MIT when it comes out of CSAIL.

zkmon•2mo ago
Yep. Take it with some salt. Unfortunately, the quality of the research is undercut by the sales pitch and hype mongering.
a-posteriori•2mo ago
It's been really disheartening to see the impact of media / hype mongering on groups within research institutions.

IMO, it's clear there is massive demand for any research that shows large positive or negative impacts of AI on the economy. The recent WSJ article about Aiden Toner-Rodgers is another great example of demand for AI impact outstripping the supply of AI impact. Obviously this thread's example is just shoddy research vs. the outright data fraud of Toner-Rodgers, but it's hard to not see the pattern.

I hope that MIT and other research institutions can figure this out...

balaclava9•2mo ago
fascinating story. amazing how people want to believe in the AI savior.
mistrial9•2mo ago
science says rebut the sources and the thesis, not a personal attack on the authors
gus_massa•2mo ago
Science says people have reputations, journals have impact indexes, ...

Life is too short to read every single article; once someone cries wolf a few times, other researchers in the area will just ignore them.

mistrial9•2mo ago
> Science says people have reputations, journals have impact indexes

can you show me your primary reference for that, please

mistrial9•2mo ago
SAPO-NABU ?
sciencegeek123•2mo ago
You should read the paper (or at least the abstract) before making personal attacks. It makes no claims about job disruption (quite the opposite actually).
lesuorac•2mo ago
I'll give a hot take.

The real advantage AI gives is cover to change current processes. There's a million tiny tasks that could be automated and in aggregate would reduce labor needs by making labor more productive.

AI isn't a feature. Spellcheck is a feature. Templates are a feature. Search is a feature. A database of every paywalled article is a feature. AI can't do anything but it gives cover for features that do.

falcor84•2mo ago
Following with my own hot take, AI SWE agents, while very flawed, allow people to quickly iterate on possible approaches to change those processes. I think that once people have had more time to explore this capability, we'll see massive productivity increases.
iambateman•2mo ago
The fact that these very-smart people did not include ranges is absurd.

They know that 11.7% is WAY too precise to report. The truth is it's probably somewhere between 5-15% over the next 20 years and nobody has any idea which side of that range is correct.

sciencegeek123•2mo ago
Yes, agreed. There should be a range.

Similar precision appears in other exposure studies too, e.g. this one from OpenAI and Wharton that was trending a short while back: arxiv.org/pdf/2303.10130

hahahacorn•2mo ago
This is like unbelievably awful journalism. From the abstract:

>The Index captures technical exposure, where AI can perform occupational tasks, not displacement outcomes or adoption timelines. Analysis shows that visible AI adoption concentrated in computing and technology (2.2% of wage value, approx $211 billion) represents only the tip of the iceberg. Technical capability extends far below the surface through cognitive automation spanning administrative, financial, and professional services (11.7%, approx $1.2 trillion). [https://arxiv.org/abs/2510.25137]

Does the author not know what displacement outcomes are?

It's possible we got 2.2% better quality software by augmenting engineers.

I expect we'll see at least 11.7% <metric X> improvements in admin, financial, and professional services.

There is likely also a depressive effect on the labor market - there is nuance here and it would be equally disingenuous to believe there will be zero displacement (although there is a case for more labor participation if administrative bottlenecks / costs are solved, tbd).

Either way this is like a textbook example of zero-sum minded journalist grossly misrepresenting the world.
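
For what it's worth, here's a quick back-of-envelope check of the figures quoted above, assuming both percentages are shares of the same total US wage base (that base is inferred from the abstract's own numbers, not stated in it):

    # Sanity check: do the quoted dollar figures and percentages imply a consistent wage base?
    adoption_share = 0.022        # "visible AI adoption" share of wage value
    adoption_dollars = 211e9      # approx $211 billion
    exposure_share = 0.117        # "technical capability" share of wage value
    exposure_dollars = 1.2e12     # approx $1.2 trillion

    base_from_adoption = adoption_dollars / adoption_share   # ~$9.6 trillion
    base_from_exposure = exposure_dollars / exposure_share   # ~$10.3 trillion

    print(f"Implied US wage base (adoption figures): ${base_from_adoption / 1e12:.1f}T")
    print(f"Implied US wage base (exposure figures): ${base_from_exposure / 1e12:.1f}T")

Both come out around $10 trillion, so the abstract's numbers are at least internally consistent.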

emp17344•2mo ago
Too many people fall into the trap of believing the economy is zero-sum. You see it all the time on HN.
signatoremo•2mo ago
I think it’s a textbook example of HN skimming through the paper and the summary.

The paper basically said:

1) AI may affect 2.2% of tech adoption, in terms of wage values,

2) but that's only the surface. The ripple effect may be as much as 11.7% of wage value.

That’s it. That’s all the index that they came up with measures, nothing else. They didn’t say there would be no displacement outcome, only that the index doesn’t quantify it. In other words, it’s the worst case scenario.

Give it a read and come back with better critiques.

hahahacorn•2mo ago
That's not true. They didn't measure wages, but used them as a proxy. What they're actually measuring is work done, or tasks.

Last I checked, most people work a job where there is more work to do than time in the day to do it - which would be the conditions for believing that wage value index would be closely correlated with displacement.

Not only does the article title claim the very thing the paper says it isn't claiming; even if the paper weren't explicit about that, there would be little reason to believe that claim reflects the actual outcome.

atonse•2mo ago
Interesting that their website (https://iceberg.mit.edu) looks quite obviously vibe coded.

Products like v0.dev (and gemini-3 with nano banana in general) continue to get better at building website designs that don't look obviously vibe coded.

rs186•2mo ago
I don't remember ever seeing a website that has a loading screen with words "Initializing React" on it. It's almost comical. Like that information is of any value to site visitors.
vlovich123•2mo ago
Interesting - that's a 1T market in the US alone. Probably another 1T in the EU. It's unclear how much there is in the rest of the world (China is basically inaccessible to US firms, and after that it'll depend on low-wage local labor vs AI models).

There are also models getting more capable (addressing a larger share of GDP) and GDP growing more quickly due to automation of economic activity. But even without that it's at least a 2T/year opportunity (assuming the model is even a little accurate).

To me this validates the bull case being raised in private equity. The major risk is not whether the market or the valuations exist, but whether the value will be captured by a few major players or whether open models and local inference eat away at centralization.
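
As a rough sketch of that arithmetic (the US figure is the study's; the EU figure and the capture rate here are nothing more than placeholder assumptions):

    # Back-of-envelope for the "at least a 2T/year opportunity" above.
    us_exposed_wages = 1.2e12   # from the MIT index: 11.7% of US wage value
    eu_exposed_wages = 1.0e12   # assumption: "probably another 1T in EU"
    capture_rate = 0.10         # assumption: fraction of that wage value AI vendors turn into revenue

    addressable = us_exposed_wages + eu_exposed_wages
    vendor_revenue = addressable * capture_rate

    print(f"Addressable wage value: ${addressable / 1e12:.1f}T per year")
    print(f"Vendor revenue at {capture_rate:.0%} capture: ${vendor_revenue / 1e9:.0f}B per year")

How much of that is actually capturable, and by whom, is exactly the open question.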

psunavy03•2mo ago
And then when 1T worth of workers are laid off, who is going to buy the stuff that the companies who laid them off make?
vlovich123•2mo ago
I am in no way making a value statement of whether this is good or bad. Just analyzing the opportunity.
brazukadev•2mo ago
That wasn't a question of good or bad. There's no point in optimizing production if there's no demand for products. Then most businesses would go bankrupt and we would get into a huge recession until things reach a balance again - something worse than 1929.
vlovich123•2mo ago
Maybe, maybe not. If AI is really taking over, that means the goods are also getting cheaper. It's too difficult to prognosticate on the impact this has on human labor, society, and the economy writ large.
nextworddev•2mo ago
There you go, that’s all the AI revenue needed to justify capex
pydry•2mo ago
>Beneath the surface lies the total exposure, the $1.2 trillion in wages, and that includes routine functions in human resources, logistics, finance, and office administration. Those are areas sometimes overlooked in automation forecasts.

Those routine functions could have been automated before LLMs.

Usually when they're not, it's due to some sort of corporate dysfunction, which is not something LLMs can solve.

pizlonator•2mo ago
Here's a realistic path for how AI "replaces"/"displaces" a large chunk of the workforce:

- Even without AI most corpos could shed probably 10% of their workforce - or maybe more - and still be about as productive as they are now. Bunch of reasons why that's true, but here are two I can easily think of: (1) after the layoffs work shifts to the people who remain, who then work harder; (2) underperformers are often not let go for a long time or ever because their managers don't want to do the legwork (and the layoffs are a good opportunity to force that to happen).

- It's hard for leadership to initiate layoffs, because doing so seems like it'll make the company look weak to investors, customers, etc. So if you really want to cut costs by shedding 10%+ of your workforce and making the remaining 90% work harder, then you have to have a good story to tell for why you are doing it.

- AI makes for a good story. It's a way to achieve what you would have wanted to achieve anyway, while making it seem like you're cutting edge.

api•2mo ago
I wonder if AI also reveals unnecessary parts of the workforce by demonstrating that what they do is actually pretty trivial.

There are a ton of basically BS office jobs that could probably be replaced by AI, or in some cases just revealed as superfluous.

We need to just stop pretending we still need a 1:1 connection between employment and income and do UBI. Useless jobs help us preserve the illusions of a pre-post-industrial civilization. Instead of just paying people, we pay people to do work we don't need.

sharpshadow•2mo ago
There is this joke about socialism where hundreds of workers are digging with shovels and somebody asks "Why not use that excavator? One machine could do it in no time" and the other answers "And put 20 men out of work? We're creating jobs!".
api•2mo ago
This is why a lot of modern leftists are anti-tech. Tech destroys jobs. If we are going to maintain the fiction that full employment is necessary for a modern civilization, everyone has to have a job, and for that to be true we have to restrict our technological progress.

Which is really just making a ton of people waste their time doing bullshit work. I fail to see how this is progressive.

AnimalMuppet•2mo ago
Well, I was going to say that many people perceive unemployment as "society does not value you", and that message can be really destructive to people.

But then I remembered how dehumanizing meaningless jobs are, and... I'm not sure how much of a win either direction is.

starlust2•2mo ago
The joke about someone using ChatGPT to write a lengthy email that the recipient will summarize with ChatGPT is the perfect example of how much work is pretend.
AnimalMuppet•2mo ago
Processes are the problem.

Something went wrong once. Maybe not even in your organization, but it went wrong somewhere. Someone added a process to make sure that the problem didn't happen again, because that's what well-run organizations are supposed to do.

But too often, people don't think about the cost of the procedure. People are going to have to follow this procedure every time the situation happens for the next N years. How much does that cost in people's time? In money? How much did the mistake cost? How often did it happen? So was the procedure a net gain or a net loss? People don't ask that, but instead the procedure gets written and becomes "industry best practice".

(And for some industries, it is! Aviation, medical, defense... some of those have really tight regulation, and they require strict procedures. But not every organization is in those worlds...)

So now you have poor corporate drones that have to run through that maze of procedures, over and over. Well, if GPT can run the maze for you, that's really tempting. It can cut your boredom and tedium, cut out a ton of meaningless work, and make you far faster.

But on the other hand, if you are the person who wrote the procedure, you think that it matters that it be done correctly. The form has to be filled out accurately, not with random gibberish, not even with correct-sounding-but-not-actually-accurate data. So you cannot allow GPT to do the procedures.

The procedure-writers and procedure-doers live in different worlds and have different goals, and GPT doesn't fix that at all.

SAI_Peregrinus•2mo ago
Reason 3: those people are mostly a buffer to absorb variable workloads. Firing them increases efficiency at the expense of being unable to keep up with spikes in demand. Productivity will stay about the same until the next crisis hits, then drop.
fulafel•2mo ago
Running a shop at maximum productivity is not sustainable, of course. Quality and morale suffer, the best workers leave, and it turns out you really need slack for things to work well. ("You should overprovision your capacity", for the engineering mindset.)
zkmon•2mo ago
But they should also look at the other side of the story: how many new problems will be created that require new jobs and investment? Most likely it's a migration of jobs from one kind of work to another.
giva•2mo ago
Much like "the Cloud" solved a lot of problems in IT, and replaced them with more, different, harder problems.
add-sub-mul-div•2mo ago
There's always a lot of bending over backwards in these comments to create explanations for why the invention whose purpose is to replace labor won't replace labor.
stego-tech•2mo ago
I suspect part of that is denial: “AI won’t replace my job!” Which, sure, maybe this era of AI won’t. Maybe this LLM era won’t replace your job, this time.

The problem is that we will eventually create tools that can and will replace labor. The Capital class is salivating over that prospect quite openly without any shame whatsoever for its consequences.

Fighting against AI is the wrong move. Instead, we should be fighting against a system that fails to provide for human necessities and victim-blames those displaced by Capital, before Capital feels AI can sufficiently displace the workforce.

hahahacorn•2mo ago
Great point, tractors replaced labor and society has never recovered. We used to have a noble population of farmhands walking behind animals for miles, guiding plows with their bare hands. But thanks to tractors, all that fulfilling communal suffering vanished overnight.

Tragic.

stego-tech•2mo ago
Read the project and its key paper before commenting:

arxiv.org/abs/2510.25137

The key takeaway buried between technical jargon is that these figures aren’t measuring workforce replacement, but task replacement. They aren’t saying AI can replace 12% of the workforce, rather that AI can replace 12% of the work performed, and its associated wage values, expected concentrations, and diverse impacts (across the lower 48). There does not seem to be a more user-friendly visual available to tinker with, at least that I could readily find on mobile.

They try to couch this conclusion at the end, stating that workforce displacement isn’t going to happen by AI so much as by decision-makers in government and enterprise. It’s entirely possible to use AI tools to amplify productivity and output and lead to smaller work weeks with better labor outcomes, but we have ample evidence that, barring appropriate carrots and sticks, enterprises will fire folks to keep the profit for themselves while governments will victim-blame the unemployed for “not being current on skills”. This creates a strong disincentive for labor to cooperate with AI, because it’s a lose-lose Prisoner’s Dilemma for them: cooperation will either result in a boost in productivity that hurts those around them through displacement and an increased workload on themselves, or cooperation results in their own replacement in the midst of a difficult job market and broader economy. Cooperation is presently the worst choice for labor, and the authors do a milquetoast job highlighting this reality - but do better than most of their predecessors, at least.

Really, it comes back to what I spoke about in 2023 when it comes to AI: the problem isn’t AI so much as a system that will hand its benefits to those of already immense wealth and means, and that is the problem that needs solving immediately.

syngrog66•2mo ago
bonus points for the ".7%"

only thing better than pulling numbers out of the air is being very very precise

(not)

throw0101c•2mo ago
If anyone is curious about automation and people's/worker's reaction to it, I recommend Blood in the Machine: The Origins of the Rebellion Against Big Tech by Brian Merchant:

> The most urgent story in modern tech begins not in Silicon Valley but two hundred years ago in rural England, when workers known as the Luddites rose up rather than starve at the hands of factory owners who were using automated machines to erase their livelihoods.

> The Luddites organized guerrilla raids to smash those machines—on punishment of death—and won the support of Lord Byron, enraged the Prince Regent, and inspired the birth of science fiction. This all-but-forgotten class struggle brought nineteenth-century England to its knees.

> Today, technology imperils millions of jobs, robots are crowding factory floors, and artificial intelligence will soon pervade every aspect of our economy. How will this change the way we live? And what can we do about it?

* https://www.hachettebookgroup.com/titles/brian-merchant/bloo...

* https://www.bloodinthemachine.com/p/introducing-blood-in-the...

* https://www.goodreads.com/book/show/59801798-blood-in-the-ma...

* https://read.dukeupress.edu/critical-ai/article/doi/10.1215/...

coffeecoders•2mo ago
I think the real story isn’t that AI will replace 11.7% of workers. It is that we are about to discover that far more than 11.7% of the work we do was never actually work in the first place.

Workflows that were untouchable will now be overhauled and the productivity gains just raise the throughput ceiling.

sublinear•2mo ago
You're right that there are inefficiencies, but they're almost entirely communication overhead (pointless meetings, synchronous work, etc.).

What AI brings is the ability to bridge those communication gaps. Instead of bugging the engineer, people can ask the AI for a summary of completed and ongoing work. Instead of needing so many meetings, the AI can coordinate when people check in with it.

siliconc0w•2mo ago
The difficulty is in the implementation. Many jobs could already be mostly replaced with just a basic system of record (i.e. a database) but it hasn't happened. The world still runs on paper, email, or maybe a shared spreadsheet if they're sophisticated.

Organizations are glued together with interpersonal relationships and unwritten expertise, so it's really hard to just drop in an AI solution - especially if it isn't reliable enough to entirely replace a person, because then you need both, which is more expensive.

JohnMakin•2mo ago
Then why aren't they? Why have we not seen that reflected anywhere at all?
koakuma-chan•2mo ago
Because nobody knows how to use AI. Nobody cares to figure it out. PMs just want features, features, features, and if something doesn't seem like there would be "business value," it is dismissed immediately.
vb-8448•2mo ago
actually, there are plenty of office jobs nowadays that can be optimized/removed, reliably, with non-AI tools ...

what we will probably see is AI used to build tools and automations that will optimize/remove these jobs

nacozarina•2mo ago
there isn’t a govt on earth that can survive that large & sudden an increase in long-term unemployment; overthrown or bankrupted, they’re gone either way. the pitchfork mob will proceed to start burning data centers. the idea they’ll all quietly choose serfdom over revolution is wildly unrealistic. ai needs much stronger regulation to have a chance at survival.