frontpage.

The universal weight subspace hypothesis

https://arxiv.org/abs/2512.05117
109•lukeplato•3h ago•38 comments

Kroger acknowledges that its bet on robotics went too far

https://www.grocerydive.com/news/kroger-ocado-close-automated-fulfillment-centers-robotics-grocer...
85•JumpCrisscross•3h ago•78 comments

Icons in Menus Everywhere – Send Help

https://blog.jim-nielsen.com/2025/icons-in-menus/
272•ArmageddonIt•7h ago•108 comments

Jepsen: NATS 2.12.1

https://jepsen.io/analyses/nats-2.12.1
293•aphyr•8h ago•106 comments

Horses: AI progress is steady. Human equivalence is sudden

https://andyljones.com/posts/horses.html
195•pbui•3h ago•119 comments

The Lost Machine Automats and Self-Service Cafeterias of NYC (2023)

https://www.untappedcities.com/automats-cafeterias-nyc/
33•walterbell•2h ago•15 comments

OSHW: Small tablet based on RK3568 and AMOLED screen

https://oshwhub.com/oglggc/rui-xin-wei-rk3568-si-ceng-jia-li-chuang-mian-fei-gong-yi
29•thenthenthen•5d ago•2 comments

Strong earthquake hits northern Japan, tsunami warning issued

https://www3.nhk.or.jp/nhkworld/en/news/20251209_02/
272•lattis•12h ago•135 comments

AMD GPU Debugger

https://thegeeko.me/blog/amd-gpu-debugging/
217•ibobev•11h ago•35 comments

Scientific and Technical Amateur Radio

https://destevez.net/
25•gballan•2h ago•2 comments

Let's put Tailscale on a jailbroken Kindle

https://tailscale.com/blog/tailscale-jailbroken-kindle
243•Quizzical4230•11h ago•57 comments

Latency Profiling in Python: From Code Bottlenecks to Observability

https://quant.engineering/latency-profiling-in-python.html
15•rundef•6d ago•2 comments

Hunting for North Korean Fiber Optic Cables

https://nkinternet.com/2025/12/08/hunting-for-north-korean-fiber-optic-cables/
226•Bezod•10h ago•58 comments

Microsoft increases Office 365 and Microsoft 365 license prices

https://office365itpros.com/2025/12/08/microsoft-365-pricing-increase/
277•taubek•13h ago•323 comments

IBM to acquire Confluent

https://www.confluent.io/blog/ibm-to-acquire-confluent/
352•abd12•13h ago•283 comments

Has the cost of building software dropped 90%?

https://martinalderson.com/posts/has-the-cost-of-software-just-dropped-90-percent/
185•martinald•8h ago•333 comments

Trials avoid high risk patients and underestimate drug harms

https://www.nber.org/papers/w34534
78•bikenaga•8h ago•31 comments

Show HN: Fanfa – Interactive and animated Mermaid diagrams

https://fanfa.dev/
71•bairess•4d ago•16 comments

Cassette tapes are making a comeback?

https://theconversation.com/cassette-tapes-are-making-a-comeback-yes-really-268108
46•devonnull•4d ago•57 comments

Microsoft Download Center Archive

https://legacyupdate.net/download-center/
130•luu•3d ago•15 comments

Paramount launches hostile bid for Warner Bros

https://www.cnbc.com/2025/12/08/paramount-skydance-hostile-bid-wbd-netflix.html
261•gniting•13h ago•246 comments

AI should only run as fast as we can catch up

https://higashi.blog/2025/12/07/ai-verification/
118•yuedongze•9h ago•114 comments

A series of tricks and techniques I learned doing tiny GLSL demos

https://blog.pkh.me/p/48-a-series-of-tricks-and-techniques-i-learned-doing-tiny-glsl-demos.html
146•ibobev•10h ago•16 comments

Launch HN: Nia (YC S25) – Give better context to coding agents

https://www.trynia.ai/
94•jellyotsiro•10h ago•68 comments

Deep dive on Nvidia circular funding

https://philippeoger.com/pages/deep-dive-into-nvidias-virtuous-cycle
282•jeanloolz•8h ago•159 comments

We collected 10k hours of neuro-language data in our basement

https://condu.it/thought/10k-hours
96•nee1r•10h ago•58 comments

Legion Health (YC S21) is hiring a founding engineer (SF, in-person)

1•the_danny_g•10h ago

Nova Programming Language

https://nova-lang.net
90•surprisetalk•12h ago•47 comments

No more O'Reilly subscriptions for me

https://zerokspot.com/weblog/2025/12/05/no-more-oreilly-subscriptions-for-me/
124•speckx•11h ago•114 comments

Intel 8086 Microcode Explorer

https://nand2mario.github.io/8086_microcode.html
20•todsacerdoti•4d ago•3 comments

Horses: AI progress is steady. Human equivalence is sudden

https://andyljones.com/posts/horses.html
193•pbui•3h ago

Comments

barbazoo•2h ago
Engine efficiency, chess rating, AI capex. One example is not like the others. Is there steady progress in AI? To me it feels like it's little progress followed by the occasional breakthrough, but I might be totally off here.
GaggiX•2h ago
ChatGPT was released 3 years ago and that was complete ass compared to what we have today.
dcre•1h ago
I think you are totally off. Individual benchmarks are not very useful on their own, but as far as I’m aware they all tell the same story of continual progress. I don’t find this surprising since it matches my experience as well.
raincole•1h ago
What example do you need? In every single benchmark AI is getting better and better.

Before someone says "but benchmarks don't reflect the real world..." please name what metric you think is meaningful, if not benchmarks. Token consumption? OpenAI/Anthropic revenue?

jacobsenscott•1h ago
Whenever I try and use a "state of the art" LLM to generate code it takes longer to get a worse result than if I just wrote the code myself from the start. That's the experience of every good dev I know. So that's my benchmark. AI benchmarks are BS marketing gimmicks designed to give the appearance of progress - there are tremendous perverse financial incentives.

This will never change because you can only use an LLM to generate code (or any other type of output) you already know how to produce and are expert at - because you can never trust the output.

whycombinetor•1h ago
Third party benchmarks like terminalbench exist.

W.r.t. code changes, especially small ones (say 50 lines spread across 5 files): if you can't get an agent to make nearly exactly the code changes you want, just faster than you could, that's a you problem at this point. If it would maybe take you 15 minutes, grok-code-fast-1 can do it in 2.

trollbridge•1h ago
Right. With careful use of AI, I can use it to gather information to help me make better designs (like giving me summaries of the current best available frameworks or libraries to choose for a given project), but as far as just generating an architecture and then generating the code and devops and so on for that? It's just not there, unless you're creating an app that effectively already exists, like some basic CRUD app.

If you're creating basic CRUDs, what on earth are you doing? That kind of thing should have been automated a long time ago.

whycombinetor•1h ago
What do you mean when you say building crud apps should be automated?
beeflet•52m ago
conventionally, it should have been abstracted by a higher-level language.
bluefirebrand•1h ago
> please name what metric you think is meaningful

Job satisfaction and human flourishing

By those metrics, AI is getting worse and worse

philipwhiuk•42m ago
OpenAI net profit.

The figures for cost are wildly off to start with.

Calamityjanitor•1h ago
The only 'line go up' graph they have left is money invested. I'm even dubious of the questions-answered graph. It looks more like a feature added to an internal wiki that went up in usage, yet it's portrayed as a measure of quality or usefulness.
personjerry•2h ago
I think it's a cool perspective, but the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative.

And that really is the entire question at this point: Which domains will AI win in by a sufficient margin to be worth it?

danpalmer•1h ago
> the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative

This is an assumption for the best-case scenario, but I think you could also just take the marginal case. Steady progress builds until you get past the state of the art system, and then the switch becomes easy to justify.

s17n•2h ago
This is a fun piece... but what killed off the horses wasn't steady incremental progress in steam engine efficiency, it was the invention of the internal combustion engine.
dcre•1h ago
According to Wikipedia, the IC engine was invented around 1800 and only started to get somewhere in the late 1800s. Sounds like the story doesn’t change.

https://en.wikipedia.org/wiki/Internal_combustion_engine

ible•2h ago
People are not simple machines or animals. Unless AI becomes strictly better than humans and humans + AI, from the perspective of other humans, at all activities, there will still be lots of things for humans to do to provide value for each other.

The question is how individuals, and more importantly our various social and economic systems, handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift rapidly.

If the benefits of AI accrue to/are captured by a very small number of people, and the costs are widely dispersed, things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides.

traverseda•1h ago
I'd be more worried about the implicit power imbalance. It's not what can humans provide for each other, it's what can humans provide for a handful of ultra-wealthy oligarchs.
jordwest•1h ago
Yeah, from the perspective of the ultra-wealthy us humans are already pretty worthless and they'll be glad to get rid of us.

But from the perspective of a human being, an animal, and the environment that needs love, connection, generosity and care, another human being who can provide those is priceless.

I propose we break away and create our own new economy and the ultra-wealthy can stay in their fully optimised machine dominated bunkers.

Sure maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead.

vkou•3m ago
The thing that the ultra-wealthy desire above all else is power and privilege, and they won't be getting either of that in those bunkers.

They sure as shit won't be content to leave the rest of us alone.

jondwillis•1h ago
Workshopping this tortured metaphor:

AI, at the limit, is a vampiric technology, sucking the differentiated economic value from those that can train it. What happens when there are no more hosts to donate more training-blood? This, to me, is a big problem, because a model will tend to drift from reality without more training-blood.

The owners of the tech need to reinvest in the hosts.

hephaes7us•57m ago
Realistically, at a certain point the training would likely involve interaction with reality (by sensors and actuators), rather than relying on secondhand knowledge available in textual form.
ghssds•1h ago
People are animals.
d--b•37m ago
I was trying to phrase something like this, but you said it a lot better than I ever could.

I can’t help but smile at the possibility that you could be a bot.

burroisolator•2h ago
"In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.

And not very long after, 93 per cent of those horses had disappeared.

I very much hope we'll get the two decades that horses did."

I'm reminded of the idiom "be careful what you wish for, as you might just get it." Rapid technological change has historically led to prosperity over the long term but not in the short term. My fear is that the pace of change this time around is so rapid that the short-term destruction will not be something that can be recovered from, even over the longer term.

falcor84•1h ago
My reading of tfa is exactly that - the author is hoping that we'll have at least a generation or so to adapt, like horses did, but is concerned that it might be significantly more rapid.
OccamsMirror•1h ago
To be clear though, the horses didn't adapt. Their population was reduced by orders of magnitude.
sendes•1h ago
True, but the horses' population started (slightly) rising again when they went from economic tools to recreational tools for humans. What will happen to humans?
burroisolator•1h ago
"You're absolutely right!" Thanks for pointing it out. I was expecting that kind of perspective when the author brought up horses, but found the conclusion to be odd. Turns out it was just my reading of it.
nacozarina•1h ago
no govt's stability faced risk over a 20% increase in horse unemployment
mxfh•56m ago
I just have no idea how rigorously the data was reviewed. The 95% decline simply does not compute with 4,500,000 in 1959 and even an increase to 7,000,000 in 1968.

https://time.com/archive/6632231/recreation-return-of-the-ho...

Turns out the chart is about farm horses only, as counted by the USDA, not including any recreational horses. So this is more about tractors replacing horses, not passenger cars.

So that's an optional future humans can hope for too.

---

City horses (the ones replaced by cars and trucks) were nearly extinct by 1930 already.

City horses were formerly almost exclusively bred on farms but because of their practical disappearance such breeding is no longer necessary. They have declined in numbers from 3,500,000 in 1910 to a few hundred thousand in 1930.

https://www2.census.gov/library/publications/decennial/1930/...

kangs•2h ago
hello faster horses
narrator•1h ago
Wait till the robots arrive. That they will know how to do a vast range of human skills, some that people train their whole lives for, will surprise people the most. The future shock I get from Claude Code, knowing how long stuff takes the hard way, especially niche, difficult-to-research topics like the alternative deep learning model designs applicable to a modeling task, is a thing of wonder. Imagine now that a master marble carver shows up at an exhibition and some sci-fi author just had robots make a perfect, beautiful rendering of a character from his novel, equivalent in quality to Michelangelo's David, but cyberpunk.
sothatsit•1h ago
This tracks with my own AI usage over just this year. There have been two releases that caused step changes in how much I actually use AI:

1. The release of Claude Code in February

2. The release of Opus 4.5 two weeks ago

In both of these cases, it felt like no big new unlocks were made. These releases aren’t like OpenAI’s o1, where they introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.

Instead, these releases just brought a new user interface, and improved reliability. And yet these two releases mark the biggest increases in my AI usage. These releases caused the utility of AI for my work to pass thresholds where Claude Code became my default way to get LLMs to read my code, and then Opus 4.5 became my default way to make code changes.

AIorNot•1h ago
I would add Gemini Nano Banana Pro to that list - its words-with-images ability is amazing.
wrs•1h ago
Point taken, but it's hard to take a talk seriously when it has a graph showing AI becoming 80% of GDP! What does the "P" even stand for then?
adventured•1h ago
It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.

As the potential of AI technical agents has gone from an interesting discussion to extraordinarily obvious as to what the outcome is going to be, HN has comically shifted negative in tone on AI. They doth protest too much.

I think it's a very clear case of personal bias. The machines are rapidly coming for the lucrative software jobs. So those with an interest in protecting lucrative tech jobs are talking their book. The hollowing out of Silicon Valley is imminent, as other industrial areas before it. Maybe 10% of the existing software development jobs will remain. There's no time to form powerful unions to stop what's happening, it's already far too late.

bwfan123•1h ago
> The hollowing out of Silicon Valley is imminent

I think AI tools are great, and I use them daily and know their limits. Your view is commonly held by management or execs who don't have their boots on the ground.

trollbridge•1h ago
That's what I've observed. I currently have more work booked than I can reasonably get done in the next year, and my customers would be really delighted if I could deliver it to them sooner, and take on even more projects. But I have yet to find any way that just adding AI tools to the mix makes us orders-of-magnitude better. The most I've been able to squeeze out is a 5% to 10% increase.
glitchc•1h ago
But they do have their hands on your budget, and they are responsible for creating and filling positions.
trollbridge•1h ago
I don't think that is the case; I think what's actually going on is that the HN crowd are the people who are stuck actually trying to use AI tools and are aware of their limitations.

I have noticed, however, that people who are either not programmers or who are not very good programmers report that they can derive a lot of benefit from AI tools, since now they can make simple programs and get them to work. The most common use case seems to be some kind of CRUD app. It's very understandable this seems revolutionary for people who formerly couldn't make programs at all.

For those of us who are busy trying to deliver what we've promised customers we can do, I find I get far less use out of AI tools than I wish I did. In our business we really do not have the budget to add another senior software engineer, and we don't have the spare management/mentor/team lead capacity to take on another intern or junior. So we're really positioned to be taking advantage of all these promises I keep hearing about AI, but in practical terms, it saves me at an architect or staff level maybe 10% of my time and for one of our seniors maybe 5%.

So I end up being a little dismissive when I hear that AI is going to become 80% of GDP and will be completely automating absolutely everything, when what I actually spend my day on is the same-old same-old of trying to get some vendor framework to do what I want, getting some sensor data out of their equipment, and delivering apps to end customers that use enough of my own infrastructure that they don't require $2,000 a month of cloud hosting services per user. (I picked that example since at one customer, that's what we were brought in to replace: that kind of cost simply doesn't scale.)

shermantanktop•1h ago
It's not subtle.

But the temptation of easy ideas cuts both ways. "Oldsters hate change" is a blanket dismissal, and there are legitimate concerns in that body of comments.

Lerc•1h ago
>It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.

I don't think you can characterise it as a sentiment of the community as a whole. While every AI thread seems to have its share of AI detractors, the usernames of the posters are becoming familiar. I think it might be more accurate to say that there is a very active subset of users with that opinion.

This might hold true for the discourse in the wider community. You see a lot of coverage about artists outraged by AI, but when I speak to artists they have a much more moderate opinion. Cautious, but intrigued. A good number of them are looking forward to a world that embraces more ambitious creativity. If AI can replicate things within a standard deviation of the mean, the abundance of that content will create an appetite for something further out.

magarnicle•40m ago
I value this comment even though I don't really agree about how useful AI is. I recognise in myself that my aversion to AI is at least partly driven by fear of it taking my job.
jsheard•1h ago
Cost per word is a bizarre metric to bring up. Since when is volume of words a measure of value or achievement?
StilesCrisis•1h ago
It also puts a thumb on the scale for AI, which tends to emit pages of text to answer simple questions.
jsheard•1h ago
The chart is actually words "thought or written", so I guess they are running up the numbers even more by counting Claude's entire inner monologue, on top of what it ultimately outputs.
garciasn•1h ago
Sounds like any post-secondary student, graduate student, or management consultant out there, given there are, very often, page/word-count or hours requirements. Considering the model corpora, wordiness wins out.
bdangubic•1h ago
these are not just "words" but answers to questions that people who got a job at Anthropic had…
websiteapi•1h ago
funny how we have all of this progress yet things that actually matter (sorry chess fans) in the real world are more expensive: health care, housing, cars. and what meager gains there are seem to be more and more concentrated in a smaller group of people.

plenty of charts you can look at - net productivity by virtually any metric vs real adjusted income. the example I like is kiosks and self-checkout. who has encountered one at a place that is cheaper than its main rival, with the lower prices directly attributed (by the company or otherwise) to the kiosks?? in my view all it did was remove some jobs. that's the preview. that's it. you will lose jobs and you will pay more. congrats.

even with year-2020 tech you could automate most work that needs to be done, if our industry didn't endlessly keep disrupting itself and had a little bit of discipline.

so once ai destroys desk jobs and the creative jobs, then what? chill out? too bad anyone who has a house won't let more be built.

atleastoptimal•1h ago
Those are all expensive because of artificial barriers meant to keep their prices high. Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.

Tech and AI have taken off in the US partially because they're in the domain of software, which hasn't been regulated to the point of deliberate inefficiency like other industries in the US.

tyre•1h ago
If we had less regulation of insurance companies, do you think they’d be cheaper?

(I pick this example because our regulation of insurance companies has (unintuitively) incentivized them to pay more for care. So it’s an example of poor regulation imo)

davidw•1h ago
Health care is the more complicated one of the examples cited, but housing definitely is an 'own goal' in how we made it too difficult to build in too many places - especially "up and in" rather than outward expansion.

Stuff like this isn't Wall Street or Billionaires or whatever bogeyman - it's our neighbors: https://bendyimby.com/2024/04/16/the-hearing-and-the-housing...

wyre•33m ago
Health care is complicated, but I don't think it would be hard to understand how less regulation could lower prices. More insurers could enter markets and compete across state lines, and compliance costs could be lowered.

However regulation is helpful for those already sick or with pre-existing conditions. Developed countries with well-regulated systems also have better health outcomes than the US does.

websiteapi•50m ago
you mean the same Asia that has the same problem? USA enjoying arbitrage is not actually a solution nor is it sustainable. not to mention that if you control for certain things, like house size relative to inflation-adjusted income, it isn't actually much different, despite popular belief.
refactor_master•25m ago
> Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.

What do you mean? Several Asian cities have housing crises far worse than the US in local purchasing power, and I'd even argue that a "cheap" home in many Asian countries is going to be of a far lower quality than a "cheap" home in the US.

jordwest•1h ago
It would be kinda funny if not so tragic how economists will argue both "[productive improvement] will make things cheaper" and then in the next breath "deflation is bad and must be avoided at all costs"
actionfromafar•1h ago
But is it really, though? Dollars aren't meant to be held.
jordwest•52m ago
I think the idea of dollars as purely a trading medium where absolute prices don't matter wouldn't be such an issue if wages weren't always the last thing to rise with inflation.

As it is now anyone with assets is only barely affected by inflation while those who earn a living from wages have their livelihood eroded over time covertly.

actionfromafar•30m ago
Exactly as the current owners… ahem, leaders of this country want it.
AnotherGoodName•1h ago
To give some backing: I'm from Australia, which has ~2.5x the median wealth per capita of US citizens but a lower average wealth. This shows through in the wealth of a typical citizen. Less homelessness, better living standards (HDI in Australia is higher), etc.

Compare sorting by median vs average to get a sense of the issue; https://en.wikipedia.org/wiki/List_of_countries_by_wealth_pe...

This is a recent development where the median wealth of citizens in progressively taxed nations has quickly overtaken the median wealth of USA citizens.

All it takes is taxing the extremely wealthy and lessening taxes on the middle class… seems obvious, right? Yet things have consistently been going the other way for a long time in the USA.

jacquesm•55m ago
I think by the time the wealthy realize they're setting themselves up for the local equivalent of the French Revolution it will be a bit late. It's a really bad idea to create a large number of people with absolutely nothing to lose.
tadfisher•50m ago
They already know, and do not care. Their plan is quite literally to retreat into bunkers with shock collars enforcing the loyalty of their guards.

The richest of the rich have purchased islands where they can hole up.

AstroBen•47m ago
Stripped of their infinite freedom out here to hide in a bunker? No chance

The bunkers are in case of nuclear war or serious pandemics. Absolutely worst case last resort scenario, not just "oh I don't care if I end up there"

hsuduebc2•29m ago
Moreover when you act absolutely relentlessly, like a certain car maker.

People usually change their behavior after some pretty horrific events. So I would predict something like that in the future. For both Europe and the US, too.

overfeed•29m ago
I suspect the wealthy think they can shield themselves by exerting control over mass media, news outlets, the press, and domestic surveillance, all amplified by AI.

If all that fails, they have their underground bunkers on faraway islands and/or backup citizenships.

zdragnar•53m ago
> All it takes is tax on the extremely wealthy and lessening taxes on the middle class… seems obvious right?

You could tax 100% of all of the top 1%'s income (not progressively, just a flat 100% tax) and it'd cover less than double the federal government's budget deficit in the US. There would be just enough left over to pay for making the COVID-19 ACA subsidies permanent and a few other pet projects.

Of course, you can't actually tax 100% of their income. In fact, you'd need higher taxes on the top 10% than anywhere else in the West to cover the deficit, significantly expand social programs to have an impact, and lower taxes on the middle class.

It should be pointed out that Australia has higher taxes on its middle class than the US does. It tops out at 45% (plus 2% for Medicare) for anyone at $190k or above.

If you live in New York City, and you're in the top 1% of income earners (taking cash salary rather than equity options) you're looking at a federal tax rate of 37%, a state tax rate of 10.9%, and a city income tax rate of 3.876% for a total of 51.77%. Some other states have similarly high tax brackets, others are less, and others yet use other schemes like no income tax but higher sales and property taxes.

Not quite so obvious when you look closer at it.

yulker•48m ago
The point isn't to just cover the tax bill, it's that by shifting the burden up the class ladder, there is more capital available to the classes that spend and circulate their money in the economy rather than merely accumulate it
dzonga•1h ago
yeah, that's my question to the author too - if A.I. is to really earn its keep, it means A.I. should help get more physical products into people's hands & help produce more energy.

physical products & energy are the two things that are relevant to people's wellbeing.

right now A.I. is sucking up the energy & the RAM - so is it gonna translate into a net positive?

Avicebron•1h ago
That's the question though isn't it. If everyone got a subscription to claude-$Latest would they be able to pay their rent with it?
renewiltord•1h ago
Well, politically, housing becoming cheaper is considered a failure. And this is true for all ages. As an example, take Reddit. Skews younger, more Democrat-voting, etc. You'd think they'd be for lower housing prices. But not really. In fact, they make fun of states like Texas whose cities act to allow housing to become cheaper: https://www.reddit.com/r/LeopardsAteMyFace/comments/1nw4ef9/...

That's just an example, but the pattern will easily repeat. One thing that came out of the post-pandemic era is that the lowest deciles saw the biggest rises in income. Consequently, things like Doordash became more expensive, and stuff like McDonald's stopped staffing as much.

This isn't some grand secret, but most Americans who post on Twitter, HN, or Reddit consider the results some kind of tragedy, though it is the natural thing that happens when people become much higher income: you can't hire many of them to do low-productivity jobs like bus a McD's table.

That's what life looks like when others get richer relative to you. You can't consume the fruits of their labor for cheap. And they will compete with you for the things that you decided to place supply controls on. The highly-educated downwardly-mobile see this most acutely, which is why you see it commonly among the educated children of the past elite.

mlrtime•38m ago
Thank you, I've replied too many times that if people want low priced housing, it's easily found in Texas. The replies are empty or stating that they don't want to live there because... it's Texas.

So the young want cheap, affordable housing right in the middle of Manhattan; that's never going to happen.

lowbloodsugar•1h ago
>in the real world are more expensive: health care, housing, cars.

Think of it another way. It's not that these things are more expensive. It's that the average US worker simply doesn't provide anything of value. China provides the things of value now. How the government corrected for this was to flood the economy with cash. So it looks like things got more expensive, when really it's that wages reduced to match reality. US citizens selling each other lattes back and forth, producing nothing of actual value. US companies bleeding people dry with fees. The final straw was an old man uniting the world against the USA instead of against China.

If you want to know where this is going, look at Britain: the previous world super power. Britain governed far more of the earth than the USA ever did, and now look at it. Now the only thing it produces is ASBOs. I suppose it also sells weapons to dictators and provides banking to them. That is the USA's future.

copypaper•27m ago
Yep. My grandma bought her house in ~1962 for $20k working at a factory making $2/hr. Her mortgage was $100/mo; about 1 week's worth of pay. $2/hr then is the equivalent of ~$21/hr today.

If you were to buy that same house today, your mortgage would be about $5,100/mo - about 6 weeks of pay.

And the reason is exactly what you're saying: the average US worker doesn't provide as much value anymore. Just as her factory job got optimized/automated, AI is going to do the same for many. Tech workers were expensive for a while and now they're not. The problem is that there seems to be less and less opportunity where one can bring value. The only true winners are the factory owners and AI providers in this scenario. The only chance anybody has right now is to cut the middleman out, start their own business, and pray it takes off.

galangalalgol•15m ago
But the US is China's market, so the CCP goes along even though they are the producer, because a domestic consumer economy would mean sharing the profits of that manufacturing with the workers. But that would create a middle class not dependent on the party, leading (at least in their minds, and perhaps not wrongly) to instability. It is a dance of two, and neither can afford to let go. And neither can keep dancing any longer. I think it will be very bad everywhere.
hsuduebc2•29m ago
It's interesting to see Cyberpunk 2077 becoming more and more relatable.
cal_dent•13m ago
Housing is a funny old one and speaks to it being a human problem. One thing a lot of people don't truly engage with in the housing issue is that it's a massive issue of distribution. Too many people want to live in too few places. Yes, central banks & interest rates (being too low and also now being relatively too high), nimbyism, and rent-seeking play an important role too, but solving the "too many people live in too few places" issue actually fixes that problem (slowly, and possibly unpalatably slowly for some, but a fix nonetheless).

The key issue upstream is that too many good jobs are concentrated in too few places, and that leads to consumerism stimulating those places and making them even more attractive. Technology, through Covid, actually gave governments a get-out-of-jail-free card by allowing remote work to become more mainstream. Only for them to not grasp the golden egg they were given. Pivoting economies more actively toward remote work helps distribute people to other places with more affordable homes. Over time, and again slowly, those places become more attractive because people now actually live there.

Existing homeowners can still wrap themselves in the warm glow of their high house prices which only loses "real" value through inflation which people tend not to notice as much.

But we decided to try to go back to the status quo so oh well

jameslk•1h ago
> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.

> Then in December, Claude finally got good enough to answer some of those questions for us.

> … Six months later, 80% of the questions I'd been being asked had disappeared.

Interesting implications for how to train juniors in a remote company, or in general:

> We find that sitting near teammates increases coding feedback by 18.3% and improves code quality. Gains are concentrated among less-tenured and younger employees, who are building human capital. However, there is a tradeoff: experienced engineers write less code when sitting near colleagues.

https://pallais.scholars.harvard.edu/sites/g/files/omnuum592...

1970-01-01•1h ago
How about we stop trying analogies on like clothing and just tell it like it is? AI is unlike any other technology to date. Just like predicting the weather, we don't know what it will be like in 20 months. Everything is a guesstimate.
stego-tech•1h ago
This is the correct take. We all have that "Come to Jesus" moment eventually, where something blows our minds so profoundly that we believe anything is possible in the immediate future. I respect that, it's a great take to have and promotes a lot of discussion, but now more than ever we need concretes and definitives instead of hype machines and their adjacent counterparts.

Too much is on the line here regardless of what ultimately ends up being true or just hype.

WhyOhWhyQ•1h ago
Humans design the world to our benefit, horses do not.
bluefirebrand•1h ago
Most humans don't. Only the wealthy and powerful are able to do this

And they often do it at the expense of the rest of us

AstroBen•1h ago
Cool, now let's make a big list of technologies that didn't take off like they were expected to
echelon•1h ago
> And not very long after, 93 per cent of those horses had disappeared.

> I very much hope we'll get the two decades that horses did.

> But looking at how fast Claude is automating my job, I think we're getting a lot less.

This "our company is onto the discovery that will put you all out of work (or kill you?)" rhetoric makes me angry.

Something this powerful and disruptive (if it is such) doesn't need to be owned or controlled by a handful of companies. It makes me hope the Chinese and their open source models ultimately win.

I've seen Anthropic and OpenAI employees leaning into this rhetoric on an almost daily basis since 2023. Less so OpenAI lately, but you see it all the time from these folks. Even the top leadership.

Meanwhile Google, apart from perhaps Kilpatrick, is just silent.

trollbridge•1h ago
At this point "we're going to make all office work obsolete" feels more like a marketing technique than anything actually connected to reality. It's sort of like how Coca-Cola implies that drinking their stuff will make you popular and well-liked by other attractive, popular people.

Meanwhile, my own office is buried in busywork that no AI tools currently on the market will do for us, and AI entering a space sometimes increases busywork workloads. For example, when writing descriptions of publications or listings for online sales, we now have to put more effort into not sounding like it was AI-generated or we will lose sales. The AI tools for writing descriptions / generating listings are not very helpful either. (An inaccurate listing/description is a nightmare.)

I was able to help set up a client with AI tools to help him generate basically a faux website in a few hours that has lots of nice graphic design, images, etc. so that his new venture looks like a real company. Well, except for the "About Us" page that hallucinated an executive team plus a staff of half a dozen employees. So I guess work like that does get done faster now.

glitchc•1h ago
Well, tbf the author was hired to answer newbie questions. Perhaps the position is that of an evangelist, not a scientist.
kazinator•1h ago
Ironically, you could use the sigmoid function instead of horses. The training stimulus slowly builds over multiple iterations and then suddenly it flips: the wrong prediction reverses.
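A minimal sketch of that idea (the steadily rising logit and the 0.5 cutoff below are illustrative assumptions, not anything from the talk):

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # the underlying quantity improves at a constant rate...
    for step in range(21):
        logit = -5.0 + 0.5 * step
        p = sigmoid(logit)
        # ...but the thresholded prediction flips all at once
        flag = "flip!" if p > 0.5 else ""
        print(f"step {step:2d}  p={p:.3f}  {flag}")

The smooth curve changes steadily the whole time; only the thresholded output changes suddenly.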
fizlebit•1h ago
yeah but machines don't produce horseshit, or do they? (said in the style of Vsauce)
conartist6•1h ago
I thought this was going to be about how much more intelligent horses are than AIs and I was disappointed
billisonline•1h ago
An engine performs a simple mechanical operation. Chess is a closed domain. An AI that could fully automate the job of these new hires, rather than doing RAG over a knowledge base to help onboard them, would have to be far more general than either an engine or a chessbot. This generality used to be foregrounded by the term "AGI." But six months to a year ago when the rate of change in LLMs slowed down, and those exciting exponentials started to look more like plateauing S-curves, executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.

I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and something that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. I agree with the article—in that moment AI will overtake us suddenly! But I disagree that progress will be linear. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.

rvz•1h ago
Remember, these companies (including the author) have an incentive to continue selling fear of job displacement not because of how disruptive LLMs are, but because of how profitable it is if you scare everyone into using your product to “survive”.

To companies like Anthropic, “AGI” really means: “Liquidity event for (AI company)” - IPO, tender offer or acquisition.

Afterwards, you will see the same broken promises as the company will be subject to the expectations of Wall St and pension funds.

johnsmith1840•1h ago
I mean it's hard to argue with the idea that if we invented a human in a box (AGI), human work would be irrelevant. But I don't know how anyone could watch current AI and say we have that.

The big thing this AI boom has shown us, which we can all be thankful to have seen, is what a human in a box will eventually look like. Being in the first generation of humans able to see that is a super lucky experience.

Maybe it's one massive breakthrough away or maybe it's dozens away. But there is no way to predict when some massive breakthrough will occur. Ilya said 5-20 years; that really just means we don't know.

namesbc•1h ago
Software engineers used to know that measuring lines of code written was a poor metric for productivity...

https://www.folklore.org/Negative_2000_Lines_Of_Code.html

actionfromafar•58m ago
So, a free idea from me: train the next coding LLM to produce not regular text, but patches which shorten code while still keeping the code working the same.
wyre•25m ago
gonna tell claude to write all my code in one line
underyx•24m ago
Ctrl-F 'lines', 0 results

Ctrl-F 'code', 0 results

What is this comment about?

kalkin•5m ago
Charitably I'm guessing it's supposed to be an allusion to the chart with cost per word? Which is measuring an input cost not an output value, so the criticism still doesn't quite make sense, but it's the best I can do...
pbw•1h ago
This is food for thought, but horses were a commodity; people are very much not interchangeable with each other. The BLS tracks ~1,000 different occupations. Each will fall to AI at a slightly different rate, and within each, there will be variations as well. But this doesn't mean it won't still subjectively happen "fast".
glitchc•1h ago
Conclusion: Soylent..?
nextworddev•45m ago
damn
leowoo91•1h ago
We still have chess grandmasters if you have noticed..
xlbuttplug2•57m ago
Yes, and we'll continue to have human coding competitions for entertainment purposes. Good luck trying to live off the prize money though.
nextworddev•45m ago
Hikaru makes good money streaming on Twitch tho
byronic•1h ago
my favorite part was where the graphs are all unrelated to each other
richardles•1h ago
I've also noticed that LLMs are really good at speeding up onboarding. New hires basically have a friendly, never tired mentor available. It gives them more confidence in the first drafted code changes / design docs. But I don't think the horse analogy works.

It's really changing cultural expectations. Don't ping a human when an LLM can answer the question probably better and faster. Do ping a human for meaningful questions related to product directions / historical context.

What LLMs are killing is:

- noisy Slacks with junior folks' questions. Those are now your Gemini / ChatGPT sessions.

- tedious implementation sessions.

The vast majority of the work is still human led from what I can tell.

john-radio•1h ago
I've never visited this blog before but I really enjoy the synthesis of programming skill (at least enough skill to render quick graphs and serve them via a web blog) and writing skill here. It kind of reminds me of the way xkcd likes to drive home his ideas. For example, "Surpassed by a system that costs one thousand times less than I do... less, per word thought or written, than ... the cheapest human labor" could just be a throwaway thought, and wouldn't serve very well on its own, unsupported, in a serious essay, and of course the graph that accompanies that thought in Jones's post here is probably 99.9% napkin math / AI output, but I do feel like it adds to the argument without distracting from it.

(A parenthetical comment explaining where he ballparked the measurements for himself, the "cheapest human labor," and Claude numbers would also have supported the argument, and some writers, especially web-focused nerd-type writers like Scott Alexander, are very good at this, but text explanations, even in parentheses, have a way of distracting readers from your main point. I only feel comfortable writing one now because my main point is completed.)

blondie9x•1h ago
This post is kind of sad. It feels like he's advocating for human depopulation since the trajectory aligns with horse populations declining by 93% also.
taneq•59m ago
Not advocating, just predicting. And not necessarily actual population, just population in paid employment.
cuttothechase•1h ago
>>This was a five-minute lightning talk given over the summer of 2025 to round out a small workshop.

Glad I noticed that footnote.

Article reeks of false equivalences and incorrect transitive dependencies.

kgk9000•1h ago
I think the author's point is that each type of job will basically disappear roughly at once, shortly after AI crosses the bar of "good enough" in that particular field.
xlbuttplug2•42m ago
I think the turning point will be when AI assisted individuals or tiny companies are able to deliver comparable products/value as the goliaths.
COAGULOPATH•50m ago
> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.

But would you rather be a horse in 1920 or 2020? Wouldn't you rather have modern medicine, better animal welfare laws, less exposure to accidents, and so on?

The only way horses conceivably have it worse is that there are fewer of them (a kind of "repugnant conclusion")...but what does that matter to an individual horse? No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion. We care more about the welfare of the 9 billion.

BeefySwain•44m ago
The equivalency here is not 9 billion versus 90 billion, it's 9 billion versus 90 million, and the question is how does the decline look? Does it look like the standard of living for everyone increasing so high that the replacement rate is in the single digit percentage range, or does it look like some version of Elysium where millions have immense wealth and billions have nothing and die off?
schoen•43m ago
> No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion.

I have met some transhumanists and longtermists who would really like to see some orders of magnitude increase in the human population. Maybe they wouldn't say "tragedy", but they might say "burning imperative".

I also don't think it's clearly better for more beings to exist rather than fewer, but I just want to assure you that the full range of takes on population ethics definitely exists, and it's not simply a matter of straightforward common sense how many people (or horses) there ought to be.

ternus•39m ago
Regarding horses vs. engines, what changed the game was not engine efficiency, but the widespread availability of fuel (gas stations) and the broad diffusion of reliable, cheap cars. Analogies can be made to technologies like cell phones, MP3 players, or electric cars: beyond just the quality of the core technology, what matters is a) the existence of supporting infrastructure and b) a watershed level of "good/cheap enough" where it displaces the previous best option.
pansa2•33m ago
> 90% of the horses in the US disappeared

Where did they go?

xwolfi•23m ago
they grew old and died?
tomxor•30m ago
Terrible comparison.

Horses and cars had a clearly defined, tangible, measurable purpose: transport... they were 100% comparable as a market good, and so predicting an inflection point is very reasonable. Same with Chess, a clearly defined problem in finite space with a binary, measurable outcome. Funny how Chess AI replacing humans in general was never considered as a serious possibility by most.

Now LLMs, what is their purpose? What is the purpose of a human?

I'm not denying some legitimate yet tedious human tasks are to regurgitate text... and a fuzzy text predictor can do a fairly good job of that at less cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting - not a coincidence).

They really are _just_ text predictors, ones trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been 4 years now, we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is so far off the mark.

twodave•3m ago
Horses eat feed. Cars eat gasoline. LLMs eat electricity, and progress may even now be finding its limits in that arena. Besides the fact that just more compute and context size aren’t the right kind of progress. LLMs aren’t coming for your job any more than computer vision is, for a lot of reasons, but I’ll list two more:

  1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.
  
  2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.