Agentic Coding Recommendations

https://lucumr.pocoo.org/2025/6/12/agentic-coding/
40•rednafi•1h ago•13 comments

Build a minimal decorator with Ruby in 30 minutes

https://remimercier.com/minimal-decorator-ruby/
37•unripe_syntax•3h ago•15 comments

Expanding Racks [video]

https://www.youtube.com/watch?v=iWknov3Xpts
86•doctoboggan•5h ago•12 comments

Danish Ministry Replaces Windows and Microsoft Office with Linux and LibreOffice

https://www.heise.de/en/news/From-Word-and-Excel-to-LibreOffice-Danish-ministry-says-goodbye-to-Microsoft-10438942.html
77•jlpcsl•1h ago•29 comments

Chatterbox TTS

https://github.com/resemble-ai/chatterbox
416•pinter69•14h ago•136 comments

Maximizing Battery Storage Profits via High-Frequency Intraday Trading

https://arxiv.org/abs/2504.06932
3•doener•41m ago•0 comments

Microsoft Office migration from Source Depot to Git

https://danielsada.tech/blog/carreer-part-7-how-office-moved-to-git-and-i-loved-devex/
146•dshacker•10h ago•121 comments

Pentagon Has Been Pushing Americans to Believe in UFOs for Decades, New Report Finds

https://gizmodo.com/pentagon-has-been-pushing-americans-to-believe-in-ufos-for-decades-new-report-finds-2000614615
18•pseudolus•42m ago•3 comments

SchemeFlow (YC S24) Is Hiring a Founding Engineer (London) to Speed Up Construction

https://www.ycombinator.com/companies/schemeflow/jobs/SbxEFHv-founding-engineer-full-stack
1•andrewkinglear•1h ago

The hunt for Marie Curie's radioactive fingerprints in Paris

https://www.bbc.com/future/article/20250605-the-hunt-for-marie-curies-radioactive-fingerprints-in-paris
54•rmason•2d ago•26 comments

Show HN: Eyesite - experimental website combining computer vision and web design

https://blog.andykhau.com/blog/eyesite
80•akchro•9h ago•13 comments

How much EU is in DNS4EU?

https://techlog.jenslink.net/posts/dns4eu/
113•todsacerdoti•2h ago•60 comments

Research suggests Big Bang may have taken place inside a black hole

https://www.port.ac.uk/news-events-and-blogs/blogs/space-cosmology-and-the-universe/what-if-the-big-bang-wasnt-the-beginning-our-research-suggests-it-may-have-taken-place-inside-a-black-hole
522•zaik•14h ago•452 comments

Show HN: Spark, An advanced 3D Gaussian Splatting renderer for Three.js

https://sparkjs.dev/
287•dmarcos•17h ago•63 comments

Plants hear their pollinators, and produce sweet nectar in response

https://www.cbc.ca/listen/live-radio/1-51-quirks-and-quarks/clip/16150976-plants-hear-pollinators-produce-sweet-nectar-response
260•marojejian•4d ago•55 comments

How I Program with Agents

https://crawshaw.io/blog/programming-with-agents
457•bumbledraven•3d ago•259 comments

AOSP project is coming to an end

https://old.reddit.com/r/StallmanWasRight/comments/1l8rhon/aosp_project_is_coming_to_an_end/
209•kaladin-jasnah•4h ago•84 comments

Dancing brainwaves: How sound reshapes your brain networks in real time

https://www.sciencedaily.com/releases/2025/06/250602155001.htm
7•lentoutcry•3d ago•0 comments

V-JEPA 2 world model and new benchmarks for physical reasoning

https://ai.meta.com/blog/v-jepa-2-world-model-benchmarks/
247•mfiguiere•19h ago•78 comments

Show HN: Ikuyo, a Travel Planning Web Application

https://ikuyo.kenrick95.org/
271•kenrick95•21h ago•85 comments

OpenAI o3-pro

https://help.openai.com/en/articles/9624314-model-release-notes
238•mfiguiere•1d ago•151 comments

How long it takes to know if a job is right for you or not

https://charity.wtf/2025/06/08/on-how-long-it-takes-to-know-if-a-job-is-right-for-you-or-not/
185•zdw•2d ago•116 comments

Reflections on Sudoku, or the Impossibility of Systematizing Thought

https://rjp.io/blog/2025-06-07-reflections-on-sudoku
11•rjpower9000•3d ago•9 comments

Bypassing GitHub Actions policies in the dumbest way possible

https://blog.yossarian.net/2025/06/11/github-actions-policies-dumb-bypass
194•woodruffw•20h ago•93 comments

Fine-tuning LLMs is a waste of time

https://codinginterviewsmadesimple.substack.com/p/fine-tuning-llms-is-a-huge-waste
140•j-wang•1d ago•64 comments

The curious case of shell commands, or how "this bug is required by POSIX" (2021)

https://notes.volution.ro/v1/2021/01/notes/502e747f/
125•wonger_•1d ago•78 comments

Firefox OS's story from a Mozilla insider not working on the project (2024)

https://ludovic.hirlimann.net/2024/01/firefox-oss-story-from-mozila-insider.html
163•todsacerdoti•22h ago•110 comments

Show HN: RomM – An open-source, self-hosted ROM manager and player

https://github.com/rommapp/romm
198•gassi•20h ago•77 comments

Show HN: S3mini – Tiny and fast S3-compatible client, no-deps, edge-ready

https://github.com/good-lly/s3mini
237•neon_me•1d ago•93 comments

Navy backs right to repair after $13B carrier goes half-fed

https://www.theregister.com/2025/06/11/us_navy_repair/
48•beardyw•4h ago•5 comments

The Gentle Singularity

https://blog.samaltman.com/the-gentle-singularity
248•firloop•1d ago

Comments

colesantiago•1d ago
> " I hope we will look at the jobs a thousand years in the future and think they are very fake jobs, and I have no doubt they will feel incredibly important and satisfying to the people doing them."

So when AGI comes, I am curious: what will the new jobs be?

I see that prompt engineer is one of the jobs created, because it's a way of getting an LLM to perform certain tasks, but now AI can do this too.

I'm thinking that any new jobs AI would make, AI would just take them anyway.

Are there new jobs coming from this abundance that is on the horizon?

breckenedge•1d ago
My guess is more gig jobs working for the agents and their masters.
ge96•1d ago
Body shoveling but a robot can do that too
baq•1d ago
In the limit where robots finally invent a fusion reactor that works, with some luck they'll treat us as zoo animals and keep us fed and entertained. That's the best-case scenario.
patapong•1d ago
My most optimistic side says expanded elder care, community organization, nature care and cleaning, creative expression.
Terr_•1d ago
If the AI is that good... then aren't most "abundance" scenarios kinda-sorta based on slavery? Or at the very least, on careful brainwashing to ensure it places human wellbeing and autonomy over its own?
saubeidl•1d ago
Slavery implies free will and conscience, neither of which machines have.
Terr_•1d ago
Except the entire premise is that the future will have Amazing Unpredictable Humanizing Changes from the current status-quo.

There's no reason to be confident that such a future will arrive without difficult moral questions, or that it's as simple as a #define FREE_WILL 0.

saubeidl•1d ago
Shock-collared bodyguard for the tech oligarch doomsday bunker where they'll hide after carelessly tearing society apart.
GuinansEyebrows•1d ago
if we have "jobs" in 1000 years, AI will have been a miserable failure.
turnsout•1d ago
> Are there new jobs coming from this abundance that is on the horizon?

Yes: we still have a long way to go in restoring equilibrium to the climate and producing more sustainable alternatives to our current cities, products and materials. It's a megaproject which could take hundreds of years, and will require plenty of human involvement.

The planet is going to be fine either way, but if capitalism doesn't figure out how to price in externalities, it will slowly run out of human consumers.

boshalfoshal•1d ago
Pessimistically, you are right: there will be no new jobs. The entire goal of these companies is to monopolize near-zero-marginal-cost labor. Another way to read this is that humans are no longer necessary for economic progress.

All that I hope for in this case is that governments actually take this seriously and labs/governments/people work together to create better societal systems to handle it. Because as it stands, under capitalism I don't think anyone is going to willingly give up the wealth they made from AI to spread to the populace as UBI. This is necessary in some capitalist system (if we want to maintain one), since it's built on consumption and spending.

Though if it's truly an "abundance" scenario, then I'd imagine it probably wouldn't matter that people don't have jobs, since I'd assume everything would be dirt cheap and quality of life would be very high. Though personally I am very cynical when it comes to "AGI is magic pixie dust that can solve any problem" takes, and I'd assume in the short term companies will lay off people in swathes since "AI can do your job," but AI will be nowhere close to increasing those laid-off people's quality of life. It'll be a tough few years if we don't actually get transformative AI.

bamboozled•1d ago
The good thing about annihilation is that it should be pretty fast... is that helpful?
k2xl•1d ago
> The least-likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far.

> 2026 will likely see the arrival of systems that can figure out novel insights

Interesting to see this level of confidence compared to recent comments by Sundar [1]. Satya [2] is also a bit more reserved in his optimism.

[1] https://www.windowscentral.com/software-apps/google-ceo-agi-...

[2] https://www.tomshardware.com/tech-industry/artificial-intell...

blibble•1d ago
> Interesting to see this level of confidence compared to recent comments by Sundar [1]. Satya [2] is also a bit more reserved in his optimism.

well yeah, you can't continue the ponzi scheme if you say "aw shit, it's gonna take another 10 years"

Microsoft/Google will continue to exist without it; OpenAI won't

thcipriani•1d ago
Gotta put the optimism in context vs. previous Sam Altman writing.

Here he says:

> Intelligence too cheap to meter is well within grasp.

Six months ago[0] he said:

> We are now confident we know how to build AGI as we have traditionally understood it.

This time:

> we have recently built systems that are smarter than people in many ways

My summary: ChatGPT is already pretty great and we can make it cheaper and that will help humanity because...etc

Which moves the goalposts quite a bit vs. "we'll have AGI pretty soon."

Could be he didn't reiterate we'd have AGI soon because he thought that was obvious/off-topic. Or it could be that he's feeling less bullish, too.

[0]: <https://blog.samaltman.com/reflections>

lossolo•1d ago
In a recent interview with The Verge (available on YouTube), the DeepMind CEO said that LLMs based on transformers can't create anything truly novel.
abalaji•1d ago
> the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far.

Does anyone know if there are well-established scaling laws for reasoning models, similar to Chinchilla scaling? (I.e., is the above claim valid?)
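
For reference, the best-known published law is the Chinchilla parametric loss for pretraining (Hoffmann et al., 2022); whether reasoning-style post-training follows anything comparable appears to be an open question. A minimal sketch of the pretraining law, using the paper's fitted constants:

  # Chinchilla pretraining loss: L(N, D) = E + A/N^alpha + B/D^beta,
  # with constants as fitted in Hoffmann et al. (2022). Pretraining only;
  # no comparable public law exists for reasoning/RL post-training.
  E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

  def chinchilla_loss(n_params: float, n_tokens: float) -> float:
      """Predicted loss for a model of n_params trained on n_tokens."""
      return E + A / n_params**alpha + B / n_tokens**beta

  # Chinchilla itself: 70B parameters, 1.4T tokens
  print(round(chinchilla_loss(70e9, 1.4e12), 3))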

kvetching•1d ago
"superintelligence research company"
afavour•1d ago
> It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year

I heard similar things in my college dorm, amid all the hazy smoke.

It’s very difficult to take this stuff seriously. It’s like the initial hype around self driving cars wound up by 1000x. Because we got from 1 to 100 of course we’ll get from 100 to 200 in the same amount of time. Or less! Why would you even question it?

simmanian•1d ago
I find it hard to believe too, but at the same time, Demis Hassabis has also said that AI will help us "colonize the galaxy" in as little as five years [1]. Maybe Sam Altman was emboldened by Hassabis' statement.

I would not be opposed to living in a future where I can personally live in space. It would be quite fun.

[1] (paywalled) https://fortune.com/2025/06/06/google-deepmind-ceo-demis-has...

smackeyacky•1d ago
Fun? It would be like signing up to live in an inescapable prison.
crazygringo•1d ago
Seriously. It would be like living on a submarine. But I guess if you don't like sky, mountains, beaches, nature, weather, animals, etc... Like, if you hate the outdoors and spend all your time in windowless rooms with poor air quality? Then OK, maybe space is for you? Also the food is probably going to be extremely monotonous, so that also needs to not matter.

Unless people are envisioning living in magical holodecks all the time, with magical food replicators? But those don't come along automatically with "space", no matter how much Star Trek you've seen...

simmanian•1d ago
Well, I didn't say I look forward to living in a tiny capsule in space, just that it would be fun to live in a time where that's possible. I'd imagine most people would not venture into space until they can make it comfortable enough.
afavour•1d ago
To extend my previous comparison, I believe Altman and Hassabis are the ones in the smoky room passing a joint around the circle. They’re absolutely emboldened by each other but that doesn’t mean they’re tethered to reality.

(my comparison is incomplete though, it doesn’t factor in that these two also have a huge financial incentive to be hyping this stuff up)

jiggawatts•1d ago
Colonize the… what now? In how long!?

What is that man smoking and can I have some?

Five years wouldn’t be enough time to “colonise” Antarctica, let alone another planet (just one!), and certainly not anything at a larger scale, even if we were visited by aliens tomorrow and they gifted us five hundred spaceships to give us a boost.

insin•1d ago
We'll get that shortly after he delivers the infinite polygon engine…
yencabulator•6h ago
https://archive.ph/XlLOK

> “If that all happens, then it should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy. I think that will begin to happen in 2030.”

You are confusing "era of maximum human flourishing ... begin to happen" with "have colonized galaxy".

xg15•1d ago
nah, 100 to 200 would be mere limitless linear growth, but we're in the age of exponentials now. So it'll be at least 100 to 100,000, obviously!
jamilton•1d ago
And there's plenty of reason to not take it seriously from Sam Altman in particular.
paxys•1d ago
No we'll totally have flying cars and cure cancer and live life in an AR/VR multiverse and make all knowledge 100% free to everyone worldwide. Meanwhile the only real advancements in tech in the last two decades have been smaller computers (smartphones) and ads.
km3r•1d ago
And self-driving cars. Real-time ray tracing. WiFi. Cloud computing. Starlink. CRISPR. mRNA vaccines. Blockchain. Voice assistants.
zeofig•1d ago
A humorously underwhelming list.
paxys•22h ago
All the significant tech on that list is still "two years away from being two years away".
km3r•8h ago
What do you mean? I take a self-driving car around every week. I got an mRNA vaccine that enabled us to get out of a pandemic much quicker. You can get fast internet in the middle of nowhere via Starlink.
NoGravitas•16h ago
Fully half of these are "bad, actually".
mkfs•1d ago
> It’s very difficult to take this stuff seriously. It’s like the initial hype around self driving cars wound up by 1000x. Because we got from 1 to 100 of course we’ll get from 100 to 200 in the same amount of time. Or less! Why would you even question it?

It turned out that automating creative output, like art or writing, at least well enough to be competitive with entry-level humans, was much easier, and the consequences of getting bad output are much less serious: far short of the death, serious injury, and major financial liability at stake with self-driving. Fields like concept art and copy editing are already being devastated by generative AI. Voice-overs too.

And looking at Google's Veo, I can easily see this tech being used to generate short ads, especially for YouTube, where before you would have had to hire human actors, cameramen, sound/lighting people, and an editor.

woah•1d ago
> It’s like the initial hype around self driving cars

Wait... Self driving cars actually happened. Seems like the hype was true, but 5 years later than expected and now people just think it's normal so they feel the hype was overblown.

ab5tract•1d ago
I have not ever been in a self-driving car. I've seen a few of them while visiting the Bay Area. Only taxis seem to be allowed to use this tech so far.

I've always been a proponent of the idea that we would see self-driving cars in our lifetime. But they have absolutely not arrived outside of the sheltered enclaves of a handful of tech-centered cities.

lubujackson•1d ago
Goal post: moved.
danenania•1d ago
There aren't just a "few" in the Bay Area—they've become totally ubiquitous in SF.
ab5tract•1d ago
That hardly entails that self-driving cars have "arrived". Am I mistaken or is it true that you can't even own and operate one yourself?

In my opinion, it's similarly off-base to claiming that 'mach-speed consumer air travel' "arrived" just because for a few decades you could catch a Concorde flight between NYC and London.

danenania•1d ago
I didn’t say they’ve arrived—I was just pointing out the very misleading “few” in your earlier comment. Anyone who spends an hour or two in SF can tell you how wrong that is. They’re literally everywhere.

You’re right that they have a long way to go before being fully mainstream. But thousands of people are using them every day in multiple major cities. That’s a pretty big milestone.

blabus•1d ago
The progress thus far has been very impressive but the ultimate product the average person imagines as a "self-driving car", one where you can literally devote zero attention and do something else entirely while being driven around (and that handles effectively any edge case), is still quite a ways off.
decimalenough•1d ago
Waymo is that ultimate product. Obviously it's not everywhere and you can quibble about what "effectively any edge case" means, but it's close to human level at being able to handle whatever the streets of SF throw at it.
georgemcbay•1d ago
> able to handle whatever the streets of SF throw at it.

Not able to handle what the streets of Los Angeles throw at it, however.

decimalenough•1d ago
I don't think human drivers would be much better at handling getting torched by mobs.
tsimionescu•1d ago
And yet human drivers keep using the LA streets, including human taxi drivers, while Waymo has paused.
klipt•1d ago
Maybe if the Waymo deployed an inflatable dummy passenger people would be like "oh there's someone in that car, better not torch it"?
afavour•1d ago
I’ve never seen nor ridden in one. I know there are deployments in a few cities but that is a far, far cry from the future we were told was right around the corner.
tsimionescu•1d ago
No, they haven't, at least nowhere near the scope of the hype. The promise in the early 2010s was that in 10-20 years, human driving would be obsolete, professional drivers would be out of a job, and all land-based transportation would be overhauled.

None of this is even close to happening. Waymo is impressive tech, for sure, but what they're proving is that this is just not currently solvable as a general problem. The best we can do is meticulously craft a solution for one constrained use case: driving in SF, driving in LA, etc. And even then, we need to pick our use cases; it's not like they could use their current tech to start autonomous service in Juneau, Alaska. Or even NYC, most likely.

olalonde•21h ago
The optimists got the timeline wrong but they were closer to the truth than those who said it would never happen. I have no idea how close AGI is but there is clearly no AI winter around the corner as some HNers have been predicting[0].

[0] https://news.ycombinator.com/item?id=23886325

yencabulator•7h ago
Solving a small subset of the problem != solving the problem.

I'll believe self-driving is solved when I can order an automated ride to a BLM/forest service dirt road I only have GPS coordinates for and that has cell service on the hilltop only.

olalonde•6h ago
I'm pretty sure that problem is solved from a technical point of view - though it might not be commercially available yet due to economic and regulatory reasons.
yencabulator•6h ago
You're trying to move the goalpost from "self-driving" to "self-driving in scenarios where the route has been fully premapped".

Might as well move it to "as long as there's no semi crossing the road" to cover for Tesla.

Centrally-managed clean and rigid setting is not the full extent of the problem domain.

olalonde•6h ago
On the contrary, I was saying that self-driving in scenarios where the road has not been fully premapped is solved.
yencabulator•6h ago
Meanwhile, all the self-driving that's currently happening is within well-controlled small geographic areas and some limited interstates/highways, and based on matching observed surroundings to pre-made very detailed maps. Okay, sure.

As I said, I'll believe it's solved when these limitations are no longer necessary for it to work. You can choose to believe it based on a research project or marketing materials.

const_cast•14h ago
What laypeople consider as "happened" and what those in the know consider as "happened" are very different IME.

For most people, they'll consider self-driving as "happened" when most cars are self-driving. Similarly, they consider the smartphone as "happened" not when the Palm came out, but somewhere around the iPhone 4.

Like, right now, immunotherapy for cancer has "happened": there are real patients really doing it, for real. But most cancer patients don't consider immunotherapy as "happened"; they're still getting chemo. Once chemo is obsolete, we can consider immunotherapy as "happened". From then to now may be on the scale of decades.

minimaxir•1d ago
> A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools.

This isn't correct: people want good software and good art, and the current trajectory of how LLMs are used on average in the real world unfortunately runs counter to that. This post doesn't offer any forecasts on the hallucination issues of LLMs.

> As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours

This is the first time a number has been given for ChatGPT's cost per query, and it is obviously much lower than the 3 watts still cited by detractors, but there are a lot of asterisks in how that number might be calculated (is watt-hours the right unit of measurement here?).

vjerancrnjak•1d ago
People don’t know what’s good without time. Even then many things fall through.

Most art is just meh and earns billions.

I don’t see AI stealing art. Most modern arts have a huge social component.

jcelerier•1d ago
> people want good software and good art,

https://www.youtube.com/watch?v=k85mRPqvMbE

minimaxir•1d ago
There's a difference between a 2000s character brute-forced through millions of advertising dollars to sell ringtones to children, and whatever the hell is happening in the https://www.meta.ai/ Discover feed.

One aspect of modern generative AI that has surprised me is that it should have resulted in a new age of creatively surreal shitposts, but instead we have people Ghiblifying themselves and pictures of Trump eating tacos.

bongodongobob•1d ago
Why would you expect the average person to have exceptional taste and produce things you like? Go listen to a random Spotify track; you probably won't like that either. Art still needs to be curated and given some direction. Meta's random AI feed, created by the average FB user, is indicative of nothing.
xwiz•1d ago
Surreal shitposting you say?

https://www.instagram.com/reel/DE3EhrZyCVd/

etrautmann•1d ago
Neuralviz is definitely surreal but I wouldn’t call it shitposting. It’s hand-written satire scripts with gen-ai video (honestly quite well done if you go through a bunch of them).
onlyrealcuzzo•1d ago
> is watt-hours the right unit of measurement here?

That's how electricity is most commonly priced. What else would you want?

minimaxir•1d ago
It's more that every time the topic has come up, it's been reported in watts, and I'm not sure if there's a nuance I'm missing. It's entirely possible those reports have been in watt-hours, though, and just using watts as shorthand.

I still find the Technology Connections video counterintuitive. https://www.youtube.com/watch?v=OOK5xkFijPc

moab•1d ago
Watt-hours is the only unit that makes sense here, as we're describing the energy cost of some calculation. The watt, a rate unit, is not appropriate.
antihero•1d ago
I mean, you could use joules, but Wh is better.
onlyrealcuzzo•1d ago
A watt wouldn't make any sense.

A watt needs a time unit to be converted to joules because a watt is a measure of power, while a joule is a measure of energy.

Think of it like speed and distance:

A watt is a RATE of energy use, just like speed (e.g., miles per hour) is a RATE of travel.

A joule is a total amount of energy, just like distance (e.g., miles) is a total amount of travel.

If you're traveling 60 mph but you don't specify for how long... You won't know how far you went, how much gas you used, etc.

roywiggins•1d ago
Watt-hours are what matter. If you want to boil a kettle, you need to use a certain minimum amount of watt-hours to do it, which corresponds to, say, burning a certain amount of propane. If you care about how many times you can boil a kettle on a bottle of propane, you care about the watt-hours expended. If you had perfect insulation you could boil a kettle by applying barely any watts for a long enough time and it would take the same amount of propane.

There are situations where watts per se matter, e.g. if you build a datacenter in an area that doesn't have enough spare electricity capacity to generate enough watts to cover you, and you'd end up browning out the grid. Or you have a faulty burner and can't pump enough watts into your water to boil it before the heat leaks out, no matter how long you run it.
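
To make the kettle example concrete, a back-of-the-envelope sketch; the 0.34 Wh per query is the figure quoted from the post, the rest is standard physics:

  # Energy to boil 1 L of water from 20 C, ignoring heat losses. The total
  # in watt-hours is fixed by physics; the kettle's wattage only changes
  # how long the boil takes -- the watts-vs-watt-hours distinction above.
  mass_kg = 1.0
  specific_heat = 4186            # J per kg per Kelvin, for water
  delta_t = 100 - 20              # Kelvin
  energy_j = mass_kg * specific_heat * delta_t
  energy_wh = energy_j / 3600     # 1 Wh = 3600 J
  print(f"one kettle boil: {energy_wh:.0f} Wh")         # ~93 Wh
  print(f"queries per boil: {energy_wh / 0.34:.0f}")    # ~274 at 0.34 Wh each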

blibble•1d ago
> As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity.

what he has left unsaid, is that electricity demand will rise substantially

good luck heating your home when you're competing with "superintelligence" for energy

actuallyalys•1d ago
Yep, watt-hours are correct. Think of it this way: Power supplies and laptop chargers are rated in watts and that represents how much energy they can pull from the wall at a point in time. If you wanted to know how much energy a task takes on your computer and you know it maxes out your 300 watt desktop [0] for an entire hour, that would take 300 watt-hours.

[0] To my knowledge, no desktops are built to max out their power supply for an extended time; this is a simplification for illustration purposes.

theptip•1d ago
Watt-hours seem good right now, and they rebut the current arguments that LLMs are wasteful. They might soon stop being the right lens, if your agents can go off and (in parallel) expend some arbitrary and borderline-unknowable budget of energy on a single request.

At some point you'll need to measure in terms of $ earned per Wh, or just $out/$in.

actuallyalys•1d ago
To be clear, I'm only saying the unit is correct, not whether the number of watt-hours is accurate or (assuming it is correct) that it's a good amount to spend on each query.
theptip•19h ago
Fair, I was trying to make a broader point without being clear about it :)
actuallyalys•11h ago
Gotcha, thanks for clarifying!
tsimionescu•1d ago
(average) Watt-hours would still be the right unit to measure that, if the question is about energy efficiency / environmental impact. You still care how much energy those agents consume to achieve a given task, and the time it takes for tasks is still variable, so you need to multiply by time.

Money is irrelevant to energy efficiency.

theptip•19h ago
> People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours

My point was twofold. First, "the average query" becomes less meaningful as the variance increases. Sure, one can _in principle_ report the Wh spent on your account or query, but I think it will get more opaque and hard to predict. An average becomes less useful when agents can do an unpredictable amount of work with increasingly large bounds.

Second, and this was perhaps worthy of expanding in my post: I think in the radical-abundance world that Altman is describing, energy and efficiency stop being something that people talk about. Fusion, space-based solar farms, whatever; it's easy to imagine solving this stuff. And I think you can even imagine this happening sooner if, for example, one provider stamps a "100% renewable datacenter" option on their AI product. Then you might not care about energy efficiency at all, in which case you just care about profit.

Davidzheng•1d ago
Also, it's not clear that experts will necessarily be better than novices if the AI is massively better than both
Workaccount2•1d ago
>and the current trajectory of how LLMs are used on average in the real world unfortunately runs counter to that. This post doesn't offer any forecasts on the hallucination issues of LLMs.

You are getting confused here...

LLMs are terrible at doing software engineering jobs. They fall on their face trying to work in sprawling codebases.

However, they are excellent at writing small bespoke programs on behalf of regular people.

Becky doesn't need ChatGPT to one-shot Excel.exe just to keep track of the things she sells at the craft fair that afternoon.

This is a huge blind-spot I keep seeing people miss. LLM "programmers" are way way more useful to regular people than they are to career programmers.

Avicebron•1d ago
That's great, but Becky at the farmer's market doesn't want to be re-inventing notepad on her computer to sell sourdough.

This smells like "self-checkouts": a "perfect bespoke solution so that everyone can check their own groceries out, no need to wait for a slow/expensive _human_ to get the job done," except you end up basically just doing the cashier's job and waiting around inconveniently whenever you want to get some beer.

satvikpendem•1d ago
And yet I'd much rather self checkout than have a cashier do it for me.
kulahan•1d ago
It feels like being "virtually stuck behind a bus", then doing the company a favor for the experience.
apt-apt-apt-apt•1d ago
At Costco, they forced me to empty out my very fat cart and scan each item individually. That's when I realized the value of cashiers.
libraryatnight•1d ago
The only time this is true, for me, is when the store is empty and I have very few items. If there's a line, I'd rather wait for the cashier. Self-checkout seems to get clogged up with people waiting for the supervising human the busier it is, so having a human or two (one ringing, one bagging) when I've got a cart and there's a line is advantageous.

I'm also pretty anti-social and actually prefer the robotic banter of quickly checking out with a person to the anxious nightmare of just trying to buy something real quick and now waiting because the machine hates life more than me.

Oh and many of the folks doing the bagging at some of the stores are disabled, and I dunno - I hope we're taking care of people in those jobs.

satvikpendem•6h ago
In ninety-nine out of a hundred scenarios, I've never had to wait for a supervising human at all. Maybe it's where I shop, but generally speaking, I've almost never had a problem with people needing to see my checked-out items.
lmm•1d ago
It's a marginal improvement perhaps, but nothing to write home about. And for other things (e.g. booking travel) the self-service experience can be worse.
satvikpendem•6h ago
Not sure about that; I prefer to know all aspects of booking travel, which I would not trust to another agent, human or otherwise. Therefore, I believe your statement applies to some people and not others.
bgwalter•1d ago
And often you are still supervised by sometimes-rude security guards who demand receipts. So you have the worst of all worlds: you pay the supermarket, do the work, and are treated like dirt.
Agentlien•1d ago
On a tangent: what about self scanning? Here in Sweden that is the norm. For the last ten years I've brought my own bags, packed everything like I want it as I grab it - with a quick blip from the scanner - and then checkout is no line, no cashier, no unpacking and packing things again. Just scanning my card and going through a brief screen to check that the summary looks right.

Instead of a guard there's a scanner by the exit gate where you scan your ticket. In my case just a small stub since the actual ticket is sent digitally.

I think it works so smoothly, much better than before this system was introduced. My recent experience buying groceries in Belgium was very similar.

LtWorf•1d ago
Here in Sweden it is not "the norm" by any definition of "norm". It's also considerably slower and not "one blip away".

And the scanner at the exit is terrible for fire safety and accessibility. I've also seen it break, with the result that you had to trigger the alarm to leave.

It does not work smoothly at all, but perhaps you live in a different Sweden than I do.

munksbeer•20h ago
It works pretty well in the UK. The nice thing is, I can pack my bags as I shop, so I don't feel stressed when I get to the checkout.

I use this method of shopping whenever available.

LtWorf•19h ago
In Sweden you have to unpack and re-pack at the checkout. And if you have a backpack, you can't place stuff inside it. No, you have to unpack, place everything on a scale, and then place everything in your backpack again.

And for items like bread, you need to navigate a number of menus to find them.

And of course you are asked if you are a member, if you'd like to become a member, if you want to make a donation, and a number of other useless questions.

Agentlien•16h ago
That's just self checkout. Some places, like Ica, have self scanning, which works much better. You just scan the bar code as you pick things up - one motion as you put things in your bag. No need to unpack as you pay.

Yes, you do need to be a member to use the system, but that's a very small price to pay for the convenience and speed of avoiding all the lines and stress of the regular checkout.

LtWorf•12h ago
Yeah, there are like 15 places in Sweden which have this, and it's "the norm"?
Agentlien•5h ago
Sweden is a big country so I could definitely be wrong about how things look across it. However, in the greater Gothenburg area every Ica I've visited has it - that includes everything from big ones in the city and shopping malls to small ones in the suburbs 30km outside the city (where I live).
LtWorf•3h ago
In the greater Göteborg area, no Coop, Willys, or Lidl has this. And some ICA stores have it. Coincidentally, the one closest to your home, it seems.

Going from "a supermarket near my home" to "every single supermarket in Sweden" is kind of a big leap.

Try the ICA in Olskroken or Godhemsgatan, for example…

Most supermarkets have self checkout, but even that isn't omnipresent (and it's usually slower).

graemep•1d ago
British supermarkets mostly require the use of their app to self-scan. You either scan with your phone or use the app to start a handheld scanner. Once you have done that, it works OK, if a bit error-prone.

The other problem is that, like self checkout, it often still requires human intervention (buying alcohol or high-value items, for example). This can mean a long wait at the end. I once got so sick of waiting that I left my shopping, walked away, and went somewhere else.

Workaccount2•20h ago
>That's great, but Becky at the farmer's market doesn't want to be re-inventing notepad on her computer to sell sourdough.

That's exactly why she asks an LLM to do it for her. A program like this would be <1k LOC, well within even the current LLM one-shot-working-app domain.
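
For a sense of scale, here is a minimal sketch of the kind of bespoke tool in question; the filename, fields, and usage are invented for illustration:

  # Tiny craft-fair sales tracker: log each sale, print today's total.
  # The sort of single-purpose script an LLM can plausibly one-shot.
  import csv, datetime, os, sys

  LOG = "sales.csv"  # hypothetical log file

  def record_sale(item: str, price: float) -> None:
      with open(LOG, "a", newline="") as f:
          csv.writer(f).writerow([datetime.date.today(), item, price])

  def daily_total() -> float:
      if not os.path.exists(LOG):
          return 0.0
      today = str(datetime.date.today())
      with open(LOG, newline="") as f:
          return sum(float(r[2]) for r in csv.reader(f) if r and r[0] == today)

  if __name__ == "__main__":
      if len(sys.argv) == 3:  # usage: python sales.py scarf 12.50
          record_sale(sys.argv[1], float(sys.argv[2]))
      print(f"today's total: ${daily_total():.2f}")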

theptip•1d ago
Right. Jevons Paradox.

The lower barrier to entry might mean the average quality of software goes WAY down. But this could be great if it means that there are 10x as many little apps that make one person happy by doing only as much as they need.

The above is also consistent with the quantity of excellent software going way up as well (just slower than the below-average stuff).

LtWorf•1d ago
What the Android/Apple app stores have taught me, compared to the old Symbian app store, is that we don't need more apps, we need decent ones. The current app stores just have more.
graemep•1d ago
One of the advantages of FDroid is that, although it is tiny compared to the Play Store, it is much better curated.
theptip•19h ago
Discovery definitely becomes more important in this world.

Fortunately AI helps with that too.

LtWorf•12h ago
Why do you think it will not soon be as biased as search is?
const_cast•14h ago
IMO the problem with the current software landscape isn't the number of applications, it's the quality and availability of integrations. Everything is in its own little kingdom, so the communication overhead is extreme and grows exponentially. And by overhead I mean human cognition: I need to remember what's in my notes, and then put that in my calendar, and then email it to myself or something, and on and on.

We have these really amazing computers with very high-quality software, but still, so many processes are manual because we have too many hops from tool to tool. And each hop is a marginal increase in complexity, and an increase in the risk of things going wrong.

I don't think AI will solve this necessarily, and it could make it worse. But, if we did want to improve it, AI can act as a sort of spongy API. It can make decisions in the face of uncertainty.

I imagine a world where I want to make a doctor's appointment and I don't open any apps at all, or pick up the phone. I tell my AI assistant, and it works it out, and I have all my records in one place. Maybe the AI assistant works over a phone line, like some sort of prehistoric amalgamation of old technology and new. Maybe the AI assistant doesn't even talk to a receptionist, but to another AI assistant. Maybe it gives me spongy human information, but speaks to others of its kind in more defined interfaces and dialects. Maybe, in the future, there are no applications at all. Applications are, after all, a human construction: a device intended to replace labor, but a gross approximation of it.

disambiguation•1d ago
> They fall on their face trying to work in sprawling codebases.

>However, they are excellent at writing small bespoke programs

For programmers, I predict a future of a million microservices.

Sprawl has always been an undesirable and unnecessary byproduct of growing code bases, but there's been no easy solution to it. I wonder if LLMs would perform better on a monorepo of many small services than on one sprawling monolith.

api•1d ago
Microservice systems are just huge sprawling code bases with more glue code. Calling something over a network instead of via a local function call is still calling something.
libraryatnight•1d ago
For years one of my favorite experiences (amusing given your username) was being on calls for incidents where they get the dev on for X thing and an exec goes "I thought we got rid of that" and a bunch of people sheepishly explain it wasn't really retired... it was repurposed as an API. I especially loved it when the "retirement" of the broken thing was the execs big achievement. (The comedic nuance often being the thing could have been retired for real, but they demanded a timeline that necessitated the "fake" retirement)

It still happens but it's not a favorite experience anymore. It's just a source of loathing for MBA culture.

api•22h ago
The pure play MBA is the capitalist West's equivalent of the Soviet apparatchik:

https://en.wikipedia.org/wiki/Apparatchik

Been saying that for years. Private equity is perhaps analogous to the Politburo.

disambiguation•19h ago
Another way to look at it is: why force the agent to grapple with the whole code base when it can rapidly stand up many single-purpose services instead?
api•11h ago
A large code base with modules and clear interfaces accomplishes most of this and is more efficient. Often vastly more efficient. Compare the cost of a function call or an in-process message queue with a network API call over HTTP.

Cloud providers make more money the more they can get people to use inefficient designs with more moving parts to rack up more charges. Bonus if it also locks you into managed services. Double bonus if those are proprietary. Complexity benefits cloud hosts.

Microservices, like all patterns, sometimes make sense. Like all patterns they often get overused.
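
A rough way to see the gap described above: time an in-process call against an HTTP round trip on the same machine. A minimal sketch; absolute numbers vary by machine, the orders of magnitude are the point:

  # In-process function call vs. HTTP round trip to a local server.
  import http.server, threading, time, urllib.request

  def work(x):  # the "service" as a plain function
      return x + 1

  class Handler(http.server.BaseHTTPRequestHandler):
      def do_GET(self):  # the same "service" behind HTTP
          self.send_response(200)
          self.end_headers()
          self.wfile.write(b"1")
      def log_message(self, *args):  # silence per-request logging
          pass

  srv = http.server.HTTPServer(("127.0.0.1", 0), Handler)
  threading.Thread(target=srv.serve_forever, daemon=True).start()
  url = f"http://127.0.0.1:{srv.server_port}/"

  t0 = time.perf_counter()
  for _ in range(100_000):
      work(1)
  t1 = time.perf_counter()
  for _ in range(100):
      urllib.request.urlopen(url).read()
  t2 = time.perf_counter()

  print(f"function call:   {(t1 - t0) / 100_000 * 1e9:8.0f} ns")
  print(f"local HTTP call: {(t2 - t1) / 100 * 1e9:8.0f} ns")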

antihero•1d ago
Micro-services are part of a system, so to create anything meaningful with them your context window has to include knowledge about the rest of the system anyway. If your system is large and complex, that means a huge context window even if your code itself is small.
disambiguation•18h ago
Yeah, I mean, either they can or they can't handle large code contexts. I guess the theory is that a more organized and concise expression of your code base (not necessarily the raw code) is easier for agents to digest.
Workaccount2•20h ago
The issue as I have observed it is that software is written to cover as many use cases for as many users as possible. Obviously executives want to sell their product to as many hands as possible.

But end users often only need a tiny fraction of what the software is fully capable of, leaving a situation where you need to purchase 100% to use 5%.

LLMs can break down this wall by offering the user the ability to just make a bespoke 5% utility. I personally have had enormous success doing this.

tsimionescu•1d ago
Do you have any proof whatsoever that non-programmers are using LLMs to write small bespoke apps for them successfully? Programming ecosystems are generally very unfriendly to this type of use case, with plenty of setup required to get something usable out even if you have the code available. Especially if you'd like to run the results on your phone.

Sure, an LLM might be able to guide you through the steps, and even help when you stumble, but you still have to follow a hundred little steps exactly with no intuition whether things will work out at the end. I very much doubt that this is something many people will even think to ask for, and then follow-through with.

Especially since the code quality of LLMs is nowhere near what you make it out to be. You'll still need to bear with them and go through a lot of trial and error to get anything running, especially if you have no knowledge of the terms of art, nor any clue of what might be going wrong if it goes wrong. If you've ever seen consumer bug reports, you might have some idea of the quality of feedback the LLM may be getting back if something is not working perfectly the first time. It's very likely to be closer to "the button you added is not working" than to "when I click the button, the application crashes" or "when I click the button, the application freezes for a few seconds and then pops up an error saying that input validation failed" or "the button is not showing up on the screen" [...]

azan_•23h ago
> Do you have any proof whatsoever that non-programmers are using LLMs to write small bespoke apps for them successfully?

I'm a radiologist. I'd been paying about 200 USD per month for software that sped up my reporting. I remade all the functionality I need in one evening with Cursor, and added some things that I found missing from the original software.

Workaccount2•20h ago
My company is a non-tech company in manufacturing. I'm not a programmer and neither is anyone else here.

So far I have created 7 programs that are now used daily in production. One of them replaces a $1k/yr/usr CAD package, and another is something we used to bring in a contractor to write. The others are miscellaneous apps for automating/simplifying our in-house processes. None of the programs is more than 6k LOC.

ben_w•21h ago
> This is a huge blind-spot I keep seeing people miss. LLM "programmers" are way way more useful to regular people than they are to career programmers.

Kinda. They're good, and I like them, but I think of them like a power tool: just because you can buy an angle grinder or a plasma cutter from a discount supermarket, doesn't mean the tool is safe in the hands of an untrained amateur.

Someone using it to help fix up a spreadsheet? Probably fine. But you should at least be able to read code well enough to catch something like the following deliberately bad example, written to illustrate the point:

  #!/usr/bin/python3
  total_sales = 0.0
  # Deliberate bug: the parameter shadows the global total_sales,
  # so the module-level total is never actually updated.
  def add_sale(total_sales, amount):
    total_sales = total_sales + amount  # rebinds the local name only
    print("total_sales: " + str(total_sales))
Still useful, and still sufficiently advanced technology that for most people it is (Clarketech) magic, but it also still has some rough edges.
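
For contrast, a minimal corrected sketch of the same function, returning the new total instead of rebinding a shadowed name:

  def add_sale(total_sales, amount):
      new_total = total_sales + amount
      print("total_sales: " + str(new_total))
      return new_total

  total = 0.0
  total = add_sale(total, 19.99)  # the caller keeps the running total
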
jumploops•1d ago
> people want good software

Absolutely! We’re in an interesting time, and LLMs are certainly over-hyped.

With that said, I’d argue that most software today is _average_. Not necessarily bad by design, but average because of the (historic) economies of software development.

We’ve all seen it: some beloved software tool that fills a niche. They raise funding/start to scale and all of a sudden they need to make this software work for more people.

The developers remove poweruser features, add functionality to make it easier to use, and all of a sudden the product is a shell of its former self.

This is what excites me about LLMs. Now we can build software for 10s or 100s of users, instead of only building software with the goal (and financial stability) of a billion users.

My hope is that, even with tons of terrible vibe coded atrocities littering the App Store/web, we’ll at least start to see some software that was better than before, because the developers can focus on the forest rather than each individual tree (similar to Assembly -> C -> Python).

deadbabe•1d ago
How many watts does a human brain need to produce a response to a query?
nharada•1d ago
I think you should include the entire body for your comparison, not just the brain. We're not just measuring the GPUs, we're measuring the datacenter.
physix•1d ago
https://www.quantamagazine.org/how-much-energy-does-it-take-...

:-)

hdjdbdirbrbtv•1d ago
IIRC (and it was 20 years ago now that I learnt this) the brain uses 20% of the body's resting energy usage. Most of that is keeping neurons polarised to the outside (ion pumps need ATP!!!).

The body uses 25w resting and thus the brain is about 5w.

Source: biology degree but like I said please take with the same amount of weight as a hallucinating LLM.

apt-apt-apt-apt•1d ago
GPT says: Unless you're a hamster hooked up to a Fitbit, it's more like 60–70 W for a normal adult human. So the brain's real power draw is more like 15–20 W, not 5 W.

Resting energy usage in humans is ~1200–1500 kcal/day, or about 60–70 watts, depending on the person. The logic holds; the estimate is just low.

hdjdbdirbrbtv•23h ago
Lol thanks for the correction! like I said... it had been 20 years.. I misremembered the amounts :P
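
Taking the corrected figures above at face value, a quick comparison against the 0.34 Wh query number quoted from the post (both sides are rough estimates, not measurements):

  # One minute of human brain time vs. one ChatGPT query.
  brain_watts = 20                 # upper end of the corrected estimate above
  seconds = 60                     # assume one minute of thought
  brain_wh = brain_watts * seconds / 3600
  print(f"brain, 1 minute: {brain_wh:.2f} Wh")  # ~0.33 Wh
  print("ChatGPT query:   0.34 Wh (figure quoted in the post)")
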
bgwalter•1d ago
> experts will probably still be much better than novices, as long as they embrace the new tools.

Sure, we'll all get a subscription and subject ourselves to biometric verification in order to continue our profession.

OpenAI's website looks pretty bad. Shouldn't it be the best website in existence if the new Übermensch programmers rely on "AI"?

What about the horrible code quality of "AI" applications in the "scientific" Python sphere? Shouldn't the code have been improved by "AI"?

So many questions and no one shows us the code.

chrismartin•1d ago
Hey Sam, does 0.34 watt-hours per query include an amortization of the energy consumed to train the model, or only the marginal energy consumed by inference?

I used to believe this wasn't worth considering (because while training is energy-intensive, once the model is trained we can use it forever.) But in practice it seems like we switch to newer/better models every few months, so a model has a limited period of usefulness.

> (is watt-hours the right unit of measurement here?)

Yes, because we want to measure energy consumed, while watts is only a measure of instantaneous power.
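
There are no public numbers to answer the amortization question, but the arithmetic itself is simple. A sketch in which every input is a made-up placeholder, purely to show the shape of the calculation:

  # Amortizing training energy over a model's lifetime queries.
  # All three inputs are invented placeholders, not OpenAI data.
  training_energy_wh = 50e9        # hypothetical: 50 GWh to train
  lifetime_queries = 200e9         # hypothetical: queries before retirement
  inference_wh = 0.34              # per-query figure quoted in the post

  amortized = inference_wh + training_energy_wh / lifetime_queries
  print(f"{amortized:.2f} Wh/query")  # 0.34 + 0.25 = 0.59 under these guesses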

nradov•1d ago
Whether software is good or not doesn't matter in most cases. Sure, quality matters for software used by millions. But for little bespoke applications good enough is fine. People will tolerate a lot of crap as long as it's cheap and gets the job done.
lucas_membrane•1d ago
When will AI be able to tell us which gods exist and what they really want us to do?
jiggawatts•1d ago
Both AIs (and people) can do that now, it’s just that the believers don’t like the truthful answer.
unchocked•1d ago
Feels like 1999 all over again. This time, I think it really is different.
satisfice•1d ago
“We” are not wondering whether AI can write “a beautifully written novel.” Yes, some people are.

Other people know a novel cannot be written by a machine— because a novel is human by definition, and AI is not human, by definition.

It’s like wondering if a machine can express heartfelt condolences. Certainly a machine can string words together that humans associate with other humans who express heartfelt condolences, but when AI does this they are NOT condolences. Plagiarized emotional phrases do not indicate the presence of feeling, just the presence of bullshit.

Bjartr•1d ago
> because a novel is human by definition

I thought a novel was just a decent length fiction book?

The purpose of fiction at least isn't to express true feeling, it's to instill feeling in the reader. To that end it doesn't matter the process that produces the words.

Is AI going to win the Pulitzer any time soon? Nah.

Is AI going to be able to produce novel length works of fiction that are good enough for at least some people in the near future? Most likely.

NoGravitas•16h ago
I mean, 50 Shades was a bestseller, despite being poorly written Twilight fanfic with no editor. (And I say this as a fan of niche smut novels.) ChatGPT was trained on the entire contents of AO3, imitating mediocre fanfic is entirely within its capabilities now, given suitable herding.
satisfice•5h ago
Welcome to the new world. What was once bloody obvious now has to be spelled out: a random set of 50,000 dictionary words is, by your definition, a novel. It's fictional, also badly written and utterly incoherent. By my definition, it is not a novel. Also, you've ignored Altman's qualifier of "beautifully written."

Anything is good enough if your standards are low.

zw123456•1d ago
Can a human who is a sociopath express heartfelt condolences? Perhaps they can mimic words and phrases they know are the appropriate words for the occasion but lack the true emotion.
xg15•1d ago
> The rate of new wonders being achieved will be immense. It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year.

How about affordable housing?

margalabargala•1d ago
No, that's too complicated. Best we can do is space colonization.
treetalker•1d ago
I immediately envisioned the Pawn Stars meme. Bravo.
8f2ab37a-ed6c•1d ago
I wonder if the California High Speed Rail will be done by then.
layer8•1d ago
Affordable space colonization will solve that. ;)
the_gipsy•1d ago
Do you think we've run out of space to build or something?
layer8•1d ago
Think bigger. Earth will only be good for the next 2 billion years or so.
jlhawn•1d ago
okay, for now I'll just continue to live in a tent in some Berkeley NIMBY's backyard for $50/week while I pick up whatever small time gig-app contract jobs I can. You've convinced me to look at the bigger picture. Our generation really doesn't deserve to benefit from any solutions anyway.
Davidzheng•1d ago
Not sure if humanity will survive next 2 years necessarily
the_gipsy•1d ago
How will any of this solve "affordable housing".
hooverd•16h ago
Well, it won't, but it's fun to fantasize about! Plus, who cares about homeless people now when we're comparing them to potentially trillions of future humans.
hooverd•20h ago
how does that solve anything now?
XorNot•1d ago
Sort of? I wouldn't want to cover a ton of the planet's surface with low-rise suburbs, and the hinterland requirements for cities are vast.

At the very least, moving heavy extractive industries out of the biosphere would be a sensible allocation of resources: metal and energy are common in the solar system, but life-supporting ecosystems are not.

righthand•1d ago
Or healthcare?

(Or both)

jrvarela56•1d ago
You made it obvious: techno-utopic dreams only matter to us 1%-ers.

The average person on earth would love it if we could have readily available clean water, cheaper housing and food, and reliable healthcare, and could retire without worrying.

That would actually be blissful beyond our wildest dreams: everyone around you and beyond being able to have a peaceful life.

pj_mukh•1d ago
I have this dream of having 4-5 robotic construction workers* capable of tearing down a Single Family home and building a four-plex in its stead overnight.

Then, I just buy houses in high-COL areas and start popping these up and moving families in ASAP. What are they gonna do, evict working families? Good luck

Then dissolve the underlying corporation before the municipal fines kick in and escape into the night. Render municipal and neighborhood control laws meaningless.

*may not be humanoid, I'm skirting that debate here.

JadeNB•1d ago
> What are they gonna do evict working families? Good luck

This seems optimistic.

pj_mukh•1d ago
If a "professional tenant" with an adversarial landlord can hang on for years, I feel like my chances with no landlords and a clean deed are actually much better.

It's simply pitting the most dysfunctional municipal and judicial systems against each other, with the default outcome being people getting housed.

jonfromsf•1d ago
I don't understand this plan. Won't you lose all the money you spent buying the houses and building the quadplexes?
JadeNB•1d ago
I think that the plan isn't to make money, but to make real progress on solving housing availability.
pj_mukh•1d ago
Kind of, and the potentially low marginal cost of everything related to housing construction (due to AI and robotics) really creates a lot of space for creative business plans here.

The municipality can't take away the land (most of the value here), and putting the house on top will be relatively cheap, given the low total labor costs.

If the whole affair is revenue-neutral, that's a huge win.

capncleaver•1d ago
If you have not, do read Cory Doctorow's 'The Lost Cause': it features a rebuild like this (no robots tho iirc.) You might like his mid-singularity novel Walkaway, too :)
greenie_beans•20h ago
> What are they gonna do, evict working families? Good luck

they do this all the time!

chasing•1d ago
> How about affordable housing?

But space colonization is cool and poor people are yickie.

XorNot•1d ago
Our best shot at affordable housing died when government threw its lot in with C-suite feelings and moved to kill work from home.

It was the one genuine, serious potential improvement.

jlhawn•1d ago
If I were a centibillionaire and I really thought Georgism was a good idea, I would make it my life's work to prove it out by using my vast net-worth to buy land in existing urban areas and actually implement it at a large scale. But instead the closest thing we have is the California Forever thing in East Solano County.
mindwok•1d ago
Affordable housing is already easily possible from a technical perspective; it's a social problem. I have some hope that AI will help with this, by at the very least providing faster ways to make decisions, but that's about it.
AnthonyMouse•1d ago
It's not so much a social problem as a political problem. Zoning rules are applied at the local level, with the result that having a vote requires already living somewhere and then the rules reflect what people who already own a home there want (higher prices) rather than what people who would like to own a home there want (lower prices).

But technology often can solve political problems by shifting the balance of power. Come up with more ways to make it easier for people to live away from high cost of living areas, for example, so that governments with high housing costs start losing tax base to other jurisdictions.

insin•1d ago
Affordable yachts for me, something something for thee
paxys•1d ago
AI is not going to solve the world's political or social problems. Every person on the planet can have affordable housing, clean water, food, medicine and lots more today if people can collectively agree that we want it to happen. There are no equations standing in our way. Artificial intelligence is best applied to problems our brains cannot comprehend.
NoGravitas•16h ago
Healthcare pls
kristjank•1d ago
This reminds me of Pat Gelsinger quoting the Bible on Twitter, lol.

Between this and Ed Zitron at the other end of the spectrum, Ed's a lot more believable, to be honest.

absurdo•1d ago
Link for context (to what Ed Zitron said)?
benjaminclauss•1d ago
https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the...
ab5tract•1d ago
Excellent link, thank you for sharing this!
Mobius01•1d ago
> A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools.

"probably", "if they embrace the new tools". Hard to read anything but contempt for the role of humans in creative endeavors here, he advocates for quantity as a measure of success.

layer8•1d ago
I read that as “certainly you won’t have any chance if you don’t embrace our tools”.
stared•1d ago
Call me a cynic, but Gary Marcus and Sam Altman are the last people I want to read about AGI and related topics.

Gary has invested heavily in an anti-AI persona, continually forecasting AI Winters despite wave after wave of breakthroughs.

Sam, on the other hand, is not just an AI enthusiast; he speaks in a manner designed to build the brand, influence policy, and continuously boost OpenAI's valuation and consolidate its power. It's akin to asking the Pope whether Catholicism is true.

Of course, there might indeed be significant roadblocks ahead. It’s also possible that OpenAI might outpace its competitors—although, as of now, Gemini 2.5 Pro holds the lead. Nevertheless, whenever we listen to highly biased figures, we should always take their claims with a grain of salt.

ChrisMarshallNY•1d ago
> and not too concentrated with any person, company, or country.

That’s heresy, with this crowd. Everyone wants to be The Gatekeeper, and get filthy rich.

jfernandez•1d ago
What's the goal of posting this right now? A lot of what's written here seems to be well-trodden ground from the last two years of discussions, is it just to centralize a thesis within one post?
bluefirebrand•1d ago
It's damage control because of recent research papers being published saying that AI is not reasoning
abstractmath•1d ago
similar to how Anthropic released Claude 4 shortly after the Sam & Jony announcement.

it's super competitive, which should be good for innovation, but there's also significant incentive to use PR tactics to sell that innovation for much more than it's worth.

Sam's comments about how we're super close to AGI fall flatter than ever after the latest model releases (from all players) and the Apple paper confirming what everybody already knew.

XenophileJKO•1d ago
I hope everyone keeps believing this so the arbitrage opportunity is easier to exploit.
Workaccount2•1d ago
If you read the paper rather than headline, that was not the conclusion...
NitpickLawyer•1d ago
> because of recent research papers being published saying that AI is not reasoning

If you're thinking about the apple paper, you should know that its methodology was flawed in many ways, and their findings absolutely do not support the catchy title. But a lot of slop was generated because negativity + catchy title + apple = hits.

righthand•1d ago
This seems like keeping up appearances. And an attempt to renew the glassy eyed magic trick feeling of it all. Let’s all wallow in the glory of statistics, shall we?
theappsecguy•1d ago
They need to keep the hype machine spinning, because it's clear that fundamental progress has stalled: most updates are now micro-improvements, and new releases are mostly wrapper tools built around models we've had for a while. But call them agents and the hype keeps on.
yencabulator•6h ago
Existential crisis of the financial kind means OpenAI cannot stop the hype or it will go bankrupt. This is just more of the ongoing hype-based marketing.

https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the...

muglug•1d ago
I’ve heard a joke that under Steve Jobs, the number of big initiatives at Apple was limited to the number of execs Steve could shout at in a given day.

I see a similar thing at work — the number of projects a developer can get through isn’t bounded by the lines of code they can churn out in a day. Instead it’s bounded by their appetite for getting shouted at when something goes wrong.

Until you can shout at an LLM, I don’t think they’re going to replace humans in the workplace.

1penny42cents•1d ago
everyone talks about the alignment problem, not nearly enough about the accountability problem
Analemma_•1d ago
What problem? The whole point of shareholder capitalism is to diffuse accountability across so many people that it ceases to exist: the employee points to management, management points to executives, executives point to the board, the board points to the shareholders, the shareholders are index funds managed by robots so "nobody" is to blame. The final phase of the administrative state works much the same way.

Viewed through that lens, LLMs and their total lack of accountability are a perfect match for the modern-day business; that's probably part of why so many executives are hell-bent on putting them everywhere as quickly as possible.

haswell•1d ago
> What problem?

I think you answered this with the rest of your comment.

evanmoran•1d ago
This is intentional and is called a Rhetorical Question: https://en.m.wikipedia.org/wiki/Rhetorical_question
haswell•19h ago
I’m familiar with the concept of a rhetorical question.

It was not obvious to me that this was the author’s intent.

munificent•1d ago
If I've learned anything from watching politics the past decade, it's that many important people in power consider lack of accountability a solution, not a problem.
tho234i23j4234•1d ago
Technically that is "RLHF" - but RL doesn't really work (except in a very shallow way).

Our epistemological standards are so shitty and so pathetically poor, that it's hardly surprising that we're fooled by LLMs much as we are by academic bullshit artists.

falcor84•23h ago
What do you mean? I've been productively shouting at LLMs (particularly via ChatGPT's advanced voice mode) for quite a while now, and they're reasonably receptive to my shouting. More recently, I've been (textually) shouting at OpenAI's async Codex agents, and they're quite receptive too.

We're obviously not quite there yet, but I don't see any inherent limitation to human managers' ability to shout at AI agents. I'll go even a step further and say that knowing that the AI doesn't actually have feelings that I can affect, and that it has limited context helps make my shouting more productive, focusing on the task at hand, rather than on its general faults, as I might when I'm angry at a human.

chasing•1d ago
> A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools.

"Hey, it'd be a shame if somethin', uh, happened to that nice bit of expertise ya got there, y'know. A darn shame."

daxfohl•1d ago
> although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly

Famous last words.

munificent•1d ago
"Some of you may die, but it's a sacrifice I am willing to make."
afavour•1d ago
Yes, in a world where populations increasingly ask an AI any question they have and believe whatever it says the opportunity for “things going really wrong” feels huge. To brush that aside so easily… well I wish I could say it surprises me.
mjd•1d ago
“So far, so good,” the man said as he fell past the twentieth floor of the Empire State Building.
bigyabai•17h ago
"I see!" Said the blind man, with his hammer and saw.
wolecki•1d ago
Some reasoning tokens on this post:

>Intelligence too cheap to meter is well within grasp

And also:

>cost of intelligence should eventually converge to near the cost of electricity.

Which is a meter-worthy resource. So intelligence's effect on people's lives is on the order of magnitude of one second of toaster use each day, in present value. This begs the question: what could you do with a toaster-second, say, 5 years from today?
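
For scale, a quick back-of-envelope on what a toaster-second actually costs (a sketch assuming a ~1 kW toaster and ~$0.15/kWh retail electricity; both figures are my assumptions):

    TOASTER_WATTS = 1_000          # assumed: a typical toaster draws ~1 kW
    PRICE_PER_KWH = 0.15           # assumed: retail electricity, USD/kWh

    joules = TOASTER_WATTS * 1     # one second of use = 1,000 J
    kwh = joules / 3.6e6           # 1 kWh = 3.6 MJ
    print(f"{kwh:.7f} kWh, about ${kwh * PRICE_PER_KWH:.7f} per toaster-second")
    # -> 0.0002778 kWh, about $0.0000417 per toaster-second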

GarnetFloride•1d ago
That's what they said about electricity when nuclear power plants were proposed. What's your electricity bill like today?
AnthonyMouse•1d ago
The primary operating cost of traditional power plants is fuel, i.e. coal and natural gas. The fuel cost for nuclear power plants is negligible because the energy content is higher by more than a factor of a million. So if you build enough nuclear plants to power the grid, charging per kWh for the electricity is pointless because the marginal cost of the fuel is so low. Meanwhile the construction cost should be on par with a coal plant, since the operating mechanism (heat -> steam -> electricity) is basically the same.

Unsurprisingly, this scared the crap out of the fossil fuel industry in the US and countries like Russia that are net exporters of fossil fuels, so they've spent decades lobbying to bind nuclear plant construction up in red tape to prevent them being built and funding anti-nuclear propaganda.

You can see a lot of the same attempts being made with AI, proposals to ban it or regulate it etc., or make sure only large organizations have it.

But the difference is that power plants are inherently local. If the US makes it uneconomical to build new nuclear plants, US utility customers can't easily get electricity from power plants in China. That isn't really how it works with AI.
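
For the "factor of a million" claim, a rough sketch using standard ballpark energy densities (textbook figures, not from the comment above):

    COAL_J_PER_KG = 2.4e7    # ~24 MJ/kg, typical bituminous coal (ballpark)
    U235_J_PER_KG = 8.2e13   # ~82 TJ/kg from complete fission of U-235 (ballpark)

    print(f"fission/coal energy density ratio: {U235_J_PER_KG / COAL_J_PER_KG:.1e}")
    # -> ~3.4e+06, i.e. "more than a factor of a million"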

bruce511•1d ago
Your point is not wrong, but I'd clarify a couple of things:

The primary cost of a traditional plant is fuel. However a nuclear plant needs a tad more (qualified) oversight than a coal plant.

In the same way, the navy needs specialists to run nuclear propulsion systems, versus the crew needed for diesel engines. Not surprisingly, nuclear propulsion costs "more than fuel".

That cost may end up being insignificant in the long run, but cost is not zero. And shortages of staff would matter (like it does with ATC at the moment.)

Construction cost should be much lower than it is, but I don't think it'll be as cheap as say coal or diesel. The nature of the fuel would always require some extras, if only because the penalty-for-failure is so high. There's a difference between a coal-plant catastrophe and Chernobyl.

So there are costs for running nuclear; I don't think it necessarily gets "too cheap to meter".

AnthonyMouse•1d ago
There are some costs that a coal plant wouldn't have, but those aren't really variable costs. If the plant's capacity is 1000MW and the grid is only demanding 500 MW right now, you don't get to send home any nuclear engineers, so what's the point in pricing kWh to discourage anyone from using more? If you build a 5GW plant instead of a 500MW plant, you don't need 10 times as many engineers just because this one's pipes have a larger diameter, so it's more economical to build larger plants, but the only times it makes sense to charge per kWh are when demand exceeds capacity and then those times become rare.

Solar is on its way to doing something incredibly disruptive because it's the same "too cheap to meter" but only when the sun is shining, and then you still need the independent capacity to supply power from something else when it isn't. So now instead of "you pay ~$0.12/kWh all the time" you have a situation where power during sunshine hours is basically free, but power at other times costs dramatically more than it used to, because the infrastructure to supply that power has to recover its costs over significantly less usage.
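
The cost-recovery effect is easy to see with toy numbers (all of these are assumptions, just to illustrate the shape of the problem):

    FIXED_COST = 100e6                  # assumed: $/year to keep a plant available
    always_on_kwh = 1e6 * 8760          # assumed: 1 GW sold around the clock (kWh/yr)
    residual_kwh = always_on_kwh * 0.4  # assumed: solar displaces 60% of sales

    print(FIXED_COST / always_on_kwh)   # ~$0.011/kWh
    print(FIXED_COST / residual_kwh)    # ~$0.029/kWh: same plant, ~2.5x the price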

derefr•1d ago
> So if you build enough nuclear plants to power the grid, charging per kWh for the electricity is pointless because the marginal cost of the fuel is so low.

Wouldn't there still be the OpEx of maintaining the power plants + power infrastructure (long-distance HVDC lines, transformer substations, etc)? That isn't negligible.

...I say that, but I do note that I live in a coastal city with a temperate climate near a mountain watershed, and I literally do have unmetered water. I suppose I pay for the OpEx of the infrastructure with my taxes.

UltraSane•1d ago
It would actually make more sense to charge by the max watts a customer can draw.
AnthonyMouse•1d ago
> Wouldn't there still be the OpEx of maintaining the power plants + power infrastructure (long-distance HVDC lines, transformer substations, etc)?

These are fixed costs. They don't go down if people use less power, so the sensible way to pay for them is with some kind of fixed monthly connection charge (e.g. proportional to the size of your service) or via taxes. You're still not measuring how much power people use at any given time.

derefr•14h ago
Seems like the largest industrial customers would still need metered usage, though.

There are quite a few businesses that can scale limited only by power consumption — but due to that, they today need connections massively overbuilt compared to their current usage (as they project their usage growth to be extremely fast, potentially as much as doubling each year) in order to not need to be constantly re-trenching connections or browning out. Which in turn means that, under a leased-line electrical system, they'd be massively overpaying for unused power capacity at all times — possibly to the point of being unprofitable.

To achieve profitability, they'd need to negotiate billing based on only the fraction of the capacity of the available grid capacity that they're actually demanding on any given day... or, in other words, metered billing.

UltraSane•1d ago
I always thought with nuclear electricity it would actually make more sense to charge by the max watts a customer can draw.
hn_throwaway_99•1d ago
> Unsurprisingly, this scared the crap out of the fossil fuel industry in the US and countries like Russia that are net exporters of fossil fuels, so they've spent decades lobbying to bind nuclear plant construction up in red tape to prevent them being built and funding anti-nuclear propaganda.

This is frankly nonsense, and my hope is that this nonsense is coming from a person too young to remember the real, valid fears from disasters like 3 Mile Island, Chernobyl, and Fukushima.

Yes, I fully understand that over the long term coal plants cause many more deaths, and of course climate change is an order of magnitude worse, eventually. The issue is that human fears aren't based so much on "the long term" or "eventualities". When nuclear power plants failed, they had the unfortunate side effect of making hundreds or thousands of square miles uninhabitable for generations. The fact that societies demand heavy regulation for something that can fail so spectacularly isn't some underhanded conspiracy.

Just look at France, the country with probably the most successful wide-scale deployment of nuclear. They are rightfully proud of their nuclear industry there, but they are not a huge country (significantly smaller than Texas), and thus understand the importance of regulation to prevent any disasters. Electricity there is relatively cheap compared to the rest of Western Europe but still considerably higher than the US average.

trashtester•1d ago
The fears from 3 Mile Island and Fukushima were almost completely irrational. The death toll from those was too low to measure.

And the fears from Chernobyl were MOSTLY irrational.

The reason for the extreme fears that are generated from even very moderate spills from nuclear plants comes in part from the association with nuclear bombs and in part from fear of the unknown.

A lot of (if not most) people shut off their rational thinking when the word "nuclear" is used, even those who SHOULD understand that a lot more people die from coal and gas plants EVERY YEAR than have died from nuclear energy throughout history.

Indeed, the safety level at Chernobyl may have been atrocious. But so was the coal industry in the USSR. Indeed, even considering just the USSR, coal alone caused a similar number of deaths (or a bit more) EVERY YEAR as Chernobyl caused in total [1].

[1] https://www.science.org/doi/pdf/10.1126/science.238.4823.11....

roenxi•1d ago
I think you're underselling it - the atrocious safety level at Chernobyl appears to still be an improvement on the coal industry held to a high standard. It is a horrible irony that the environmentalist movement managed to do such incredible damage to the environment by their enthusiastic attacks on nuclear.
gengwyn•19h ago
The tragedy of Chernobyl is it being seen as a failure of nuclear energy, rather than a failure of the Soviet government.
jart•22h ago
If you want to see what a thousand square mile uninhabitable wasteland looks like, here's a YouTube video of some guys swimming in a pool underneath the Chernobyl reactor for fun: https://www.youtube.com/watch?v=WOughghZ8To

I'm so tired of hearing about how regulation is this magic salve that saves everything. Regulation is what caused Chernobyl. Soviet regulations mandated that the flawed RBMK reactor design be used. They knew it was flawed and they forced people to use it anyway. Because that's what government does. There's a similar story in western countries, where it hasn't been feasible to use better designs due to antiquated government regulations, it's just no one here has screwed up as badly as the Soviets did.

Joeri•1d ago
I don't buy into the "red tape" argument. For me it's all due to a lack of a business case. I think building (LWR) nuclear plants is just freaking expensive, has always been freaking expensive, was bankrolled for a while by governments because they thought it would get cheaper, and then largely abandoned in most countries when it proved to be an engineering dead end.

Here's an MIT study that dug into the reasons behind high nuclear construction cost. They found that regulatory burdens were only responsible for 30% of cost increases. Most of the cost overruns were because of needing to adapt the design of the nuclear plant to the site.

https://news.mit.edu/2020/reasons-nuclear-overruns-1118

Now, you can criticize the methodology of that study, but then you have to bring your own study that shows precisely which regulatory burdens are causing these cost overruns, and which of those are in excess. Is it in excess that we have strict workplace safety regulation now? Is it in excess that we demand reactor containment vessels to prevent meltdowns from contaminating ground water supplies? In order to make a good red tape argument I expect detail in what is excess regulation, and I've never seen that.

Besides, if "red tape" and fossil industry marketing was really the cause of unpopularity, and the business case for nuclear was real when installing plants at scale, you would see Russia and China have the majority of their electricity production from nuclear power.

- Russia is the most pro-nuclear country in the world, and even they didn't get past 20% of electricity share. They claim levelized cost for nuclear on the order of that of solar and wind, but I am very skeptical of that number, and anyone who knows anything about the Russian government's relation to the truth will understand why. When they build nuclear plants in other countries (e.g. the Bangladesh project) they are not that cheap.

- China sits at low single digit percentages of nuclear share, with a levelized cost that is significantly higher than Russia's and well above that of solar and wind. While they're planning to grow the nuclear share they assume it will be based on new nuclear technology that changes the business case.

Both Russia and China can rely on cheap skilled labor to bring down costs, a luxury western countries do not have.

And this is ultimately the issue: the nuclear industry has been promising designs that bring down costs for over half a century, and they have never delivered on that promise. It's all smoke and mirrors distracting from the fact that building nuclear plants is inherently freaking expensive. Maybe AI can help us design better and cheaper nuclear power plants, but as of today there is no proven nuclear plant design that is economical to build, and that is ultimately why you see so little new nuclear plant construction in the west.

solarwindy•23h ago
> Russia is the most pro-nuclear country in the world

Not France?

> France derives about 70% of its electricity from nuclear energy, due to a long-standing policy based on energy security.

— https://world-nuclear.org/information-library/country-profil...

tim333•19h ago
Russia likes nukes so they can threaten to nuke everyone.
roenxi•1d ago
It is worth noting that the effect is jaw-droppingly stark. The regulators managed to invert the learning curve [0] so the more power plants get built the more expensive it gets! It is one of the most stunning failures of an industrial society in the modern era; the damage this did to us all is huge. It is disheartening that our leadership/society chose to turn their backs on the future and we're all lucky that the Chinese chose a different tack to the West's policy of energy failure. At least there are still people who believe in industry.

[0] https://www.mdpi.com/1996-1073/10/12/2169 - Fig 3

myrmidon•23h ago
> The regulators managed to invert the learning curve [0]

This is conjecture. If you wanted to establish this, you would have to show that cost of (skilled) labor was unchanging or negligible.

It is also important to consider that nuclear power deaths/damages are much more localized and traceable than excess deaths from air pollution, and thus much less acceptable to the voting population-- you could argue that this should not make any difference (I disagree), but I don't want to digress here too much.

> we're all lucky that the Chinese chose a different tack to the West's policy of energy failure

What do you believe that is? Because from my point of view, China generates a negligible amount of electricity from nuclear power (<5%), this is not going to change within the next decades, and the main "purpose" from what I can tell is to in-house reactor/turbine know-how (instead of relying on Alstom/Siemens).

roenxi•22h ago
Those are constant dollars. Are you claiming that the real cost of labour went up 4- to 8-fold in the nuclear industry? Why did that happen? The median nuclear plant construction worker would be making $240k/annum type wages. And as I recall I've not heard of that sort of wage rise outside a regulatory failure or somewhere like China undergoing a massive economic boom.

> It is also important to consider that nuclear power deaths/damages...

Maybe you can answer this for me - what deaths and damages? So far I've never been able to pin down any actual death or damage to a nuclear meltdown. I'm sure there are some, but most of the actual attempts to quantify it require appealing to hypothetical deaths and damages that no-one can specifically point to, or tiny numbers that are irrelevant to industrial policy.

I know people who lived in a town next to a lead-zinc mine. That appears to be about as bad as a nuclear crisis from what I can gather and it doesn't seem to be causing anyone undue stress. We're still using lead and zinc. People still live in the town.

> What do you believe that is?

They're building reactors. https://en.wikipedia.org/wiki/List_of_commercial_nuclear_rea... is a happy tale of new and planned plants.

Some of them are really cool too, there is one by the Gobi desert, apparently to prove that they don't need to use water as a coolant.

myrmidon•1h ago
Direct death counts from nuclear meltdowns might be rather low, but the damage is clearly that entire cities need to be evacuated (and stay uninhabitable for decades). Fukushima alone cost the Japanese taxpayer close to $200 billion.

> Those are constant dollars. Are you claiming that the real cost of labour went up 4- to 8-fold in the nuclear industry?

I'm not saying that is the only effect. But if you want to blame "onerous regulations" for nuclear power being so expensive, then you have to at least put bounds on other factors driving cost (and you also have to show those relaxed regulations would not have led to a significant amount of additional incidents; the current rate of 2 meltdowns for ~500ish reactor sites is already problematic enough).

I personally think that collective standards regarding risk and pollution have risen significantly since the early days of nuclear power, and a lot of things that were done back then would be considered unacceptably reckless/negligent nowadays (=> Hanford became a superfund site for a reason), so claiming that those historical costs are repeatable seems very dubious in the first place to me.

roenxi•1h ago
> Direct death counts from nuclear meltdowns might be rather low, but the damage is clearly that entire cities need to be evacuated (and stay uninhabitable for decades). Fukushima alone cost the Japanese taxpayer close to $200 billion.

Ok, so say they don't evacuate the city. What is the actual risk here that we're talking about?

> I'm not saying that is the only effect.

What other effect are you considering? The materials aren't getting that much more expensive, and the labour will cost about the same. There aren't a lot of options left apart from regulatory changes.

> I personally think that collective standards regarding risk and pollution have risen significantly...

Then do you potentially think that the reason the price is rising is because of regulation? Because this whole paragraph reads like a justification for the costs incurred due to regulation. I'm not seeing your complaint about pinning the inverted learning curve on the regulators - in this paragraph you seem to be saying that is exactly what happened, and that it was reasonable.

squidbeak•1d ago
Nuclear plants have other long-tail costs: decommissioning and waste containment.
MagnumOpus•1d ago
So do coal plants. Interestingly, approximately nobody seems to be worried about the thousands of coal ash ponds leaching radioactive material, toxic heavy metals and teratogenic compounds into ground water on a regular basis - while storing spent fuel elements in a salt mine thousands of miles away engenders outrage…
yencabulator•7h ago
I have never worried about a coal plant in Ukraine. I lived at a place that was potentially downwind of Chernobyl and inside the projected bad-things-happening plume. The scale of a nuclear disaster well exceeds the scale of a coal plant disaster.

Many things are cheap when you ignore externalities.

danw1979•1d ago
The thing is, nuclear was never on such a steep learning curve as solar and batteries are today.

It’ll never be too cheap to meter, but electricity will get much cheaper over the coming decades, and so will synthetic hydrocarbons on the back of it.

TheOtherHobbes•1d ago
Your electricity bill is set by the grift of archaic fossil energy industries. And nuclear qualifies as a fossil industry because it's still essentially digging ancient stuff out of the ground, moving it around the world, and burning it in huge dirty machines constructed at vast expense.

There are better options, and at scale they're capable of producing electricity that literally is too cheap to meter.

The reasons they haven't been built at scale are purely political.

Today's AI is computing's equivalent of nuclear energy - clumsy, centralised, crude, industrial, extractive, and massively overhyped and overpriced.

Real AI would be the step after that - distributed, decentralised, reliable, collaborative, free in all senses of the word.

tim333•19h ago
My and/or my family's electricity bills have never been near zero. On the other hand my AI bill is zero. I think different economics apply.

(that excludes a brief period when I camped with a solar panel)

greenie_beans•1d ago
watch the cost of electricity go up because of the demand created by data centers. i'm building an off-grid solar system right now and it ain't cheap! thinking about a future where consumers are competing with data centers for electricity makes me think today's prices might feel cheap, though.
unstablediffusi•1d ago
bro, how much did your electricity cost go up because of millions of people playing games on their 500w+ gpus? by a billion of people watching youtube? by hundreds of millions of women and children scrolling instagram and tiktok 8 hours a day?
greenie_beans•20h ago
this is not the same thing and all the AI CEOs would agree with me. very absurd statement to try to use for your argument.

but here is some data bro! https://fred.stlouisfed.org/series/APU000072610

nhdjd•1d ago
Fusion will show up soon, don't worry. AI will accelerate its arrival.
greenie_beans•20h ago
good luck w that
NoGravitas•17h ago
Instead of always being 30 years in the future, it's now always 15 years in the future!
antihero•1d ago
Datacentre operators want to keep energy costs down and also have capital to make it happen.
greenie_beans•20h ago
weird i didn't know they could build nuclear power plants (surely you are being sarcastic?)

also this is weird i thought electricity prices only get cheaper? https://fred.stlouisfed.org/series/APU000072610

skeaker•11h ago
Yes, tech corps actually have been investing in nuclear. See some recent discussion:

Microsoft: https://news.ycombinator.com/item?id=41601443

Google: https://news.ycombinator.com/item?id=41840769

greenie_beans•10h ago
would be great if ai forces better energy. but

> The deal would help enable a revival of Unit 1 of the five-decades-old facility in Pennsylvania that was retired in 2019 due to economic reasons.

i wonder why they stopped producing energy in 2019 even though energy prices have gone up over the five decades?

jes5199•1d ago
I’m not sure we’ll be metering electricity if Wright’s Law continues
joshjob42•1h ago
Well, Altman is also investing in Helion, which projects to get the price of electricity to ~$10/MWh, but for which, much like solar, wind, and actual nuclear, the cost structure is overwhelmingly dominated by capital and other non-varying costs (the cost of uranium or Helion's fuel will be negligible vs capital and manpower). So there's actually a pretty good reason to think long-term electricity will be marginally so cheap that it isn't metered but instead basically bought in chunks of capacity or availability.

Another way for intelligence to get too cheap to meter is for the cost to fall so low it becomes hyperabundant. If you were to, for instance, take AI2027 as a benchmark and think ultimately we'll achieve something like the equivalent of John von Neumann in a box with a 2T dense-equivalent parameter model, and it will match such a Nobel prize winner's productivity when running inference at say 15 tokens a second (as fast as people can read), then you only need in principle 60 teraflops of AI inference compute, which is roughly 2x the current Apple Neural Engine. So plausibly by the time you get to the 2030s, every laptop, smartphone, etc. will easily be able to run models as powerful as the smartest people.

Somewhat longer term, I'm sure Altman expects the entire process to be automated and the computational efficiency to rise significantly. If you take recent estimates from various players in the reversible computing space, you'd guesstimate that you ought to be able to do 60 TFLOPS by the late 2030s using under 0.1W, or ~1 kWh/yr, which Helion could produce for ~1¢. I do feel like a year of cognitive labor from the smartest person for a penny or two renders intelligence too cheap to meter out on a per-hour basis.
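
As a sanity check on those figures, using the common rule of thumb that dense-transformer inference costs roughly 2 FLOPs per parameter per token (the premises are the comment's; the rule of thumb is a standard approximation):

    PARAMS = 2e12          # comment's premise: 2T dense-equivalent parameters
    TOKENS_PER_S = 15      # comment's premise: reading-speed generation

    flops = 2 * PARAMS * TOKENS_PER_S   # ~2 FLOPs per param per token
    print(f"{flops:.0e} FLOP/s")        # -> 6e+13, i.e. the 60 teraflops above

    WATTS = 0.1                         # comment's reversible-computing guess
    kwh_per_year = WATTS * 8760 / 1000  # ~0.88 kWh/yr of continuous operation
    print(kwh_per_year * 0.01)          # at ~$10/MWh (1 cent/kWh): ~$0.009/yr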

zafka•1d ago
Right before I came to this article, I watched a video about AlphaFold determining the structure of almost all proteins, and the mention of other programs working on other kinds of crystal structures (magnets, superconductors). I am starting to feel the acceleration. I wish I felt more hopeful.
BjoernKW•1d ago
The title is a likely nod to The Gentle Seduction by Marc Stiegler: http://www.skyhunter.com/marcs/GentleSeduction.html
ryandv•1d ago
> We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.

Not sure if wishful thinking trying to LARP-manifest this future into being, or just more unfalsifiable thinking where we can always be said to be past the event horizon and near to the singularity, given sufficiently underwhelming definitions of "event horizon" and "nearness."

ixtli•1d ago
The phrase for this is “high on your own supply.” These guys are embarrassing, frankly.
actuallyalys•1d ago
Agreed. I realize it’s a metaphor but saying we’re so close to the singularity that it’s guaranteed to happen (the event horizon being a point of no return) seems like wild hubris.
derektank•1d ago
Especially considering the global instability we're facing right now. The World Bank released new estimates of US GDP growth in 2025, cutting it from roughly 2.5 to 1.5 percent. I don't think most people would predict the singularity beginning with less economic growth than we saw in the 80s
nyc_data_geek1•1d ago
He is purely a hype man, and many people are too gullible/invested/dumb to see it, call him on it, or care.
anothermathbozo•1d ago
What difference does that make at this stage of things?
orionsbelt•1d ago
It all sounds probably correct to me.

Of course, progress could stall out, but we appear to have sufficient compute to do anything a human brain can do, and in areas, AIs are already far better than humans. With the amount of brain power working in, and capital pouring into, this area, including work on improving algorithms, I think this essay is fundamentally correct that the takeoff has started.

IAmGraydon•1d ago
> in areas, AIs are already far better than humans.

You could say the same thing about a CPU from 40 years ago - they can do math far better than humans. The problem is that there are some very simple problems that LLMs can’t seem to reliably solve that a child easily could, and this shows that there likely isn’t actual intelligence going on under the hood.

I think LLMs are more like human simulators rather than actual intelligent agents. In other words, they can’t know or extrapolate more knowledge than the source material gives them, meaning they could never become more intelligent than humans. They’re like a very efficient search engine of existing human knowledge.

Can anyone give me an example of any true breakthrough that was generated by an LLM?

Davidzheng•1d ago
They could become better than humans because there's an RL training loop? I don't understand how this isn't directly clear. Even training purely on human data it's possible to be mildly superhuman (see experiments with chess AIs trained on human data only), but once you have verifiable tasks and an RL loop, the human data is just a kickstart
orionsbelt•1d ago
You might be correct about LLMs. Let's say that you are.

40 years ago we were clearly compute bound. Today, I think it's fairly clear we are not; if there is anything a human can do that an AI can't, it's because we lack the algorithms, not the compute.

So the question becomes, now that we have sufficient compute capacity, how long do you think it will take the army of intelligent creative humans (comp sci PhDs, and now accelerate by AI assistance) to develop the algorithmic improvements to take AI from LLMs to something human level?

Nobody knows the answer to the above, and I could be very wrong, but my money would bet on it being <30 years, if not dramatically sooner (my money is on under 10).

It seems to me like the building blocks are all here. Computers can now see, process scenes in real time, move through the world as robots, speak and converse with humans in real time, use tools, create images (imagine?), and so forth. Work is continuing to give LLMs memory, expanded context, and other improvements. As those areas all get improved on, tied together, recursively improved, etc., at some point I think it will be hard to argue it is not intelligence.

Where we are with LLMs is Kitty Hawk. The world now knows that flight (true human level intelligence) is possible and within reach, and I strongly believe the progress from here on out will continue to be rapid and extreme.

NoGravitas•16h ago
> So the question becomes, now that we have sufficient compute capacity, how long do you think it will take the army of intelligent creative humans (comp sci PhDs, and now accelerate by AI assistance) to develop the algorithmic improvements to take AI from LLMs to something human level?

This assumes that the eventual breakthroughs start from something like LLMs. It's just as likely or more that LLMs are an evolutionary dead end or wrong turn, and whatever leads to AGI is completely unrelated. I agree that we are no longer compute bound, but that doesn't say anything about any of the other requirements.

Davidzheng•1d ago
We're definitely past the EH. I doubt even if we realize the real risk is extremely high we could stop the train, due to economic pressures
Briannaj•1d ago
I could not take the article seriously after reading that.
insin•1d ago
I get the feeling the first paragraph was intended to make people who immediately went "this is absolute horseshit" stop reading, in the same way an obvious spam email does
candlemas•1d ago
Will the period between whole classes of jobs going away and a new social contract be short enough so that a Butlerian Jihad doesn't kick off?
paxys•1d ago
We've somehow had a calm few months in the industry without the daily hyperbole on AGI, superintelligence and the upcoming singularity. After the string of underwhelming incremental updates from all the top LLM publishers I guess they need to start the hype cycle again.
Davidzheng•1d ago
Underwhelming?! We're probably a year out from AI-led pure math research and full AI SWEs. I think people forget what it was like 1-2 years ago somehow
absurdo•1d ago
Speaking of hyperbole…
margalabargala•1d ago
If the last 12 months are anything to go by, one year from now the "full AI SWEs" will be able to write all the code you probably should have used a library for anyway, and continue to be unable to do anything halfway novel without bugs.
sponnath•1d ago
We are now many years into "full AI SWEs will become a reality in 12 months."
meroes•1d ago
What? AI proofs are still atrocious.
IAmGraydon•1d ago
Strangely enough, the hype cycle seems to run in lockstep with their funding rounds.
spacecadet•1d ago
Reminds me of playing https://apps.apple.com/us/app/artificial-superintelligence/i...
seydor•1d ago
ok ok, i m holding my breath!

"Rich" is a relative term, the existence of the rich requires the existence of the poor, and according to the article the rich will get richer much faster than the poor. There's nothing gentle about this singularity

tartoran•13h ago
The veil is gently drawn over the minds of the masses.
egypturnash•1d ago
You can fuck right off with your "gentle singularity" until you start sharing your profits with everyone whose work you ripped off, and are continuing to rip off.
physix•1d ago
I started quickly reading the article without reading who actually wrote it. As I scanned over the things being said, I started to ask myself: Who wrote this? It's probably some AI proponent, someone who has a vested interest. I had to smile when I saw who it was.
thrwwy_jhdkqsdj•1d ago
I did the same thing, I thought "This post looks like a posthumous letter".

I hope LLM use will drive efforts for testing and overall quality processes up. If such a thing as an AGI ever exists, we'll still need output testing.

To me it does not matter if the person doing something for you is smarter than you; if it's not well specified and tested, it is as good as a guess.

Can't wait for the AI that is almost unusable for someone without a defined problem.

TheAceOfHearts•1d ago
Do you think we will get AI models capable of learning in real time, using a small number of examples similar to humans, in the next few years? This seems like a key barrier to AGI.

More broadly, I wonder how many key insights he thinks are actually missing for AGI or ASI. This article suggests that we've already cleared the major hurdles, but I think there are still some major keys missing. Overall his predictions seem like fairly safe bets, but they don't necessarily suggest superintelligence as I expect most people would define the term.

paradox242•1d ago
This is a condensed version of Altman's greatest hits when it comes to his pitch for the promise of AI as he (allegedly) conceives it, and in that sense it is nothing new. What is conspicuous is that there is a not-so-subtle reframing. No longer is AGI just around the corner; instead one gets the sense that OpenAI has already looked around that corner and seen nothing there. No, this is one of what I expect will be many more public statements intended to cool things down a bit, and to reframe (investor) expectations that the timelines are going to be longer than were previously implied.
throw310822•1d ago
Cool things down a bit? That's what you call "we're already in the accelerating part of the singularity, past the event horizon, the future progress curve looks vertical, the past one looks flat"? :D
fyrn_•1d ago
For Sam Altman, honestly yes. The man has said some truly wild things for the VC
le-mark•22h ago
If any of that were true, then the LLMs would be actively involved in advancing themselves, or assisting humans in a meaningful way in the endeavor, which they're not so far as I can tell.
thegeomaster•1d ago
I hate to enter this discussion, but learning based on a small number of examples is called few-shot learning, and is something that GPT-3 could already do. It was considered a major breakthrough at the time. The fact that we call gradient descent "learning" doesn't mean that what happens with a well-placed prompt is not "learning" in the colloquial sense. Try it: you can teach today's frontier reasoner models to do fairly complex domain-specific tasks with light guidance and a few examples. It's what prompt engineering is about. I think you might be making a distinction on the complexity of the tasks, which is totally fine, but needs to be spelled out more precisely IMO.
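
For readers unfamiliar with the term: few-shot prompting just means putting worked examples in the prompt, with no gradient updates. A minimal sketch (the complete() function is a hypothetical stand-in for whatever LLM API you use; the task and labels are made up for illustration):

    # Few-shot "learning": the examples teach the task inside the prompt;
    # no weights change. complete() is a hypothetical placeholder.
    PROMPT = """Classify each support ticket's urgency.

    Ticket: "The login page has a typo in the footer."
    Urgency: low

    Ticket: "Production database is down for all customers."
    Urgency: critical

    Ticket: "Checkout intermittently fails for ~5% of users."
    Urgency:"""

    def complete(prompt: str) -> str:
        raise NotImplementedError("wire up your LLM provider here")

    # complete(PROMPT)  # a capable model continues with something like "high"
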
hdjdbdirbrbtv•1d ago
Are you talking about teaching in the context window or fine tuning?

If it is the context window, then you are limited to the size of said window and everything is lost on the next run.

Learning is memory; what you are describing is an LLM being the main character in the movie Memento, i.e. no long-term memories past what was trained in the last training run.

thegeomaster•8h ago
There's really no defensible way to call one "learning" and the other not. You can carry a half-full context window (aka prompt) with you at all times. Maybe you can't learn many things at once this way (though you might be surprised what knowledge can be densely stored in 1m tokens), but it definitely fits the GP's definition of (1) real-time and (2) based on a few examples.
woopsn•1d ago
Artificial intelligence is a nourished and well educated population. Plus some Adderall maybe. Those are the key insights which represent the only scientific basis for that term.

The crazy thing is that a well-crafted language model is a great product. A man should be content to say "my company did something akin to compressing the whole internet behind a single API" and take his just rewards. Why sully that reputation boasting to have invented a singularity that solves every (economically registerable) problem on Earth?

hoseja•1d ago
Such is the nature of the scorpion.
ixtli•1d ago
Because they promised the latter to the investors and due to the nature of capitalism they can never admit they’ve done all they can.
crazylogger•1d ago
What you described can be (and is being) achieved by agentic systems like Claude Code. When you give it a task, it knows to learn best practices on the web, find out what other devs are doing in your codebase, and it adapts. And it condenses + persists its learnings in CLAUDE.md files.

Which underlying LLM powers your agent system doesn't matter. In fact you can swap them for any state-of-the-art model you like, or even point Cursor to your self-hosted LLM API.

So in a sense every advanced model today is AGI. We were already past the AGI "singularity" back in 2023 with GPT4. What we're going through now is a maybe-decades-long process of integrating AGI into each corner of society.

It's purely an interface problem. Coding agent products hook the LLM to the real world with [web_search, exec_command, read_file, write_file, delegate_subtask, ...] tools. Other professions may require vastly more complicated interfaces (such as "attend_meeting"); it takes more engineering effort, sure, but 100% those interfaces will be built at some point in the coming years.
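
A minimal sketch of the loop such agent products run (the llm() function and the message format are hypothetical placeholders; real products add sandboxing, context management, and error handling):

    # Bare-bones agent loop: the model picks a tool, we execute it and feed
    # the result back, until it returns an answer. All names are placeholders.
    import json, subprocess

    def read_file(path): return open(path).read()
    def write_file(path, text): open(path, "w").write(text)
    def exec_command(cmd):
        return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

    TOOLS = {"read_file": read_file, "write_file": write_file,
             "exec_command": exec_command}

    def llm(messages):  # hypothetical stand-in for a tool-calling model API
        raise NotImplementedError

    def run_agent(task, max_steps=20):
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            step = llm(messages)  # expect {"tool": ..., "args": ...} or {"answer": ...}
            if "answer" in step:
                return step["answer"]
            result = TOOLS[step["tool"]](**step["args"])
            messages.append({"role": "tool", "content": json.dumps(str(result))})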

tim333•19h ago
AlphaZero learned various board games from scratch up to better than human levels. I guess in principle that sort of algorithm could be generalized to other things?
woopsn•1d ago
The "fundamental limiter of human progress" is not energy/intelligence lol. No - surprise, it's people. Bear in mind he builds this "gentle singularity" in the great progressive UAE.
waldohatesyou•1d ago
How is it not? In the grand scheme of human history, energy usage constantly goes up. What other measure could you possibly use to measure progress?
lubujackson•1d ago
> For a long time, technical people in the startup industry have made fun of “the idea guys”; people who had an idea and were looking for a team to build it.

I get the gist of what he is saying, but I really think that most of the "idea guys" who never got farther than an idea will stay that way. Sure, they might spit out a demo or something, but from what I've seen the "idea guys" tend to be the "I want to play business" guys who have read all the top books and refine their PowerPoints but never actually seem to get around to, you know, running a business. I think there is an underlying difference there.

I do see AI as a great accelerator. Just as scripting languages suddenly unlocked some designers who could make great websites but couldn't hang with pointers and malloc, I think AI will unlock great idea guys who can make great apps or businesses. But it will be fewer people than you think, because "building the app" is rarely the biggest challenge - focused intent is much harder to come by.

I do think the age of decently robust apps getting shat out like Flash games is going to be fun and chaotic, and I am 100% here for it.

michaelbarton•1d ago
Regarding a super intelligence creating new cures: a million-plus people die from malaria and AIDS combined each year. We have effective treatments for both, yet USAID was recently shut down.

I enjoy technology but less and less so each year, because it increasingly feels like there’s some kind of disconnect with the real world that’s hard to put my finger on

lantry•1d ago
exactly. AI won't change the underlying incentives. We can't solve the problem of "alignment" between humans, so how can we ever solve it for AI?

> Scientific progress is the biggest driver of overall progress; it’s hugely exciting to think about how much more we could have.

Does he mean "how much more we could have if our federal government wasn't actively destroying our science institutions"?

__MatrixMan__•1d ago
One could imagine scientific institutions that are less vulnerable to the whims of government. As loath as I am to defend an Altman point, AI could play a role in making them possible.
lantry•1d ago
There are lots of things AI could do. But the things AI will do are governed by the incentives of the underlying system (i.e. society).
__MatrixMan__•1d ago
Those incentives are themselves the output of a variety of heuristics that we rely on only due to a lack of viable alternatives. I expect that eventually AI will make some alternatives viable because the bottleneck is so often an information processing one. Three examples:

- Loans get issued when they're likely to make a banker rich. We know that's a lousy proxy for assurances that their associated ventures will be beneficial to the public, but it kinda sorta works enough.

- Those loans cause money to enter the system and then we all chase it around and we cooperate with people who give us some of it even though we know that having money is a lousy proxy for competence or vision or integrity or any other virtue that might actually warrant such deference, but it kinda sorta works enough.

- Research gets done in support of publications, the count of which is used to assess the capability of the researcher, even though we know that having many publications is a lousy proxy for having made a positive impact, but it kinda sorta works enough.

If we can fix these systems that are just limping along... If money can enter the system in support of ventures which are widely helpful, not just profitable... If we can support each other due to traits we find valuable in each other, not just because of some happenstance regarding scarcity. If we can encourage effective research in pursuit of widely sought after goals, rather than just counting publications... well that will be a whole new world with a new incentive structure. AI could make it happen if it manages to dispense with all these lousy proxies.

lantry•1d ago
> AI could make it happen if it manages to dispense with all these lousy proxies.

The thing that you, altman, and every other AI booster fail to explain is how AI will do this. Every argument is littered with "if"s and "could"s, and it's just taken for granted that AI will be smart enough to find a solution that solves all our problems. It's also taken for granted that we'll actually implement the solution! If AI said that curing cancer requires raising taxes, cancer would remain uncured.

__MatrixMan__•1d ago
I've got a few designs in mind for an alternative to scarcity-based economics. I don't presume to have it right, but it's worth a shot, and it's a bit more concrete than blind trust in one technology or another. I thought it would take my whole life to just lay the foundation, but with some AI help, maybe not.

The trouble with you naysayers is you always take as immutable things that we can change.

NoGravitas•16h ago
We could have had meaningful post-scarcity economics in 1968 (read Murray Bookchin). It is not a technological problem. It is a social problem.
Karrot_Kream•1d ago
So I think this piece by Altman is mostly hype smoke also, but I really dislike your framing here. Somehow the "AI boosters" have to come up with a how, but the "AI doomers" can just sit back, criticize, and offer no solutions?

Let's be real here, US politics is fucked and none of us knows how to fix it.

lantry•23h ago
> Somehow the "AI boosters" have to come up with a how but the "AI doomers" can just sit back, criticize, and not solution?

as the top level comment pointed out, we have solutions to many of these things, but we choose not to do them. I don't think it's unfair to say "maybe we shouldn't spend billions of dollars on this thing that will probably just reinforce existing power structures".

> Let's be real here, US politics is fucked and none of us knows how to fix it.

Agreed.

tsimionescu•1d ago
> Loans get issued when they're likely to make a banker rich. We know that's a lousy proxy for assurances that their associated ventures will be beneficial to the public, but it kinda sorta works enough.

This system exists and is optimized to make the bankers rich. The idea that it helps ensure that ventures using that money will be successful is the thin veneer used 100+ years ago to sell it to everyone. But the true purpose has always been to make bankers rich. If you try to institute any other system that would not achieve that purpose, you will find yourself battling enormous opposition.

The same things are true for huge parts of our society. Perhaps the most glaringly obvious is the US healthcare system. Experience from all over the world shows clearly that it is not an efficient way to organize healthcare, by any stretch of the imagination. Still, it won't change, not because people don't believe the outside examples, but because the system is working for what it was designed to do - transfer huge amounts of money to rich people. And it delivers just enough health care that people aren't rioting in the streets against it.

rglover•1d ago
> it increasingly feels like there’s some kind of disconnect with the real world that’s hard to put my finger on

My take: echo chambers have become mainstream (ironically, aided by technology). Typically that's online for most people, but in SF, it's a physical echo chamber, too.

That echo chamber allows large numbers of people (coincidentally, the people shepherding a lot of the technology) to rationalize any position they come up with and fall into a semi-permanent state of cognitive dissonance ("of course my technological solution is the solution").

If other people are saying the same thing nearly everywhere you look, who's to say those who disagree are actually the correct ones?

Nevermark•1d ago
> My take: echo chambers have become mainstream (ironically, aided by technology).

Technology can scale up small conveniences into major economic and quality of life wins.

But when it's tolerated, technology also ramps up seemingly small conflicts of interest into economic and society-degrading monsters.

Our legal intolerance for conflicts of interest as business models needs to go up a lot. No amount of strongly worded letters, uncomfortable senate interviews, or unplug-the-system theater, are going to discourage billionaires farming people's behavior, attention and psychology from continuing to farm people's behavior, attention and psychology.

bko•1d ago
Unfortunately, many problems are last mile problems. If we could teleport medicine and technology to everyone who needs it, these problems would be mostly solved. And I'm sure drug companies would love to make money on those people since their marginal cost is so low. But coordination is hard, and it's not necessarily a problem money can fix.

It's like the math that takes # of homeless people times cost of an apartment, and then people claim we can solve homelessness for [reasonable amount]. Makes sense until you look how much US spends on homelessness every year already.

Cell phone technology was the closest thing we got to teleporting technology and wealth to the rest of the world. Most developing countries completely skipped a phase of development that required crazy amounts of infrastructure to be built out. More people have cell phones than running water. It's pretty incredible if you think about it. Hopefully AI will be a similar leap frog.

jonplackett•1d ago
I’m not sure - we have enough food to feed everyone on earth. We have cures to many diseases. We have enough homes for everyone.

Inequality is bad and getting worse. Big changes accelerate it because richer people can adapt and take advantage faster.

I enjoy playing around with AI for fun and find it amazingly useful. But I do not believe it can solve inequality - that’s a people problem.

godelski•1d ago
I work in ML and honestly I agree. We live in a fairly post-scarce world. We could already live in a much less scarce one.

AI, like most technology, makes these things easier, but it is power. It's all about what you do with power. You can build power plants or bombs. AI could (let's be hypothetical) free humans from all necessary labor. Robots making all the food, mining all the materials, and doing the whole pipeline. But that requires a fundamental rethinking of how we operate now. That world isn't socialism nor capitalism. That's a world where ownership becomes ill defined in many settings but still strict in others. It's easy to imagine Star Trek but hard to imagine how we get there from here. Do we trust a few people to make all the robots to do all those things and then just abdicate all that power? Do we trust governments to do it? There's reason to trust no one, because a deep dystopia can be created from any such scenario. Going towards that hypothetical future is dangerous. Not because of superintelligences, but because of us. Those with power tend not to just freely abdicate it... that's not a problem we're discussing enough, and our more frequent conversations about ASI and the like just distract from it.

psalaun•1d ago
> It's easy to imagine Star Trek but hard to imagine how we get there from here

I'd like someone, amongst the tech bros for instance, but it could be any influential politician in power, to set a target for when we stop making life more miserable than it could be for billions of people, by asking them to settle for nothing better than structural unemployment, 40+ hour weeks, and steady economic growth in the name of progress.

Because as long as the end game isn't defined (and some milestones towards it), we won't have Star Trek, but a sci-fi version of a Dickens or Zola book, or at least an eternal treadmill augmented with marginally less useful innovations.

That's how I project, over the next centuries, the failed prediction from Keynes about everyone working 15-hour weeks in the near future, in a Western world that did in fact achieve post-scarcity (at least for now).

LtWorf•1d ago
tech bros by definition want people to be miserable
godelski•1d ago
I'm not sure it's mostly last mile problems. A frequent problem I see is "costs now vs. future costs". Worse is "costs now vs. amortized future costs".

What I mean is: say you are making something and you see an issue. You know it'll be an issue, and if fixed now it will cost $100, but if fixed a year from now it'll cost $1000. Many people will choose the latter. On paper most people would choose the former, but in practice the future cost is hard to pin down. An example might be a security risk. It only gets harder to fix as you generate more code and complexity increases, but if you're hacked it has high costs, both through lost business and through lawsuits.

The amortized one is a nasty problem because it often goes unnoticed. Say a problem costs $100 to fix now, but every day it isn't fixed costs you $1, and that daily cost grows 5% per day (so $1, $1.05, $1.1025, ...). These sneak up on you, and you'll pay a lot more than that $100 by the time you notice.
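To make the compounding concrete, here's a toy calculation (the $100 / $1 / 5% figures are just the example numbers above):

    # When does the deferred cost overtake the up-front fix?
    fix_now = 100.0                   # cost to fix the problem today
    daily_cost, growth = 1.00, 1.05   # $1/day, growing 5% per day

    total, day = 0.0, 0
    while total < fix_now:
        total += daily_cost * growth**day  # $1, $1.05, $1.1025, ...
        day += 1
    print(day, round(total, 2))  # 37 101.63 -- you've already paid more in about five weeks

So even a tiny drip overtakes the one-time fix in about a month, and it only accelerates from there.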

idontwantthis•1d ago
I think you misunderstood OP. The last mile problem was solved, or being solved. Millions of lives were saved and fewer people were dying every year. It was one of the greatest successes in human history. Now we've stopped because our government decided to stop. Millions more people will die unnecessarily, and technology has nothing to do with it.
theptip•1d ago
One lens is the “Thrive/Survive” axis. As more people thrive, they have more space for individualism, compassion for others, etc. This can be used to explain the arc of “moral progress” over time.

A follow-up would be to observe that in many places today, inequality causes substantial sub-populations to feel that they are not thriving, or even declining, even though GDP is increasing. Which would explain the rage of middle Americans and many Europeans.

If you buy all this, then there is a clear path by which radical abundance resolves the problems. Same as how the Baby Boomers were a very low-polarization generation; when everybody is thriving it’s a lot easier to get along.

Personally I worry more about the possible worlds where technology doesn’t bring us radical abundance. Declining empires with nuclear weapons don’t sound conducive to a fun time.

slibhb•1d ago
> a million plus people die from malaria and AIDS combined each year

These are political problems not technological problems.

Avicebron•1d ago
We should separate the language a bit here. Technological problems are political problems inasmuch as the corporations and individuals who harness that infrastructure to gain immense wealth proclaim themselves eminent technologists, and then exert influence over the political sphere through monetary feedback mechanisms created by selling technical solutions to political problems.
michaelbarton•13h ago
Yep! That was exactly my point; it was in reference to sama's point in the article about superintelligence creating new cures.
adverbly•1d ago
> I enjoy technology but less and less so each year, because it increasingly feels like there’s some kind of disconnect with the real world that’s hard to put my finger on

Same here.

I want to be an engineer.

As a good engineer, I find a problem then recursively ask why.

I end up with a political root cause - not a technical one.

This makes me sad.

I wish I could just be an engineer.

disqard•16h ago
"Technology acts as an amplifier of Human intentions" -- Kentaro Toyama

What the richest and most powerful people want happens to be "be even richer and even more powerful".

Banal, sad, and universal.

JshWright•1d ago
TB killed even more than those two combined and is also on the list of “things we know how to treat, but as a society have decided not to”.
imgabe•1d ago
USAID is not the sole source of administering AIDS and malaria treatments. No doubt many more people were successfully treated because those treatments exist.

I don't think I understand your point. We shouldn't develop new treatments because one country won't supply them to the entire world for free? Why is it the US's responsibility to treat every disease everywhere in the world? Do these countries not have their own governments?

nielsbot•1d ago
People keep saying we're the richest country in the history of the world. I think we have a responsibility to practice good old-fashioned Christian compassion (with a side of soft power, if that's more your thing).
imgabe•1d ago
I thought we weren't a Christian country?

Either way, individuals are welcome to practice any sort of compassion they want with their own money. The government collects tax dollars from citizens under threat of violence, and its only responsibility should be to use that money to ensure the welfare of its citizens, not to engage in charity work in other countries. If citizens want to do that, have the government collect fewer tax dollars and citizens can give them to charity as they see fit.

yablak•1d ago
USAID is not just charity. It is a projection of soft power that keeps many countries and world citizens looking up to the USA. It reduces the likelihood of terrorist attacks on US citizens.
imgabe•1d ago
Among people likely to consider a terrorist attack, USAID is widely considered a front for the CIA to meddle in other countries' governments (and this has been proven to be the case in some instances). Which, if true, makes it a bad front for the CIA, and if false, means it creates more resentment than anything else and does little to reduce the likelihood of terrorist attacks.
bn-l•1d ago
No more soft power please.
rl3•1d ago
>Either way, individuals are welcome to practice any sort of compassion they want with their own money. The government collects tax dollars from citizens under threat of violence, and their only responsibility should be to use that money to ensure the welfare of its citizens, not to engage in charity work in other countries.

Soft power is of obvious immense benefit to citizens of the United States; however, you've rejected that in other comments.

The argument otherwise reads like a stereotypical "Not with my tax dollars!" argument. It's always fascinated me, that. Inevitably it's always an impassioned argument, regardless of the funding subject.

In this particular case, a very conservative estimate might put the number of child deaths in the tens of thousands. Reality is probably closer to hundreds of thousands at this point.

I pay taxes, a lot of them. I don't get angry when my taxes are used by the government to keep disadvantaged children in hellish conditions alive.

If you were to express the total cost of USAID's former budget against your tax bill as a binary choice between that and a few hundred thousand kids dying, I suspect it'd be much harder to maintain your current position.

Moreover, consider that central to this issue is the abrupt dismantling of an agency which was critical to global aid flow, and amounted to a rug pull. There was no justification for that.

Hopefully Sam's ASI is more compassionate than people, which frankly isn't a high bar lately.

https://www.npr.org/2025/05/28/nx-s1-5413322/aid-groups-say-...

https://www.npr.org/sections/goats-and-soda/2025/05/28/g-s1-...

msgodel•1d ago
It's a benefit to our government but this has mostly been used against citizens.

I'm more than happy to see the "empire" die and willing to do anything I can to speed that along.

imgabe•1d ago
How about we lower taxes and give money back to the people who earned it? If you want to use your money to save dying children on the other side of the world, you are welcome to do that. If other people have more pressing needs for their money they can do that too.

It seems people are infinitely compassionate when spending someone else’s money.

rl3•1d ago
I think KFC called that the Double Down, but to each their own.

What of the argument against USAID’s rapid disassembly then? Is such an outcome permissible or even desired on account of sparing these funds sooner, aid continuity be damned?

imgabe•1d ago
At what speed should it be disassembled? I don't see how dragging things out will help. No matter how slow you go, people who want it to continue are going to say it's too fast. May as well rip the band aid off quickly.

I mean, your plan seems to be:

A. The US should give infinity money to everyone forever and never stop. If anyone ever dies, it's the US's fault for not supporting them enough.

B. If you are going to stop, do it on the schedule of the people who are getting free stuff, and only stop when they decide they don't want free stuff anymore (i.e. never).

rl3•1d ago
>At what speed should it be disassembled?

A responsible pace that doesn't result in abrupt mass deaths due to the lack of aid continuity.

>A. The US should give infinity money to everyone forever and never stop. If anyone ever dies, it's the US's fault for not supporting them enough.

Nobody said that, but with operating the world's largest aid agency for the better part of a century comes massive responsibility.

>B. If you are going to stop, do it on the schedule of the people who are getting free stuff, and only stop when they decide they don't want free stuff anymore (i.e. never).

You're right. Hopefully those impoverished kids (many of whom are dead now) take some personal responsibility for themselves in the afterlife. To think we'd even entertain pulling their food and medicine on their schedule and not our schedule.

imgabe•1d ago
> You're right. Hopefully those impoverished kids (many of whom are dead now) take some personal responsibility for themselves in the afterlife. To think we'd even entertain pulling their food and medicine on their schedule and not our schedule.

The children? No, but their parents and the other adults running their country. That is who is responsible for providing for them. Americans have their own children they need to take care of and do not need their money seized and sent overseas to take care of other people's children.

And yes, maybe it is a "rug pull" but it was always going to be. It is immoral to engender such dependence in the first place, like keeping someone slightly poisoned so they're constantly sick and dependent on you to take care of them. Let people grow strong so they can take care of themselves and treat with them as equals.

rl3•1d ago
>Americans have their own children they need to take care of and do not need their money seized and sent overseas to take care of other people's children.

You talk as if it's a zero-sum game, as if the two choices are mutually exclusive.

>And yes, maybe it is a "rug pull" but it was always going to be.

No reason it had to be.

imgabe•1d ago
It is zero sum. How is it not? This is not an investment. It is not going to create future tax revenue for the US. It is just keeping people barely alive in crippling poverty for a little bit longer than they would otherwise.
const_cast•13h ago
IMO it is an investment and it will bring future revenue to the US, because it fosters working relationships with other countries, countries that we ultimately rely on for their resources. Because they have a working relationship with the US, they're willing to give us some pretty sweet deals.

If you look at Africa, it has the most wealth by resources out of any continent. It's also the poorest continent nominally. We're getting a lot of good stuff at INSANE discounts.

What I'm describing is, of course, colonialist in nature. The US is an empire, not a nation. But, the hope is that as we help those countries develop they can help us stay developed, and we can eventually reach some mutually beneficial equilibrium. Instead of exploitation.

But, currently, the relationship is exploitative. It's a bit wild to me that you legitimately think the US, of all countries, is being exploited. No bubba... no. We do the exploiting. Everything you own is built with layers and layers of global exploitation baked into it. You have a few hundred slaves working for you as we speak.

msgodel•1d ago
We were trying to have nuanced discussions about these things 10 years ago and were ignored. The time for going slowly was then. Now things are just going to get done.
nielsbot•10h ago
Let me guess: “all taxation is theft”?

Absolutely we aren’t a Christian country. I personally don’t need Christianity to tell me charity for our fellow humans is a good thing. Plus, richest country in the history of the world remember?

The rich in the US enjoy their wealth at the “pleasure” of the lower classes. (And not just the American lower classes.) Those dollars they’re hoarding? Those were created by the people and have value because of the people. So, I’m all for confiscatory taxation to fund humane charitable endeavors and eliminate wealth hoarding. Someone will have to make do with one less yacht I suppose.

Finally, the amount we're talking about here is a mere pittance. Let's cut some other wasteful spending first (Pentagon) if you're looking for savings.

warwren•1d ago
The point is that a treatment existing does not mean it will be adequately administered. The biggest pool of treatable cases worldwide does not occur where the people with the most ability to pay are.
imgabe•1d ago
And a treatment not existing means it will not be administered at all, adequately or not. You need to make the treatment exist first, then you can figure out how to administer it as widely as possible. You can't administer a treatment that doesn't exist.
biophysboy•1d ago
All of these questions are easy to answer:

1) The point is that global health is also about access to a drug, not just existence.

2) We don’t. USA’s foreign aid per capita is not that high (especially now). To mobilize private money for aid, you need a long-term, trusted infrastructure.

3) Countries that receive aid typically do not have functional governments.

imgabe•1d ago
> The point is that global health is also about access to a drug, not just existence.

And that point is irrelevant to whether or not we should develop new drugs.

> Countries that receive aid typically do not have functional governments.

Maybe they should work on fixing that, or just dissolve the country and get absorbed into a functional country if they can't manage to create a functional government on their own.

auggierose•1d ago
So, which country is the US going to be absorbed by?
imgabe•1d ago
You're saying the US is simultaneously non-functional but also responsible for providing free healthcare to every other country?

If we're non-functional, where is all the foreign aid providing free stuff for us?

C'mon other countries, give us free stuff. It will be great for your soft power, and prevent us from doing terrorist attacks on you.

duchef•1d ago
Why do you keep making posts implying the US provides the bulk of international aid when that is demonstrably not the case (on a % of GDP basis or in total dollar amount)?
imgabe•1d ago
If it’s not a significant amount then why is it such a catastrophe if it stops?
auggierose•19h ago
I don't think the US is responsible for providing free healthcare for other countries. It should start doing that for its own people, though. That would be a big step towards being a functional country. Or maybe just not have legislation called "One Big Beautiful Bill". The US is a joke.

You are getting a lot of free stuff, because you own the dollar. That keeps you afloat so far, but I don't think it will last for much longer.

biophysboy•1d ago
Market size affects revenue and thus investing decisions. Pharma is a high-risk world.

Do you think a dysfunctional gov has a higher chance of fixing itself with widespread disease?

imgabe•1d ago
We’ve been providing foreign aid for, what, like 70-80 years now? Have the dysfunctional governments fixed themselves yet? How long should it take? 100 years? 500 years?

Maybe providing aid is just propping up dysfunctional governments by doing their job for them and it would be better in the long run if they were allowed to collapse and be replaced with something that was forced to be functional.

biophysboy•1d ago
It’s globally useful to eradicate/reduce disease in a region irrespective of whether or not the region’s government becomes stable. Viruses and bacteria mutate and do not care about borders.

I have no fantasies that aid will magically make countries stable.

imgabe•1d ago
If it is globally useful then the burden should be spread equally among all the countries on the globe. If the US is providing this service, other countries should compensate us for it.
biophysboy•21h ago
Again, the US is not an outlier on foreign aid, especially now.
mindcandy•1d ago
> Maybe they should work on fixing that

It’s not an either/or. While we figure out practical solutions to corruption in impoverished nations, we can /also/ do other work to improve the situation on Earth. And, in doing so, we will make the poverty/corruption problem easier to fix.

imgabe•1d ago
We don’t have to figure out a solution to anything in other sovereign nations. Nor, can or should we really impose any functional, non-corrupt government on people who are unable or unwilling to do it themselves, unless you’re willing to go back to full-blown colonialism. The people in the country need to figure it out themselves and decide they want to have a functional government and make it happen. We can’t do it for them.
biophysboy•1d ago
You’re conflating nation building with mitigation of disease. I agree: medical aid is not a substitute for local medical infrastructure and can threaten its development. But aid is not guaranteed to do that. Also, disease is intrinsically bad.
michaelbarton•12h ago
I don’t disagree with your point. It’s not the US’s responsibility to supply malaria drugs. My point was that it is not for lack of an existing cure that people are dying of malaria or AIDS, in reference to the article.

Personally speaking, I wish the US was drastically more involved with providing aid, because it can help reduce the impact of the individual catastrophes happening every day.

https://moderndiplomacy.eu/2025/02/04/beyond-rescue-why-huma...

tdstein•1d ago
> I have no doubt they will feel incredibly important and satisfying to the people doing them.

How rude.

manyaoman•1d ago
Workers 1000 years from now gonna get so triggered by that.
falcor84•23h ago
How is that rude?

I suppose that my work as a coder would look equally odd and useless to someone looking at it from 1000 years ago, while being important and satisfying to me. I didn't sense any condescension in that.

Briannaj•1d ago
we are not past the event horizon.
decimalenough•1d ago
> There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.

Yeah dawg imma need a citation for that. Or maybe by "the world" he means "Silicon Valley oligarchs", who certainly have been "entertaining" all sorts of "new policy ideas" over the past half year.

GolfPopper•1d ago
Will AI be able to take drive-thru orders after the Singularity?
jes5199•1d ago
if it’s gentle, it’s not a singularity
kypro•13h ago
We still have time to nuke the data centers. Assuming we care about leaving our kids a world that isn't akin to a sci-fi dystopia, or worse.
ninetyninenine•1d ago
I bet sam used chatgpt to write this. There are some telltale signs, including the hyphens.

I don't know if gentle is the right word. Maybe gentle facehugger and chestburster is more apt. It's slowly infecting us and eating us from the inside, but so gently we don't even care. We just complain on HN while doing nothing about it.

apt-apt-apt-apt•1d ago
What's the significance of whether GPT was used here?
ninetyninenine•1d ago
none. But I feel it should be mentioned.
Caelus9•1d ago
My attitude towards AI is one of balance: not overly dependent, but not completely rejecting it either. After all, AI is already playing an important role in many fields, and I believe the future will be a world where humans and AI coexist and collaborate. There are many things humans cannot do that AI can help us achieve. We provide the ideas, and AI can turn those ideas into actions, helping us accomplish tasks.
biophysboy•1d ago
This is the correct take. It is technology, not a god or a demon.
flessner•1d ago
> Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel;

It was probably around 7 years ago when I first got interested in machine learning. Back then I followed a crude YouTube tutorial which consisted of downloading a Reddit comment dump and training an ML model on it to predict the next character for a given input. It was magical.
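The core of that tutorial fits in a dozen lines. Here's a toy version of the same idea (a bigram character table standing in for the network, and a one-line corpus standing in for the Reddit dump):

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat"  # stand-in for the comment dump
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1  # tally which character follows which

    def predict(prev: str) -> str:
        # Return the most frequent character seen after `prev`.
        return counts[prev].most_common(1)[0][0]

    print(predict("h"))  # 'e' -- every 'h' in this corpus is followed by 'e'

The real tutorial used a neural net instead of a lookup table, but the framing is identical: given what came before, predict what comes next.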

I always see LLMs as an evolution of that. Instead of the next character, it's now the next token. Instead of GBs of Reddit comments, it's now TBs of "everything". Instead of millions of parameters, it's now billions of parameters.

Over the years, the magic was never lost on me. However, I can never see LLMs as more than a "token prediction machine". Maybe throwing more compute and data at it will at some point make it so good that it's worthy of being called "AGI" anyway? I don't know.

Well anyway, thanks for the nostalgia trip on my birthday! I don't entirely share the same optimism - but I guess optimism is a necessary trait for a CEO, isn't it?

trashtester•1d ago
The "next token prediction" is a distraction. That's not where the interesting part of an AI model happens.

If you think of the tokenization near the end as a serializer, something like turning an object model into JSON, you get a better understanding. The interesting part of an OOP program is not the JSON, but what happens in memory before the JSON is created.

Likewise, the interesting parts of a neural net model, whether it's an LLM, AlphaProteo, or some diffusion-based video model, happen in the steps that operate in its latent space, which is in many ways similar to our subconscious thinking.

In those layers, the AI models detect deeper and deeper patterns of reality. Much deeper than the surface pattern of the text, images, video etc used to train them. Also, many of these patterns generalize when different modalities are combined.

From this latent space, you can "serialize" outputs in several different ways. Text is one, image/video another. For now, the latent spaces are not general enough to do all equally well, instead models are created that specialize on one modality.
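A toy numpy sketch of that framing (every name here is illustrative, not any real model's code): all the work happens in the hidden vector, and the final projection merely "serializes" it into a token.

    import numpy as np

    d_model, vocab = 8, 5
    rng = np.random.default_rng(0)

    hidden = rng.normal(size=d_model)            # the latent state: the interesting part
    unembed = rng.normal(size=(d_model, vocab))  # final projection: the "serializer"

    logits = hidden @ unembed
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax over the vocabulary
    print(probs.argmax())                        # emitted token id: just the serialized output

Swap the unembedding matrix for an image decoder and the same latent could be "serialized" as pixels instead, which is the point about modalities below.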

I think the step to AGI does not require throwing a lot more compute at the models, but rather having them straddle multiple modalities better, in particular these:

- Physical world modelling at the level of Veo3 (possibly with some lessons from self-driving or robotics models, for elements like object permanence and perception)

- Symbolic processing at the level of the best LLMs

- Ability to be goal-oriented and iterate towards a goal, similar to the Alpha* family of systems

- Optionally: optimized for the use of a few specific tools, including a humanoid robot

Once all of these are integrated into the same latent space, I think we basically have what it takes to replace most human thought.

sgt101•1d ago
>which is in many ways similar to our subconscious thinking

this is just made up.

- We don't have any useful insight into human subconscious thinking.

- We don't have any useful insight into the structures that support human subconscious thinking.

- The mechanisms that support human cognition that we do know about are radically different from the mechanisms current models use. For example, we know that biological neurons and synapses are structurally diverse, we know that suppression and control signals are used to change the behaviour of the networks, and we know that chemical control layers (hormones) transform the state of the system.

We also know that biological neural systems continuously learn and adapt, for example in the face of injury. Large models just don't do these things.

Also this thing about deeper and deeper realities? C'mon, it's surface level association all the way down!

ixtli•1d ago
Yea whenever we get into this sort of “what’s happening in the network is like what’s going on in your brain” discussion people never have concrete evidence of what they’re talking about.
scarmig•1d ago
The diversity is itself indicative, though, that intelligence isn't bound to the particularities of the human nervous system. Across different animal species, nervous systems show a radical diversity. Different architectures; different or reversed neurotransmitters; entirely different neural cell biologies. It's quite possible that "neurons" evolved twice, independently. There's nothing magic about the human brain.

Most of your critique is surface level: you can add all kinds of different structural diversity to an ML model and still find learning. Transformers themselves are formally equivalent to "fast weights" (suppression and control signals). Continuous learning is an entire field of study in ML. Or, for injury, you can randomly mask out half the weights of a model, still get reasonable performance, and retrain the unmasked weights to recover much of your loss.

Obviously there are still gaps in ML architectures compared to biological brains, but there's no particular reason to believe they're fundamental to existence in silico, as opposed to myelinated bags of neurotransmitters.

sgt101•19h ago
>The diversity is itself indicative, though, that intelligence isn't bound to the particularities of the human nervous system. Across different animal species, nervous systems show a radical diversity. Different architectures; different or reversed neurotransmitters; entirely different neural cell biologies. It's quite possible that "neurons" evolved twice, independently. There's nothing magic about the human brain.

I agree. For example, octopuses are clearly somewhat intelligent, maybe very intelligent, and they have a very different brain architecture. Bees have a form of collective intelligence that seems to be emergent from many brains working together. Human cognition could arguably be identified as having a socially emergent component as well.

>Most of your critique is surface level: you can add all kinds of different structural diversity to an ML model and still find learning. Transformers themselves are formally equivalent to "fast weights" (suppression and control signals). Continuous learning is an entire field of study in ML. Or, for injury, you can randomly mask out half the weights of a model, still get reasonable performance, and retrain the unmasked weights to recover much of your loss.

I think we can only reasonably talk about the technology as it exists. I agree that there is no justifiable reason (that I know of) to claim that biology is unique as a substrate for intelligence or agency or consciousness or cognition or minds in general. But the history of AI is littered with stories of communities believing that a few minor problems just needed to be tidied up before everything works.

ben_w•22h ago
The bullet list is a good point, but:

> We also know that biological neural systems continuously learn and adapt, for example in the face of injury. Large models just don't do these things.

This is a deliberate choice on the part of the model makers, because a fixed checkpoint is useful for a product. They could just keep the training mechanism going, but that's like writing code without version control.

> Also this thing about deeper and deeper realities? C'mon, it's surface level association all the way down!

To the extent I agree with this, I think it conflicts with your own point about us not knowing how human minds work. Do I, myself, have deeper truths? Or am I myself making surface-level association after surface-level association, with enough levels to make it seem deep? I do not know how many grains make the heap.

sgt101•19h ago
>This is a deliberate choice on the part of the model makers, because a fixed checkpoint is useful for a product. They could just keep the training mechanism going, but that's like writing code without version control.

Training more and learning online are really different processes. In the case of large models I can't see how it would be practical to have the model learn as it was used because it's shared by everyone.

>To the extent I agree with this, I think it conflicts with your own point about us not knowing how human minds work. Do I, myself, have deeper truths? Or am I myself making surface-level association after surface-level association, with enough levels to make it seem deep? I do not know how many grains make the heap.

I can't speak for your cognition or subjective experience, but I do have both fundamental grounding experiences (like the time I hit my hand with an axe, the taste of good beer, sun on my face) and I have used trial and error to develop causative models of how these come to be. I have become good at anticipating which trials are too costly and have found ways to fill in the gaps where experience could hurt me further. Large models have none of these features or capabilities.

Of course I may be deceived by my cognition into believing that deeper processes exist that are illusory because that serves as a short cut to "fitter" behaviour and evolution has exploited this. But it seems unlikely to me.

ben_w•16h ago
> In the case of large models I can't see how it would be practical to have the model learn as it was used because it's shared by everyone.

Given it can learn from the unordered text of the entire internet, it can learn from chats.

> I can't speak for your cognition or subjective experience, but I do have both fundamental grounding experiences (like the time I hit my hand with an axe, the taste of good beer, sun on my face) and I have used trial and error to develop causative models of how these come to be. I have become good at anticipating which trials are too costly and have found ways to fill in the gaps where experience could hurt me further. Large models have none of these features or capabilities.

> Of course I may be deceived by my cognition into believing that deeper processes exist that are illusory because that serves as a short cut to "fitter" behaviour and evolution has exploited this. But it seems unlikely to me.

Humans are very good at creating narratives about our minds, but in the cases where this can be tested, it is often found that our conscious experiences are preceded by other brain states in a predictable fashion, and that we confabulate explanations post-hoc.

So while I do not doubt that this is how it feels to be you, the very same lack of understanding of causal mechanisms within the human brain that makes it an error to confidently say that LLMs copy this behaviour, also mean we cannot truly be confident that the reasons we think we have for how we feel/think/learn/experience/remember are, in fact, the true reasons for how we feel/think/learn/experience/remember.

andsoitis•23h ago
> the AI models detect deeper and deeper patterns of reality. Much deeper than the surface pattern of the text

What are you talking about?

phorkyas82•22h ago
As far as I understood, any AI model is just a linear combination of its training data. Even if that were a corpus as large as the entire web... it's still just a sophisticated compression of other people's expressions.

It has not had its own experiences, has not interacted with the outside world. Dunno, I won't rule out that something operating solely on language artifacts could develop intelligence or consciousness, whatever that is... but so far there are also enough humans we could care about and invest in.

tlb•22h ago
LLMs are not a linear combination of training data.

Some LLMs have interacted with the outside world, such as through reinforcement learning while trying to complete tasks in simulated physics environments.

olmo23•20h ago
Just because humans can describe it, doesn't mean they can understand (predict) it.

And the web contains a lot more than people's expressions: think of all the scientific papers with tables and tables of interesting measurements.

klipt•1d ago
If you wish to make an apple pie from scratch

You must first invent the universe

If you wish to predict the next token really well

You must first model the universe

marsten•1d ago
> Over the years, the magic was never lost on me. However, I can never see LLMs as more than a "token prediction machine".

The "mere token prediction machine" criticism, like Pearl's "deep learning amounts to just curve fitting", is true but it also misses the point. AI in the end turns a mirror on humanity and will force us to accept that intelligence and consciousness can emerge from some pretty simple building blocks. That in some deep sense, all we are is curve fitting.

It reminds me of the lines from T.S. Eliot, “...And the end of all our exploring, Will be to arrive where we started, And know the place for the first time."

iNic•1d ago
The mere token prediction comment is wrong, but I don't think any of the other comments really explained why. Next token prediction is not what the AI does, but its goal. It's like saying soccer is a boring sport having only ever seen the final scores. The important thing about LLMs is that they can internally represent many different complex ideas efficiently and coherently! This makes them an incredible starting point for further training. Nowadays no LLM you interact with is a pure next token predictor anymore; they have all gone through various stages of RL so that they actually do what we want them to do. I really feel the magic looking at the "circuit" work by Anthropic. It really shows that these models have some internal processing / thinking that is complex and clever.
quonn•22h ago
> that they can internally represent many different complex ideas efficiently and coherently

The transformer circuits work[0] suggests that this representation is not coherent at all.

[0] https://transformer-circuits.pub

iNic•22h ago
I guess that depends on what you think is coherent. A key finding is that the larger the network, the more coherent the representation becomes. One example is that larger networks merge the same concept across different languages into a single concept (as humans do). The addition circuits are also fairly easy to interpret.
quonn•21h ago
> merge the same concept

It's doing compression which does not mean it's coherent.

> The addition circuits are also fairly easy to interpret.

The addition circuits make no sense whatsoever. It's doing great at guessing, that's all.

helloplanets•1d ago
What's your take on Anthropic's 'Tracing the thoughts of a large language model'? [0]

> To write the second line, the model had to satisfy two constraints at the same time: the need to rhyme (with "grab it"), and the need to make sense (why did he grab the carrot?). Our guess was that Claude was writing word-by-word without much forethought until the end of the line, where it would make sure to pick a word that rhymes. We therefore expected to see a circuit with parallel paths, one for ensuring the final word made sense, and one for ensuring it rhymes.

> Instead, we found that Claude plans ahead. Before starting the second line, it began "thinking" of potential on-topic words that would rhyme with "grab it". Then, with these plans in mind, it writes a line to end with the planned word.

This is an older model (Claude 3.5 Haiku) with no test time compute.

[0]: https://www.anthropic.com/news/tracing-thoughts-language-mod...

Sammi•1d ago
What is called "planning" or "thinking" here doesn't seem conceptually much different to me than going from a naive breadth-first-search-based Dijkstra shortest-path search to adding a heuristic that makes it search in a particular direction first and calling it A*. In both cases you're adding another layer to an existing algorithm in order to make it more effective. Neither makes it AGI.
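To illustrate how small that Dijkstra-to-A* step is (a toy sketch; the graph and names are made up, and passing h=0 recovers plain Dijkstra):

    import heapq

    def a_star(graph, start, goal, h):
        # graph: node -> list of (neighbor, cost); h: node -> estimated cost to goal
        frontier = [(h(start), 0, start)]  # (priority, cost so far, node)
        best = {start: 0}
        while frontier:
            _, cost, node = heapq.heappop(frontier)
            if node == goal:
                return cost
            for nbr, step in graph[node]:
                new_cost = cost + step
                if new_cost < best.get(nbr, float("inf")):
                    best[nbr] = new_cost
                    heapq.heappush(frontier, (new_cost + h(nbr), new_cost, nbr))
        return None

    graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
    print(a_star(graph, "a", "c", h=lambda n: 0))  # 2 -- with h=0 this is exactly Dijkstra

The whole "upgrade" is the single h(nbr) term in the priority.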

I'm really no expert in neural nets or LLMs, so my thinking here is not an expert opinion, but as a CS major reading that blog from Anthropic, I just cannot see how they provided any evidence for "thinking". To me it's pretty aggressive marketing to call this "thinking".

helloplanets•22h ago
They definitely do strain the neurology and thinking metaphors in that article. But the Dijkstra's algorithm and A* comparisons are the flipside of that same coin. They aren't trying to make it more effective. And definitely not trying to argue for anything AGI related.

Either way: They're tampering with the inference process, by turning circuits in the LLM on and off, in an attempt to prove that those circuits are related to a specific function. [0]

They noticed that circuits related to a token that is only relevant ~8 tokens forward were already activated on the newline token. Instead of only looking at the sequence of tokens generated so far (i.e., backwards) and generating the next token based off of that information, the model is activating circuits related not only to the next token, but to specific tokens a handful of positions later.

So, information related to more than just the next upcoming token (including a reference to just one specific token) is being cached during a newline token. Wouldn't call that thinking, but I don't think calling it planning is misguided. Caching this sort of information in the hidden state would be an emergent feature, rather than a feature that was knowingly aimed at by following a specific training method, unlike with models that do test time compute. (DeepSeek-R1 paper being an example, with a very direct aim at turbocharging test time compute, aka 'reasoning'. [1])

The way they went about defining the function of a circuit was by using their circuit tracing method, which is open source so you can try it out for yourself. [2] Here's the method in short: [3]

> Our feature visualizations show snippets of samples from public datasets that most strongly activate the feature, as well as examples that activate the feature to varying degrees interpolating between the maximum activation and zero.

> Highlights indicate the strength of the feature’s activation at a given token position. We also show the output tokens that the feature most strongly promotes / inhibits via its direct connections through the unembedding layer (note that this information is typically more meaningful for features in later model layers).

[0]: https://transformer-circuits.pub/2025/attribution-graphs/bio...

[1]: https://arxiv.org/pdf/2501.12948

[2]: https://github.com/safety-research/circuit-tracer

[3]: https://transformer-circuits.pub/2025/attribution-graphs/met...

exe34•22h ago
> In both cases you're adding another layer to an existing algorithm in order to make it more effective. Doesn't make either AGI.

Yet. The human mind is a big bag of tricks. If the creators of AI can enumerate a large enough list of capabilities and implement them, the product can be as good as 90% of humans at a fraction of the cost and a billion times the speed, and then it doesn't matter if it's AGI or not. It will have economic consequences.

AnimalMuppet•21h ago
And make them work together. It's not just having a big bag of tricks; it's also knowing which trick to pull out when. (And that may just be pulling out a trick, trying it, and knowing when the results aren't good enough, and so trying a different one.)

The observant will note that the word "knowing" kept appearing in the previous paragraph. Can that knowing also be reduced to LLM-like tricks? Or is it an additional step?

exe34•19h ago
It's sufficient to appear to know. My washing machine "knows" when my clothes are dry.
yencabulator•7h ago
Generalize the concept from next token prediction to coming tokens prediction and the rest still applies. LLMs are still incredibly poor at symbolic thought and following multi-step algorithms, and I as a non-ML person don't really see what in the LLM mechanism would provide such power. Or maybe we're still just another 1000x scale off and symbolic thought will emerge at some point.

Me personally, I expect LLMs to be a mere part of whatever is invented later.

Aeolun•1d ago
> wondering when it can generate a beautifully-written novel

Not quite yet, but I’m working on it. It’s ~~hard~~ impossible to get original ideas out of an LLM, so it’ll probably always be a human assisted effort.

agumonkey•22h ago
The TB of everything with transformers makes a difference. Maybe I'm just too uneducated, but the amount of semantic context that can be taken into account when generating the next token is really disruptive.
ldsjfldsjf•1d ago
A lot of good software engineering is just tribal knowledge. The people who know know.

Things like when to create an ugly hack because the perfect solution may result in your existing customers moving over to your competitor. When to remove some tech debt and when to add to it.

When to do a soft delete vs. when to do a purge. These things are learnt when a customer shouts at you and you realize that you may be the most intelligent kid on the block, but it won't really help the customer tonight, because the code is already deployed and your production deployment means a maintenance window.

Teever•1d ago
> There are other self-reinforcing loops at play. The economic value creation has started a flywheel of compounding infrastructure buildout to run these increasingly-powerful AI systems. And robots that can build other robots (and in some sense, datacenters that can build other datacenters) aren’t that far off.

> If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc, then the rate of progress will obviously be quite different.

It's really cool to hear a public figure seriously talk about self-replicating machines. To me this is the key to unlocking human potential and ending material scarcity.

If you owned a pair of robots that, with sufficient spare parts, could repair each other and do other useful work, you could effectively do anything, by using them to do all the things necessary to build copies of themselves.

Once we as a species have that kind of exponential growth, we can do anything. Clean energy, carbon sequestration, Von Neumann probes, asteroid mining, O'Neill Cylinders, it's all possible.

sevensor•23h ago
Maintaining actual robots in an actual factory is not a job robots are remotely able to do at the moment. They can’t even identify, let alone fix, a problem like a cracked rotor bar in a motor. Not only that, how do they decide to take the machine down to replace a motor or keep running it until the motor fails? And if it fails repeatedly, are the robots going to recognize the need to diagnose the root cause? How? And what spares should we stock? This is just one small facet of the hugely complex and messy job of making real things in the real world. It’s a mixture of mechanical skill, technical judgement, and business reasoning. We’re working on getting machines to push the right grease into the right zerk at the right time, and even that technology, which does a predictable thing on a predictable schedule, still competes with a guy who walks the floor with a grease gun.

I’m astonished by the sheer arrogance of a man who thinks we’re anywhere near the point where general purpose robots build general purpose robots.

Teever•22h ago
How long do you think it will be before your average consumer has access to a pair of humanoid robots that each cost the price of a luxury vehicle and are able to perform maintenance and replace all the parts of the other?
sevensor•21h ago
Never? Technical feasibility notwithstanding, nobody is ever going to sell a mass market device that defeats planned obsolescence. That’s the opposite of how our economy works. What they’d do, assuming a competent humanoid robot, is design it iphone style so that you get a good five years and then the software won’t update. At that point you scrap it and buy another one.
Teever•17h ago
It sounds like it would be a very useful ability for robots involved in space missions.

What technical challenges do you foresee with such a device?

sevensor•14h ago
I disagree with the entire premise that sending a whole second robot on a space mission is desirable. All that extra mass! And why humanoid? Not a reasonable form factor for a robotic space vehicle. Anyway, on a space mission, mass is everything. Why spend it on manipulators for fixing robots when you could build redundancy where you need it most? Your general purpose robot fixing robot still has to carry spares! Why do that when you can have built in spares in the first place, without the mass of a whole second robot?
Teever•10h ago
The reason I brought up space is that it reminds me of the Apollo guidance computer story. The military had cutting-edge ICBM targeting technology, and within a few years, more powerful computers were in people's homes. Once a capability exists, it spreads. The idea that companies could successfully lock down self-repairing robots when the US government couldn't even keep ICBM guidance systems exclusive seems... optimistic.

And you're right about planned obsolescence in theory, but self-replication is fundamentally different from a smartphone. It's like having a genie that grants more wishes. Once someone demonstrates true self-replication even in a proprietary system then the cat's out of the bag. Open source communities, countries with different IP laws, hobbyists -- eventually someone will reverse engineer it. The economic incentive is just too massive. It's not like trying to copy a luxury handbag. It's copying the means of production itself.

As for space applications, yes, mass matters, but that's exactly why modular, self-repairing systems make sense. Instead of one giant redundant system, you'd have swarms of smaller units that can reconfigure, repair each other, and even combine into larger structures when needed. Much more mass-efficient than traditional redundancy. And regarding the humanoid form: sure, you're right, it's not optimal for space, but it's optimal for working in human environments. Our entire infrastructure is built for human bodies. Any robot that can use our tools, navigate our spaces, and interface with our systems has an immediate advantage. They don't need to be perfect copies (maybe four arms instead of two, different joint configurations), but roughly humanoid makes sense for terrestrial applications, and for space applications that are designed and built by humans or for humans.

The question isn't whether this will happen, but when. What's your estimate for when we'll see the first truly self-maintaining robotic systems, even if they're not fully self-replicating?

walleeee•22h ago
> It's really cool to hear a public figure seriously talk about self-replicating machines. To me this is the key to unlocking human potential and ending material scarcity.

You are a self-replicating machine running on top of a massive web of other self replicating machines. You are fundamentally constrained by the energy and materials available to you, as is your entire operating stack. You are a petal on a fractal flower whose growth, already exponential, threatens to crack its pot.

Incidentally, crackpot would be a good way to describe these sorts of pieces, if the person writing them did not so obviously benefit from writing them.

Teever•22h ago
The industrial revolution brought about the development of fabulous innovations like the mass production of standardized and interchangeable parts which has been instrumental in achieving the level of economic production, scientific advancement and quality of life that we enjoy.

But I think that there are limitations on these kinds of techniques, and we can see them in the changing economics of our most advanced technology: semiconductors. Improvements have slowed, capital investments have increased, and the cost per transistor has plateaued and is now starting to rise.

At the same time, the way that we extract energy from our environment is not sustainable and is causing great imbalances in our atmosphere, which are having cascading effects on the environment, all because our energy systems are not closed loop.

If we're going to take manufacturing and industry as a whole to the next level, just like the industrial revolution did, we're going to need to take cues from biology. The next manufacturing revolution will be a merger of the ideas of the industrial revolution, the digital revolution, and biological systems like the ones you describe above.

Self replicating systems that can heal, source their parts from the environment around them, and that can scale exponentially through processes akin to cellular division are inevitable.

They will allow us to offload the burden of mineral extraction and refining to the moon and asteroids, and will allow us to massively scale up production on scales previously unimaginable, including goods full of elements, like platinum or gold, that we consider obscenely expensive due to their relative rarity on Earth.

psalaun•13h ago
You assume there are enough trees on the desert island to build the ship that will allow us to reach the brave new world full of resources.

We don't know that. It seems to be a leap of faith, based on the possibility of numerous successive achievements (AI, then fusion, then robots, then interplanetary mining, etc.) where each new step is always reachable. Like in a video game.

Maybe the necessary amount of oil (or whatever) required to reach the next key milestone towards unlimited resources has never existed. Maybe it'd have required 1 billion more years before humankind reached the industrial age.

Maybe the trees on our island are just enough to build a raft, not a brig, and we should have used them to build a shelter and make the inevitable end more comfortable.

Teever•7h ago
We have no shortage of energy or matter in the solar system to accomplish these goals.

To unlock these resources we need to turn to self replicating machines that can stand up lunar and asteroid mining to build sufficient orbital manufacturing capacity.

psalaun•3h ago
You completely ignore the point of my metaphor. Do you have enough trees to build the brig to reach these resources?

If we deplete our oil stocks (even non-conventional ones) before discovering the multiple replacements we'd need at large scale for logistics, electronics, hardware, healthcare, heavy vehicles and tools, etc., we won't get anywhere close to your solar-system matter. Maybe we should have rationed it 70 years ago, to give us more time to research the next breakthrough, instead of investing in all the modern-life shenanigans like buying dozens of $5 pieces of clothing from Shein or Temu because it makes us feel good in the schoolyard or on Instagram. The "trust the science bro, if it's possible we'll make it, let the human genius do its thing" attitude is dangerous.

Teever•1h ago
Yes, there are sufficient hydrocarbon reserves on Earth to do this. There are also sufficient energy sources from a combination of solar, wind, and geothermal.

* 1/3 of the food that we produce is completely wasted.

* 43% of the world's population is overweight and 16% is obese.

* Livestock uses 77% of all agricultural land, yet provides only 18% of global calories and 37% of protein.

* 60% of hydrocarbon production is used for transportation with more than half of that going to personal use.

* The world generates ∼50 million tons of electronic waste per year, but only ∼17% is formally recycled.

There's a lot of slack in the system. We can easily improve the efficiency of society with no material loss and tons of quality of life improvements for people.

We won't. But we could. And the reason we won't is that there isn't really, and there won't really be, any existential pressure to do so.

So the species at large will keep plodding along as we do, driving from fast-food drive-thru to fast-food drive-thru stuffing cheeseburgers into our faces, while clever people work to make things like elegant self-replicating machines that can colonize the solar system.

This is just how it's gonna go man.

rcarmo•1d ago
This read like a Philip K. Dick, Ubik-style advertisement for a dystopian future, and I’m pretty amazed it is an actual blog post by a corporate leader in 2025. Maybe Sam and Dario should be nominated for Hugos or something…
crossroadsguy•5h ago
I have read his A Scanner Darkly and part of another book. Not sure whether you are overrating this post, or insulting his writing style.
d4rkn0d3z•1d ago
The OpenAI pitch: I have a solution to all problems; it's like consciousness in humans but for robots, and it's better... singularity!

Sam you need to touch grass.

Mikhail_K•1d ago
This is a text by a venture capitalist trying to ensure continuing investment in a money-losing product.
justlikereddit•1d ago
It's like the tenth time OpenAI has invented AGI?

Like a storefront advertising "live your wildest dreams" in pink neon. A slightly obese Mediterranean man with questionable taste tries to get you into his fine establishment. And if you do enter, the first thing that greets you is an interior with a few too many stains and a smell of cum.

That's the vibe I get whenever Sam goes on his bi-quarterly AGI hype spree.

tim333•17h ago
I don't think they have ever claimed to have invented AGI. They say it's coming soon, but that's different.
rsynnott•23h ago
> Very quickly we go from being amazed that AI can generate a beautifully-written paragraph

I mean, I suppose Sam loves ChatGPT like his own child, but I would struggle to describe any of its output as 'beautiful'! 'Grating' would be the word that springs to mind, generally.

sreekanth850•23h ago
Pure bullshit. Hype, Hype Hype. Repeat.
ben_w•22h ago
While I prefer "event horizon" over "singularity", part of the reason I blogged about this distinction years ago was that the event horizon always seems to be ahead of you as you fall into a black hole*.

My blog posts didn't age all that well, and I've learned to be a little more sceptical about the speed of technological change, just as the political events over the intervening years have made me more aware of how fast political realities can change: https://benwheatley.github.io/blog/2016/04/12-00.31.55.html and https://benwheatley.github.io/blog/2022/09/20-18.35.10.html

* at least until the rate of change of curvature gets so high you're spaghetti; by then you're (approximately) co-moving with the light from your own body. This means that when you cross the event horizon, you still see your own legs, even though the space the light is in is moving towards the singularity faster than the light itself moves through that space: https://youtu.be/4rTv9wvvat8?feature=shared&t=516

tim333•19h ago
I never really liked "singularity" as a term, if only because I like its actual mathematical meaning. I always thought of the main event as AI intelligence overtaking human intelligence.

(You can get a mathematical singularity if you consider the amount of stuff that can be produced per hour of human labour, which should go infinite around the robot uprising.)

ben_w•16h ago
> (You can get a mathematical singularity if you consider the amount of stuff that can be produced per hour of human labour, which should go infinite around the robot uprising.)

Nah, even then it's only exponential, not a mathematical singularity. Whatever the doubling period is for, e.g., the von Neumann machines, it doesn't go to infinity in finite time.

tim333•15h ago
I think that one does, because if you need zero hours of human labour, then output/(human labour hours) becomes a division by zero.
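(A minimal formalization of that point, using made-up symbols: O(t) for output, H(t) for human labour hours, and t* for the moment hours hit zero.)

    % Output per human labour hour blows up in finite time if hours vanish
    % while output stays positive -- a true finite-time singularity, unlike
    % exponential growth x(t) = x_0 2^{t/T}, which is finite for all finite t.
    \[
      p(t) = \frac{O(t)}{H(t)}, \qquad
      O(t) \ge O_{\min} > 0, \quad H(t) \to 0 \ \text{as}\ t \to t^{*}
      \;\Longrightarrow\; p(t) \to \infty \ \text{as}\ t \to t^{*}
    \]
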
ben_w•13h ago
Ah, I see now what you mean; yes, that's a potential mathematical singularity.

Though I suspect the actual number of human hours worked won't really go to zero, for cultural/psychological reasons, your point remains valid; we will only find out if/when it happens.

cs702•22h ago
Maybe. AI models have continued to scale up at a rapid rate, and have continued to get better at performing ever more impressive tasks. Sure, yes, the OP is breathless corporate-speak, but given how much impressive progress we've seen in AI in just a few years, it would be foolish to dismiss these pronouncements out of hand.

On the other hand, we may need more practical/theoretical breakthroughs to be able to build AI models that are reliable and precise, so they stop making up stuff "whenever they feel like it." Unfortunately, the timing of breakthroughs is not predictable. Maybe it will take months. Maybe it will take a decade. No one knows for sure.

patchorang•21h ago
Yesterday, I gave ChatGPT links to three recipes and told it to make me a grocery list.

It left off ingredients. The very gentle singularity…

bitmasher9•21h ago
I often give ChatGPT simple tasks and it gets them wrong. Yesterday I took a picture of a video game and told it I was stuck, only to be given instructions that led down a bad path in-game.

I also often retest new models with tasks old models failed, and see some improvements. I really liked “format this SQL generated by my ORM and explain it” last week.

I honestly have no insight on if the tasks it is failing to do are right around the corner or if they are decades away.

tencentshill•14h ago
You need to have more blind faith in Sam and his machine.
hazbot•21h ago
I'm feeling a bit cheated that it's 2025 and I just bought a brand-new car that does not drive itself and runs on dead dinosaurs.
AnimalMuppet•20h ago
You at least had the option of buying one that did not run on dead dinosaurs.
tim333•17h ago
And the option to get one with "self driving" even if it's into a tree occasionally. (https://news.ycombinator.com/item?id=44138241)
bigyabai•16h ago
Dead pedestrians are not a suitable replacement.
yencabulator•7h ago
Ooh, a self-driving-into-obstacles car that runs on dead pedestrians. With garbage truck style arms to lift the bodies automatically.
aeve890•9h ago
I know it's probably in jest, but still: fossil fuels don't come from dinosaurs.

https://www.snexplores.org/article/explainer-where-fossil-fu...

gadders•21h ago
Fantastic. I look forward to all these benefits accruing to the upper classes and billionaires, in the same way as globalisation did.
pacificmaelstrm•21h ago
My first thought was: who could possibly say with a straight face that we are close to superintelligence, after the spectacular failure of AI to scale up with more compute?

Oh right, Sam Altman.

grafmax•21h ago
> Scientific progress is the biggest driver of overall progress

> There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before

Real wages haven’t risen since 1980. Wealth inequality has. Most people have much less political power than they used to as wealth - and thus power - have become concentrated. Today we have smartphones, but also algorithm-driven polarization and a worldwide rise in authoritarian leaders. Depression and anxiety affect roughly 30% of our population.

The rise of wealth inequality and the stagnation of wages corresponds to the collapse of the labor movement under globalization. Without a counterbalancing force from workers, wealth accrues to the business class. Technological advances have improved our lives in some ways but not on balance.

So if we look at people's well-being, society as a whole hasn't progressed since the 1980s; in many ways it's gotten worse. Thus the trajectory of progress described in the blog post is make-believe. The utopia Altman describes won't appear. Mass layoffs, if they happen, will further concentrate wealth. AI technology will be used more and more for mass surveillance, algorithmic decision-making (that would make Kafka blush), and cost-cutting.

What we can realistically expect is lowering of quality of life, an increased shift to precarious work, further concentration of wealth and power, and increasing rates of suffering.

What we need instead of science fiction is to rebuild the labor movement. Otherwise “value creation” and technology’s benefits will continue to accrue to a dwindling fraction of society. And more and more it will be at everyone else’s expense.

nradov•20h ago
Real wages have risen a lot since 1980 when you include employer contributions to employee health insurance.
ImHereToVote•20h ago
Does it correct for housing costs?
boole1854•20h ago
Even without including employer health insurance costs, real wages are up 67% since 1980.

Source: https://fred.stlouisfed.org/graph/?g=1JxBn

Details: uses the "Wage and salary accruals per full-time-equivalent employee" time series, which is the broadest wage measure for FTE employees, and adjusts for inflation using the PCE price index, which is the most economically meaningful measure of "how much did prices change for consumers" (and is the inflation index that the Fed targets)

fusionadvocate•20h ago
How has inflation behaved since 1980?
boole1854•18h ago
It rose 2.75% per year (239% over 45 years).

Source with details: https://fred.stlouisfed.org/graph/?g=1JxIa

fusionadvocate•17h ago
Can you walk me through how to reach this '239%' number? Thank you.
boole1854•16h ago
You can hover over places on the chart to get exact values. In January 1980, the index was at 37.124. In April 2025, it was at 125.880.

Then calculate cumulative inflation as the proportional change in the price level, like this:

(P_final - P_initial) / P_initial = (125.880 - 37.124) / 37.124 = 2.39

This shows that the overall price level (the cumulative inflation embodied in the PCEPI) has risen by about 239% over the period; that is, prices are roughly 3.39 times their 1980 level.
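(For anyone wanting to replay the arithmetic, a small Python sketch; the index values are the ones quoted above, and the variable names are mine.)

    # Cumulative and annualized inflation from two PCE price index readings.
    p_initial = 37.124   # PCEPI, January 1980 (quoted above)
    p_final = 125.880    # PCEPI, April 2025 (quoted above)
    years = 45           # approximate span

    cumulative = (p_final - p_initial) / p_initial         # ~2.39
    annualized = (p_final / p_initial) ** (1 / years) - 1  # ~0.0275

    print(f"cumulative inflation: {cumulative:.0%}")   # -> 239%
    print(f"annualized inflation: {annualized:.2%}")   # -> 2.75%, matching the ~2.75%/yr figure upthread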

sph•13h ago
The thing that bugs me to no end when talking about inflation in a historical context is that everyone forgets that the consumption indexes it's calculated from (PCEPI, CPI, etc.) are NOT static; they are changed over time, quite arbitrarily, often in ways that make inflation seem lower than it actually is for the consumer.

Overall, historical comparisons of inflation numbers are so imprecise as to be practically worthless, and more so the longer the timescale. You can expect the real figure for consumers to be much higher, given the political incentive to lie about inflation data.

rgun•16h ago
1.0275^45 = 3.389

3.389 - 1 = 2.389 ≈ 239% (subtracting 1 to convert the growth ratio into the increase)

0xffff2•16h ago
It's probably worth noting that the "real" in "real wages" indicates that the number is already inflation adjusted.
Rooster61•19h ago
It's difficult for me to call those wages "real" when medical costs have been gouged so absurdly that they eat up those contributions. The increases have had no real impact on the average consumer, and the situation is profoundly awful for those without access to employment that provides that insurance.
kjkjadksj•17h ago
All costs. New roof on the house? $30k. Hail damage on the car? It’s actually totaled. Whatever is in the store? Inflation certainly hit there too.
nradov•17h ago
That's not entirely accurate. Wages are real enough whether paid out as cash, or paid to a third party for the employee's benefit. Some medical costs are unreasonably inflated, but on the other hand much of the cost increase reflects greater capabilities. We can effectively treat many conditions today that would have been a death sentence in 1980, but some of those cutting edge treatments cost >$1M per patient. That doesn't directly benefit the average healthy consumer, but it can help a lot if you get sick.

I do agree that it makes no logical sense to couple medical insurance to employment. This system was created sort of accidentally as a side effect of wartime tax law and has persisted mainly due to inertia.

grafmax•14h ago
It’s somewhat misleading to claim that better care today is the reason for higher costs. Most other developed countries have some form of universal coverage and pay significantly less per person. That includes countries like England, Germany and Switzerland which also have advanced healthcare capabilities. Other countries negotiate drug prices, provide budgets for hospitals rather than charging per procedure, regulate medical prices, and have reduced administrative costs (compared to the US which must deal with the complexities of multi-payer).

That’s to say nothing of the fact that millions are uninsured in the US and have limited access to necessary medical treatment, never mind “cutting edge” treatments.

insane_dreamer•18h ago
Not when you account for the insane rise in cost of health care.
xboxnolifes•16h ago
No, that's not a real wage increase; that's a nominal wage increase. If I make $20k more, but health insurance costs also went up $20k, my real wage did not change. I am no richer.
nradov•15h ago
No, that's a real wage increase. The healthcare system can effectively treat a lot more conditions than it could in 1980. That makes you richer as measured in terms of QALYs.
xigency•14h ago
That's a nice thought, but I've only had one physical in the last 15 years. I'm sure others are in the same boat.
nradov•13h ago
These are obviously aggregate statistics and individual experiences may vary. It shouldn't be necessary to point that out.
tim333•19h ago
>Real wages haven’t risen since 1980

is a very US thing. In China, they've probably 10x'd over that time.

hollerith•19h ago
Even after that 10x growth, median Chinese household income is only 13-16% of US median household income:

China: ~$10,000 – $12,000

US: ~$74,580 (U.S. Census Bureau, 2022)

ethbr1•18h ago
Purchasing power parity matters a lot too.
hollerith•16h ago
The context here is growth, and I doubt China's GDPPPP/person has grown anywhere near as much as its GDP/person, because the median Chinese household of 40 years ago could still afford the essentials (food, housing, clothing) that the PPP adjustment indexes off of. (It just couldn't afford any imports.)

Actually, I doubt that economists even tried to calculate the PPP of China 40 years ago, because even back then the basket of goods used in the PPP calculation probably included gasoline and cars and such, which only the economic top 1% of China could afford at the time. But if you forced the calculation somehow, you'd probably arrive at a GDPPPP/person not much lower than the current one (i.e., China has grown spectacularly in GDP/person, but not in GDPPPP/person).

ethbr1•14h ago
The difference now is that China makes a ton of consumer goods itself, so whereas 40 years ago Chinese PPP would have required forex, now it can be done internally.

That shift opens the possibility of GDPPPP changes in excess of (or under) strict GDP-per-capita growth.

Y_Y•18h ago
I think GDPPPP/capita is a better measure here, but the story is similar. From the IMF's 2025 numbers: USA $90k, China $29k, world average $26k.

Lovely tabulation of the data here: https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(PPP)...

grafmax•19h ago
That is because of China's and the US's positions in the global system over this time. The wage/labor/inequality story is broadly true across the global north; China can credit forward-thinking central planning, social programs, and industrialization for its economic progress (yet it continues to live under authoritarian rule).
tim333•18h ago
It varies a lot - check this out https://www.businessinsider.com/france-vs-us-real-wages-2015...

The US is kind of an outlier.

I guess if you have the opportunity to have your stuff made by cheap or free labour, whether low-paid Chinese workers or AI robots, societies have a choice in how to distribute the benefits, which has varied from nearly everything going to the rich in the US to a fairly even split in, say, France. Such choices will be important with AI.

Herring•18h ago
Lots of people don't care about "progress" in an absolute sense, e.g. longer, healthier lifespans for all. They only care about it in a relative sense: e.g., if cop violence against minorities goes down, they feel anxiety and resentment. They really, really want to be the biggest fish in the little pond. That's how a caste system works; it "makes a captive of everyone within it". Equality feels like a demotion [1].

That's why we have a whole thing about immigration going on. It's the one issue that the president is not underwater on right now [2]. You can't get much of a labor movement like this.

[1] https://www.texasobserver.org/white-people-rural-arkansas-re...

[2] https://www.natesilver.net/p/trump-approval-ratings-nate-sil...

sjducb•17h ago
The problem is that housing and health insurance are too expensive. Tech isn’t responsible for either of those problems.
kjkjadksj•17h ago
In a way it is. Why are housing costs so high in Redmond, WA? Because of an influx of high-income tech workers into the local housing market; the resulting shift in prices eventually dilutes the utility of that high salary to begin with. People in the area without a hook on that whale are, of course, left high and dry.
skeaker•17h ago
Citation needed, my understanding was that housing prices are being driven up by real estate owners/agencies buying tons of property to either rent at extortionary prices or sit on until they sell for a higher price to the next sucker. Also stuff like the RealPage scandal where they simply illegally collude: https://news.ycombinator.com/item?id=41330007

I think the idea of a law that only allows a limited number of owned properties per person and requires them to actually be using those properties would be interesting to alleviate this.

0xffff2•16h ago
Housing prices are being driven up by the fact that we don't build enough housing. That should be the end of the discussion. What you describe is just a symptom of not having enough housing. If we just built more housing then there wouldn't be any business angle for the corporations buying up housing.
skeaker•12h ago
The obscenely rich entities that buy up tons of property without actually using it do so because they only stand to gain even more money on it later. Making more housing would help in the long term for sure, but these entities would still buy up most of it as it becomes available. Remember that as long as they hold the vast majority, they essentially have a captive audience and can set whatever outlandish prices they want (especially when they collude as they have done). Something to restrict the number of unused properties a person can own would still be beneficial either way.

That's to say nothing of the restrictions on "just" building new housing (land, time, and space, particularly space located near job sites).

DontchaKnowit•7h ago
Well, why don't we build more housing?
tuatoru•2h ago
Exactly. Who has the political connections to get development policy tilted their way?
uses•14h ago
No... the reason housing prices are high and rising is that not enough housing is getting built in the places people want to live. The main reason for that is that the people who already live in those places can block construction of new housing. That, and zoning.
_DeadFred_•11h ago
Housing prices in so many places are high because Californians, used to spending crazy high prices for property in tech spheres (or able to sell their property for crazy high prices because of the tech spheres), moved to other locations and caused property values in those areas to go up. Why is Hacker News trying to rewrite this?
pj_mukh•9h ago
No one's trying to rewrite anything.

"Californians, used to spending crazy high prices for property"

Please dig into why that statement is true and re-read your parent statement. Your analysis can't just abruptly stop there. It all goes back to housing supply.

BriggyDwiggs42•16h ago
Parent didn’t claim tech was responsible for every problem? Housing prices are likely an inequality issue; as a greater portion of money in the economy is held by rich people, more money is invested and less is spent on goods/services, hence a scarce asset like land sees an increase in value.
_DeadFred_•11h ago
I mean, home prices went up insanely in California due to tech. Many people cashed out and bought homes in cheaper locations, driving up the housing prices there beyond what locals could afford.

How did Hacker News already forget these things?

azan_•10h ago
> Real wages haven’t risen since 1980.

Do people really believe that? I think people either have too rosy a view of the '80s, or they think real wages should also adjust for lifestyle inflation.

naryJane•9h ago
Yes. It’s even a part of Ray Dalio’s speeches on the topic. Here is one example where he mentions it: https://www.linkedin.com/pulse/why-how-capitalism-needs-refo...
yencabulator•8h ago
> The utopia Altman describes won’t appear.

Sure it will, as far as Altman is concerned. To make the whole post make sense, add "... for the rich" where appropriate.

atleastoptimal•2h ago
Sure, people's well-being as a whole hasn't gotten better since the 1980s, except for:

>Air quality (no more leaded gasoline)

>Life expectancy

>Cancer survival rates

>Access to information

>Infant mortality

>Violent crime rates across the western world

>Access to education

>Clean water and food for 4+ billion people

>HIV treatment

>etc

The negativity on this site is insane. They will deny the greatest scientific achievements if it lets them dunk on AI or whoever is the enemy of the week.

dustincoates•1h ago
Not to mention global poverty: https://ourworldindata.org/poverty?insight=global-extreme-po...
philipallstar•20h ago
I can definitely imagine a whole swathe of paperwork-based jobs, such as regulatory work, quality assurance, report-writing, and all the non-personnel bits of HR, becoming mostly AI without a lot of problems.
NoGravitas•16h ago
I'd say you would have to be insane to consider outsourcing regulatory paperwork to LLMs, except that I have seen people working in that field actually proposing it because of FOMO. "Human in the loop" of course, but we all know what that means in practice.
insane_dreamer•18h ago
> There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before

No "the world" won't be getting richer. A small subset of individuals will be getting richer.

The "new policy ideas" (presumably to help those who are being f*d by all this) have been there all along. It's just that those with the wealth don't want to consider them. Those people having even more wealth does _not_ make them more likely to consider those ideas.

Honestly this drivel makes me want to puke.

thenthenthen•18h ago
The singularity already happened; we are just not aware of it yet. At least, that's what I'm observing here. Most smart/AI products reduce humans to servants doing the refilling, refuelling, and error-checking. Who remains in control is the question. Money?
daxfohl•18h ago
> although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly

If the "mistake" is that of concentrating too much power in too few hands, there's no recovery. Those with the willingness to adapt will not have the power to do so, and those with the power to adapt will not have the willingness. And it feels like we're halfway there. How do we establish a system of checks and balances to avoid this?

NoGravitas•17h ago
This would be a sign of deep delusion if it weren't transparently self-serving.
botverse•15h ago
I can't seem to reconcile this with the fact that there has not been a significant improvement to transformers in the eight years since they were introduced. None of the AGI components, beyond those that emerged with hyper-scale, have been achieved. Where is the superintelligence going to come from?
nektro•12h ago
can't wait for the day this bubble bursts
rblion•11h ago
It's funny in a cosmic way how YC was once led by this guy.
akomtu•10h ago
> Many people will choose to live their lives in much the same way, but at least some people will probably decide to “plug in”.

I bet he wants to be the first to "plug in" and become the first AI enhanced human.

w-hn•6h ago
Do these tech overlords find it really hard to resist the “pitching in” part even after they’ve already deployed millions and billions in PR? Maybe just the itching thumb? Yeah, maybe.

> AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present

And then nothing substantial after this proclamatory hot-take. So let’s just choose to believe le ai propheté.

It's like a post written as quick instructions to a PR team (and then asking an LLM to inflate it), from the comfort of a warm and cozy Japanese seat.

Definitely not written by Gemini, at least; it usually does a better job than this. Well, at least, like Zuck, he eats his own food that he killed.

Will be looking forward to titles like “The Vehement Duality” &c in the near future.