frontpage.

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•2m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•13m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•19m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
1•cwwc•23m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•32m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
2•eeko_systems•39m ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
2•neogoose•42m ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•43m ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•43m ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•44m ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•44m ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
2•vunderba•45m ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
1•dangtony98•50m ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•58m ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
4•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
2•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
1•devavinoth12•1h ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•1h ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•1h ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•1h ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•1h ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
2•mkyang•1h ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•1h ago•1 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•1h ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•1h ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
4•ambitious_potat•1h ago•4 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•1h ago•0 comments

Say farewell to the AI bubble, and get ready for the crash

https://www.latimes.com/business/story/2025-08-20/say-farewell-to-the-ai-bubble-and-get-ready-for-the-crash
122•taimurkazmi•5mo ago

Comments

mitchbob•5mo ago
https://archive.ph/2025.08.20-113134/https://www.latimes.com...
jqpabc123•5mo ago
After the crash, tech "industry leaders" will struggle to explain why/how they were conned into believing that intelligence was a simple database function with some probability and statistics sprinkled on top.
Zealotux•5mo ago
I remain convinced the whole hype was a way to overfire after the big overhire.
toomuchtodo•5mo ago
They're still aggressively outsourcing to India, the Philippines, and LATAM under cover of AI to tighten the screws on labor costs. Domestic hiring that drags on is to pull in new employees at current lower market wages that will be sticky for some time.
AbstractH24•5mo ago
Eventually, we're going to have to stop living in the shadow of the excess in tech that was 2020-21.

Are we there yet?

lbrito•5mo ago
But who's to say humans are any different than simple databases with probability and statistics firing neurons??

/s

naasking•5mo ago
> But who's to say humans are any different than simple databases with probability and statistics firing neurons??

Nobody has said they're simple databases, they would obviously be complex databases.

jqpabc123•5mo ago
Some of them are very much like this. They think *intelligence* is a measure of your ability to regurgitate data that you have been fed. A genius is someone who wins on Jeopardy.

In engineering school, it was easy to spot the professors who taught this way. I avoided them like the plague.

mrguyorama•5mo ago
No they won't. They won't struggle to explain anything, because in US business culture nobody ever actually takes blame except underlings. Nobody ever even asks them to explain themselves. Even in public companies, it seems like the shareholder questions that actually get asked on quarterly calls are vague humblebrags: "What are we going to do about the problem of winning so hard?" while they had record layoffs the previous quarter.

Those talking heads haven't had to mea culpa for: hype about Hadoop, hype about blockchain, hype about no-code, hype about the previous AI bubble, hype about "agile", hype about whatever JS framework is popular this week, etc.

karmakurtisaani•5mo ago
We don't have to look far for examples. Meta dropped VR in favor of AI without much explanation. All they need to do is start talking about the next shiny thing.
bugjoey•5mo ago
The internet didn't go away after the dotcom crash in early 2000, and neither will AI *if* such a crash happens.

There are a lot of businesses discovering its benefits now; companies will continue building things on top of it.

anuramat•5mo ago
this is just wrong

you could say "it's just matrix multiplication"; but then quantum mechanics (and thus chemistry, biology, and everything on top of that) is just linear algebra

pera•5mo ago
I just don't get why Altman had to hype this release so much. What was the plan?

Also, what was the deal with all those mysterious Star Wars pictures?

bgwalter•5mo ago
Initially politicians were responsive. In Feb. 2024 he sought trillions of investment to ramp up "AI" [1]. Then he got the White House announcement with Trump and Softbank for the $500 billion Stargate deal. The project has flopped, only one data center will be built.

So I assume he thought hype would work again, but people are beginning to scrutinize the real capabilities of "AI".

[1] https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-do...

pj_mukh•5mo ago
I think the announcement was mostly a ploy to get OpenAI access to the White House and not much else (especially because Musk was already in there).

But they are clearly on their way to building 20 data centers[1]. OpenAI raising $500B over 10-15 years to build inference capacity isn't really that hard to believe or that impressive at this point, tbh. Like that could just be venture debt that is constantly serviced.

[1]: https://builtin.com/articles/stargate-project

AbstractH24•5mo ago
I'm starting to think he's playing the long game. To make OpenAI what Amazon and Google were in the dot com bubble. The last people standing.

And he's not the only one, a handful of companies are well aware that we're nearing the peak of a hype cycle and making sure they can survive the burst.

egypturnash•5mo ago
I sure am looking forward to what will happen to my power bill once Facebook decides to default on its share of the bill for the massive power plants Entergy is building solely to power the huge-ass data center FB's building in northern Louisiana. https://www.knoe.com/2025/08/19/entergy-power-plant-meta-dat...
selimthegrim•5mo ago
I was just in the Louisiana public service commission meeting where Entergy told the regulators that Meta Platforms was worth 2 trillion and they got some opinion from a New York law firm that their word was good so YOLO. Passed 4-1. The exact words were “Meta Platforms is worth more than the five biggest banks combined” (implying no point in asking a bank for a loan guarantee or backstop)
morkalork•5mo ago
They really went with the "it's too big to fail" argument, hah. Not that I'd be afraid of that, just their short attention span. How is the metaverse these days anyway?
egypturnash•5mo ago
uuuuugggggggghhhhhhhhhhhh I hate this future so much.
andrewstuart•5mo ago
Meh.

Crashes come when there was no real business value.

I use AI all day and I’m sure I’m not the only one.

kg•5mo ago
There was a lot of business value during the dotcom boom and we still had a crash. The question is how many AI companies have strong fundamentals and will survive, vs the ones that have weak fundamentals and will die if/when the investment money dries up.
m_fayer•5mo ago
The sad thing about bubbles based on overhyped but nevertheless useful tech is the collateral damage of the pop. Small promising companies that are simply too young to have good fundamentals will go under from the backlash created among investors and potential customers. It’s destruction that could’ve been avoided if we had a more measured and sober society that doesn’t need a new craze every 5 years.
lazide•5mo ago
pets.com
benreesman•5mo ago
I also consume all the heavily subsidized LLM tokens I can find a use for. Great deal for us. The people who funded it?

Not so fun.

torginus•5mo ago
LLMs also cost what they cost because NVIDIA won't sell you a 5090 upgraded with 80GB of RAM for $3000 instead of $2000 (which is overkill in the first place). You have to buy an H200 for $40,000.
m4rtink•5mo ago
If there is demand, someone will sell that eventually - while NVIDIA has a headstart, they "just" fab stuff on TSMC anyway. AMD and to a degree Intel are already starting to sell cards with more VRAM.
Henchman21•5mo ago
Apparently you can get these on the black market in China, at least according to “Gamers Nexus”

https://m.youtube.com/watch?v=1H3xQaf7BFI

vinni2•5mo ago
There is definitely value but not sure if it is as much as the AI bosses are promising. I don’t know if it will crash or not but they are definitely overselling it. GPT-4o to 5 is so incremental compared to 3.5 and 4.
currio•5mo ago
Crashes also happen when there is a huge mismatch between price and value.
foldr•5mo ago
The issue isn't that AI has no value but that the amount of money invested in it is out of proportion to the value it's going to generate within a reasonable time frame. Useful new technologies are invented all the time. But not many of them will yield a return on a trillion dollar investment.
minimaxir•5mo ago
Two things can be true simultaneously:

a) AI is an extremely useful productivity tool to accomplish tasks that other programming paradigms can't do.

b) Investment in AI is disproportionate to the impact of (a), leading to a low probability of sufficient ROI.

nabla9•5mo ago
> Crashes come when there was no real business value.

You fall into all or nothing logic. That's thinking failure.

If real business value is 10% of the price, there will be massive crash and years of slow advance.

Dot-com bust was like that. Internet clearly had value, but not as much and not as quickly as people thought.

intended•5mo ago
I made this comment earlier, and it's just easier to copy it:

>Theres 2 AI conversations on HN occurring simultaneously.

> Convo A: Is AI actually reasoning? does it have a world model? etc..

> Convo B: Is it good enough right now? (for X, Y, or Z workflow)

The internet reshaped the entire global economy, yet the dot com crash occurred all the same.

Convo A leads to questioning whether the insane money being poured into AI makes sense. The fact that many people are finding utility doesn't preclude things from being overvalued and overhyped.

Spooky23•5mo ago
There’s a $5 bill taped to every prompt you do. It’s unlikely you’d be using it all day if you were paying by the drink.
toomuchtodo•5mo ago
95 per cent of organisations are getting zero return from AI according to MIT - https://news.ycombinator.com/item?id=44956648 - August 2025

State of AI in Business 2025 [pdf] - https://news.ycombinator.com/item?id=44941374 - August 2025

https://web.archive.org/web/20250818145714/https://nanda.med...

> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.

naasking•5mo ago
GenAI is barely out of the research phase and is only now being fine-tuned for specific applications. Check back in 3 years.
rightbyte•5mo ago
Hasn't e.g. Deepseek been releasing special coder llms for like two years now?
naasking•5mo ago
That's still cutting edge research, not a polished product for a well understood domain. These are like the first steam engines, not a Ferrari.
namnnumbr•5mo ago
This is not new - the quote was "87% of data science projects fail" in 2019.

https://venturebeat.com/ai/why-do-87-of-data-science-project...

babypuncher•5mo ago
I used the internet every day in 2000. Bubbles happen with useful technologies not because we decide they aren't useful, but because we were over-sold on what they could do before they could do it.

A lot of AI investment right now is hinged on promises of "AGI" that are failing to materialize, and models themselves are seeing diminishing returns as we throw more hardware at them.

exasperaited•5mo ago
Crashes, rather, must come when there is an enormous, industry-wide mismatch between perceived value (e.g. assessed in terms of expected return on investment) and actual value in terms of real return on investment within the expected period.

Evidence is emerging that the former could be twenty times the latter, or more.

The value you perceive has been much, much more expensive than investors would like, I suspect.

TMWNN•5mo ago
Internet traffic kept growing throughout the dotcom bubble. That valuations got ahead of themselves didn't mean that there wasn't something real driving the hype.

Even if AI valuations have a sharp correction, there will still be a great need—and demand—for compute.

spcebar•5mo ago
My take is the reckoning will come for the billion businesses that offer B2B AI solutions that don't provide any meaningful value. "Analyze customer intent and improve conversion with XYZ AI!" The tools of the AI revolution will continue to exist, though development (read: money) will presumably slow as businesses recalibrate and stop paying for the silver-bullet solutions that they discovered don't work. Then the snake-oil business people who built businesses around LLMs will move on to whatever Blockchain 2: Electric Boogaloo looks like.
evil-olive•5mo ago
I use my house all day and I'm sure I'm not the only one.

that didn't stop the housing bubble in the 2000s.

likewise, if I argue that Dutch "Tulip mania" [0] was a bubble, "but tulips are pretty" is not an effective counter-argument. tulips being pretty was a necessary precondition for the bubble to form.

the existence of a foo bubble does not mean that foo has zero value - it means that the real-world usefulness of foo has become untethered from market perceptions of its monetary value.

0: https://en.wikipedia.org/wiki/Tulip_mania

rsynnott•5mo ago
> Crashes come when there was no real business value.

Indeed. That's why we don't have trains or the internet anymore; once they had their big crashes we knew there was no business value, so they went away.

... I mean, what? You generally can't get a big bubble without _some_ business value, so bursting bubbles almost always have _something_ behind them (the crypto one may be the exception).

webdevver•5mo ago
nothing ever happens
pelagicAustral•5mo ago
tap tap tap
aydyn•5mo ago
Well would you look at the time.
1945•5mo ago
The author isn't exactly a thought leader in the space, or really any space for that matter. Opinion worth nothing.
nathan_compton•5mo ago
I've never met a "thought leader" whose opinion was worth anything.
aydyn•5mo ago
"Thought leader" isn't an actual title (or at least it shouldn't be). In my mind, it's simply someone you recognize as having expertise worth paying attention to.
cjbgkagh•5mo ago
It’s a title that is given to people to get them to present at junkets, a modern socially and legally acceptable way to bribe people. No one should take them seriously.
aydyn•5mo ago
Ok, I didn't know that. I thought it was just a shorthand for people in leadership positions with lots of expertise.
4ndrewl•5mo ago
I'm a "thought follower". Wherever my leaders tell me to go, I follow.
roywiggins•5mo ago
Have bubbles ever been successfully called by thought leaders?
hasperdi•5mo ago
Yes. Say there are 10,000 thought leaders with different views. There's a chance that at least one is right.
MattGrommes•5mo ago
Yep. Then they're the lottery winner that gets to go on TV and write a book about it as if it was expertise that led to their prediction.
alephnerd•5mo ago
If by "thought leader" you mean domain experts making criticism then yes.

For example, Nouriel Roubini calling out the risks of the 2008 recession before it happened, Michael Pettis calling out the risks of a real estate balance sheet crisis in China before Evergrande happened, and Arvind Subramanian calling out the risks of a shadow bank crisis in India before the IL&FS collapse in 2018.

For AI/ML, I'd tend to trust Emily Bender, given her background in NLP, the field LLMs originated from.

rsynnott•5mo ago
Hrm. I'd read "thought leader" to mean "hype man"; that's how the term is normally used. I certainly wouldn't read it as "domain expert"; the people generally referred to as 'thought leaders' frequently are not.
exasperaited•5mo ago
But people being just a "thought leader in the space" is exactly the reason there's a bubble.

Bubbles are a lot easier to visualise from the outside.

yahoozoo•5mo ago
Is Sam Altman a thought leader?
Henchman21•5mo ago
IMO “thought leaders” only set the agenda of groupthink. So… yes?
xracy•5mo ago
Never quite realized how much I disliked the term "Thought Leader" until I read it 5x in the comments responses of this thread.
1945•5mo ago
I should have said domain expert instead. These two casually chosen words really riled them up
xracy•5mo ago
I just said the word "domain expert" in my head 5x, and I don't like it any better.

Both of them give off "influencer" vibes. They're meaningless without more context. We used to just call people "experts", but now that's an arbitrarily bad word.

JeremyNT•5mo ago
It's just another anecdote, but the "vibes" feel like they're shifting.

The employment numbers, the inflation numbers, government austerity, the GPT-5 disappointment... the valuations are all more like meme stocks and not based on reality.

If enough articles about the crash start appearing, and enough people believe the crash is coming, then congratulations: the crash will occur.

nabla9•5mo ago
Dot-com bubble is a good analogy.

Nvidia will be the Cisco of this era. Cisco was the world's most valuable company when the dot-com bubble peaked, then went down almost 90% in 2 years. There was lots of "dark fiber" all around (fiber optic cable already installed but not used).

I think OpenAI and most small AI companies go down. Microsoft, Google, Meta scale down, write down losses but keep going and don't stop research.

I hope AI bubble leaves behind massive amounts of cloud compute that companies are forced to sell at the price of the electricity and upkeep for years. New startups with new ideas can build upon it.

Investors will feel poor, crypto market will crash and so on.

NitpickLawyer•5mo ago
What a bad article. I mean how biased can you be, to put the first big quote from someone who wrote a book called "The AI con". Come on! This feels like the "deepseek r1 is the death of nvda" of 6 months ago. Someone is making a play, and whoever wrote this article fell for it.

gpt5 has always been about making a "collection of models" work together and not about model++. This was announced what, a year ago? And they delivered. Capabilities ~90-110% of their top tier old models at 4-6x lower price. That's insane!

gpt5-mini is insane for its price, in agentic coding. I've had great sessions with it, at 0.x$ / session. It can do things that claude 3.5/3.7 couldn't do ~6 months ago, at 10-15x the price. Whatever RL they did is working wonders.

sapphicsnail•5mo ago
> What a bad article. I mean how biased can you be, to put the first big quote from someone who wrote a book called "The AI con".

It's an op-ed. It's supposed to be biased.

HDThoreaun•5mo ago
This is my biggest issue with online newspapers. With print, it is very clear whether you are in the op-ed section. Online, not so much.
Workaccount2•5mo ago
Media orgs (well, journalists really) are especially hostile towards AI, and it's easy to understand why.
athenot•5mo ago
This is an Opinion piece, not a news article. The distinction between the two seems to be lost on most people nowadays.

One way I leverage opinion pieces with which I disagree is to treat them as a sort of "devil's advocate": What argument are they making? Is that really the strongest one they have? Does my understanding of that domain effectively counter those arguments? Etc.

In this case, the main argument is that ChatGPT is not the miraculous genie it was hyped up to be. That's a fair statement, but extrapolating that into "the AI bubble is crashing now" overlooks a host of other facts about its usefulness. Yes, we'll eventually hit the trough of disillusionment, but I don't think we're there yet.

leopoldj•5mo ago
You are right. But newspapers make it difficult to distinguish them. Opinion pieces are liberally distributed amid news items. LA Times has a Columns section on the right hand side. But, this particular piece is not listed there. It is listed next to other news articles.
mvdtnz•5mo ago
> gpt5 has always been about making a "collection of models" work together and not about model++.

That is revisionist history. Look at Altman's hype statements in the weeks and months leading up to gpt5, some of which were quoted in the article. He never proposed gpt5 as what you're saying and indeed he claimed a massive leap in model performance.

NitpickLawyer•5mo ago
> https://x.com/sama/status/1889755723078443244?lang=en

> After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.

> In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.

6 months ago.

There's also another one, earlier that says gpt5 will be about routing things to the appropriate model, and not necessarily a new model per se. Could have been in a podcast. Anyway, receipts have been posted :)

HAL3000•5mo ago
> gpt5 has always been about making a "collection of models" work together and not about model++.

No, it wasn’t. Have you read and listened to Altman’s hype around GPT-5 from a year ago? They changed the narration after the 4.1 flop, which they thought would be GPT-5, and it seems some people fell for it.

> Capabilities ~90-110% of their top tier old models at 4-6x lower price

Maybe they finally implemented the DeepSeek paper.

NitpickLawyer•5mo ago
> No, it wasn’t.

I replied below in this thread with the specific post, 6 months ago.

> After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.

> In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.

lossolo•5mo ago
"the delta between 5 and 4 will be the same as between 4 and 3"[1]

Obviously it's not.

1. https://lexfridman.com/sam-altman-2-transcript/

jsnell•5mo ago
GPT-4 was a long time ago, and honestly mostly useless. But a lot of that progress was already present in the intervening models, and it's easy to forget it happened when comparing GPT-5 to the state of the art a month ago rather than two years ago.

This is hard to quantify exactly, since very few benchmarks have the kind of scales where comparing two deltas would be meaningful. But if we pick the Artificial Analysis composite score[0] as the baseline, GPT-3.5 Turbo was at 11, GPT-4 at 25, and GPT-5 at 69. It's just that most of the post-GPT-4 improvement came with o1 and o3.

Feels like a pretty fair statement.

[0] https://artificialanalysis.ai/#frontier-language-model-intel...

pera•5mo ago
This is Altman before the release:

OpenAI's CEO says he's scared of GPT-5

https://www.techradar.com/ai-platforms-assistants/chatgpt/op...

Sam Altman Compares OpenAI To The Manhattan Project—And He's Not Joking About the Risks

https://finance.yahoo.com/news/sam-altman-compares-openai-ma...

This is Altman after the release:

Sam Altman says ‘yes,’ AI is in a bubble

https://www.theverge.com/ai-artificial-intelligence/759965/s...

xracy•5mo ago
> This was announced what, a year ago?

Source? Others are calling out this as being incorrect, so a source would help people evaluate this claim. Personally I'm much more likely to believe that AI companies are moving the goalposts rather than making significant leaps forward.

NitpickLawyer•5mo ago
I posted the tweet below, please check it there. It was 6 months ago.
jaredcwhite•5mo ago
I have a feeling if someone wrote an article about how great GPT-5 is and the first big quote was from Sam Altman, you'd say it's a cool article.

It's only "bias" when you don't like it.

aydyn•5mo ago
A comment in a previous thread stuck with me; it said something like "AI is successful because nothing else interesting is happening".

That rings true and I suspect the bubble won't burst until something else comes along to steal the show.

r2_pilot•5mo ago
Meanwhile, Claude is helping me build a robot and is also writing the code that runs its subsystems, but okay. Sure, I could do it all myself (maybe?), but not nearly as quickly or in the interstitial moments of life.
bogwog•5mo ago
Sounds more like you're helping Claude build a robot.
r2_pilot•5mo ago
I have more hands and a larger context window. It's a collab (/s). But it's cool that I can do it more or less solely with that tool and not necessarily use Google or other resources (obviously, for any source, from Google to the Encyclopedia Britannica, one must evaluate the quality of the information).
bogwog•5mo ago
What are you getting out of this though? Do you think this robot is going to have some kind of positive economic impact? Will you turn it into a business? Do you anticipate it will solve a personal need, like folding your laundry for you? Because a lot of people do robot projects in their free time to learn, but you're doing it without the learning part...so what's the point?
r2_pilot•5mo ago
It's a hack. Not everything has to have commercial value, not everything has to solve a need, there's no inherent purpose in anything, this is just how I choose to spend my time.
queenkjuul•5mo ago
My brother built a robot when he was 14
AbstractH24•5mo ago
Interesting hypothesis.

One could argue the same was true of blockchain until AI came along.

BoredPositron•5mo ago
The only thing that's crystallized is that the guys in that one meeting were right... there is no moat. The author might be right, but the problem will be oversupply.
marcosdumay•5mo ago
Hum... Does anybody expect the US government to reduce the money supply or distribute it? Or for the dollar to devalue enough that their money doesn't make much of a difference anymore?

If the answer is "no" to all of the above, then you should expect some bubble to keep going. At most, they will change the bubble's subject.

geye1234•5mo ago
I have also concluded that it is more likely a bubble than not.

Anyone looked at buying S&P sector-specific ETFs? For people who want to keep their portfolio spread as widely as possible, but are frightened by how tech-heavy the S&P index is, these seem a good option. But they all seem to have high costs (the first one I pulled up is 0.39%).

ofrzeta•5mo ago
The core thesis seems valid: "AI bots seem intelligent, because they've achieved the ability to seem coherent in their use of language. But that's different from cognition."

As it happens, LLMs work comparatively well with code. Is this because code does not refer (much) to the outside world and fits the workings of a statistical machine well? In that case, the LLM's output can also be verified more easily by expert inspection, compiling, typechecking, linting, and running. Although there might be hidden bugs that only show up later.

anuramat•5mo ago
> large language models work comparatively well with (programming) languages

what else would they be good at

tim333•5mo ago
>perceptions of AI’s relentless march toward becoming more intelligent ... came to a screeching halt Aug. 7

Overstates things a bit. It seems unlikely OpenAI will release human level AI in the next year or two, but the march of AI improving goes on.

Also, re the AI Con book saying AI is a marketing term: I'm more inclined to go with Wikipedia and "a field of research in computer science".

Though there is a bit of a dot com bubble feel to valuations.

1vuio0pswjnm7•5mo ago
"Indeed, as long ago as the 1960s, that phenomenon was noticed by Joseph Weizenbaum, the designer of the pioneering chatbot ELIZA, which replicated the responses of a psychotherapist so convincingly that even test subjects who knew they were conversing with a machine thought it displayed emotions and empathy.

"What I had not realized," Weizenbaum wrote in 1976, "is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Weizenbaum warned that the "reckless anthropomorphization of the computer" - that is, treating it as some sort of thinking companion - produced a "simpleminded view of intelligence.""

https://www.theguardian.com/technology/2023/jul/25/joseph-we...

Weizenbaum's 1976 book: https://news.ycombinator.com/item?id=36875958

HN commenter rates this "greatest tech book of all-time":

https://news.ycombinator.com/item?id=36592209

mgarfias•5mo ago
Some of us have lived through multiple bubbles and know that often, the underlying bits are useful and will gain widespread acceptance. Just play long, and don’t feed the hypemonster.
joeyagreco•5mo ago
Using how GPT-5 generates text within an image is a terrible way to test it.

If you ask it to list all 50 states or all US presidents it does it no problem. Asking it to generate the text of the answer in an image is a piss poor way of testing a language model.

I heavily dislike GPT-5 but at least have a fair review of it.

wonderfuly•5mo ago
This article tries to argue that the AI bubble has burst by pointing to the failed release of GPT-5. Admittedly, the release of GPT-5 was somewhat of a flop, but I think it's more a failure of the launch than of the model itself. In fact, if you use the GPT-5 Thinking model, it's actually quite good. They attempted to make the model automatically route to different levels of thinking intensity, but the routing didn't work very well, which led to the various bad cases people experienced.
tim333•5mo ago
Yeah Altman says "I think we totally screwed up some things on the rollout" https://fortune.com/2025/08/18/sam-altman-openai-chatgpt5-la...

also

>ChatGPT is already the fifth biggest website in the world, according to Altman, and he plans for it to leapfrog Instagram and Facebook to become the third, though he acknowledged: “For ChatGPT to be bigger than Google, that’s really hard.”

>“We have better models, and we just can’t offer them, because we don’t have the capacity,” he said. GPUs remain in short supply, limiting the company’s ability to scale.

So Altman wants to keep building. Whether investors will pay up for that remains to be seen I guess.