
Netlify's new default enabled "AI gateway" broke our app

https://typzet.com/d5fe64fb-0969-4684-95cb-e7adc269d5cc/c4ebe65b-6a91-4e52-85d9-d3f959f0336f
2•johoba•3m ago•0 comments

Windows 10 refuses to go gentle into that good night

https://www.theregister.com/2025/10/02/windows_10_statcounter/
1•rntn•4m ago•0 comments

Meta will listen into AI conversations to personalize ads

https://www.theregister.com/2025/10/01/meta_ai_use_informs_ads/
1•Bender•8m ago•0 comments

Show HN: Llms.py – Local ChatGPT-Like UI and OpenAI Chat Server

https://servicestack.net/posts/llms-py-ui
1•mythz•8m ago•0 comments

BT promises 5G Standalone for 99% of the UK by 2030

https://www.theregister.com/2025/10/02/bt_5g_standalone_2030/
1•Bender•9m ago•0 comments

EU funds are flowing into spyware companies, and politicians are demanding answers

https://www.theregister.com/2025/10/02/eu_spyware_funding/
2•Bender•10m ago•0 comments

YC, Take Two

https://www.raf.xyz/blog/01-yc-take-two
1•sebg•12m ago•0 comments

Piracy Operator Goes from Jail to Getting Hired by a Tech Unicorn in a Month

https://torrentfreak.com/sports-piracy-operator-goes-from-jail-to-getting-hired-by-a-tech-unicorn...
3•askl•12m ago•0 comments

Skibidi toilet test for super intelligence

https://tombers.github.io/oblique-angles/ai/culture/2025/09/25/skibidi-toilet-super-ai.html
1•TomBers•13m ago•1 comments

US to give Ukraine intelligence on long-range energy targets in Russia

https://www.reuters.com/world/europe/us-provide-ukraine-with-intelligence-missile-strikes-deep-in...
5•geox•14m ago•0 comments

Red Hat confirms security incident after hackers claim GitHub breach

https://www.bleepingcomputer.com/news/security/red-hat-confirms-security-incident-after-hackers-c...
2•speckx•16m ago•0 comments

Tangled – Decentralized Git Forge on AT Protocol

https://tangled.org
1•nasso_dev•17m ago•0 comments

Magic Wormhole: Get things from one computer to another, safely

https://magic-wormhole.readthedocs.io/en/latest/welcome.html
1•xd1936•19m ago•0 comments

AI-generated 'participants' can lead social science experiments astray

https://www.science.org/content/article/ai-generated-participants-can-lead-social-science-experim...
2•sebg•22m ago•0 comments

Why America still needs public schools

https://theconversation.com/why-america-still-needs-public-schools-260368
16•PaulHoule•24m ago•3 comments

I Tried It: Should You Microwave Salmon?

https://mollymogrenkatt.substack.com/p/i-tried-it-should-you-microwave-salmon
1•surprisetalk•25m ago•1 comments

Oura's Partnership with The Pentagon Is Ringing Alarm Bells for Customers

https://slate.com/technology/2025/10/oura-ring-pentagon-department-of-defense-health-wearable.html
2•bookofjoe•25m ago•0 comments

Show HN: Ultra-realistic, highly controllable AI images for e-commerce brands

https://nightjar.so
1•dayaya•25m ago•0 comments

Hoth Takes: A Star Wars Podcast

https://hothtakes.wordpress.com/
1•mooreds•26m ago•0 comments

Mira Murati's Stealth AI Lab Launches Its First Product

https://www.wired.com/story/thinking-machines-lab-first-product-fine-tune/
2•msolujic•27m ago•0 comments

Nine HTTP Edge Cases Every API Developer Should Understand

https://blog.dochia.dev/blog/http_edge_cases/
1•ludovicianul•29m ago•0 comments

NIST: Post-quantum cryptography push overlaps with existing security guidance

https://www.cybersecuritydive.com/news/nist-post-quantum-cryptography-guidance-mapping/760638/
1•mooreds•30m ago•0 comments

Show HN: Unicode symbols that make numbers look cool anywhere

https://fontgen.cool/number-font-generator
1•liquid99•30m ago•0 comments

Meadow Companion: tools of a smartphone, without any distractions

https://www.meadow.so/product
1•RockstarSprain•31m ago•0 comments

Two Planes Collide on LaGuardia Airport Taxiway

https://www.nytimes.com/2025/10/02/nyregion/delta-plane-collision-laguardia.html
1•cypherpunks01•32m ago•0 comments

Show HN: Treyspace – A canvas that gives your LLM spatial awareness

https://app.treyspace.app/
1•lforster•32m ago•0 comments

Tool to audit `ljharb`-maintained packages in your npm dependencies

https://voldephobia.rschristian.dev/
1•rob•35m ago•0 comments

The Book of the Runtime (BOTR) for the .NET Runtime

https://github.com/dotnet/runtime/blob/main/docs/design/coreclr/botr/README.md
1•mooreds•35m ago•0 comments

Ask HN: How are you using LLMs?

2•akshayB•35m ago•1 comments

When Arguing Definitions Is Arguing Decisions [pdf]

https://theinexactsciences.github.io/docs/Arguing%20Definitions%20As%20Arguing%20Decisions.pdf
1•rzk•38m ago•0 comments

How the AI Bubble Will Pop

https://www.derekthompson.org/p/this-is-how-the-ai-bubble-will-pop
116•hdvr•1h ago

Comments

rimeice•1h ago
> Some people think artificial intelligence will be the most important technology of the 21st century.

I don’t, I think a workable fusion reactor will be the most important technology of the 21st century.

mattmaroon•1h ago
Because it’ll power AI!
baq•1h ago
if AI won't be a fad like everything else, we're going to need these, pronto

...and it does seem this time that we aren't even in the huge overcapacity part of the bubble yet, and won't be for a year or two.

gizajob•1h ago
We’ll probably need to innovate one of those to power the immense requirements of AI chatbots.
geerlingguy•1h ago
I think we'll need a few hundred, if spending continues like it has this year.
_heimdall•57m ago
What makes you think we'll have fusion reactors in the 21st century?
rimeice•48m ago
I don’t think we’ll have the choice.
fundatus•36m ago
> I like fusion, really. I’ve talked to some of luminaries that work in the field, they’re great people. I love the technology and the physics behind it.

> But fusion as a power source is never going to happen. Not because it can’t, because it won’t. Because no matter how hard you try, it’s always going to cost more than the solutions we already have.

https://matter2energy.wordpress.com/2012/10/26/why-fusion-wi...

tbrownaw•52m ago
We'll finally have electricity that's too cheap to meter.
lopis•49m ago
And we'll use it all to run more crypto/AI/next thing
fundatus•38m ago
Why would it be too cheap to meter? You're still heating up water and putting it through a turbine. We've been doing that for ages (just different sources of energy for the heating up part) and we still meter energy because these things cost money and need lots of maintenance.
tbrownaw•20m ago
But that's the whole reason fusion is so important. Just like it was the whole reason fission was so important.

https://www.nrc.gov/reading-rm/basic-ref/students/history-10...

Perz1val•36m ago
As we add more and more solar, we see grid-connection fees rise while electricity itself stays relatively cheap. Fusion won't change that; somebody has to pay for the guy reconnecting cables after a storm
Bigsy•52m ago
DeepMind is working on the plasma-control problem at the moment. I suspect they're using a bit of AI... and I wouldn't put it past them to crack it.
robinhoode•31m ago
This is the thing with AI: We can always come up with a new architecture with different inputs & outputs to solve lots of problems that couldn't be solved before.

People equating AI with other single-problem-solving technologies are clearly not seeing the bigger picture.

fundatus•41m ago
I mean, we already have a giant working fusion reactor (the sun) and we can even harvest its energy (solar, wind, etc)! That's pretty awesome.
jacknews•27m ago
How so?

The maximum possible benefit of fusion (aside from the science gained in the attempt) is cheap energy.

We'll get very cheap energy just by massively rolling out existing solar panels (maybe some at sea), and other renewables, HVDC and batteries/storage.

Fusion is almost certain to be uneconomical in comparison if it's even feasible technically.

AI is already dramatically impacting some fields, including science (e.g. AlphaFold), and AGI would be a step-change.

BrokenCogs•12m ago
Time travel will be the most important invention of the 21st century ;)
seydor•9m ago
but people say that AI will spit out that fusion reactor, ergo AI investment is prior in the ordo investimendi or whatever it would be called (by an AI)
throw0101d•1h ago
Perhaps worth remembering that 'over-enthusiasm' for new technologies dates back to (at least) canal-mania:

* https://en.wikipedia.org/wiki/Technological_Revolutions_and_...

* https://en.wikipedia.org/wiki/Canal_Mania

leojfc•59m ago
Absolutely. Years ago I found this book on the topic really eye-opening:

- https://www.amazon.co.uk/Technological-Revolutions-Financial...

The process of _actually_ benefitting from technological improvements is not a straight line, and often requires some external intervention.

e.g. it’s interesting to note that the rising power of specific groups of workers as a result of industrialisation + unionisation then arguably led to things like the 5-day week and the 8-hour day.

I think if (if!) there’s a positive version of what comes from all this, it’s that the same dynamic might emerge. There’s already lots more WFH of course, and some experiments with 4-day weeks. But a lot of resistance too.

_heimdall•50m ago
My understanding is that the 40 hour work week (and similar) was talked about for centuries by workers groups but only became a thing once governments during WWI found that longer days didn't necessarily increase output proportionally.

For a 4 day week to really happen at scale, I'd expect we similarly need the government to decide to roll it out rather than workers groups pushing it from the bottom up.

idiotsecant•55m ago
Importantly, however, canals did end up changing the world.

Most new tech is like that - a period of mania, followed by a long tail of actual adoption where the world quietly changes

foofoo12•34m ago
There's a good podcast on the Suez and Panama canal: https://omny.fm/shows/cautionary-tales-with-tim-harford/the-...
hemloc_io•1h ago
The most frustrating thing to me about this most recent rash of biz-guy articles doubting the future of AI is the obligatory mention that AI, specifically an LLM-based approach to AGI, is important even if the numbers don't make sense today.

Why is that the case? There's plenty of people in the field who have made convincing arguments that it's a dead end and fundamentally we'll need to do something else to achieve AGI.

Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.

I'm not a hater, it could be true, but it seems to be gospel and I'm not sure why.

Mapping to 2001 feels silly to me, when we've had other bubbles in the past that led to nothing of real substance.

LLMs are cool, but if they can't be relied on to do real work maybe they're not change the world cool? More like 30-40B market cool.

EDIT: Just to be clear here. I'm mostly talking about "agents"

It's nice to have something that can function as a good Google replacement especially since regular websites have gotten SEOified over the years. Even better if we have internal Search/Chat or whatever.

I use Glean at work and it's great.

There's some value in summarizing/brainstorming too etc. My point isn't that LLMs et al aren't useful.

The existing value though doesn't justify the multi-trillion dollar buildout plans. What does is the attempt to replace all white collar labor with agents.

That's the world changing part, not running a pretty successful biz, with a useful product. That's the part where I haven't seen meaningful adoption.

This is currently pitched as something that will have nonzero chance of destroying all human life, we can't settle for "Eh it's a bit better than Google and it makes our programmers like 10% more efficient at writing code."

aurareturn•1h ago

> Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.
I have a friend who works at PwC doing M&A. This friend told me she can't work without ChatGPT anymore. PwC has an internal AI chat implementation.

Where does this notion that LLMs have no value outside of programming come from? ChatGPT released data showing that programming is just a tiny fraction of queries people do.

admissionsguy•56m ago
When your work consists of writing stuff disconnected from reality it surely helps to have it written automatically.
patapong•50m ago
On the other hand, it's a hundreds-of-billions of dollars market...
Etheryte•50m ago
The recent MIT report on the state of AI in business feels relevant here [0]:

> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.

There's no doubt that you'll find anecdotal evidence both for and against in all variations, what's much more interesting than anecdotes is the aggregate.

[0] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...

crazygringo•38m ago
No. The aggregate is useless. What matters is the 5% that have positive return.

In the first few years of any new technology, most people investing in it lose money because the transition and experimentation costs are higher than the initial returns.

But as time goes on, best practices emerge, investments get paid off, and steady profits emerge.

sigbottle•32m ago
I think it's true that AI does deliver real value. It's helped me understand domains quickly, be a better google search, given me code snippets and found obscure bugs, etc. In that regard, it's a positive on the world.

I also think it's true that AI is nowhere near AGI level. It's definitely not currently capable of doing my job, not by a long shot.

I also think that throwing trillions of dollars at AI for "a better google search, code snippet generator, and obscure bug finder" is contentious, and a lot of people oppose it for that reason.

I personally still think it's kind of crazy we have a technology to do things that we didn't have just ~2 years before, even if it just stagnates right here. Still going to be using it every day, even if I admittedly hate a lot of parts of it (for example, "thinking models" get stuck in local minima way too quickly).

At the same time, don't know if it's worth trillions of dollars, at least right now.

So all claims on this thread can be very much true at the same time, just depends on your perspective.

jt2190•30m ago
That report also mentions individual employees using their own personal subscriptions for work, and points to it as a good model for organizations to use when rolling out the tech (i.e. just make the tools available and encourage/teach staff how they work). That sure doesn’t make it sound like “zero return” is a permanent state.
lowsong•46m ago
This says more about PwC and what M&A people do all day than it does about ChatGPT.
hshdhdhj4444•45m ago
> This friend told me she can't work without ChatGPT anymore.

Is she more productive though?

People who smoke cigarettes will be unable to work without their regular smoke breaks. Doesn’t mean smoking cigarettes is good for working.

Personally I am an AI booster and I think even LLMs can take us much farther. But people on both sides need to stop accepting claims uncritically.

aurareturn•29m ago
No, she's less productive. She just uses it because she wants to do less work, be less likely to get promoted, and have to stay in the office longer to finish her work.

/s

What kind of question is that? Seriously. Are some people here so naive as to think that tens of millions of people out there don't know when something they choose to use repeatedly, multiple times a day, every day, is making their lives harder? Like ChatGPT is some kind of addiction similar to drugs? Is it so hard to believe that ChatGPT is actually productive?

echelon•28m ago
Serious thought.

What if people are using LLMs to achieve the same productivity with more cost to the business and less time spent working?

This, to me, feels incredibly plausible.

Get an email? ChatGPT the response. Relax and browse socials for an hour. Repeat.

"My boss thinks I'm using AI to be more productive. In reality, I'm using our ChatGPT subscription to slack off."

AI can be a tool for 10xers to go 12x, but more likely it's that AI is the best slack off tool for slackers to go from 0.5x to 0.1x.

I've seen it happen to good engineers. Tell me you've seen it too.

freehorse•25m ago
It is the kind of question that takes into account that people thinking that they are more productive does not imply that they actually are. This happens in a wide range of contexts, from AI to drugs.
1123581321•19m ago
It isn’t a question asked by people generally suspicious of productivity claims. It’s only asked by LLM skeptics, about LLMs.
rimunroe•5m ago
That doesn’t seem to me like a good reason to dismiss the question, and especially not that strongly/aggressively. We’re supposed to assume good intentions on this site. I can think of any number of reasons one might feel more productive but in the end not be going much faster. It would be nice to know more about the subject of the question’s experience and what they’re going off of.
happymellon•3m ago
It absolutely is a question people ask when suspicious of productivity claims.

Lots of things claim to make people more productive. Lots of things make people believe they are more productive. Lots of things fail to provide evidence of increasing productivity.

This "just believe me" mentality normally comes from scams.

tbrownaw•11m ago
It's not that hard to review how much you actually got done and check whether it matches how much it felt like you were getting done.
mrbungie•22m ago
> Is it so hard to believe that ChatGPT is actually productive?

We need data, not beliefs and current data is conflicting. ffs.

spicyusername•21m ago
I mean... there are many situations in life where people are bad judges of the facts. Dating, finances, health, etc, etc, etc.

It's not that hard to imagine that your friend feels more productive than she actually is. I'm not saying it's true, but it's plausible. The anecdata coming out of programming is mostly that people are only more productive in certain narrow use cases and much less productive in everything else, relative to just doing the work themselves with their sleeves rolled up.

But man, does seeing all that code get spit out on the screen FEEL amazing, even if I'm going to spend the next few hours editing it and the next few months managing the technical debt I didn't notice when I merged it.

bakugo•20m ago
You're working under the assumption that punching a prompt into ChatGPT and getting up to grab some coffee while it spits out thousands of tokens of meaningless slop to be used as a substitute for something that you previously would've written yourself is a net upgrade for everyone involved. It's not. I can use ChatGPT to write 20 paragraph email replies that would've previously been a single manually written paragraph, but that doesn't mean I'm 20x more productive.

And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.

isamuel•12m ago
> And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.

Come on, you can’t mean this in any kind of robust way. I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?” Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.

coderjames•17m ago
> This friend told me she can't work without ChatGPT anymore.

It doesn't say she chooses to use it; it says she can't work without using it. At my workplace, senior leadership has mandated that software engineers use our internal AI chat tooling daily, they monitor the usage statistics, and are updating engineering leveling guides to include sufficient usage of AI being required for promotions. So I can't work without AI anymore, but it doesn't mean I choose to.

legucy•7m ago
Cigarettes were/are a pretty lucrative business. It doesn’t matter if it’s better or worse, if it’s as addictive as tobacco, the investors will make back their money.
bakugo•36m ago
> This friend told me she can't work without ChatGPT anymore.

This isn't a sign that ChatGPT has value as much as it is a sign that this person's work doesn't have value.

deadbabe•23m ago
I find it’s mostly a sign of how lazy people get once you introduce them to some new technology that requires less effort for them.
kumarm•1h ago
[even in programming it's inconclusive as to how much better/worse it makes programmers.]

Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.

Current AI tools may not beat the best programmers, but they definitely improve average programmer efficiency.

piker•59m ago
> Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.

Try changing something old in claude code (or codex etc) using a programming language you have used before. Your opinion might change drastically.

UrineSqueegee•58m ago
this is literally how i maintain the code at my current position, if i didn't have copilot+ i would be cooked
Wowfunhappy•27m ago
I have! Claude is great at taking a large open source project and adding some idiosyncratic feature I need.
lowsong•53m ago
> using a programming language you have not used before

But why would I do that? Either I'm learning a new language in which case I want to be as hands-on as possible and the goal is to learn, not to produce. Or I want to produce something new in which case, obviously, I'd use a toolset I'm experienced in.

jbstack•23m ago
There are plenty of scenarios where you want to work with a new language but you don't want to have to dedicate months/years of your life to becoming expert in it because you are only going to use it for a one-time project.

For example, perhaps I want to use a particular library which is only available in language X. Or maybe I'm writing an add-on for a piece of software that I use frequently. I don't necessarily want to become an expert in Elisp just to make a few tweaks to my Emacs setup, or in Javascript etc. to write a Firefox add-on. Or maybe I need to put up a quick website as a one-off but I know nothing about web technologies.

In none of these cases can I "use a toolset I'm experienced in" because that isn't available as an option, nor is it a worthwhile investment of time to become an expert in the toolset if I can avoid that.

piva00•51m ago
The question is: is that value worth the US$400bn per year of investment sucking money out of other ventures?

It's a damn good tool, I use it, I've learned the pitfalls, it has value but the inflation of potential value is, by definition, a bubble...

philippta•49m ago
> they definitely improves average programmer efficiency

Do we really need more efficient average programmers? Are we in a shortage of average software?

jennyholzer•43m ago
> Are we in a shortage of average software?

Yes. The "true" average software quality is far, far lower than the average person perceives it to be. ChatGPT and other LLM tools have contributed massively to lowering average software quality.

layer8•30m ago
I don’t understand how your three sentences mesh with each other. In any case, making the development of average software more efficient doesn’t by itself change anything about its quality. You just get more of it faster. I do agree that average software quality isn’t great, though I wouldn’t attribute it to LLMs (yet).
Cthulhu_•12m ago
Average programmers do not produce average software; the former implements code, the latter is the full picture and is more about what to build, not how to build it. You don't get a better "what to build" by having above-average developers.

Anyway we don't need more efficient average programmers, time-to-market is rarely down to coding speed / efficiency and more down to "what to build". I don't think AI will make "average" software development work faster or better, case in point being decades of improvements in languages, frameworks and tools that all intend to speed up this process.

tymonPartyLate•44m ago
I did just that and I ended up horribly regretting it. The project had to be coded in Rust, which I kind of understand but never worked with. Drunk on AI hype, I gave it step by step tasks and watched it produce the code. The first warning sign was that the code never compiled at the first attempt, but I ignored this, being mesmerized by the magic of the experience.

Long story short, it gave me quick initial results despite my language handicap. But the project quickly turned into an overly complex, hard to navigate, brittle mess. I ended up reading the Rust in Action book and spending two weeks cleaning and simplifying the code. I had to learn how to configure the entire tool chain, understand various cargo deps and the ecosystem, set up ci/cd from scratch, ... There is no way around that.

It was Claude Code Opus 4.1 instead of Codex but IMO the differences are negligible.

fabian2k•41m ago
AI can be quite impressive if the conditions are right for it. But it still fails at so many common things for me that I'm not sure if it's actually saving me time overall.

I just tried earlier today to get Copilot to make a simple refactor across ~30-40 files: essentially changing one constructor parameter in all derived classes from a common base class and adding an import statement. In the end it managed ~80% of the job, but only after messing it up entirely first (waiting a few minutes), then asking again after 5 minutes of waiting whether it really should do the thing, and then missing a bunch of classes and randomly removing about 5 parentheses from the files it edited.

Just one anecdote, but my experiences so far have been that the results vary dramatically and that AI is mostly useless in many of the situations I've tried to use it.
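The refactor described in the anecdote above (touching every class derived from a common base) is mechanical enough that its edit sites can be enumerated deterministically. A minimal sketch using Python's ast module; the class names (Base, Foo, Bar) and the SOURCE snippet are illustrative assumptions, not from the comment:

```python
import ast

SOURCE = """
class Base:
    def __init__(self, name):
        self.name = name

class Foo(Base):
    def __init__(self, name):
        super().__init__(name)

class Bar(Base):
    def __init__(self, name):
        super().__init__(name)
"""

def subclasses_with_init(source: str, base_name: str) -> list[str]:
    # Find every class directly deriving from `base_name` that defines
    # __init__, i.e. every site a constructor-parameter change must touch.
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            bases = [b.id for b in node.bases if isinstance(b, ast.Name)]
            if base_name in bases and any(
                isinstance(f, ast.FunctionDef) and f.name == "__init__"
                for f in node.body
            ):
                hits.append(node.name)
    return hits

print(subclasses_with_init(SOURCE, "Base"))  # -> ['Foo', 'Bar']
```

Run over each file, this yields the full list of classes to edit up front, which is exactly the part the anecdote says the model got only ~80% right.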

layer8•33m ago
How can I possibly assess the results in a programming language I haven’t used before? That’s almost the same as vibe coding.
jbstack•18m ago
The same way you assess results in a programming language you have used before. In a more complicated project that might mean test suites. For a simple project (e.g. a Bash script) you might just run it and see if it does what you expect.
hemloc_io•27m ago
Yeah I've used it for personal projects and it's 50/50 for me.

Some of the stuff generated I can't believe is actually good to work with long term, and I wonder about the economics of it. It's fun to get something vaguely workable quickly though.

Things like deepwiki are useful too for open source work.

For me though the core problem I have with AI programming tools is that they're targeting a problem that doesn't really exist outside of startups (not writing enough code) instead of the real source of inefficiency in any reasonably sized org: coordination problems.

Of course if you tried to solve coordination problems, then it would probably be a lot harder to sell to management because we'd have to do some collective introspection as to where they come from.

apercu•57m ago
"Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers."

The business model is data collection about you on steroids, and the bet is that the winning company will eclipse Meta in value.

It's just more ad tech with multipliers, and it will continue to control thought, sway policy and decide elections. Just like social media does today.

viking123•57m ago
I think there is real value; for instance, nowadays I just use ChatGPT as a Google replacement, for brainstorming, and for coding stuff. It's quite useful and it would be hard to go back to a time without this kind of tool. The 20 bucks a month is more than worth it.

Not sure though whether they make enough revenue, and what the moat will be if the best models more or less converge around the same level. For most normies it might be hard to spot the difference between GPT-5 and Claude, for instance. Okay, for Grok the moat is that it doesn't pretend to be a pope and censor everything.

onlyrealcuzzo•56m ago
> adoption is low to nonexistent outside of programming

Odd way to describe ChatGPT which has >1B users.

AI overviews have rolled out to ~3B users, Gemini has ~200M users, etc.

Adoption is far from low.

TheCapeGreek•41m ago
> AI overviews have rolled out to ~3B users

Does that really count as adoption, when it has been introduced as a default feature?

onlyrealcuzzo•21m ago
Yes, if people are interacting with them, which they are.

HN seems to think everyone is like the bubble here, which thinks AI is completely useless and wants nothing to do with it.

Half the world is interacting with it on a regular basis already.

Are we anywhere near AGI? Probably not.

Does it matter? Probably not.

Inference costs are dropping like a rock, and usage is continuing to skyrocket.

ACCount37•38m ago
Mostly agreed, but AI overviews are a very bad example. Google can just force feed its massive search user base whatever bullshit it damn pleases. Even if it has negative value to the users.

I don't actually think that AI overviews have "negative value" - they have their utility. There are cases where I stop my search right after reading the "AI overview". But "organic" adoption of ChatGPT or Claude or even Gemini and "forced" adoption of AI overviews are two different beasts.

neutronicus•34m ago
My father (in his 70s) has started specifically looking for the AI overview, FWIW.

He has not engaged with any chatbot, but he thinks of himself as "using AI now" and thinks of it as a value-add.

raincole•55m ago
> adoption is low to nonexistent outside of programming

In the last few months, every single non-programmer friend I've met has ChatGPT installed on their phone (N>10).

Out of all the people that I know enough to ask if they have ChatGPT installed, there is only one who doesn't have it (my dad).

I don't know how many of them are paying customers though. IIRC one of them was using ChatGPT to translate academic writing so I assume he has pro.

Gigachad•36m ago
I’ve been trying out locally run models on my phone. The iPhone 17 is able to run some pretty nice models, but they lack access to up to date information from the web like ChatGPT has. Wonder if some company like Kagi would offer some api to let your local model plug in and run searches.
hermannj314•26m ago
My daughter and her friends have their own paid chatgpt. She said she uses it to help with math homework and described to me exactly why I bought a $200 TI-92 in the 90s with a CAS.

Adoption is high with young people.

billy99k•55m ago
"Where's the business value? "

Have you ever used an LLM? I use it every day to help me with research and completing technical reports (which used to be a lot more of my time).

Of course you can't just use it blindly, but it definitely adds value.

lm28469•50m ago
Does it bring more value than it costs? That's the real question.

Nobody doubts it works; everybody doubts Altboy when he asks for $7 trillion

UpsideDownRide•19m ago
This question is pretty hard to answer without knowing the actual costs.

Current offerings are usually worth more than they cost. But since the prices are not really reflective of the costs it gets pretty muddy if it is a value add or not.

mnky9800n•45m ago
I think it is more like maps. Before 2004, before Google Maps, the way we interacted with the spatial distribution of places and things was different. All these AI dev tools like Claude Code, as well as tools for writing, etc., are going to change the way we interact with our computers.

But on the other side, the reason everyone is so gung ho on all this is that these models basically allow for the true personalization of everything. They can build up enough context about you, in every instance of you doing things online, to craft the perfect ad experience to maximize engagement and conversion. That is why everyone is so obsessed with this stuff. They don't care about AGI; they care about maintaining the current status quo, where a large chunk of the money made on the internet comes from delivering ads that get people to buy stuff.

cornholio•44m ago
It's silly to say that the only objective that will vindicate AI investments is AGI.

Current batch of deep learning models are fundamentally a technology for labor automation. This is immensely useful in itself, without the need to do AGI. The Sora2 capabilities are absolutely wild (see a great example here of what non-professional users are already able to create with it: https://www.youtube.com/watch?v=HXp8_w3XzgU )

So only looking at video capabilities, or at coding capabilities, it's already ready to automate and upend industries worth trillions in the long run.

The emerging reasoning capabilities are very promising: able to generate new theories and design scientific experiments in easy-to-test fields, such as in vitro drug development. It doesn't matter if the LLM hallucinates 90% of the time, if it reasons correctly a single time and can create even one new cancer drug that passes the test.

These are all examples of massive, massive economic disruption by automating intellectual labor, that don't require strict AGI capabilities.

asdf1280•38m ago
The problem is that it’s already commodified; there’s no moat. The general tech practice has been to capture the market by burning VC money, then jack up prices to profit. All these companies are burning billions to generate a new model, and users have already proven there is no brand loyalty: they just hop to the new one when it comes out. So no one can corner the market, and when the VC money runs out they’ll have to jack up prices so much that they’ll kill their market.
Wowfunhappy•29m ago
> The problem is that it’s already commodified; there’s no moat.

From an economy-wide perspective, why does that matter?

> users have already proven there is no brand loyalty. They just hop to the new one when it comes out.

Sounds to me like there is real competition, which generally keeps prices down, it doesn't push them up! It's true VCs may not end up happy.

frogperson•29m ago
The moat isn't with the LLM creators, it's with Nvidia, but even that is under siege by Chinese makers.
squidbeak•20m ago
Compute is the moat.
jennyholzer•38m ago
You seem to be making an implicit claim that LLMs can create an effective cancer drug "10% of the time".

Smells like complete and total bullshit to me.

rafaelmn•30m ago
If you take the total investment into AI and divide it by, say, $100k, that's how many man-years you'd need to replace with AI for it to be cost-effective as labor automation. Given the current level of capability, the numbers aren't that promising.
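Sketching the arithmetic above with illustrative numbers (the ~$500bn capex figure appears elsewhere in the thread; $100k per man-year is the commenter's own stand-in, so treat both as assumptions rather than verified figures):

```python
# Break-even framing from the comment above. Both inputs are
# illustrative claims from the thread, not verified figures.
investment = 500e9            # ~$500bn projected annual AI capex
cost_per_man_year = 100_000   # assumed fully-loaded cost of one man-year

# Man-years of labor AI must replace for the spend to pay for itself
man_years_to_replace = investment / cost_per_man_year
print(f"{man_years_to_replace:,.0f} man-years")  # 5,000,000 man-years
```

By this framing, a single year's capex has to displace on the order of five million man-years of labor just to break even.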
maeln•17m ago
> Current batch of deep learning models are fundamentally a technology for labor automation. This is immensely useful in itself, without the need to do AGI. The Sora2 capabilities are absolutely wild (see a great example here of what non-professional users are already able to create with it: https://www.youtube.com/watch?v=HXp8_w3XzgU )

> So only looking at video capabilities, or at coding capabilities, it's already ready to automate and upend industries worth trillions in the long run.

Can Sora2 change the framing of a picture without changing the global scene? Can it change the temperature of a specific light source? Can it generate 8K HDR footage suitable for re-framing and color grading? Can it generate minute-long video without losing coherence? Actually, can it generate more than a few seconds without having to re-loop from the last frame, with the obnoxious cuts the video you pointed to has? Can it reshoot the exact same scene with just one element altered?

All the video models right now are only good at making short, low-res, barely post-processable video. The kind of stuff you see on social media. And considering the metrics on AI-generated video on social media right now, for the most part, nobody wants to look at it. They might replace the bottom of the barrel of social media posting (hello, cute puppy videos), but there is absolutely nothing indicating that they might automate or upend any real industry (be used in the pipeline, yeah, maybe, why not; automate? I won't hold my breath).

And the argument about their future capabilities, well... for 50+ years we've been told fusion is 20 years away.

Btw, the same argument can be made for LLM and image-gen tech in any creative context. People severely underestimate just how much editing, re-work, purpose, and pre-production goes into any major creative endeavor. Most models are simply ill suited for that work. They can be useful for some things (AI-driven image fill, specifically, works decently for editing images), but overall, as of right now, they are mostly good at making low-quality content. Which is fine, I guess; there is a market for it, but it was already a market that was not keen on spending money.

____mr____•8m ago
> They might replace the bottom of the barrel of social media posting (hello cute puppy videos)

Lay off. Only respite I get from this hell world is cute Rottweiler videos

hemloc_io•5m ago
Regardless of whether you're correct about this (I'm not an ML expert, so who knows), I'd be very happy if we cured cancer, so I hope you're right - and the video is a cool demo.

I don't believe the risk vs reward on investing a trillion dollars+ is the same when your thesis changes from "We just need more data/compute and we can automate all white collar work"

to

"If we can build a bunch of simulations and automate testing of them using ML then maybe we can find new drugs" or "automate personalized entertainment"

The move to RL has specifically made me skeptical of the size of the buildout.

codingdave•44m ago
There is a middle ground where LLMs are used as a tool for specific use cases, but not applied universally to all problems. The high adoption of ChatGPT is the proof of this. General info, low accuracy requirements - perfect use case, and it shows.

The problem comes in when people then set expectations that a chat solution can solve non-chat problems. When people assume that generated content is the answer but haven't defined the problem.

We're not headed for AGI. We're also not going to just say, "oh, well, that was hype" and stop using LLMs. We are going to mature into an industry that understands when and where to apply the correct tools.

calmoo•42m ago
FWIW Derek Thompson (the author of this blogpost) isn't exactly a 'business guy'
jennyholzer•37m ago
If I'm not mistaken he's working with Ezra Klein to push the Democrats to embrace racism instead of popular economic measures.

Edit: I expect that these guys will try to make a J.D. Vance style Republican pivot in the next 4-8 years.

Second Edit:

Ezra Klein's recent interview with Ta-Nehisi Coates is very specifically why I expect he will pivot to being a Republican in the near future.

Listen closely. Ezra Klein will not under any circumstances utter the words "Black People".

Again and again, Coates brings up issues that Black People face in America, and Klein diverts by pretending that Coates is talking about Marginalized Groups in general or Trans People in particular.

Klein's political movement is about eradicating discussion of racial discrimination from the Democratic party.

Wowfunhappy•17m ago
We're very off topic, but if you're truly interested in Ezra Klein's worldview, I highly recommend his recent interview with Ta-Nehisi Coates. At minimum, I think you'll discover that Ezra's feelings are a lot more nuanced than you're making them out to be.

https://www.nytimes.com/2025/09/28/opinion/ezra-klein-podcas...

joules77•21m ago
Because the story is no longer about business or economics. This is more like the nuclear arms race in the 1940s. Red Queen dynamics.
Cthulhu_•15m ago
Why would we want AGI? I've yet to read a convincing argument in favor (though granted, I've never looked into it; I'm still at science-fiction doomerism). One thing that irks me is that people see it as inevitable: we have to pursue AGI because if we don't, someone else will. Or, more bleakly, if we don't actively pursue it, our malignant future AGI overlords will punish us for not bringing them into existence (Roko's basilisk, the thing Musk and Grimes apparently bonded over, because they're weird).
rossant•1h ago
Paywall.
pryelluw•1h ago
Article is behind a paywall and simply says the same things people have been saying since the last tech crash.

Now, what this sort of article tends to miss (and I will never know, because it’s paywalled like a jackass) is that these model services are used by everyday people for everyday tasks. Doesn’t matter if they’re good or not. They let people do less work for the same pay. Don’t focus on the money the models are bringing in today; focus on the dependency they’re building in people’s minds.

huvarda•59m ago
A handful of the largest companies cyclically investing in and buying from each other is propping up the entire economy. Also, open-source models like DeepSeek exist. Unless AGI comes from LLMs (it absolutely won't), it's foolish to think there won't be a bubble.
viking123•53m ago
AGI might require a Nobel Prize-level invention; I am not even sure it will come in my lifetime, and I am in my 30s. Although I would hope we get something that could solve difficult diseases that have more or less no treatment or cure today; at least Demis Hassabis seems interested in that.
hattmall•43m ago
The issue though being that most people aren't paying and even those paying if they use it moderately aren't profitable. Nvidia "investing" 100B in one of its largest customers is a cataclysmically bright red flag.
CuriouslyC•1h ago
I don't think it'll contract. The people dumping their money into AI think we are at end of days, new order for humanity type point, and they're willing to risk a large part of their fortune to ensure that they remain part of the empowered elite in this new era. It's an all hands on deck thing and only hard diminishing returns that make the AI takeoff story look implausible are going to cause a retrenchment.
admissionsguy•1h ago
So just the “it’s different this time” mentality shared by all bubbles. Some things never change.
viking123•54m ago
Yeah it wouldn't be a bubble if it didn't have that mentality. Every bubble has had that thought and it's the same now. Kind of hard to notice it though when you are in the eye of the storm.

There were people telling me during the NFT craze that I just don't get it and I am dumb. Not that I am comparing AI to it directly because AI has actual business value but it is funny to think back. I felt I was going mad when everyone tried to gaslight me

CuriouslyC•53m ago
The final AI push that doesn't lead to a winter will look like a bubble until it hits. We're realistically ~3 years away from fully autonomous software engineering (let's say 99.9% for a concrete target) if we can shift some research and engineering resources towards control, systems and processes. The economic value of that is hard to overstate.
piva00•49m ago
You are basically saying "it's different this time" with a lot of words.
reliabilityguy•37m ago
> We're realistically ~3 years away from fully autonomous software engineering

We had Waymo cars about 18 years ago, and only recently they started to roll out commercially. Just saying.

pessimizer•49m ago
You don't think it will contract just because rich people have bet so much on it that they'll be forced to throw good money after bad? That's the only reason?
lm28469•17m ago
It's probably exacerbated by the fact that everyone invests money now; I get daily ads from all my banking apps telling me to buy stocks and crypto. People know they'll never get anywhere by working or saving, so they're more willing to gamble: high risk, high reward, and they have nothing to lose.
ttoinou•55m ago

  Every financial bubble has moments where, looking back, one thinks: How did any sentient person miss the signs?
Well, maybe a lot of people already agree with what the author is saying: the economics might crash, but the technology is here to stay. So we don't care about the bubble.
rwmj•48m ago
The technology is there til the GPUs become obsolete, so about 3 years.
_heimdall•46m ago
I don't think the question would be whether the technology literally disappears entirely, only how important it is going forward. The metaverse is still technically here, but that doesn't mean it is impactful or worth near the investment.

For LLMs, the architecture will be here and we know how to run them. If the tech hits a wall, though, and the usefulness doesn't balance well with the true cost of development and operation when VC money dries up, how many companies will still be building and running massive server farms for LLMs?

____mr____•26m ago
If the tech is here to stay, my question is: how and why? The how: The projects for the new data centers and servers housing this tech are incredibly expensive to build and maintain. These also jack up the price of electricity in the neighborhoods and afaik the US electrical grid is extremely fragile and is already being pushed to its limit with the existing compute being used on AI. All of this for AI companies to not make a profit. The only case you could make would be to nationalize the companies and have them subsidized by taxes.

But why?: This would require you to make the case that AI tools are useful enough to sustain despite their massive costs and hard-to-quantify contribution to productivity. Is that really the case? I haven't seen a productivity increase that justifies the cost, and as soon as Anthropic tried to even remotely make a profit (or break even), power users instantly realized that the productivity gain is not worth paying for the actual compute their tasks require.

scooletz•54m ago
> Some people think artificial intelligence will be the most important technology of the 21st century

We're just 25% of the way through it. Making such a claim now is foolish at best. People will be tinkering as usual, and it's hard to predict the next big thing. You can bet on something, you can postdict (which is much easier), but being certain about it? Nope.

louwrentius•51m ago
Will Ed Zitron indeed be vindicated[0]?

[0]: https://www.wheresyoured.at/the-haters-gui/

rwmj•50m ago
Also no one is talking about how exposed we are to Taiwan. Nvidia, AMD, Apple, any company building out GPUs (so Google, Microsoft, Meta etc), even Intel a bit, are all manufacturing everything with one company, and it's largely happening in Taiwan.

If China invades Taiwan, why wouldn't TSMC, Nvidia and AMD stock prices go to zero?

_heimdall•50m ago
We must run in different circles as it were, I hear this raised frequently on a number of podcasts I listen to.
jennyholzer•41m ago
name the podcasts
grendelt•43m ago
> Also no one is talking about how exposed we are to Taiwan.

We aren't? It's one of the reasons the CHIPS Act et al get pushed through, to try to mitigate those risks. COVID showed how fragile supply chains are to shocks to the status quo and has forced a rethink. Check out the book 'World On The Brink' for more on that geopolitical situation.

therealmarv•49m ago
It reminds me what I said to somebody recently:

All my friends and family are using the free version of ChatGPT or something similar. They will never pay (although they have enough money to do so).

Even in my very narrow subjective circles it does not add up.

Who pays for AI and how? And when in the future?

delichon•42m ago
Someone is paying. OpenAI revenue was $4.3 billion in the first half of this year.
lm28469•19m ago
You forgot that part:

> The artificial intelligence firm reported a net loss of US$13.5 billion during the same period

If you sell gold at $10 a gram you'll also make billions in revenues.

boxed•5m ago
Reminds me of the Icelandic investment banks during the height of the financial bubble. They basically did this.
jstummbillig•36m ago
People are always so fidgety about this stuff — for super understandable reasons, to be clear. People not much smarter than anyone else try to reason about numbers that are hard to reason about.

But unless you have the actual numbers, I always find it a bit strange to assume that all people involved, who deal with large amounts of money all the time, lost all ability to reason about this thing. Because right now that would mean at minimum: All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.

Of course, there is a lot of uncertainty — which, again, is nothing new for these people. It's just a weird thing to assume that.

lm28469•21m ago
> All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.

It's like asking big pharma if medicine should be less regulated, "all the experts agree", well yeah, their paycheck depends on it. Same reason no one at meta tells Zuck that his metaverse is dogshit and no one wants it, they still spent billions on it.

You can't assume everyone is that dumb, but you certainly can assume that the yes men won't say anything other than "yes".

jstummbillig•6m ago
Again, this is not an argument. I am asking: Why do we assume that we know better and people with far more knowledge and insight would all be wrong?

This is not rhetorical question, I am not looking for a rhetorical answer. What is every important decision maker at all these companies missing?

The point is not that they could not all be wrong; they absolutely could. The point is: make a good argument. Being a general doomsayer when things get very risky might turn out to make you right, but it's not an interesting argument – or any argument at all.

alpineman•19m ago
Pets.com, Enron, Lehman Bros, WeWork, Theranos, too many to mention.

Investors aren’t always right. The FOMO in that industry is like no other

uniq7•33m ago
They will eventually get ads mixed in the responses.
lm28469•29m ago
I use it professionally, and I rotate 5 free accounts across all the platforms. Money doesn't have any value anymore; people will spend $100 a month on LLMs and another $100 on streaming services. That's like half of my household's monthly food budget.
freetonik•25m ago
I'm sure providers will find ways of incorporating the fees into e.g. ISP or mobile network fees so that users end up paying in a less obvious, less direct way.
ACCount37•25m ago
The cost of serving an "average" user would only fall over time.

Most users rarely make the kind of query that would benefit a lot from the capabilities of GPT-6.1e Pro Thinking With Advanced Reasoning, Extended Context And Black Magic Cross Context Adaptive Learning Voodoo That We Didn't Want To Release To Public Yet But If We Didn't Then Anthropic Would Surely Do It First.

And the users that have this kind of demanding workloads? They'd be much more willing to pay up for the bleeding edge performance.

____mr____•5m ago
AI companies don't have a plausible path to profitability, because they are trying to create a market while the model is not scalable, unlike services that have done this in the past (DoorDash, Uber, Netflix, etc.).
einrealist•49m ago
So much in this AI bubble is just fueled by a mixture of wishful thinking (by people who know better), Science Fiction (by people who don't know enough) and nihilism (by people who don't care about anything other than making money and gaining influence).
DebtDeflation•47m ago
>data-center related spending...probably accounted for half of GDP growth in the first half of the year. Which is absolutely bananas.

What? If that figure is true then "absolutely bananas" is the understatement of the century and "batshit insane" would be a better descriptor (though still an understatement).

roxolotl•13m ago
This has been reported in many places over the past year, though the percentage seems to be all over the place.

Yesterday “As much as 1/3rd”: https://www.reuters.com/markets/europe/if-ai-is-bubble-econo...

A week ago “More than consumer spending (but the reality is complex)”: https://fortune.com/2025/09/17/how-much-gdp-artificial-intel...

August “1.3% of 3% however it might be tariff stockpiling”: https://www.barrons.com/articles/ai-spending-economy-microso...

gandalfian•45m ago
"The “pop” won’t be a single day like a stock market crash. It’ll be a gradual cooling as unrealistic promises fail, capital tightens, and only profitable or genuinely innovative players remain."

(This comment was written by ChatGPT)

jstummbillig•44m ago
> It’s not clear that firms are prepared to earn back the investment

I am confused by a statement like this. Does Derek know why they are not? If he does, I would love to hear the case (and no, comparisons to a random country's GDP are not an explanation).

If he does not, I am not sure why we would not assume that we are simply missing something, when there are so many knowledgable players charting a similar course, that have access to all the numbers and probably thought really long and hard about spending this much money.

By no means do I mean that they are right for that. It's very easy to see the potential bubble. But I would love to see some stronger reasoning for that.

What I know (as someone running a smallish non-tech business) is that there is plenty of clearly unrealized potential that will probably take years to fully build into the business, but that today's AI technology already supports capability-wise, and that will definitely be built in the future.

I have no reason to believe that we would be special in that.

alpineman•16m ago
It’s in the article: AI globally made $12bn in revenue in 2025, yet capex next year is expected to be almost 50x that, at $500bn.
____mr____•13m ago
Most AI firms have not shown a path toward profitability
Devasta•38m ago
When there was a speculative mania in railways, afterward there were railroads everywhere that could still be used. A bubble in housing has a bunch of houses everywhere, or at the very least the skeleton of a house that could be finished later.

These tech bubbles are leaving nothing, absolutely nothing but destruction of the commons.

kzalesak•13m ago
That's not entirely true - they are leaving the data centers themselves, and also all the trained models, which are already in use.
jongjong•38m ago
I've been talking about the limited bandwidth of investors as a major problem with capital allocation for some time, so it's good to see the idea acknowledged in this context. The problem will only get bigger and more obvious with increasing inequality. It is massive-scale capital misallocation, whereby the misallocation yields more nominal ROI than optimal allocation would (if you consider real economic value rather than numbers in dollars). It is facilitated by the design of the monetary system, since the value of dollars is kept decoupled from real economic value by filter bubbles and dollar centralization.
jacknews•37m ago
I think these articles slightly miss the point.

Sure, AI as a tool, as it currently is, will take a very long time to earn back the $B being invested.

But what if someone reaches autonomous AGI with this push?

Everything changes.

So I think there's a massive, massive upside risk being priced into these investments.

bakugo•29m ago
> But what if someone reaches autonomous AGI with this push?

What is "autonomous AGI"? How do we know when we've reached it?

jacknews•18m ago
When you can use AI as though it's an employee, instead of repeatedly 'prompting' it with small problems and tasks.

It will have agency, it will perform the role. A part of that is that it will have to maintain a running context, and learn as it goes, which seem to be the missing pieces in current llms.

I suppose we'll know, when we start rating AI by 'performance review', like employees, instead of the current 'solve problem' scorecards.

incomingpain•32m ago
>Others insist that it is an obvious economic bubble.

The definition is that the assets are valuated above an intrinsic value.

The first graph is Amazon, Meta, Google, Microsoft, and Oracle. Let's check their P/E ratios.

Amazon (AMZN) ~ 33.6

Meta (META) ~ 27.5

Google (GOOGL) ~ 25.7

Microsoft (MSFT) ~ 37.9

Oracle (ORCL) ~ 65

These are highish P/E ratios, but certainly far from bubble numbers. OpenAI and the others are all private.

Objectively, there is no bubble. Economic-bubble territory is 100-200+ P/E ratios.

Not to mention: who are you to think the top tech companies aren't fully aware of the risks they are taking with AI?

randomtoast•27m ago
The thing is, if you say "AI is a bubble that will pop" and repeat it every year for the next 15 years, you're guaranteed to look right eventually, so long as there actually is a market recession within the next 15 years that gets attributed to AI over-speculation.
mixedbit•21m ago
Back-of-the-envelope calculation: Nvidia's market cap is $4.5T, and their profit margin is 52%. This means Nvidia would need to sell $1,067 worth of equipment per human being on Earth for investors who buy Nvidia stock today to break even on the investment. Nvidia, unlike Apple, (almost) doesn't sell to end users but to AI companies that provide services to end users. The scale of required spending on Nvidia hardware is comparable to tech companies collectively buying iPhones for every human on Earth, which would only be justified if the value that iPhone users deliver to tech companies were large enough to give the iPhones away.
lotsofpulp•19m ago
> This means Nvidia would need to sell 1067$ worth of equipment per human being on Earth for investors that buy Nvidia stock at current prices to break even on the investment

In what period of time?

mixedbit•12m ago
You break even when you break even; the faster it happens, the better for your investment. At current earnings it would take 53 years for investors to break even.
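The back-of-envelope above can be reproduced directly; all inputs are the commenter's claimed figures (market cap, margin, rough world population), not verified financials:

```python
# Reproducing the thread's back-of-envelope Nvidia calculation.
# All inputs are the commenter's claims, not verified figures.
market_cap = 4.5e12      # claimed Nvidia market cap, $
net_margin = 0.52        # claimed profit margin
population = 8.11e9      # rough world population

# Revenue whose profit equals today's market cap...
revenue_needed = market_cap / net_margin
# ...spread over every human being on Earth
per_person = revenue_needed / population
print(f"${per_person:,.0f} of equipment per person")  # ≈ $1,067
```

The same inputs give the 53-year figure: the market cap divided by current annual profit is the commenter's break-even horizon.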
donatj•21m ago
I feel kind of like a Luddite sometimes, but I don't understand why EVERYONE is rushing to use AI. I use a couple of different agents to help me code, and ChatGPT has largely replaced Google in my everyday use, but I genuinely don't understand the value proposition of every other company's offerings.

I really feel like we're in the same "Get it out first, figure out what it is good for later" bubble we had like 7 years ago with non-AI ChatBots. No users actually wanted to do anything important by talking to a chatbot then, but every company still pushed them out. I don't think an LLM improves that much.

Every time some tool I've used for years sends an email "Hey, we've got AI now!" my thought is just "well, that's unfortunate"...

I don't want AI taking any actions I can't inspect with a difftool, especially not anything important. It's like letting a small child drive a car.

amelius•18m ago
> It's like letting a small child drive a car.

Bad example, because FSD cars are here.

donatj•13m ago
Find me an FSD that can drive in non-Californian real world situations. A foot of snow, black ice, a sand drift.
gamerDude•3m ago
Well Waymo is coming to Denver, so it's about to get tested in some more difficult conditions.
xiphias2•16m ago
Just you switching away from Google is already justifying 1T infrastructure spend.

Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

boxed•6m ago
> Just you switching away from Google is already justifying 1T infrastructure spend.

How? OpenAI are LOSING money on every query. Beating Google by losing money isn't really beating Google.

mettamage•5m ago
> Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

Optimistic view: maybe product quality becomes an actually good metric again as the LLM will care about giving good products.

Yea, I know, I said it's an optimistic view.

ivape•15m ago
AI figured out something on my mind that I didn’t tell it about yesterday (latest Sonnet). My best advice to you is to spend time and allow the AI to blow your mind. Then you’ll get it.

Sometimes I sit in wonder at how the fuck it’s able to figure out that much intent without specific instructions. There’s no way anyone programmed it to understand that much. If you’re not blown away by this then I have to assume you didn’t go deep enough with your usage.

emp17344•13m ago
I don’t understand what you’re saying. You know the AI is incapable of reading your mind, right? Can you provide more information?
ivape•10m ago
More information:

Use the LLM more until you are convinced. If you are not convinced, use it more. Use it more in absurd ways until you are convinced.

Repeat the above until you are convinced.

emp17344•5m ago
You haven’t provided more information, you’ve just restated your original claim. Can you provide a specific example of AI “blowing your mind”?
piva00•5m ago
Is this satire? Really hard to tell in this year of 2025...
tbrownaw•3m ago
Well there was that example a while back of some store's product recommendation algo inferring that someone was pregnant before any of the involved humans knew.
emp17344•14m ago
Agreed… I feel increasingly alienated because I don’t understand how AI is providing enough value to justify the truly insane level of investment.
ForHackernews•12m ago
The same way that NFTs of ugly cartoons apes were a multi-billion dollar industry for about 28 months.
calrain•12m ago
With the ever-increasing explosion of devices capable of consuming AI services, and internet infrastructure so ubiquitous that billions of people can use AI...

Even if only a little of everyone's day consumes AI services, the investment required will be immense. Like what we see.

eek2121•6m ago
In my eyes, it'd be cheaper for a company to simply purchase laptops with decent hardware specs, and run the LLMs locally. I've had decent results from various models I've run via LMStudio, and bonus points: It costs nothing and doesn't even use all that much CPU/GPU power.

Just my opinion as a FORMER senior software dev (disabled now).

camillomiller•5m ago
There is none - zero value. What is the value of Sora 2, if even its creators feel they have to pack it into a social media app with AI-slop reels? How is that not a testament to how surprisingly advanced and useless the technology is at the same time?
rco8786•19m ago
AI, if nothing else, is already completely up-ending the Search industry. You probably already find yourself going to ChatGPT for lots of things you would have previously gone to Google for. That's not going to stop. And the ads marketplaces are coming.

We're also finding incredibly valuable use for it in processing unstructured documents into structured data. Even if it only gets it 80-90% there, it's so much faster for a human to check the work and complete the process than it is for them to open a blank spreadsheet and start copy/pasting things over.

There's obviously loads of hype around AI, and loads of skepticism. In that way this is similar to 2001. And the bubble will likely pop at some point, but the long tail value of the technology is very, very real. Just like the internet in 2001.

hamburgererror•18m ago
Until recently everyone was bragging about predicting bitcoin's bubble. To the best of my knowledge there was no huge crash; crypto just went out of fashion in mainstream media. I guess that's what's going to happen with AI.
JohnKemeny•8m ago
Almost everyone who has interacted with a blockchain ended up losing money.
uladzislau•16m ago
Calling this an “AI bubble” reads like pure sour grapes from folks who missed the adoption curve. Real teams are already banking gains - code velocity up, ticket resolution times down, marketing lift from AI-assisted creative - and capex always precedes revenue in platform shifts (see cloud 2010, smartphones 2007). The “costs don’t match cash flow” trope ignores lagging enterprise procurement cycles and the rapid glide path of unit economics as models, inference, and hardware efficiency improve. Habit formation is the moat: once workers rely on AI copilots, those workflows harden into paid seats and platform lock-in. We’re not watching a bubble pop; we’re watching infrastructure being laid for the next decade of products.
postexitus•13m ago
All the same arguments could be made for the dot-com bubble. It was a boom and a bubble at the same time. When it popped, only the real stuff remained. The same will happen to AI. What you're describing are good use cases - but there are 99 other companies doing 99 other useless things with no cost/cash-flow match.
softwaredoug•13m ago
Things can be a bubble AND actual economic growth long term. Happens all the time with new tech.

The dotcom boom made all kinds of predictions about Web usage that, a decade-plus later, turned out to be true. But at the time the companies got way ahead of consumer adoption.

Specific to AI copilots: for every one success, we're currently building hundreds that nobody will use.

gorgoiler•14m ago
What’s the theoretical total addressable market for, say, consumer facing software services? Or discretionary spending? That puts one limit on the value of your business.

Another limit would be to think about stock purchases. How much money is available to buy stocks overall, and what slice of that pie do you expect your business to extract?

It’s all very well spending eleventy squillion dollars on training and saying you’ll make it back through revenue, but not if the total amount of revenue in the world is only seventy squillion.

Or maybe you just spend your $$$ on GPUs, then sell AI cat videos back to the GPU vendors?

seydor•11m ago
I don't think the Apollo project factories invested in each other circularly. The AI boom is nominally huge, but very little money gets in or out of Silicon Valley. MS invests in OpenAI because it will get it back via Azure or whatever; ditto for Nvidia.

What's the real investment flowing into or out of Silicon Valley?

ForHackernews•6m ago
They're all gambling that they can build the Machine God first and that they will control it. The OpenAI guy is blathering that we don't even know what role money will have After the Singularity (a.k.a. The Rapture for tech geeks).