frontpage.
A convex polyhedron without Rupert's property

https://arxiv.org/abs/2508.18475
1•robinhouston•4m ago•0 comments

Microsoft 'escort' program gave China keys to Pentagon

https://www.theblaze.com/return/microsoft-escort-program-gave-china-keys-to-pentagon
3•madars•5m ago•0 comments

Benchmarking bot detection systems against modern AI agents

https://research.roundtable.ai/bot-benchmarking/
2•mdahardy•6m ago•0 comments

Show HN: SkipFees - free community directory of restaurants that deliver directly

https://skipfees.com
1•Aryagm•9m ago•0 comments

I built a production app in a week by managing a swarm of 20 AI agents

https://zachwills.net/i-managed-a-swarm-of-20-ai-agents-for-a-week-here-are-the-8-rules-i-learned/
1•zachwills•10m ago•1 comments

The medieval fruit with a vulgar name

https://www.bbc.com/future/article/20210325-the-strange-medieval-fruit-the-world-forgot
2•ohjeez•13m ago•0 comments

Open Source "Claude for Chrome"

https://github.com/AIPexStudio/open-claude-for-chrome
2•ropoz•16m ago•0 comments

Page Table Sharing in Linux

https://blogs.oracle.com/linux/post/yes-virginia-there-is-page-table-sharing-in-linux
1•tanelpoder•17m ago•0 comments

Can Cheaper Lasers Handle Short Distances?

https://semiengineering.com/can-cheaper-lasers-handle-short-distances/
1•PaulHoule•18m ago•0 comments

Typepad Is Shutting Down

https://everything.typepad.com/blog/2025/08/typepad-is-shutting-down.html
2•gmcharlt•19m ago•0 comments

Show HN: Add audio to your Anki cards

https://github.com/selmetwa/AnkiTTS
1•selmetwa•19m ago•0 comments

Spectrum – catching clojure.spec conform errors at compile time

https://github.com/arohner/spectrum
1•alhazraed•19m ago•0 comments

Testing Time (and Other Asynchronicities)

https://go.dev/blog/testing-time
1•ingve•20m ago•0 comments

Will There Ever Be a 10x Prompt Engineer?

https://embedworkflow.com/blog/will-there-ever-be-a-10x-prompt-engineer/
1•ewf•20m ago•0 comments

Rust for Everyone!

https://www.youtube.com/watch?v=R0dP-QR5wQo
2•dtartarotti•20m ago•0 comments

How the Japanese concept of "ikigai" was appropriated by the West

https://chiefwordofficer.substack.com/p/what-ikigai-really-meansand-why-it
2•itoshinoeri•20m ago•0 comments

Ask HN: What are you working on August 2025?

1•nsibr•20m ago•0 comments

Model Merging – A Biased Overview

https://crisostomi.github.io/blog/2025/model_merging/
1•jxmorris12•20m ago•0 comments

Desktop Linux Keeps Winning the Wrong Battles

https://www.howtogeek.com/desktop-linux-keeps-winning-the-wrong-battles/
3•the-mitr•21m ago•0 comments

Show HN: A simpler way to manage internationalization in component-based apps

https://github.com/aymericzip/intlayer
4•MarineCG40•21m ago•4 comments

A free curriculum to teach high school journalism

https://www.teachjournalismforall.com/
1•wubbahed•22m ago•1 comments

Taiwan indicts three over alleged theft of TSMC trade secrets

https://www.theregister.com/2025/08/27/tsmc_trade_secret_thefts/
2•rntn•23m ago•0 comments

I'm working on implementing a programming language all my own

https://eli.li/to-the-surprise-of-literally-no-one-im-working-on-implementing-a-programming-langu...
1•ingve•23m ago•0 comments

GMP damaging Zen 5 CPUs?

https://gmplib.org/gmp-zen5
1•sequin•23m ago•0 comments

AI's New Interface: Smart Glasses, Shift Beyond Screens, Why We Backed Mentra

https://www.hartmanncapital.com/news-insights/why-we-backed-mentra
1•walterbell•23m ago•0 comments

Cloudflare runs more AI models on fewer GPUs

https://blog.cloudflare.com/how-cloudflare-runs-more-ai-models-on-fewer-gpus/
2•HieronymusBosch•23m ago•0 comments

How we built the most efficient inference engine for Cloudflare's network

https://blog.cloudflare.com/cloudflares-most-efficient-ai-inference-engine/
1•jgrahamc•25m ago•0 comments

Ghostty Config Generator

https://ghostty.zerebos.com/
1•Luc•26m ago•0 comments

Nvidia (NVDA) Earnings Hub – Detailed Overview and Key Takeaways

https://dashboard-finance.com/stock/nvda/earnings
1•tchantchov•27m ago•1 comments

Read Seinfeld subtitles to learn Spanish

https://guavabook.firebaseapp.com/
1•kikichiki•28m ago•0 comments

AI Bubble 2027

https://www.wheresyoured.at/ai-bubble-2027/
54•speckx•2h ago

Comments

toss1•1h ago
There is no question LLMs are truly useful in some areas, and the LLM bubble will inevitably burst. Both can be simultaneously true, and we're just running up the big first slope on the hype curve [0].

As we learn more about the capabilities and limits of LLMs, I see no serious argument that scaling up LLMs with ever-larger data centers and training runs will actually reach anything like a breakthrough to AGI, or even anything beyond the magnitude of usefulness already available. Quite the opposite: most experts argue that fundamental breakthroughs in different areas will be needed to yield orders-of-magnitude greater utility, never mind AGI (not that further refinement won't yield useful results, only that it won't break out).

So one question is timing — When will the crash come?

The next is: how can we collect, in an open and preferably independent, distributed, locally usable way, the best usable models, so we retain access to the tech when the VC-funded data centers shut down?

[0] https://en.wikipedia.org/wiki/Gartner_hype_cycle

fred_is_fred•1h ago
We even have prior art: Web 1.0 and e-commerce were truly useful, and that bubble also burst.
heathrow83829•43m ago
Unlike that time, some money is actually being made. I heard some figures thrown around yesterday: total combined investments of over $500 billion, and revenues of about $30 billion, $10 billion of which was payments to cloud providers, so really $20 billion in revenue. That's not nothing.
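The back-of-the-envelope math in that comment can be sketched as follows (the figures are the commenter's secondhand numbers, not verified):

```python
# Rough figures quoted in the comment above (hearsay, in billions of USD)
total_investment = 500   # combined AI investment
gross_revenue = 30       # total AI revenue
cloud_passthrough = 10   # portion of revenue that is really payments to cloud providers

net_revenue = gross_revenue - cloud_passthrough
print(f"net revenue: ${net_revenue}B")                              # net revenue: $20B
print(f"revenue vs. investment: {net_revenue / total_investment:.0%}")  # revenue vs. investment: 4%
```

Even on the commenter's own numbers, revenue covers only a few percent of the capital deployed, which is the tension both sides of the thread are arguing over.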
jbreckmckye•43m ago
It might not be a paradox: Bubbles are most likely to occur when something is plausibly valuable.

If GenAI really was just a "glorified autocorrect", a "stochastic parrot", etc, it would be much easier to deflate AI Booster claims and contextualise what it is and isn't good at.

Instead, LLMs exist in a blurry space where they are sometimes genuinely decent, occasionally completely broken, and often subtly wrong in ways not obvious to their users. That uncertainty is what breeds FOMO and hype in the investor class.

wood_spirit•19m ago
I use LLMs all the time and do ML and stuff. But at the same time, they are, approximately, averaging the internet. I think the terms "glorified autocomplete" and "stochastic parrot" describe how they work under the hood really well.
shishy•41m ago
Yes, well, bubbles are a core part of the innovation process (new tech being useful doesn't imply a lack of bubbles); see e.g. "Technological Revolutions and Financial Capital" by Carlota Perez: https://en.wikipedia.org/wiki/Technological_Revolutions_and_...
wood_spirit•46m ago
This is the best bubble post I’ve seen this week on HN: https://craigmccaskill.com/ai-bubble-history

(Although I think the utility of server farms will not be high after the bubble bursts: even if cheap they will quickly become outdated. In that respect things are different from railway tracks)

bryanlarsen•44m ago
The Internet bubble left physical artifacts behind, like thousands of miles of unlit fiber. However, that pales in comparison to the value of virtual artifacts like Apache et al. Similarly, the AI bubble's artifacts will primarily be virtual.
jsnell•42m ago
Paywalled.
jasonjmcghee•38m ago
Not for me? Never heard of this site but had no issues.
great_psy•13m ago
The introduction to the article is not paywalled, but the actual 2027 AI story is.
jasonjmcghee•3m ago
Ah.
tartoran•40m ago
When the bubble bursts, what kind of effects are we going to see? What are your thoughts on this?
warkdarrior•38m ago
Massive layoffs from BigTech and lots of startups going under.
jihadjihad•36m ago
When AI is on the rise, layoffs are "because AI", and then when the AI bubble pops the layoffs are also conveniently "because AI".
ProllyInfamous•12m ago
Pre ChatGPT:

•largest publicly-traded company in the world was ~$2T (Saudi Aramco, not even top ten anymore).

•Nvidia (current largest @ $4.3T) was "only" ~$0.6T [$600,000 x million]

•The top 7 public tech companies are where the predominant gains have accrued / held

•March 16, 2020, all publicly-traded companies worth ~$78T; at present, ~$129T

•Gold has doubled, to present.

>what kind of effects are we going to see

•Starvation and theft like you've probably barely witnessed in your 1st- or 3rd-world lifetime. Not from former stock-holders, but from former underling employees, out of simple desperation. Everywhere, indiscriminately, from the majority.

•UBI & conscription, if only to avoid previous bullet-point.

¢¢, hoping I'm wrong.
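The market-cap figures in the bullets above imply the following growth multiples (again, the commenter's unverified numbers):

```python
# Figures cited in the comment above (unverified, in trillions of USD)
pre_chatgpt_total = 78    # all publicly traded companies, March 16, 2020
current_total = 129       # all publicly traded companies, at time of comment
nvidia_then = 0.6         # Nvidia market cap pre-ChatGPT
nvidia_now = 4.3          # Nvidia market cap now

print(f"overall market growth: {current_total / pre_chatgpt_total:.2f}x")  # 1.65x
print(f"Nvidia growth: {nvidia_now / nvidia_then:.1f}x")                   # 7.2x
```

Nvidia outgrowing the overall market by roughly 4x is what the "gains concentrated in the top 7 techs" bullet is pointing at.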

iamgopal•38m ago
What I think is: the team that pulled such a large LLM off is not stupid.
baal80spam•35m ago
So many hot takes for the AI bubble bursting ANY DAY NOW, yet we keep chugging on.
heathrow83829•33m ago
they said there's 6 more quarters of funding left, so it should be busted by early to mid 2027
mountainriver•32m ago
Lots of AI apps are creating a lot of value, that somehow gets overlooked in these convos
VohuMana•19m ago
A lot of value is being created with some of these AI apps, but are the people funding the development of these apps seeing a return on investment? (Honest question, I don't really know.)

The article mentions

> This is a bubble driven by vibes not returns ...

I think this indicates some investors are seeing a return. I know AI is expensive to train and somewhat expensive to run though so I am not really sure what the reality is.

great_psy•18m ago
Can you provide some names of AI apps whose revenue > cost?
ricericerice•3m ago
I mean, ChatGPT could easily be profitable today if they wanted to, but they're prioritizing growth
heathrow83829•29m ago
meta already has a hiring freeze in AI
jbreckmckye•33m ago
Data point of two, but this podcast also recently floated 2027 as the crunch point: https://youtu.be/vp1-3Ypmr1Y?si=p4GlyPwZRWOkxFtt

In my uninformed opinion, though, companies who spent excessively on bad AI initiatives will begin to introspect as the fiscal year comes to an end. By summer 2026 I think a lot of execs will be getting antsy if they can't defend their investments

AndrewKemendo•18m ago
Having been through at least two AI hype cycles professionally, this is just another one.

Each cycle filters out the people who are not actually interested in AI: the grifters and shysters trying to make money.

I have a private list of these starting from 2006 to today.

LLMs =/= AI, and if you don't know this then you should be worried: you are going to get left behind, because you don't actually understand the world of AI.

Those of us that are “forever AI” people are the cockroaches of the tech world and eventually we’ll be all that is left.

Every former "expert systems scientist", "Bayesian probability engineer", "computer vision expert", "big data analyst", and "LSTM guru" is having no trouble implementing LLMs.

We’ll be fine

jasonjmcghee•16m ago
The author labels LLMs as "empty hype".

LLMs are inappropriately hyped, and surrounded by shady practices employed to make them a reality. I understand why so many people are anti-LLM.

But empty hype? I just can't disagree more.

They are generalized approximation functions that can approximate all manner of modalities, surprisingly quickly.

That's incredibly powerful.

They can be horribly abused, their failure modes are unintuitive, using them can open entirely new classes of security vulnerabilities, and we don't have proper observability tooling to deeply understand what's going on under the hood.

But empty hype?

Maybe we'll move away from them and adopt something closer to world models, or use RL / something more like Sutton's OaK architecture, or replace backprop with something like forward-forward, but it's hard to believe HAL-style AI is going anywhere.

They are just too useful.

Programming and the internet were overhyped too and had many of the same classes of problems.

We have a rough draft of AI we've only seen in sci-fi. Pandora's box is open and I don't see us closing it.

politelemon•3m ago
I would love to reach a point where competent language models become commodities that anyone can run on modest hardware. Having one at your disposal could open up some gorgeous applications and workflows from the community. As it stands at present, though, the moats are insurmountable, or very expensive.
bubblelicious•15m ago
Really hard to believe articles like this, and even harder to believe this is the hive mind of Hacker News today.

I work for a major research lab. There is so much headroom, so much left on the table with every project, so many obvious directions for tackling major problems. These last 3 years have been chaotic sprints: Transfusion, better compressed latent representations, better curation signals, better synthetic data, more flywheel data. Insane progress that somehow just gets continually denigrated by this community.

There is hype and bullshit and stupid money and annoying influencers and hyperbolic executives, but “it’s a bubble” is absurd to me.

It would be colossally stupid for these companies not to pour the money they are pouring into infrastructure buildouts and R&D. They know there's going to be a ton of waste; nobody writing these articles is surprising anyone, and the articles are just not very insightful. The only silver lining to reading them and the comments is the hope that all of you are investing optimally for your beliefs.

dehrmann•1m ago
Upvoted for a different perspective.

The thing to remember about the HN crowd is it can be a bit cynical. At the same time, realize that everyone's judging AI progress not on headroom and synthetic data usage, but on how well it feels like it's doing, external benchmarks, hallucinations, and how much value it's really delivering. The concern is that for all the enthusiasm, generative AI's hard problems still seem unsolved, the output quality is seeing diminishing returns, and actually applying it outside language settings has been challenging.