frontpage.

Tiny True Stories: How Micro Memoirs Make Us Better Writers

https://brevity.wordpress.com/2025/10/07/tiny-true-stories/
1•mooreds•44s ago•0 comments

MonkeysPaw – a prompt-driven web framework in Ruby

https://worksonmymachine.ai/p/introducing-monkeyspaw-a-prompt-driven
1•mooreds•1m ago•0 comments

The Unofficial Jobs Numbers Are in and It's Rough Out There

https://www.wsj.com/economy/jobs/the-unofficial-jobs-numbers-are-in-and-its-rough-out-there-3518e239
1•zerosizedweasle•5m ago•0 comments

Japanese athletes inspired by raucous home fans at World Athletics Championships

https://www.japantimes.co.jp/sports/2025/09/16/more-sports/japanese-athletes-home-track-advantage/
1•PaulHoule•5m ago•0 comments

Welcome to Heroku Vibes

https://www.heroku.com/blog/turn-ideas-into-apps-heroku-vibes-pilot/
1•runesoerensen•5m ago•0 comments

Online Identity Verification with the Digital Credentials API

https://webkit.org/blog/17431/online-identity-verification-with-the-digital-credentials-api/
1•mooreds•6m ago•0 comments

AI-Driven Demand for Gas Turbines Risks a New Energy Crunch

https://www.bloomberg.com/features/2025-bottlenecks-gas-turbines/
1•toomuchtodo•6m ago•1 comments

A Computing Legend Speaks – A New Oral History with Ken Thompson

https://computerhistory.org/blog/a-computing-legend-speaks/
1•verdverm•7m ago•1 comments

Video projectors used to be ridiculously cool [video]

https://www.youtube.com/watch?v=ms8uu0zeU88
1•CaliforniaKarl•7m ago•0 comments

A Molecular Motor Minimizes Energy Waste

https://physics.aps.org/articles/v18/167
2•lc0_stein•9m ago•0 comments

"22.8% of pull requests to Express.js are README updates adding somebody's name" [video]

https://www.youtube.com/watch?v=YFkeOBqfQBw
2•nomilk•10m ago•0 comments

Data streaming software maker Confluent explores sale

https://www.reuters.com/business/data-streaming-software-maker-confluent-explores-sale-sources-sa...
1•gangtao•10m ago•0 comments

Notify – The web, script and feed monitor

https://notify.pingie.com
1•simplytoast•11m ago•1 comments

Western Alliance Faces First Brands Risk via Jefferies Fund

https://www.bloomberg.com/news/articles/2025-10-08/western-alliance-faces-first-brands-risk-throu...
1•zerosizedweasle•11m ago•0 comments

Ortega Hypothesis

https://en.wikipedia.org/wiki/Ortega_hypothesis
2•Caiero•11m ago•0 comments

Ask HN: ArsTechnica qbits mechanics[0] for a scaled up version of roons[1]?

1•sargstuff•13m ago•0 comments

Canada Sport Canada's Obsession with Figure Skating and Ice Sports

https://craftercontent.blogspot.com/2025/10/%20Figure%20Skating%20.html
1•arianmarry•14m ago•0 comments

Nvidia CEO: Oracle will be 'wonderfully profitable' despite reported thin margin

https://www.cnbc.com/2025/10/07/nvidias-jensen-huang-on-oracles-reported-gpu-profit-squeeze-theyl...
1•zerosizedweasle•14m ago•0 comments

Suspicionless ChatControl must be taboo in a state governed by the rule of law

https://digitalcourage.social/@echo_pbreyer/115337976340299372
3•nabla9•14m ago•0 comments

Python 3.14 Is Here. How Fast Is It?

https://blog.miguelgrinberg.com/post/python-3-14-is-here-how-fast-is-it
1•todsacerdoti•14m ago•0 comments

SAT problems are kind of cool

https://blog.karanjanthe.me/posts/Boolean-satisfiability-problem/
2•KMJ-007•14m ago•0 comments

What IMC 2025 Revealed About the State of Telecom

https://akshatjiwannotes.blogspot.com/2025/10/what-imc-2025-revealed-about-state-of.html
1•akshatjiwan•15m ago•0 comments

How much radiation can a Pi handle in space?

https://www.jeffgeerling.com/blog/2025/how-much-radiation-can-pi-handle-space
1•HieronymusBosch•15m ago•0 comments

Show HN: Cadence – Daily note taking app with mood tracking and AI querying

https://cadencenotes.com/
1•jram930•17m ago•0 comments

Show HN: Chrome extension to GIF YouTube videos in-player

https://chromewebstore.google.com/detail/ytgify/dnljofakogbecppbkmnoffppkfdmpfje
1•neonwatty•18m ago•0 comments

MEPs vote to ban plant-based food terms

https://www.theguardian.com/world/2025/oct/08/veggie-burgers-off-menu-meps-vote-ban-plant-based-f...
2•alibarber•19m ago•0 comments

React is transitioning from Meta to Linux Foundation

https://engineering.fb.com/2025/10/07/open-source/introducing-the-react-foundation-the-new-home-f...
4•dcas•20m ago•0 comments

Look mom HR application, look mom no job – phishing using Zoom docs

https://blog.himanshuanand.com/2025/10/look-mom-hr-application-look-mom-no-job/
1•unknownhad•20m ago•1 comments

Derivation of Hamiltonians for Accelerators (1998) [pdf]

https://www.aps.anl.gov/files/APS-sync/technical_bulletins/files/APS_1421576.pdf
2•nill0•20m ago•0 comments

FCC kicks off 'Space Month' with vow to fast-track satellite licensing

https://www.theregister.com/2025/10/07/fcc_satellite_licensing/
1•Bender•20m ago•0 comments

Bank of England flags risk of 'sudden correction' in tech stocks inflated by AI

https://www.ft.com/content/fe474cff-564c-41d2-aaf7-313636a83e5b
102•m-hodges•2h ago

Comments

boguscoder•1h ago
Any non-paywalled links, please?
WithinReason•1h ago
I don't see a paywall
gainda•1h ago
https://archive.ph/BNUzu
ratelimitsteve•1h ago
You can almost always go to archive.is or one of the other mirrors and paste in the original link. It will get you past the paywall and also give you a link that will get others past it. It seems to be a monkey-see, monkey-do part of the Hacker News microculture that if a link is paywalled, a commenter will throw up the archive link.
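A minimal sketch of that workflow (assuming archive.ph still exposes a /newest/<original-url> path that redirects to the latest snapshot; the mirror domains change over time):

    import webbrowser

    def open_archived(url: str) -> None:
        # Jump straight to the most recent archive.ph snapshot of `url`.
        # The /newest/ path is an assumption about the current mirror.
        webbrowser.open(f"https://archive.ph/newest/{url}")

    open_archived("https://www.ft.com/content/fe474cff-564c-41d2-aaf7-313636a83e5b")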
sampton•1h ago
A lot of investment is banking on AGI. There’s no sign AGI is going to happen this decade.
hackernewds•1h ago
That's what people have said about technologies in every decade, Sam
estomagordo•1h ago
What's a sign it's going to happen ever?
NoMoreNicksLeft•1h ago
Humans. There are arrangements of atoms that, if constructed and activated, act perfectly like human intelligence. Because they are human intelligence.

Human intelligence must be deterministic; any other conclusion is equivalent to the claim that there is some sort of "soul", for lack of a better term. If human intelligence is deterministic, then it can be written in software.

Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen. Failures to date can be attributed to various factors, but the gist is that we haven't yet identified the principles of intelligent software.

My guess is that we need less than 5 million years of further development time even in a worst-case scenario. With luck and proper investment, we can get it down well below the 1 million year mark.

SideburnsOfDoom•53m ago
> Human intelligence must be deterministic; any other conclusion is equivalent to the claim that there is some sort of "soul", for lack of a better term.

No, not all processes follow deterministic Newtonian mechanics. It could also be random, unpredictable at times. Are there random processes in the human brain? Yes, there are random quantum processes in every atom, and there are atoms in the brain.

Yes, this is no less materialistic: humans are still proof that either you believe in souls or some such, or that human-level intelligence can be made from material atoms. But it's not deterministic.

But also, LLMs are not anywhere close to becoming human level intelligence.

lm28469•35m ago
> Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen.

~200 years of industrial revolution and we've already fucked up beyond the point of no return; I don't think we'll have the resources to continue on this trajectory for 1m years. We might very well be accelerating towards a brick wall, and there is absolutely no guarantee we'll hit AGI before hitting the wall.

diffeomorphism•8m ago
> if deterministic, then can be done in software.

You just need a few Dyson spheres and someone omniscient to give you all the parameter values. Easy peasy.

Just like cracking any encryption: you just brute force all possible passwords. Perfectly deterministic decryption method.

</s>

an0malous•59m ago
I used to believe in AGI but the more AI has advanced the more I’ve come to realize that there’s no magic level of intelligence that can cure cancer and figure out warp drives. You need data, which requires experimentation, which requires labor and resources of which there is a finite supply. If you had AGI tomorrow and asked it to cure cancer, it would just ask for more experimental data and resources. Isn’t that what the greatest minds in cancer research would say as well? Why do we think that just being more rational or being able to compute better than humans would be sufficient to solve the problem?

It’s very possible that human beings today are already doing the most intelligent things they can given the data and resources they have available. This whole idea that there’s a magic property called intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with, increasingly just seems like the fantasy of people who think they’re very intelligent.

SideburnsOfDoom•55m ago
Agreed.

And, if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory.

Isn’t that what the greatest minds in physics would say as well? Yes, yes it is.

No debate will be entered into on this topic by me today.

kragen•14m ago
Actually, no, it isn't. They say it isn't necessarily possible, but not self-contradictory as far as we know. It's good that you aren't going to debate this.

https://en.wikipedia.org/wiki/Alcubierre_drive

SideburnsOfDoom•11m ago
You failed reading comprehension.
regularfry•36m ago
AGI isn't a synonym for smarter-than-human.
an0malous•29m ago
What’s your point? I’m saying there’s no level of smartness that can cure cancer; the bottleneck is data and experimentation, not a shortage of smartness/intelligence.
lossolo•28m ago
Generally, I agree, but it also depends on perspective. Intelligence exists on many levels and manifests differently across species. From a monkey's standpoint, if they were capable of such reflection, they might perceive themselves as the most capable creatures in their environment. Yet humans possess cognitive abilities that go far beyond that: abstract reasoning, cumulative culture, large-scale cooperation, etc.

A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.

As humans, we can easily visualize and reason about 2D and 3D spaces; it's natural because our sensory systems evolved to navigate a 3D world. But can we truly conceive of a million dimensions, let alone visualize them? We can describe them mathematically, but not intuitively grasp them. Our brains are not built for that kind of complexity.

Now imagine a form of intelligence that can directly perceive and reason about such high-dimensional structures. Entirely new kinds of understanding and capabilities might emerge. If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all; it could simply simulate outcomes internally.

Of course that's speculative, but it just illustrates how deeply intelligence is shaped and limited by its biological foundation.

SideburnsOfDoom•14m ago
> A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.

Humans existed in the world for hundreds of thousands of years before they did any of those things, with the exception of the wooden hut, which took less time than that but also wasn't instant.

Your example doesn't entirely contradict the argument that it takes time and experimentation as well, that intellect isn't the only limiting factor.

kragen•18m ago
Eliezer’s short story “That Alien Message” provides a convincing argument that humans are cognitively limited, not data-limited, through the device of a fictional world where people think faster: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...

This is also a commonplace in behavioral economics; the whole foundation of the field is that people in general don't think hard enough to fully exploit the information available to them, because they don't have the time or the energy.

Of course, that doesn't mean that great intelligence could figure out warp drives. Maybe warp drives are actually physically impossible! https://en.wikipedia.org/wiki/Warp_drive says:

> A warp drive or a drive enabling space warp is a fictional superluminal (faster than the speed of light) spacecraft propulsion system in many science fiction works, most notably Star Trek,[1] and a subject of ongoing real-life physics research. (...)

> The creation of such a bubble requires exotic matter—substances with negative energy density (a violation of the Weak Energy Condition). Casimir effect experiments have hinted at the existence of negative energy in quantum fields, but practical production at the required scale remains speculative.

Cancer, however, is clearly curable, and indeed often cured nowadays. It wouldn't be terribly surprising if we already had enough data to figure out how to solve it the rest of the time. We already have complete genomes for many species, AlphaFold has solved the protein-folding problem, research oncology studies routinely sequence tumors nowadays, and IHEC says they already have "comprehensive sets of reference epigenomes", so with enough computational power, or more efficient simulation algorithms, we could probably simulate an entire human body much faster than real time with enough fidelity to simulate cancer, thus enabling us to test candidate drug molecules against a particular cancer instantly.

Also, of course, once you can build reliable nanobots, you can just program them to kill a particular kind of cancer cell, then inject them.

Understanding this does not require believing that "intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with", which I think is a strawman you have made up. It doesn't even require believing that sufficient intelligence can solve every problem if it has sufficient data and resources to work with. It only requires understanding that being able to do the same thing regular humans do, but much faster, would be sufficient to cure cancer.

sampton•50m ago
There need to be breakthrough papers or hardware that can expand context size in an exponential way, or a new model that can address long-term learning.
Delphiza•1h ago
Also reported in the Guardian.

https://www.theguardian.com/business/2025/oct/08/bank-of-eng...

For non-Brits, the Bank of England is the UK's central bank and is a lot like the US Fed. Its comments carry a lot of weight and do impact government policy.

Not enough central banks were making comments about the sub-prime bubble that led to the 2008 crisis. Getting warnings about a possible AI bubble from a central bank is both significant and, given its role in maintaining a country's monetary and financial stability, the prudent thing to do.

whimsicalism•1h ago
The mistake central banks made in 2007-2009* was keeping monetary policy far too tight for far too long, for no real discernible reason.

Offering commentary on which particular sectors they feel are a 'bubble' is outside their purview and not particularly productive IMO, the state is not very good at picking winners.

*edited to 2007

NewJazz•1h ago
Sorry you think the government wasn't pumping the 2006 economy enough?
whimsicalism•59m ago
2006 was too early, fair enough. But we were way too tight by late 2007 at least. We should never have let AD fall as much as it did.
bgwalter•1h ago
I like that the Bank of England spells out the "sudden correction" this time.

In 1996, Fed Chair Alan Greenspan warned about irrational exuberance; in 1999 he warned Congress about "the possibility that the recent performance of the equity markets will have difficulty in being sustained". The crash came in 2000.

The warning seems to have gone unnoticed. AMD just behaves exactly like Juniper in 1999.

Pxtl•1h ago
Seems obvious.

AI is useful. But it's not trillion-dollars useful, and it probably won't be.

whimsicalism•1h ago
Why is that obvious? Even with effectively complete stagnation and just existing technology + limited RLVR, I can see how this could be trillion-dollars level useful.
Delphiza•1h ago
It is the financial risk that is obvious. The big players are struggling to show meaningful revenue from the investment. Because the investment is so high, the revenue numbers need to be equally high, and growing fast. The 'correction' is when (ok, if) the markets realise that the returns aren't there. The worldwide risk is that AI-led growth has been a large chunk of the US stock market growth. If it 'corrects', US growth disappears overnight and takes everyone down with it. It is not an issue of the usefulness of AI, but of the returns on investment and the market shocks caused by such large sums of money sloshing around one market.
whimsicalism•1h ago
I think we have only scratched the surface of what we can do with the existing technology. A much more present risk IMO is that if we stagnate, it is almost certain that the value of the tech will not be able to be enclosed/captured by its creators.
Pxtl•34m ago
Imho it will take off in animation/illustration as soon as Adobe (or some competitor) figures out how to make good tooling for artists. Not for idiot wantrepreneurs who want to dump fully generated slop onto Amazon, but so that a person can draw rough pencil sketches and storyboards and reference character sheets and get back proper illustrations. Basically, don't replace the penciler but replace the inker and the colourist (and, in animation, the in-betweener).

That's more of a UI problem than a limitation in Diffusion tech.

That's a customer who'll pay, and it might be worth a lot. But a trillion dollars per year?

an0malous•1h ago
The existing technology can’t even replace customer support systems, which seems like the lowest bar for a role that’s perfectly well suited to LLMs. How are you justifying the trillion dollar value?
whimsicalism•1h ago
I think with a bit of engineering, the existing tech can replace customer support systems - especially as the boomers are going away. But I realize this is an uphill battle on HN
lm28469•41m ago
> I think with a bit of engineering, the existing tech can replace customer support system

That's the lowest of the low, and even you accept it doesn't work (yet). How can LLMs be worth 50% of the last few years of GDP growth if it's that bad? Do you think customer support represents 50% of newly created value? I bet it isn't even 0.5%.

Pxtl•39m ago
But the point is the tech obviously isn't there yet. LLMs are still too prone to giving falsehoods and in that case a raw text-search of the support DB would be more useful anyways.

Maybe if companies would wire up their "oh a customer is complaining try and talk them out of canceling their account offer them a mild discount in exchange for locking in for a year contract" API to the LLM? Okay, but that's not a trillion-dollar service.

Pxtl•43m ago
I can't think of any tech with this kind of crazy yearly investment in infrastructure with no success stories.

Maybe it's because I find writing easy, but I find the text generation broadly useless except for scamming. The search capabilities are interesting, but the falsehoods that come back from LLM answers undermine them.

The programming and visual art capabilities are most impressive to me... but where are the companies making killings on those? Where's the animation studio cranking out Pixar-quality movies as weekly episodes?

kranke155•32m ago
The animation stuff is about to happen, but it's not there yet.

I work in the industry and I know that ad agencies are already moving to AI gen for social ads.

For VFX and films the tech is not there yet, since OpenAI believes they can build the next TikTok on AI (a proposition being tested now) and Google is just being Google: building amazing tools but with little understanding (so far) of how to deploy them on the market.

Still, Google is likely ahead in building tools that are being used (Nano Banana and Veo 3), while the Chinese open-source labs are delivering impressive stuff that you run locally or, increasingly, on a rented H100 in the cloud.

jf22•31m ago
You can easily google "generative AI success stories" and read about them.

There are always a few comments that make it seem like LLMs have done nothing valuable despite massive levels of adoption.

lm28469•43m ago
Where is all the productivity? Everyone says they became a 100x employee thanks to LLMs, yet not one company has seen any out-of-the-ordinary growth or profit besides AI-hyped companies.

What if the amount of slop generated counteracts the productivity gained? For every line of code it writes, it also writes some BS paragraph in a business plan, a report, &c.

jf22•33m ago
How are you evaluating the phrase "yet not one company?"
Esophagus4•1h ago
> But it's not trillion-dollars useful, and it probably won't be.

The market disagrees.

But if you are sure of this, please show your positions. Then we can see how deeply you believe it.

My guess is you’re short the most AI-exposed companies if you think they’re overvalued? Hedged maybe? You’ve found a clever way to invest in bankruptcy law firms that handle tech liquidations?

ghaff•1h ago
One can be skeptical about the overall value of various technologies while also being conservative about specific bets in specific timeframes against them.
Esophagus4•49m ago
I think you’re making my point without realizing it.

If you are skeptical but also not willing to place a bet, you shouldn’t say “AI is overvalued” because you don’t actually believe it. You should say, “I think it might be overvalued, but I’m not really sure? And I don’t have enough experience in markets or confidence to make a bet on it, so I will go with everyone else’s sentiment and make the ‘safe’ bet of being long the market. But like… something feels weird to me about how much money is being poured into this? But I can’t say for sure whether it is overvalued or not.”

Those are two wildly different things.

ghaff•40m ago
Not at all. I may think $TECH is overvalued but some companies may well make it out the other side, some aspects of the $TECH may play out (or not), and the bubble may pop in 1 year or 5. So the sensible process may be to invest in broader indexes and let things play out at the more micro level (that may not be possible to invest in anyway).

I certainly had unease about the dot-com market and should have shifted more investments to the conservative side. But I made the "‘safe’ bet of being long the market" even after things started going south.

FWIW, I do think AI is overvalued for the relatively near term. But I'm not sure what to do about that other than being fairly conservatively invested which makes sense for me at this point anyway.

bartlettD•58m ago
Have you ever heard that "the market can stay irrational longer than you can stay solvent"?

The thing about bubbles is, you can often easily spot them, but can't so easily say when they'll pop.

Esophagus4•54m ago
No. Then you haven’t spotted a bubble.

You’ve just made a comment that “wow, things are going up!” That’s not spotting a bubble; that’s my non-technical uncle commenting at a dinner party, “wow, this bitcoin thing sure is crazy, huh?”

Talk is cheap. You learn what someone really believes by what they put their money in. If you really believe we’re in a bubble, truly believe it based on your deep understanding of the market, then you surely have invested that way.

If not, it’s just idle talk.

vuggamie•36m ago
I truly believe we are in a bubble. I truly believe that AI will exist on the other side of that bubble, just as internet companies and banks existed on the other side of the dotcom crash and the housing crisis.

I don't know how to invest to avoid this bubble. My money is where my mouth is. My investments are conservative and long-term. Most in equity index funds, some bonds, Vanguard mutual funds, a few hand-picked stocks.

No interest in shorting the market or trying to time the crash. I would say I 90% believe a correction of 25% or more will happen in the next 12 months. No idea where my money might be safe. Palantir? Northrop Grumman?

ndsipa_pomu•9m ago
Surely you can spot a bubble if you see that it is rapidly expanding and ultimately unsustainable. Being able to predict when it finally pops would be equivalent to winning the lottery, and people would be able to make a lot of money from that, but ultimately no one can reliably predict when a bubble will pop. That doesn't mean they weren't bubbles.
Pxtl•48m ago
I generally buy index funds, but I put some into AMD a while back as the "less-AI part of tech". I'll probably get out of that as they've been sucked into that vortex, and shift more into global indexes instead of CAN/USA.

I'll leave shorting to the pros. The whole "double-your-money-or-infinite-losses" aspect of shorting is not a game I'm into.
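A toy illustration of the asymmetry being described (per-share numbers, not investment advice):

    # Shorting 1 share at $100: profit is capped at $100 (price goes to zero),
    # while losses grow without bound as the price rises.
    entry = 100.0
    for exit_price in (0.0, 50.0, 100.0, 200.0, 400.0):
        pnl = entry - exit_price  # per-share P&L on the short position
        print(f"exit at ${exit_price:>5.0f} -> P&L {pnl:+.0f}")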

ratelimitsteve•1h ago
This is how capitalism does things: no one wants to overinvest, but no one wants to be left behind, and everyone is sure that either there's not gonna be a pop or they can sell before it pops.

It has been educational to see how quickly the financier class has moved when they saw an opportunity to abandon labor entirely, though. That's worth remembering when they talk about how this system is the best one for everyone.

rkomorn•1h ago
Zero labor cost is the dream!
nobleach•29m ago
To whom does one sell when they've deleted their workforce? Seeing company after company add to the ranks of unemployed workers shows they have no forward-thinking economists advising them. Further, AI, for all of its positive potential, is NOT going to be free... or even "cheap" once the investors dry up.
rkomorn•24m ago
I never said it was a smart dream. It all seems somewhat (at best) shortsighted to me.

I'm pretty sure they all see it as someone else's problem to solve.

nobleach•16m ago
Yeah... always someone else's problem to solve. Just like a pyramid scheme.
forinti•1h ago
Leaving large portions of the population jobless surely can't be good for business and political stability.
thmsths•1h ago
They basically want to be like the Spacers in Asimov's robot novels: a handful of supremely wealthy people living in vast domains where every single one of their needs and wants is provided by machines. There is literally no lower (human) class in this society.
nobleach•27m ago
This is what's making me laugh a bit about Ford's brazen "we're firing all the white-collar workers" nonsense. Ok, go for it. Who are you going to get to buy an $80,000 F-150?
ratelimitsteve•6m ago
I feel like a lot of people aren't fully examining what AGI would mean for labor. As of right now, labor exists separate from capital, which is to say the economy is made of workers, stuff and money. Workers get stuff, put labor into it and turn it into more valuable stuff; capital owns that stuff, so they sell it to other workers (usually) and give their workers some portion of the increase in value.

AGI would mean that capital is labor. The stuff can go get more stuff and refine it. Capital won't make stuff to sell; they'll just make stuff they want and stuff to go get and make stuff they want. It will, of course, be wildly bad for political stability, but I feel like a lot of people think they've found some sort of catch-22 in AGI when labor has no money to buy stuff. They think "that'll shut the whole economy down", but what would really happen is that instead of building a machine that makes boots, hiring someone to run it, selling boots and using the money to buy a yacht, they'll just build a machine that makes yachts and another machine that kills anyone who interferes with the yacht machine.

An economy made of workers, stuff and money will become an economy just made of stuff, as workers will be replaced by stuff, and money was only ever useful as a way to induce people to work.
whimsicalism•1h ago
AI is a risk. The thing we know is going to bite us in the butt is our continued massive sovereign debt burden and the lack of any political will whatsoever to either increase taxes or reduce spending. The dollar is not going to do well this century, and creditors' confidence is already starting to decline.

In fact, the further we go into debt, the more we are implicitly betting our society on an AI Hail Mary.

monero-xmr•1h ago
There is only one solution to the global debt crisis, and that's inflating the currency. They did it after WW2 and they will have to do it now. There is no other option. They can do it sneakily through fake measures of inflation, keeping a lid on cost-of-living adjustments, but ultimately they soak bondholders and the standard of living.

You see it everywhere in things they can't inflate: the price of houses and gold most obviously, but you see it in commodities that can't expand production quickly as well. The solution is to buy assets, of course.

whimsicalism•1h ago
Monetizing a debt of this magnitude would be disastrous, but agreed, this appears to be the path we are on by default, given that we are consistently above the inflation mandate yet still lowering rates.

It's no longer the early 20th century; there are other competitive and well-run jurisdictions for creditors to dump their money in if they lose faith in the US.

disgruntledphd2•1h ago
> It's no longer the early 20th century; there are other competitive and well-run jurisdictions for creditors to dump their money in if they lose faith in the US.

Where, pray tell, are these competitive and well-run jurisdictions?

China has capital controls, so that probably won't work. The EU might work if they ever get their sh*t together and centralise their bonds and markets; otherwise no.

Like, I too believe that the US is on an unsustainable path, but I just don't see where all that money is gonna go (specifically referring to the foreign investment in the US companies/markets here).

whimsicalism•45m ago
I think there are many smaller jurisdictions that are getting their shit together and might absorb demand - southeast Asia, Singapore obviously (but small), the Gulf. Some subsets of the EU, particularly Eastern Europe.

Plus, even worse-run higher yield jurisdictions become more appealing as the US fails.

CyberDildonics•55m ago
Will have to do it now? There was a huge amount of money printed after COVID.
CraigJPerry•1h ago
>> sovereign debt burden

So all the entities that want to hold the debt (the Social Security Administration, mutual funds, pension funds, etc.), where should they go instead? Riskier assets is what you're saying, right? Is that a great idea?

whimsicalism•1h ago
I'm not giving investment advice, just commenting that our current fiscal trajectory has become completely unsustainable and dangerous, and very few people seem to be seriously discussing it.

Probably the closest US bond equivalent would be debt from well-run Asian countries. I would avoid fixed-income, dollar-denominated assets.

marbro•43m ago
All investors should choose gold over the dollar because paper money is always debased. Organizations like Apple, Microsoft, and Google bought government bonds 10 years ago when the price of gold was $1100 and have watched their investments erode while gold has increased to $4000.
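Using only the numbers quoted above, the implied annualized growth works out roughly as follows (a back-of-the-envelope sketch; real bond returns depend on the coupons and maturities involved):

    # Gold going from $1,100 to $4,000 over ~10 years, per the parent comment.
    start, end, years = 1100.0, 4000.0, 10
    cagr = (end / start) ** (1 / years) - 1
    print(f"implied gold CAGR over {years} years: {cagr:.1%}")  # ~13.8%
    # For comparison, 10-year government bonds bought back then yielded on the
    # order of 2% a year (rough ballpark, for illustration only).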
karmakurtisaani•28m ago
Do you also give forward-looking investment advice, or do you strictly limit yourself to looking at what would have worked 10 years ago?
alphazard•1h ago
> either increase taxes or reduce spending

I see this sentiment a lot, but they are not equivalent. The US must reduce spending if it wants to protect the dollar. Tax increases may also help.

The relationship between tax rates, GDP, government revenue, the market value of new US debt, and the value of the dollar is complicated and depends on uncertain estimates and models of the economy. Increasing taxes can reduce GDP, which needs to grow to outgrow the debt; there is an optimal tax rate, and more doesn't always help. Decreasing spending is a more straightforward relationship: no new debt, no new dollars.

whimsicalism•1h ago
If the US reduces the debt, it removes pressure to monetize and removes market expectation that we will monetize, which directly boosts the dollar. I also think that "rich people are scamming us" is a politically more advantageous message than "old people are scamming us".
alphazard•48m ago
The most important thing is eliminating the annual deficit. That sends more of a signal about the future of the country and its currency than the total amount of debt.

How it gets done is separate from that. Given that the only demographic that can comfortably weather a recession is also starting to collect social security, paid for by younger generations who would be meaningfully affected by a recession, "old people are scamming us" may actually be an effective message.

gjsman-1000•38m ago
> Tax increases may also help.

I don't live in a coastal state, but when I do consulting work, typically at charity rates alongside my standard full-time job, I have to pay 24% federal tax, 15.3% FICA, and 7.85% state tax. I am already taxed at 47.15% whenever I want to help anyone. That's before the required tax structures and consulting for doing all the invoicing legally. God himself only wanted 10%, so it seems a government playing God is awfully expensive.

You can't raise taxes any further before I'm done, and I don't think I'm alone; businesses and consultants are already crushed by taxes. I have to bill $40K to hopefully take home $20K; at which point, is it even worth my time? But if I don't consult because it isn't worth it, are small businesses suddenly going to afford an agency or a dedicated software developer? Of course not, so their growth is handicapped, and I wonder what the effects of that are tax-wise.
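To make the arithmetic explicit, a back-of-the-envelope sketch using only the rates quoted above (the real self-employment tax interacts with deductions, so treat this purely as the rough version):

    # Combined marginal rate from the rates quoted above.
    federal, fica, state = 0.24, 0.153, 0.0785
    marginal = federal + fica + state           # 0.4715, i.e. ~47.15%

    billed = 40_000
    take_home = billed * (1 - marginal)         # ~$21,140 before compliance costs
    print(f"combined marginal rate: {marginal:.2%}")
    print(f"take-home on ${billed:,} billed: ${take_home:,.0f}")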

whimsicalism•36m ago
You're talking about your marginal rate, and we simply are far to the left on the Laffer curve; raising taxes will raise revenue. I'm not unsympathetic, my marginal rate is close to the same, but I think people's claims that they will stop working are generally more bark than bite, and the evidence largely backs that up.

If you don't want a tax-based solution, I do hope you are agitating for SS and Medicare cuts.
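The back-and-forth that follows is essentially about where on that curve we sit. A toy version of the shape being argued over (the elasticity here is invented purely to show the hump, not an empirical estimate):

    def revenue(rate: float, base: float = 100.0, elasticity: float = 1.0) -> float:
        # Toy Laffer curve: the taxable base shrinks as the rate rises.
        taxable_base = base * (1 - rate) ** elasticity
        return rate * taxable_base

    for r in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"rate {r:.0%} -> revenue {revenue(r):5.1f}")
    # Revenue peaks at 50% in this toy model; "far to the left" means current
    # rates are believed to be well below wherever the real-world peak is.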

gjsman-1000•30m ago
> we simply are far to the left on the Laffer curve; raising taxes will raise revenue

I don't believe this, actually. I think that we will raise more revenue, yes, by squeezing more from the Fortune 500; but you will absolutely crush small business and consultancy work further. It's kind of like how an 80% tax rate on everyone making over $100K would do a fantastic job of raising revenue, but it's fundamentally stupid and would kill all future golden geese.

(On that note, I see this comment a lot about how we had huge tax rates, 91% in the 1950s; but this is misleading. The effective tax rate for those earners was only 41%, due to the sheer number of exemptions, according to modern analysis. We have never had an actual effective 91% tax rate, or anywhere close to it. Those rates were theater, never reality.)

whimsicalism•20m ago
pretty much all modern economists disagree with you https://kentclarkcenter.org/surveys/laffer-curve/
gjsman-1000•20m ago
... in 2012?
whimsicalism•16m ago
are effective tax rates higher or lower than in 2012?
gjsman-1000•13m ago
Well, if we include property tax, sales tax, SALT deduction cap changes, compliance costs, regulatory burdens, state and local taxes... higher.

On that note, you have no evidence that economists focus solely on tax rates on the curve independently of the economy at large. By definition, the curve is determined from external factors and economic measurements, none of which currently resemble 2012. If the economy crashed and there was 20% unemployment, do you still think they'd stand behind the same curve?

whimsicalism•12m ago
okay, believe what you want. i just hope you are pushing for SS+medicare cuts
alphazard•9m ago
Just a reminder that professional macro-economists are paid to justify political decisions. That's the job. Find data that can arguably make this policy (made for other reasons) make sense to the voters, who have a much worse understanding of economics.

As always, the question with economists is "why aren't you rich?". You would get much better answers about macro-economic counterfactuals by going to a macro-trading firm like Bridgewater and asking the employees "what do you think would happen if..."

whimsicalism•4m ago
Putting aside the fact that that is not really true about bias in the economics profession, I have good friends who are ex-Bridgewater who would agree with me... and listen to what Ray Dalio says about our fiscal trajectory.
Theodores•26m ago
Approximately half the S&P 500 is in the Magnificent Seven. It doesn't matter what they sell; there is just too much money there. Calling this situation an 'AI risk' is disingenuous, or at best blinkered.

Everyone outside of the American empire knows that the jig is up. When Uncle Sam has his money printing press on full blast, the American people don't feel the full effect, but everyone in the global majority, where there are no dollar printing machines, gets to see too many dollars chasing the same goods, a.k.a. inflation.

The day when the American people elect a fiscally prudent government, work hard, pay their taxes and get that deficit to a manageable number is never going to happen. But that is not a problem; the situation is out of America's hands now.

It was the 2022 sanctions on Russia that made the BRICS alliance take note. Freezing their foreign reserves was not well received. Hence we now have China trading in its own currency, with its trading partners happy with that.

Soon we will have a situation where there is no 'exorbitant privilege' (reserve currency, which can only ever end up with massive deficits); instead, the various BRICS currencies will be anchored to valuable commodities such as rare-earth metals, gold and everything else that is 'proof of work' and important to the future. So that means no more 'petro-dollar'; the store of value won't be hydrocarbons.

This sounds better than going back to a gold standard. As I see it, the problem with the gold standard is that you kind of know already who has all the gold, and we don't want them to be the masters of the universe, because it will be the same bankers.

As for an AI 'Hail Mary', I do hope so. The money printed by Uncle Sam ending up in the Magnificent Seven means that it will be relatively easy to write this money off.

fasteo•1h ago
From the actual report[1]

>>> Despite persistent material uncertainty around the global macroeconomic outlook, risky asset valuations have increased and credit spreads have compressed. Measures of risk premia across many risky asset classes have tightened further since the last FPC meeting in June 2025. On a number of measures, equity market valuations appear stretched, particularly for technology companies focused on Artificial Intelligence (AI). This, when combined with increasing concentration within market indices, leaves equity markets particularly exposed should expectations around the impact of AI become less optimistic.

Actually, the quoted 'sudden correction' is not referring specifically to AI, but to the market in general.

[1] https://www.bankofengland.co.uk/financial-policy-committee-r...

NewJazz•1h ago
Yes, but I think they have noted that AI/tech companies are particularly exposed/stretched, even though second-order effects would likely impact the whole market.
j45•1h ago
Have there been other stock categories/industries receiving similar flags in the past?
throwacct•1h ago
This is fair. We're now evaluating open-source LLMs to develop our in-house solutions, adding them to our products and services. As soon as the open models were released, the moat was, depending on the context, somewhat gone.
kragen•41m ago
Which models have you found most valuable? Are they still worse than the proprietary ones?
throwacct•13m ago
We're testing different models depending on the business case. Our initial tests using 3B, 7B, and 8B models are working fine. We're not using the big ones since our use cases don't demand them.
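For readers unfamiliar with the sizes, "3B, 7B, and 8B" refers to parameter counts; evaluating one locally can be as small as the sketch below (the model id is only an example of a small open-weights model, not necessarily what this team used):

    # Minimal local-evaluation sketch using the Hugging Face transformers pipeline.
    # The model id is an example of a small open-weights model (an assumption);
    # swap in whichever 3-8B model you are actually testing.
    from transformers import pipeline

    generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")
    out = generator(
        "Summarize our refund policy in one sentence:",
        max_new_tokens=64,
    )
    print(out[0]["generated_text"])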
lucasRW•42m ago
Isn't it a self-fulfilling prophecy at that point? I have been hearing so much "it's going to crash, sell" from all sorts of sources since mid-August...
BenFranklin100•39m ago
Current valuations are based on the belief that genuine AGI is around the corner. It’s not. LLMs are an interesting technology with many use cases, but they can’t reason in the usual sense of the word and are a dead end for the type of AGI needed to justify current investments.

It’s going to be a gruesome train wreck.

johnny_canuck•34m ago
Scott Galloway had a podcast episode about this topic just over a week ago. https://www.youtube.com/watch?v=Oeepx2ZLrCA

I used to scoff at the idea of the AI bubble (or any recently called tech bubble) being like the '90s, given the way technology/the internet is now so integrated into our lives, but the way he spelled it out, it does seem similar.

whitehexagon•6m ago
For me the question is who is going to subscribe who hasn't already. And that is before we consider the next-gen hardware that can run this stuff locally.

But from what I see of the economy around me here, people just don't have the spare funds for LLM luxuries. It feels like 15+ years of wage deflation and company streamlining have removed what little spare spending power people had here, not forgetting the inflation we have seen in the euro zone.

Even if the bet is now an 'all in' on AGI, I see that more as an existential threat than an economic golden-egg bailout.