Human intelligence must be deterministic; any other conclusion amounts to claiming there is some sort of "soul," for lack of a better term. If human intelligence is deterministic, then it can be implemented in software.
Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen. Failures to date can be attributed to various factors, but the gist is that we haven't yet identified the principles of intelligent software.
My guess is that we need fewer than 5 million years of further development time, even in a worst-case scenario. With luck and proper investment, we can get it down well below the 1 million year mark.
No, not all processes follow deterministic Newtonian mechanics. They can also be random, unpredictable at times. Are there random processes in the human brain? Yes: there are random quantum processes in every atom, and there are atoms in the brain.
Yes, this is no less materialistic: humans are still proof that either souls (or something like them) exist, or that human-level intelligence can be built from ordinary atoms. But it's not deterministic.
But also, LLMs are not anywhere close to human-level intelligence.
~200 years of industrial revolution and we already fucked up beyond the point of no return. I don't think we'll have the resources to continue on this trajectory for 1 million years. We might very well be accelerating toward a brick wall; there is absolutely no guarantee we'll hit AGI before hitting the wall.
You just need a few Dyson spheres and someone omniscient to give you all the parameter values. Easy peasy.
Just like cracking any encryption: you just brute force all possible passwords. Perfectly deterministic decryption method.
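Sarcasm aside, the brute-force idea really is deterministic. A toy sketch (hypothetical Python; lowercase-only passwords, a plain SHA-256 hash — real systems use salted, deliberately slow hashes precisely to make this enumeration infeasible):

```python
import hashlib
import itertools
import string
from typing import Optional

def brute_force(target_hash: str, max_len: int = 3) -> Optional[str]:
    """Try every lowercase-letter password up to max_len characters."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(string.ascii_lowercase, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None  # search space exhausted, no match

# 26^3 ≈ 17k candidates is trivial; 26^20 is not, which is the joke.
```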
It’s very possible that human beings today are already doing the most intelligent things they can given the data and resources they have available. This whole idea that there’s a magic property called intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with, increasingly just seems like the fantasy of people who think they’re very intelligent.
And, if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory.
Isn’t that what the greatest minds in physics would say as well? Yes, yes it is.
No debate will be entered into on this topic by me today.
A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.
As humans, we can easily visualize and reason about 2D and 3D spaces; that comes naturally because our sensory systems evolved to navigate a 3D world. But can we truly conceive of a million dimensions, let alone visualize them? We can describe them mathematically, but not intuitively grasp them. Our brains are not built for that kind of complexity.
Now imagine a form of intelligence that can directly perceive and reason about such high-dimensional structures. Entirely new kinds of understanding and capabilities might emerge. If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all; it could simply simulate outcomes internally.
Of course that's speculative, but it just illustrates how deeply intelligence is shaped and limited by its biological foundation.
Humans existed in the world for hundreds of thousands of years before they did any of those things, with the exception of the wooden hut, which took less time than that. But even that wasn't instant.
Your example doesn't really contradict the argument that progress also takes time and experimentation, and that intellect isn't the only limiting factor.
This is also a commonplace in behavioral economics; the whole foundation of the field is that people in general don't think hard enough to fully exploit the information available to them, because they don't have the time or the energy.
Of course, that doesn't mean that great intelligence could figure out warp drives. Maybe warp drives are actually physically impossible! https://en.wikipedia.org/wiki/Warp_drive says:
> A warp drive or a drive enabling space warp is a fictional superluminal (faster than the speed of light) spacecraft propulsion system in many science fiction works, most notably Star Trek,[1] and a subject of ongoing real-life physics research. (...)
> The creation of such a bubble requires exotic matter—substances with negative energy density (a violation of the Weak Energy Condition). Casimir effect experiments have hinted at the existence of negative energy in quantum fields, but practical production at the required scale remains speculative.
Cancer, however, is clearly curable, and indeed often cured nowadays. It wouldn't be terribly surprising if we already had enough data to figure out how to solve it the rest of the time. We already have complete genomes for many species, AlphaFold has largely solved the protein-structure-prediction problem, research oncology studies routinely sequence tumors nowadays, and IHEC says it already has "comprehensive sets of reference epigenomes". So with enough computational power, or more efficient simulation algorithms, we could probably simulate an entire human body much faster than real time, with enough fidelity to model cancer, enabling us to test candidate drug molecules against a particular cancer instantly.
Also, of course, once you can build reliable nanobots, you can just program them to kill a particular kind of cancer cell, then inject them.
Understanding this does not require believing that "intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with", which I think is a strawman you have made up. It doesn't even require believing that sufficient intelligence can solve every problem if it has sufficient data and resources to work with. It only requires understanding that being able to do the same thing regular humans do, but much faster, would be sufficient to cure cancer.
https://www.theguardian.com/business/2025/oct/08/bank-of-eng...
For non-Brits: the Bank of England is the UK's central bank, a lot like the US Fed. Its comments carry a lot of weight and do impact government policy.
Not enough central banks were commenting on the sub-prime bubble that led to the 2008 crisis. A warning about a possible AI bubble from a central bank is both significant and, given its mandate for monetary and financial stability, the prudent thing to do.
Offering commentary on which particular sectors they feel are a 'bubble' is outside their purview and not particularly productive IMO; the state is not very good at picking winners.
*edited to 2007
In 1996, Fed Chair Alan Greenspan warned about "irrational exuberance"; in 1999 he warned Congress about "the possibility that the recent performance of the equity markets will have difficulty in being sustained". The crash came in 2000.
The warning seems to have gone unnoticed. AMD is behaving exactly like Juniper in 1999.
AI is useful. But it's not trillion-dollars useful, and it probably won't be.
That's more of a UI problem than a limitation in Diffusion tech.
That's a customer who'll pay, and it might be worth a lot. But a trillion dollars per year?
That's the lowest of the low, and even you accept it doesn't work (yet). How can LLMs be worth 50% of the last year's GDP growth if it's that bad? Do you think customer support represents 50% of newly created value? I bet it isn't even 0.5%.
Maybe if companies would wire their "oh, a customer is complaining; try to talk them out of canceling their account by offering a mild discount in exchange for locking into a year-long contract" API up to the LLM? Okay, but that's not a trillion-dollar service.
Maybe it's because I find writing easy, but I find the text generation broadly useless except for scamming. The search capabilities are interesting but the falsehoods that come from LLM questions undermine it.
The programming and visual art capabilities are most impressive to me... but where are the companies making killings on those? Where's the animation studio cranking out Pixar-quality movies as weekly episodes?
I work in the industry, and I know that ad agencies are already moving to AI gen for social ads.
For VFX and film, the tech is not there yet. OpenAI believes they can build the next TikTok on AI (a proposition being tested now), and Google is just being Google: building amazing tools but with little understanding (so far) of how to deploy them in the market.
Still, Google is likely ahead in building tools that actually get used (Nano Banana and Veo 3), while the Chinese open-source labs are delivering impressive stuff that you can run locally or, increasingly, on a rented H100 in the cloud.
There are always a few comments that make it seem like LLMs have done nothing valuable despite massive levels of adoption.
What if the amount of slop generated counteracts the productivity gained? For every line of code it writes, it also writes some BS paragraph in a business plan, a report, &c.
The market disagrees.
But if you are sure of this, please show your positions. Then we can see how deeply you believe it.
My guess is you’re short the most AI-exposed companies if you think they’re overvalued? Hedged maybe? You’ve found a clever way to invest in bankruptcy law firms that handle tech liquidations?
If you are skeptical but also not willing to place a bet, you shouldn’t say “AI is overvalued” because you don’t actually believe it. You should say, “I think it might be overvalued, but I’m not really sure? And I don’t have enough experience in markets or confidence to make a bet on it, so I will go with everyone else’s sentiment and make the ‘safe’ bet of being long the market. But like… something feels weird to me about how much money is being poured into this? But I can’t say for sure whether it is overvalued or not.”
Those are two wildly different things.
I certainly had unease about the dot-com market and should have shifted more investments to the conservative side. But I made the "‘safe’ bet of being long the market" even after things started going south.
FWIW, I do think AI is overvalued for the relatively near term. But I'm not sure what to do about that other than being fairly conservatively invested which makes sense for me at this point anyway.
The thing about bubbles is, you can often easily spot them, but can't so easily say when they'll pop.
You’ve just made a comment that “wow, things are going up!” That’s not spotting a bubble; that’s my non-technical uncle commenting at a dinner party, “wow, this bitcoin thing sure is crazy, huh?”
Talk is cheap. You learn what someone really believes by what they put their money in. If you really believe we’re in a bubble, truly believe it based on your deep understanding of the market, then you surely have invested that way.
If not, it’s just idle talk.
I don't know how to invest to avoid this bubble. My money is where my mouth is. My investments are conservative and long-term. Most in equity index funds, some bonds, Vanguard mutual funds, a few hand-picked stocks.
No interest in shorting the market or trying to time the crash. I would say I 90% believe a correction of 25% or more will happen in the next 12 months. No idea where my money might be safe. Palantir? Northrop Grumman?
I'll leave shorting to the pros. The whole "double-your-money-or-infinite-losses" aspect of shorting is not a game I'm into.
It has been educational to see how quickly the financier class moved when they saw an opportunity to abandon labor entirely, though. That's worth remembering when they talk about how this system is the best one for everyone.
I'm pretty sure they all see it as someone else's problem to solve.
In fact, the further we go into debt, the more we are implicitly betting our society on an AI hail mary.
You see it everywhere in things they can’t inflate. The price of houses and gold most obviously, but you see it in commodities that can’t expand production quickly as well. The solution is to buy assets of course.
It's no longer the early 20th century; there are other competitive, well-run jurisdictions for creditors to park their money in if they lose faith in the US.
Where, pray tell, are these competitive and well-run jurisdictions?
China has capital controls so that probably won't work. The EU might work if they ever get their sh*t together and centralise their bonds and markets, otherwise no.
Like, I too believe that the US is on an unsustainable path, but I just don't see where all that money is gonna go (specifically referring to the foreign investment in the US companies/markets here).
Plus, even worse-run higher yield jurisdictions become more appealing as the US fails.
So all the entities that want to hold the debt (social security administration, mutual funds, pension funds etc) where should they go instead? Riskier assets is what you're saying right? Is that a great idea?
Probably the closest US bond equivalent would be debt from well-run Asian countries. I would avoid fixed-income dollar denominated assets.
I see this sentiment a lot; the two are not equivalent. The US must reduce spending if it wants to protect the dollar. Tax increases may also help.
The relationship between tax rates, GDP, government revenue, the market value of new US debt, and the value of the dollar is complicated and depends on uncertain estimates and models of the economy. Increasing taxes can reduce GDP, which needs to grow to outpace the debt; there is an optimal tax rate, and more doesn't always help. Decreasing spending is a more straightforward relationship: no new debt, no new dollars.
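The "optimal tax rate" point can be illustrated with a deliberately stylized toy model (a textbook Laffer-curve sketch, not a claim about the real economy; the linear GDP drag is an assumption made up for the illustration):

```python
# Toy Laffer-curve sketch: assume GDP shrinks linearly as the tax rate
# rises, so revenue = rate * GDP(rate) peaks at an interior rate rather
# than at 100%. Purely illustrative; real elasticities are contested.

def revenue(rate: float, base_gdp: float = 100.0, drag: float = 1.0) -> float:
    """Revenue under a made-up model where GDP = base_gdp * (1 - drag * rate)."""
    gdp = base_gdp * max(0.0, 1.0 - drag * rate)
    return rate * gdp

rates = [r / 100 for r in range(0, 101)]
best = max(rates, key=revenue)
# With drag = 1, revenue peaks at a 50% rate and falls to zero at 100%.
```

The exact peak depends entirely on the assumed drag; the only robust point is that revenue is zero at both 0% and 100%, so a maximum lies somewhere in between.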
How it gets done is separate from that. Given that the only demographic that can comfortably weather a recession is also starting to collect social security, paid for by younger generations who would be meaningfully affected by a recession, "old people are scamming us" may actually be an effective message.
I don't live in a coastal state, but when I do consulting work, typically at charity rates alongside my standard full-time job, I have to pay 24% federal tax, 15.3% FICA, and 7.85% state tax. I am already taxed at 47.15% whenever I want to help anyone. That's before the required tax structures and consulting for doing all the invoicing legally. God himself only wanted 10%, so it seems a government playing God is awfully expensive.
You can't raise taxes any further before I'm done, and I don't think I'm alone; businesses and consultants are already crushed by taxes. I have to bill $40K to hopefully take home $20K; at that point, is it even worth my time? But if I don't consult because it isn't worth it, are small businesses suddenly going to afford an agency or a dedicated software developer? Of course not. So their growth is handicapped, and I wonder what the tax-side effects of that are.
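Spelling out the arithmetic behind those figures (the percentages are the marginal rates as quoted above, not official numbers, and this ignores deductions like the deductible half of self-employment tax):

```python
# Combined marginal rate from the quoted figures.
federal, fica, state = 0.24, 0.153, 0.0785
combined = federal + fica + state      # 0.4715, i.e. 47.15%

# Naive take-home on a $40K billing at that flat marginal rate.
billed = 40_000
take_home = billed * (1 - combined)    # ~$21,140, roughly the "$20K" above
```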
If you don't want a tax-based solution, I do hope you are agitating for SS and medicare cuts.
I don't believe this, actually. I think that we will raise more revenue, yes, by squeezing more from the Fortune 500; but you will absolutely crush small business and consultancy work further. It's kind of like how an 80% tax rate on everyone making over $100K would do a fantastic job of raising revenue, but it's fundamentally stupid and would kill all future golden geese.
(On that note, I see this comment a lot about how we had huge tax rates, 91% in the 1950s; but this is misleading. The effective tax rate for those earners was only 41%, due to the sheer number of exemptions, according to modern analysis. We have never had an actual effective 91% tax rate, or anywhere close to it. Those rates were theater, never reality.)
On that note, you have no evidence that economists estimate the curve from tax rates alone, independently of the economy at large. By definition, the curve is determined from external factors and economic measurements, none of which currently resemble 2012. If the economy crashed and there were 20% unemployment, do you still think they'd stand behind the same curve?
As always, the question with economists is "why aren't you rich?". You would get much better answers about macro-economic counterfactuals by going to a macro-trading firm like Bridgewater and asking the employees "what do you think would happen if..."
Everyone outside of the American empire knows that the jig is up. When Uncle Sam has his money printing press on full blast, the American people don't feel the full effect, but everyone in the global majority, where there are no dollar printing machines, gets to see too many dollars chasing the same goods, a.k.a. inflation.
The day when the American people elect a fiscally prudent government, work hard, pay their taxes, and get that deficit down to a manageable number is never going to come. But that is not a problem; the situation is out of America's hands now.
It was the 2022 sanctions on Russia that made the BRICS alliance take note. Freezing their foreign reserves was not well received. Hence we now have China trading in its own currency, with its trading partners happy to do so.
Soon we will have a situation where there is no 'exorbitant privilege' (reserve-currency status, which can only ever end in massive deficits); instead, the various BRICS currencies will be anchored to valuable commodities such as rare-earth metals, gold, and everything else that is 'proof of work' and important to the future. That means no more 'petro-dollar'; the store of value won't be hydrocarbons.
This sounds better than going back to a gold standard. As I see it, the problem with the gold standard is that you already know who has all the gold, and we don't want them to be the masters of the universe, because it would be the same bankers.
As for an AI 'Hail Mary', I do hope so. The money printed by Uncle Sam that ends up in the Magnificent Seven will be relatively easy to write off.
>>> Despite persistent material uncertainty around the global macroeconomic outlook, risky asset valuations have increased and credit spreads have compressed. Measures of risk premia across many risky asset classes have tightened further since the last FPC meeting in June 2025. On a number of measures, equity market valuations appear stretched, particularly for technology companies focused on Artificial Intelligence (AI). This, when combined with increasing concentration within market indices, leaves equity markets particularly exposed should expectations around the impact of AI become less optimistic.
Actually, the quoted 'sudden correction' is not referring specifically to AI, but to the market in general.
[1] https://www.bankofengland.co.uk/financial-policy-committee-r...
It’s going to be a gruesome train wreck.
I used to scoff at the idea of the AI bubble (or any recently called-for tech bubble) being like the 90s, given the way technology and the internet are now so integrated into our lives, but the way he spelled it out, it does seem similar.
But from what I see of the economy around me here, people just don't have the spare funds for LLM luxuries. It feels like 15+ years of wage deflation and company streamlining have removed what little spare spending power people had here. And that's not forgetting the inflation we have seen in the euro zone.
Even if the bet is now an 'all in' on AGI, I see that more as an existential threat than an economic golden egg bailout.