The reason true singularity-level ASI will never happen isn't that the technology can't support it (though, to be clear: it can't). It's that no one actually wants it. The capital markets will take care of the rest.
Plenty of people want something capable of doing tasks well beyond what GPT-5 can do, at the level of a proficient human.
If you can do that cheaper, faster, or more readily than said skilled human, there is definitely a market for it.
As much as we hate on Meta, open models are the answer.
My point in being pedantic about this is to point out that an extreme amount of value could be generated by these systems if they only attained the capability to do the things 110-IQ humans already do today. Instead of optimizing toward that goal, the frontier labs seem obsessed with optimizing toward some other weird goal of hyper-intelligence with very unclear marketability. When pressed on this, leaders at these companies will say "it's transferable"; that hyper-intelligence will suddenly unlock realistic 110-IQ use cases; but we're increasingly seeing, quite clearly to me, that this isn't the case. And there are very good first-principles reasons to believe that it won't be.
But 180-IQ intelligence that doesn't sleep, doesn't stop, and has instant access to the world's knowledge will absolutely be able to revolutionise the sciences.
...Why would it not be intellect?
What we're missing is anything that actually resembles thought at anything more than the very surface level.
LLMs are not intelligent. They sound intelligent because they're being trained very well at predicting what tokens to produce to sound that way, but they have no intellect, no understanding, no world model, no consciousness—no intelligence.
I'm eagerly waiting for the next big breakthrough when someone trains a model that switches between "thinking" and "outputting" modes mid-answer :)
Except it seems like it's a worse Google, therapist, life coach, and programming mate, with the personality of someone who spends all their time trying to solve the Riemann hypothesis.
But the rest I agree with.
It's just a different tool than Google, and quite complementary in many ways: instead of doing many keyword searches to collect information, aggregating it into a mental model, and deriving a conclusion, it's easier to ask the LLM for the aggregated answer I need and then back-verify whether it makes sense. It has helped a lot that I don't need to know the right incantation of keywords for the information I'm trying to find right off the bat, and from the LLM's answer I can backtrace what searches to do to confirm its validity.
It's a small twist, but it has definitely shown me some value in these tools for more general information lookup.
The thing is: I still want and need to have a search engine. If for some reason search engines cease to exist and LLMs take their place, I will not be able to trust anything...
ASI isn't about what people want or don't want. It's an AGI that is able to self-improve at a rapid rate. We haven't built something that can self-improve continuously yet. It has nothing to do with whether people want it. Personally, I do want ASI.
We are investing literally hundreds of billions into something that looks more and more likely to flop than to succeed.
What scares me the most is that we are being steered into a sunk-cost fallacy. Industry will continue to claim it is just around the corner, more and more infrastructure will be built, and groundwater is already being rationed in certain places because AI datacenters apparently deserve priority.
Are we being forced into a situation where we are too invested in this to face the fact that it doesn't work and makes everything worse?
What is this capacity being built for? It no longer makes any sense.
Yes, but don't worry, they're getting close to the government so they'll get bailed out.
> What is this capacity being built for? It no longer makes any sense.
Give more power to Silicon Valley billionaires.
I'm just imagining in 2030 when there is an absolutely massive secondary market for used GPUs.
Which leaves Altman, frankly, looking increasingly awkward with his revolutionary proclamations. His company just isn't as cool as it used to be. Jobs-era Apple was arguably a cooler company.
Open-source and last year's models will be good enough for 99% of people. Very few are going to pay billions to use the absolute best model there is.
I don't think there's the kind of systemic risk you had in, say, 2008, is there? But I do think there is likely to be a "correction," to put it lightly.
And regardless of being great investments or not, all of those companies have a burning desire for accelerated depreciation to lower their effective tax rate, which data center spend offers.
The more bubbly names will likely come down to earth eventually, but the growth-stock sell-off we saw in '22, driven by the end of the zero-interest-rate environment, probably dwarfs it in scale. That was a true dot-com 2.0 bubble, with vaporware EV companies with nothing more than a render worth 10 billion, fake meat worth 10 billion, web chat worth 100 billion, 10-billion-dollar treadmill companies... space tourism, LIDAR... So many of those names have literally gone down 90 to 99%. It's odd to me that we don't look at that as what it was: a true dot-com bubble 2.0. The AI-related stuff looks tame in comparison to what we saw just a few years ago.
This seems unlikely, because if and when the bottom falls out, it seems implausible that it will be the sort of systemic shock that the financial crisis was, much less the Great Depression. Lots of people would lose lots of money, but you wouldn't expect the same degree of contagion.
Is it just me, or is anyone else worried about what will happen when the industry realizes LLMs are just not going to be the path to AGI? We are building nuclear power plants and massive datacenters that make our old datacenters look like toys.
Nothing will happen. LLMs even at GPT-5 intelligence, but scaled up significantly with higher context size, faster inference speed, and lower cost, would change the world. For example, I want to use an LLM system to track every promise a politician ever makes and see whether he or she actually kept it. No one is going to give me $1 billion to do this. But I think it would enhance society.
I don't need AGI for this idea. I think current LLMs are already more than good enough. But LLM prices are still way too high to do this cost-effectively. When inference is roughly as cheap as serving up a web page, LLMs will be ubiquitous.
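To make the cost argument concrete, here's a rough sketch of the kind of pipeline I mean. Everything in it is hypothetical: the model name, the prompt, and the statements.jsonl input format are placeholders, and it assumes the OpenAI Python client is installed with an API key in the environment.

```python
# Hypothetical sketch: extract checkable promises from politicians' public
# statements so they can be followed up on later. Model name, file names,
# and record format are made up for illustration.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def extract_promises(statement: str) -> list[str]:
    """Ask the model to pull concrete, verifiable promises out of one statement."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any cheap model would do
        messages=[
            {
                "role": "system",
                "content": "List every concrete, verifiable promise in the text, "
                           "one per line. Output nothing else.",
            },
            {"role": "user", "content": statement},
        ],
    )
    text = response.choices[0].message.content or ""
    return [line.strip("- ").strip() for line in text.splitlines() if line.strip()]


if __name__ == "__main__":
    # statements.jsonl: one {"speaker": ..., "date": ..., "text": ...} per line (hypothetical format)
    with open("statements.jsonl") as f, open("promises.jsonl", "w") as out:
        for line in f:
            record = json.loads(line)
            for promise in extract_promises(record["text"]):
                out.write(json.dumps({
                    "speaker": record["speaker"],
                    "date": record["date"],
                    "promise": promise,
                }) + "\n")
```

The code isn't the hard part; the hard part is running something like this continuously over every transcript, press release, and vote record for every politician, and later matching promises against outcomes. At today's per-token prices that volume of inference is what kills the idea.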
Please don't get me wrong, I am not trying to be sarcastic here. I would love to see a perspective – just any perspective – on how to get out of the current political situation. Not just in the US; in many other countries the same playbook is being followed by authoritarians with just as much success as in the US. So if you have material or some reasoning at hand for why more information for the population would make a difference to voting behaviour, I would be super interested. Thanks in advance!
I have a lot more ideas that are gated by context-size limits, inference speed, and price per token.
The bottom line is that we don't need AGI to change the world. Just more and cheaper compute.
How do you compare their vote record with the things they've said publicly?
You're describing a list. Why do you need GPU farms to create a list?
Yes, some people are concerned; see for example a recent Hank Green YouTube video:
https://www.youtube.com/watch?v=VZMFp-mEWoM
I'm probably more concerned than he is. A large market correction would be bad, but even scarier are the growing (in number and power) techno-religious folks who really think we are on the cusp of creating an electronic god, and who are willing to trash what's left of the planet in an attempt to birth that electronic god they think will magically save us.
It's just like the dot-com bubble: everyone was pumped that the Internet was going to take over the world. And even though the bubble popped, the Internet did eventually take over the world.
LLMs are not remotely comparable to the Internet in terms of their effect on ordinary people (unless you want to talk about their negative effects...).
People keep trotting out this comparison, and as time goes on it makes even less sense, as we come to see even more clearly just how false the promises of the AI bros are.
If progress stops with these models, we will be left with some interesting curiosities that can help with certain tasks, but are in no way worth the amount of resources it costs to run them en masse.
i don't see the sunk-cost fallacy angle, just the sunk costs. the capital allocators will absolutely shut off the spigot when they see that it isn't going to yield. yeah, there could be some dark data centers. not the end of the world, just another ai winter at worst - maybe a dot-com crash... whatever.
the world is way bigger than the tech echo chamber
If only
Who is? There'll be about 10 nuclear reactor construction starts this year, largely either replacing end-of-life reactors, or in China (which had a well-established nuclear build-out prior to AI stuff). Beyond media hype, there's little reason to think that the AI bubble is actually leading to nuclear reactors being built anywhere.
2) Maybe I'm biased because I'm using GPT-5 Pro for my coding, but so far it's been quite good. Normal thinking mode isn't substantially better than o3 IMO, but that's a limitation of data/search.
> As is true with a good many tech companies, especially the giants, in the AI age, OpenAI’s products are no longer primarily aimed at consumers but at investors
Like, I don’t care how much Sam Altman hyped up GPT-5, or how many dumb people fell for it. I figured it was most likely GPT-5 would be a good improvement over GPT-4, but not a huge improvement, because, idk, that’s mostly how these things go? The product and price are fine, the hype is annoying but you can just ignore that.
If you feel it’s hard to ignore it, then stop going on LinkedIn.
All I want to know is which of these things are useful, what they’re useful for, and what’s the best way to use them in terms of price/performance/privacy tradeoffs.
I don't know if it has improved at all lately, but for a while it seemed like every startup I could find was the same variation on "healthcare note-taking AI".
Speed.
ChatGPT 5 Thinking is So. Much. Slower. than o4-mini and o4-mini-high. Like between 5 and 10 times slower. Am I the only one experiencing this? I understand they were "mini" models, but those were the current-gen thinking models available to Pro. Is GPT 5 Thinking supposed to be beefier and more effective? Because the output feels no better.
AI bros at work, I guess, and criticism isn't allowed?
So I should probably write "All hail OpenAI, hail Hydra"?
Also, the fact that gpt-5-nano can't handle the work that gpt-4o-mini could (and is somehow slower?) is surprising and bad, but they'd really painted themselves into a corner with the version numbers.
what are you basing that on?