If it just found existing solutions then they obviously weren't "previously unsolved" so the tweet is wrong.
He clearly misunderstood the situation and jumped to the conclusion that GPT-5 had actually solved the problems because that's what he wanted to believe.
That said, the misunderstanding is understandable, because the tweet he was responding to said the problems had been listed as "open". Still, an LLM solving unsolved Erdős problems by itself would be such a big deal that he probably should have double-checked it.
But yes, as edge case handlers, humans still have an edge.
It's not obvious to me that they're better at admitting their mistakes. Part of being good at admitting mistakes is recognizing when you haven't made one. That humans tend to lean too far in that direction shouldn't suggest that the right amount of that behavior is... less than zero.
They fed internet data into that shit; they basically "told" the LLM to behave because, surprise surprise, humans can sometimes be nastier.
(Yes, not everyone, but we do have some mechanisms to judge or encourage)
This claim is ambiguous. The use of the word "Humans" here obscures rather than clarifies the issue. Individual humans typically do not "hallucinate" constantly, especially not on the job. Any individual human who is as bad at their job as an LLM should indeed be replaced, by a more competent individual human, not by an equally incompetent LLM. This was true long before LLMs were invented.
In the movie "Bill and Ted's Excellent Adventure," the titular characters attempt to write a history report by asking questions of random strangers in a convenience store parking lot. This of course is ridiculous and more a reflection of the extreme laziness of Bill and Ted than anything else. Today, the lazy Bill and Ted would ask ChatGPT instead. It's equally ridiculous to defend the wild inaccuracy and hallucinations of LLMs by comparing them to average humans. It's not the job of humans to answer random questions on any subject.
Human subject matter experts are not perfect, but they're much better than average and don't hallucinate on their subjects. They also have accountability and paper trails, and unlike LLMs they can be individually discredited for gross misconduct.
Worst case (more probable): Lying
Works for Elon.
Off topic, but I saw The Onion on sale in the magazine rack of Barnes and Noble last month.
For those who miss when it was a free rag in sidewalk newsstands, and don't want to pony up for a full subscription, this is an option.
But it's only a matter of time before AI gets better at prompt engineering.
/s?
The inevitable collapse could be even more devastating than the 2008 financial crisis.
All while vast resources are being wasted on non-verifiable gen AI slop, while real approaches (neuro-symbolic ones like DeepMind's AlphaFold) are mostly ignored financially because they don't generate the quick stock market gains that hype does.
2008 was a systemic breakdown rippling through the foundations of the financial system.
It would lead to a market crash (80% of gains this year were big tech/AI) and likely a full recession in the US, but nothing nearly as dramatic as a global systemic crisis.
In contrast to the dot com bubble, the huge AI spending is also concentrated on relatively few companies, many with deep pockets from other revenue sources (Google, Meta, Microsoft, Oracle), and the others are mostly private companies that won't have massive impact on the stock market.
A sudden stop in the AI craze would be hard for hardware companies and a few big AI-only startups, but the financial fallout would be much more contained than either dot com or 2008.
An AI bust would take stock prices down a good deal, but the stock gains have been relatively moderate. Year on year: Microsoft +14%, Meta +24%, Google +40%, Oracle +60%, ... And a notable chunk of those gains has indirectly come from the dollar devaluing.
Nvidia would be hit much harder of course.
There are a good number of smaller AI startups, but a lot of the AI development is concentrated in the big dogs; it's not nearly as systemic as dot com, where a lot of businesses went under completely.
And even with an AI freeze, there is plenty of value and usage there already that will not go away, but will keep expanding (AI chat, AI coding, etc) which will mitigate things.
Well, an enormous amount of debt is being raised and issued for AI and US economic growth is nearly entirely AI. Crypto bros showed the other day that they were leveraged to the hilt on coins and it wouldn't surprise me if people are the same way on AI. It is pretty heavily tied to the financial system at this point.
If the stock market crashes, there's lots of talk about how wealth and debt are interlinked. Could the crash be general enough to trigger calls on debt backed by stocks?
My recollection of 2008 is that we didn't learn how bad it was until after. The tech companies have been so desperate for a win, I wonder if some of them are over their skis in some way, and if there are banks that are risking it all on AI. (We know some tech bros think the bet on AI is a longtermist-style bet, closer to religion than reason, and that it's worth risking everything because the payback could be in the hundreds of trillions.)
Combine this with the fact that AI is like what - 30% of the US economy? Magnificent 7 are 60%?
What happens if sustainable P/E ratios in tech collapse? Does it take out Tesla?
Maybe the contagion is just the impact on the US economy, which, classically anyway, has been intermingled with everything.
I would bet almost everything that there is some lie at the center of this thing that we aren’t really aware of yet.
The US admin has been (almost desperately) trying to prop up markets and an already struggling economy. If it wasn't AI, it could have been another industry.
I think AI is more of a sideshow in this context. The bigger story is the dollar losing its dominant position, money draining out into gold/silver/other stock markets, India buying oil from Russia in yen, a global economy that has for years been propped up by government spending (US/China/Europe/...), large and lasting geopolitical power balance shifts, ...
These things don't happen overnight, but the effects are compounding.
Some of the above (dollar devaluation) is actually what the current admin wanted, which I would see as an admission of global shifts. We might see much larger changes to the whole financial system in the coming decades, which will have a lot of effects.
Nowhere close. US GDP is about $30 trillion. OpenAI revenue is ~$4 billion. All the other AI companies' revenue might amount to $10 billion at most, and that is being generous. $10 billion / $30 trillion is roughly 0.03%, not even 1%.
You are forgetting all those "boring" sectors that form the basis of economies, like agriculture and energy. They have always been bigger than the tech sector at any point, but they aren't "sexy" because they don't offer the potential "exponential growth" that tech companies promise.
Due to exorbitant privilege, with the dollar as the only currency that matters, every country that trades with America is swapping goods and services for 'bits of green paper'. Unless buying oil from Russia, these bits of green paper are needed to buy oil. National currencies and the Euro might as well be casino chips, mere proxies for dollars.
Just last week the IMF issued a warning regarding AI stocks and the risk they pose to the global economy if promises are not delivered.
With every hiccup, whether that be the dot com boom, 2008 or the pandemic, the way out is to print more money, with this money going in at the top, for the banks, not the masses. This amounts to devaluation.
When the Ukraine crisis started, the Russian President stopped politely going along with Western capitalism and called the West out for printing too much money during the pandemic. Cut off from SWIFT and with many sanctions, Russia started trading in other currencies with BRICS partners. We are now at a stage of the game where the BRICS countries, of which there are many, already have a backup plan for when the next US financial catastrophe happens. They just won't use the dollar anymore. Note that currently, China doesn't want any dollars making it back to its own economy, since that would cause inflation. So they invest their dollars in Belt and Road initiatives, keeping those green bits of paper safely away from China. They don't even need exports to the USA or Europe since they have a vast home market to develop.
Note that Russia's reserve of dollars and euros was confiscated. They have nothing to lose so they aren't going to come back into the Western financial system.
Hence, you are right. A market crash won't be a global systemic crisis; it just means that Shanghai becomes the financial capital of the world, with no money printing unless it is backed by mineral, energy, or other resources that have tangible value. This won't be great for the collective West, but pretty good for the rest of the world.
I just think that effects of the AI bubble bursting would be at most a symptom or trigger of much larger geopolitical and financial shifts that would happen anyway.
The first is how much of the capital expenditures are being fueled by debt that won't be repaid, and how much that unpaid debt harms lending institutions. This is fundamentally how a few bad debts in 2008 broke the entire financial system: bad loans felled Lehman Brothers, which caused one money market fund to break the buck, which spurred a massive exodus from the money markets rather literally overnight.
The second issue is the psychological impact of 40% of market value just evaporating. A lot of people have indirect exposure to the stock market and these stocks in particular (via 401(k)s or pensions), and seeing that much of their wealth evaporate will definitely have some repercussions on consumer confidence.
Solving problems that humanity couldn't solve would be super-AGI or something like that. It's clearly not there yet.
They're good at the Turing test. But that only marks them as indistinguishable from humans in casual conversation. They are fantastic at that. And a few other things, to be clear. Quick comprehension of an entire codebase for fast queries is horribly useful. But they are a long way from human-level general intelligence.
Another case of culture flowing from the top I guess.
1) What good is your open problem set if it's really a trivial "Google search" away from being solved? Why are they not catching any blame here?
2) These answers still weren't perfectly laid out for the most part. GPT-5 was still doing some cognitive lifting to piece it together.
If a human had done this by hand, it would have made news, and the narrative would instead have been inverted: asking serious questions about the validity of some of these kinds of problem sets, and/or asking how many other solutions are out there that just need to be pieced together from pre-existing research.
But, you know, AI Bad.
They are a community-run database, not the sole arbiter and source of this information. We learned the most basic research skills back in high school; I'd hope researchers from top institutions, now working for one of the biggest frontier labs, could do the same prior to making a claim. But microblogging has been, and continues to be, a blight on accurate information, so nothing new there.
> GPT-5 was still doing some cognitive lifting to piece it together.
Cognitive lifting? It's a model, not a person, but besides that fact, this was already published literature. Handy that an LLM can be a slightly better search, but calling claims of "solving maths problems" out as irresponsible and inaccurate is the only right choice in this case.
> If a human would have done this by hand it would have made news [...]
"Researcher does basic literature review" isn't news in this or any other scenario. If we did a press release every journal club, there wouldn't be enough time to print a single page advert.
> [...] how many other solutions are out there that just need pieced together from pre-existing research [...]
I am not certain you actually looked into the model output or why this was such an embarrassment.
> But, you know, AI Bad.
AI hype very bad. AI anthropomorphism even worse.
Please explain how this is in any way related to the matter at hand. What is the relation between the incompleteness of a math problem database and AI hypesters lying about the capabilities of GPT-5? I fail to see the relevance.
> If a human would have done this by hand it would have made news
If someone updated information on an obscure math problem aggregator database this would be news?? Again, I fail to see your point here.
The real problem here is that there's clearly a strong incentive for the big labs to deceive the public (and/or themselves) about the actual scientific and technical capabilities of LLMs. As Karpathy pointed out on the recent Dwarkesh podcast, LLMs are quite terrible at novel problems, but this has become sort of an "Emperor's new clothes" situation where nobody with a financial stake will actually admit that, even though it's common knowledge if you actually work with these things.
And this directly leads to the misallocation of billions of dollars and potentially trillions in economic damage as companies align their 5-year strategies towards capabilities that are (right now) still science fiction.
The truth is at stake.
Edit: we are in peak damage control phase of the hype cycle.
Definitely not anti-AI here. I think I have been disappointed though, since then, to slowly learn that they're (still) little beyond that.
Still amazing though. And better than a Google search (IMHO).
That seems out of character for him - more like something I'd expect from Elon Musk. What's the context I'm missing?
I remember a public talk where he was on stage with some young researcher from MS. (I think it was one of the authors of the "Sparks of AGI" GPT-4 paper, but not sure.)
Anyway, throughout that talk he kept talking over the guy and didn't seem to listen, even though he obviously hadn't tried the "raw", "unaligned" model that the folks at MS were talking about.
And he made 2 big claims:
1) LLMs can't do math. He went on to "argue" that LLMs trick you with poetry that sounds good, but is highly subjective, and when tested on hard verifiable problems like math, they fail.
2) LLMs can't plan.
Well, merely one year later, here we are. AIME is saturated (with tool use), gold at IMO, and current agentic uses clearly can plan (and follow up with the plan, re-write parts, finish tasks, etc etc).
So, yeah, I'd take everything any one singular person says with a huge grain of salt. No matter how brilliant said individual is.
PS: Just so we're clear: formal planning in AI ≠ making a coding plan in Cursor.
https://x.com/SebastienBubeck/status/1977181716457701775:
> gpt5-pro is superhuman at literature search:
> it just solved Erdos Problem #339 (listed as open in the official database https://erdosproblems.com/forum/thread/339) by realizing that it had actually been solved 20 years ago
https://x.com/MarkSellke/status/1979226538059931886:
> Update: Mehtaab and I pushed further on this. Using thousands of GPT5 queries, we found solutions to 10 Erdős problems that were listed as open: 223, 339, 494, 515, 621, 822, 883 (part 2/2), 903, 1043, 1079.
It's clearly talking about finding existing solutions to "open" problems.
The main mistake is by Kevin Weil, OpenAI CTO, who misunderstood the tweet:
https://x.com/kevinweil/status/1979270343941591525:
> you are totally right—I actually misunderstood @MarkSellke's original post, embarrassingly enough. Still very cool, but not the right words. Will delete this since I can't edit it any longer I think.
Obviously embarrassing, but a completely overblown reaction. Just another way for people to dunk on OpenAI :)
He, more than anyone else, should be able to, for one, parse the original statements correctly and, for another, realize that if one of their models had accomplished what he seemed to think GPT-5 had, it would warrant some more scrutiny and research before posting. That would have, after all, been a clear and incredibly massive development for the space, something the CTO of OpenAI should recognize instantly.
The number of people who told me this is clear and indisputable proof that AGI/ASI/whatever is either around the corner or already here is far more than zero, and arguing against their misunderstanding was made all the more challenging because "the CTO of OpenAI knows more than you" is quite a solid appeal to authority.
I'd recommend maybe a waiting period of 48h before any authority in any field can send a tweet; that might resolve some of the inaccuracies and the incredibly annoying urge to just jump on wild bandwagons...
My boss always used to say “our only policy is, don’t be the reason we need to create a new policy”. I suspect OpenAI is going to have some new public communication policies going forward.
The deleted tweet that the article is about said "GPT-5 just found solutions to 10 (!) previously unsolved Erdös problems, and made progress on 11 others. These have all been open for decades." If it had been posted stand-alone then I would certainly agree that it was misleading, but it was not.
It was a quote-tweet of this: https://x.com/MarkSellke/status/1979226538059931886?t=OigN6t..., where the author is saying he's "pushing further on this".
The "this" in question is what this second tweet is in turn quote-tweeting: https://x.com/SebastienBubeck/status/1977181716457701775?t=T... -- where the author says "gpt5-pro is superhuman at literature search: [...] it just solved Erdos Problem #339 (listed as open in the official database erdosproblems.com/forum/thread/3…) by realizing that it had actually been solved 20 years ago"
So, reading the thread in order, you get
* SebastienBubeck: "GPT-5 is really good at literature search, it 'solved' an apparently-open problem by finding an existing solution"
* MarkSellke: "Now it's done ten more"
* kevinweil: "Look at this cool stuff we've done!"
I think the problem here is the way quote-tweets work -- you only see the quoted post and not anything that it in turn is quoting. Kevin Weil had the two previous quotes in his context when he did his post and didn't consider the fact that readers would only see the first level, so they wouldn't have Sébastien Bubeck's post in mind when they read his. That seems like an easy mistake to make entirely honestly, and I think the pile-on is a little unfair.
Previously unsolved. The context doesn't make that true, does it?
No, Weil said he himself misunderstood Sellke's post[1].
Note Weil's wording (10 previously unsolved Erdos problems) vs. Sellke's wording (10 Erdos problems that were listed as open).
Survivor bias.
I can assure you that GPT-5 fucks up even relatively easy searches. I need to have a very good idea of what the result should look like, and the ability to test it, to be able to use any result from GPT-5.
If I throw the dice 1000 times and post about it each time I get a double six, am I the best dice thrower there is?
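To put rough numbers on that analogy, here's a minimal sketch (purely illustrative, not from the thread): with a 1-in-36 chance per roll, a completely unskilled thrower still racks up plenty of double sixes to post about.

    import random

    # Roll a pair of dice 1000 times; P(double six) = 1/36, so ~28 hits
    # are expected with no skill involved at all.
    rolls = 1000
    hits = sum(
        1 for _ in range(rolls)
        if random.randint(1, 6) == 6 and random.randint(1, 6) == 6
    )
    print(f"double sixes in {rolls} rolls: {hits} (expected ~{rolls / 36:.0f})")

Posting only those ~28 rolls, and staying quiet about the other ~972, is the survivorship-bias point being made above.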
Imagine if you were talking about your own work online, you make an honest mistake, then the whole industry roasts you for it.
I’m so tired of hearing everyone take stabs at people at OpenAI just because they don’t personally like sama or something.
No, it does not. It only produces a highly convincing counterfeit. I am honestly happy for people who are satisfied with its output: life is way easier for them than for me. Obviously, the machine discriminates against me personally. When I spend hours in the library looking for some engineering-related math from the 70s-80s, as a last resort I can try this gamble with the chat, hoping for any tiny clue to answer my question. And then for the following hours, I am trying to understand what is wrong with the chat's output. Most often, I experience the "it simply can't be" feeling, and I know I am not the only one having it.
* OpenAI researchers claimed or suggested that GPT-5 had solved unsolved math problems, but in reality, the model only found known results that were unfamiliar to the operator of erdosproblems.com.
* Mathematician Thomas Bloom and DeepMind CEO Demis Hassabis criticized the announcement as misleading, leading the researchers to retract or amend their original claims.
* According to mathematician Terence Tao, AI models like GPT-5 are currently most helpful for speeding up basic research tasks such as literature review, rather than independently solving complex mathematical problems.
So GPT-5 didn't derive anything itself - it was just an effective search engine for prior research, which is useful, but not any sort of breakthrough whatsoever.