If it just found existing solutions then they obviously weren't "previously unsolved" so the tweet is wrong.
He clearly misunderstood the situation and jumped to the conclusion that GPT-5 had actually solved the problems because that's what he wanted to believe.
That said, the misunderstanding is understandable: the tweet he was responding to said the problems had been listed as "open". Still, solving unsolved Erdős problems would be such a big deal that he probably should have double-checked.
Worst case (more probable): Lying
The inevitable collapse could be even more devastating than the 2008 financial crisis.
All while vast resources are being wasted on non-verifiable gen AI slop, while real approaches (neuro-symbolic ones, like DeepMind's AlphaFold) are largely ignored financially because they don't generate the quick stock-market gains that hype does.
2008 was a systemic breakdown rippling through the foundations of the financial system.
It would lead to a market crash (80% of gains this year were big tech/AI) and likely a full recession in the US, but nothing nearly as dramatic as a global systemic crisis.
In contrast to the dot-com bubble, the huge AI spending is also concentrated in relatively few companies, many with deep pockets from other revenue sources (Google, Meta, Microsoft, Oracle), and the rest are mostly private companies whose failure wouldn't have a massive impact on the stock market.
A sudden stop in the AI craze would be hard on hardware companies and a few big AI-only startups, but the financial fallout would be much more contained than either dot-com or 2008.
Solving problems that humanity couldn't solve would be super-AGI or something like it. We're clearly not there yet.
Another case of culture flowing from the top I guess.
1) What good is your open problem set if it's really just a trivial Google search away from being solved? Why aren't they catching any blame here?
2) These answers still weren't perfectly laid out, for the most part. GPT-5 was still doing some cognitive lifting to piece them together.
If a human had done this by hand, it would have made news, and the narrative would instead have been inverted: asking serious questions about the validity of some of these problem sets, and/or asking how many other solutions are out there that just need to be pieced together from pre-existing research.
But, you know, AI Bad.
amelius•37m ago
* OpenAI researchers claimed or suggested that GPT-5 had solved unsolved math problems, but in reality, the model only found known results that were unfamiliar to the operator of erdosproblems.com.
* Mathematician Thomas Bloom and DeepMind CEO Demis Hassabis criticized the announcement as misleading, prompting the researchers to retract or amend their original claims.
* According to mathematician Terence Tao, AI models like GPT-5 are currently most helpful for speeding up basic research tasks such as literature review, rather than independently solving complex mathematical problems.