Quanta published an article about a physics lab asking ChatGPT to help come up with a way to perform an experiment, and ChatGPT _magically_ came up with an answer worth pursuing. But what actually happened was that ChatGPT was referencing papers from lesser-known labs/researchers that had basically gone unread.
It's amazing that ChatGPT can do something like that, but `referencing data` != `deriving theorems`. The person posting this shouldn't just claim "ChatGPT derived a better bound" in a proof; they should first do a really thorough check of whether this information could have just ended up in the training data.
Which is actually huge. Reviewing and surfacing all the relevant research out there that we are just not aware of would likely have at least as much impact as some truly novel thing that it can come up with.
now let's invalidate probably 70% of all patents
If LLMs aren't being used by https://patents.stackexchange.com/ or patent troll fighters, shame on them.
Context: https://x.com/GeoffLewisOrg/status/1945864963374887401
The paper in question is an arxiv preprint whose first author seems to be an undergraduate. The theorem in it which GPT improves upon is perfectly nice, there are thousands of mathematicians who could have proved it had they been inclined to. AI has already solved much harder math problems than this.
Of course, because I am a selfish person, I'd say I appreciate most his work on convex body chasing (see "Competitively chasing convex bodies" on the Wikipedia link), because it follows up on some of my work.
Objectively, you should check his conference submission record, it will be a huge number of A*/A CORE rank conferences, which means the best possible in TCS. Or the prizes section on Wikipedia.
https://x.com/ErnestRyu/status/1958408925864403068?t=QmTqOcx...
aka the Grothendieck prime!
A few things to consider:
1. This is one example. How many other attempts did the person make that failed to be useful, accurate, or coherent? The author is an OpenAI employee IIUC, which raises this question. Sora's demos were amazing until you tried it and realized it took 50 attempts to get a usable clip.
2. The author noted that humans had updated their own research in April 2025 with an improved solution. For cases where we detect signs of superior behavior, we need to start publishing the thought process (reasoning steps, inference cycles, tools used, etc.). Otherwise it's impossible to know whether this used a specialty model, had access to the more recent paper, or otherwise got lucky. Without detailed proof it's becoming harder to separate legitimate findings from marketing posts (not suggesting this specific case was a pure marketing post).
3. Points 1 and 2 would help with reproducibility, which is important for scientific rigor. If we give Claude the same tools and inputs, will it perform just as well? This would help the community understand whether GPT-5 is novel, or whether the novelty is in how the user is prompting it.
How many times did a stochastic parrot, by pure chance, arrange words into an order that made up a new proof?
And why should a stochastic parrot get any credit?
High chance, given that this is the same guy who came up with the SVG unicorn ("Sparks of AGI"), which raises the same question even more obviously.
If you could combine this with automated theorem proving, it wouldn't matter if it was right only 1 time out of 1000.
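To make that concrete, here's a minimal Lean 4 sketch (a hypothetical illustration, not something from the article): a candidate proof either type-checks or it doesn't, so the 999 wrong attempts get filtered automatically and only cost compute.

```lean
-- Hypothetical illustration: a candidate proof an LLM might emit.
-- The Lean kernel either accepts it or rejects it; there is no
-- "plausible but wrong" middle ground, so failed attempts never
-- contaminate the results.
theorem candidate (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```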
The entire field of math is fractal-like. There is low-hanging fruit everywhere. Much of it is rote and not life-changing. A big part of doing “interesting” math is picking what to work on.
A more important test is to give an AI access to the entire history of math and have it _decide_ what to work on, and then judge it for both picking an interesting problem and finding a novel solution.
But whether this is a good example of that is a separate question. I think there is a certain dishonesty in the tagline: "I asked a computer to improve on the state-of-the-art and it did!" With a buried footnote that the benchmark wasn't actually state-of-the-art, and that an improved solution was already known (albeit structured a bit differently).
When you're solving already-solved problems, it's hard to avoid bias, even just in how you ask the question and otherwise nudge the model. I see it a lot in my field: researchers publish revolutionary results that, upon closer inspection, work only for their known-outcome test cases and not much else.
Another piece of info we're not getting: why this particular, seemingly obscure problem? Is there something special about it, or is it data dredging (i.e., we tried 1,000 papers and this is the only one where it worked)?
I’m absolutely confident that AI/LLMs can solve things, but you have to sift through a lot of crap to get there. Even further, it seems AI/LLMs tend to solve novel problems in very unconventional ways. It can be very hard to know if an attempt is doomed, or just one step away from magic.
But similarly to how a computer plays chess, using heuristics to narrow down a vast search space into tractable options, LLMs have the potential to be a smarter way to narrow that search space to find proofs. The big question is whether these heuristics are useful enough, and the proofs they can find valuable enough, to make it worth the effort.
If LLMs were already a breakthrough in proving theorems, even for obscure minor theorems, there would be a massive increase in published papers due to publish or perish academic incentives.
Our chemists were split: some argued it was an artifact, others dug deep and provided some reasoning as to why the generations were sound. Keep in mind, that was a non-reasoning, very early stage model with simple feedback mechanisms for structure and molecular properties.
In the wet lab, the model turned out to be right. That was five years ago. My point is, the same moment that arrived for our chemists will be arriving soon for theoreticians.
https://www.economist.com/science-and-technology/2025/07/02/...
My understanding is that iterating on possible sequences (of codons, base pairs, etc.) is exactly what LLMs, these feedback-looped predictor machines, are especially great at. The newest models, those that "reason about" (check) their own output, are even better at it.
For instance, you can put a thousand temperature sensors in a room, which give you 1000 temperature readouts. But all these temperature sensors are correlated, and if you project them down to a latent space (using PCA or PLS if linear, projection to manifolds if nonlinear) you’ll create maybe 4 new latent variables (usually linear combinations of the original variables) that describe all the sensor readings (it’s a kind of compression). All you have to do then is control those 4 variables, not 1000.
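A minimal sketch of that idea in Python (synthetic numbers, purely illustrative): 1000 correlated readouts driven by 4 hidden factors collapse back down to roughly 4 principal components.

```python
# Sketch only: 1000 correlated "sensor" readouts driven by 4 hidden factors.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 4))           # the 4 real drivers of the room
mixing = rng.normal(size=(4, 1000))          # each sensor mixes those drivers
readings = latent @ mixing + 0.05 * rng.normal(size=(500, 1000))

pca = PCA(n_components=0.99)                 # keep 99% of the variance
scores = pca.fit_transform(readings)
print(scores.shape)                          # ~(500, 4): control these, not 1000 sensors
```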
In the chemical space, there are thousands of possible combinations of process conditions and mixtures that produce certain characteristics, but when you project them down to latent variables, there are usually fewer than 10 variables that give you the properties you want. So if you want to create a new chemical, all you have to do is target those few variables. You want a new product with particular characteristics? Figure out how to get < 10 variables (not 1000s) to their targets, and you have a new product.
There are also nonlinear techniques. I’ve used UMAP and it’s excellent (particularly if your data approximately lies on a manifold).
https://umap-learn.readthedocs.io/en/latest/
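For reference, a rough usage sketch with the umap-learn package linked above (the data here is made up):

```python
# Sketch only: embed made-up high-dimensional data into 2D with UMAP.
import numpy as np
import umap

X = np.random.rand(1000, 50)                     # 1000 samples, 50 features
reducer = umap.UMAP(n_components=2, n_neighbors=15, random_state=42)
embedding = reducer.fit_transform(X)             # shape (1000, 2)
```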
The most general purpose deep learning dimensionality reduction technique is of course the autoencoder (easy to code in PyTorch). Unlike the above, it makes very few assumptions, but this also means you need a ton more data to train it.
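Something along the lines of this minimal PyTorch sketch (layer sizes and the training loop are arbitrary placeholders, not a recipe):

```python
# Sketch only: compress 1000-dimensional inputs to a 4-dimensional code.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=1000, n_latent=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent code
        return self.decoder(z)       # reconstruction

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 1000)           # stand-in for real measurements
for _ in range(100):                 # minimize reconstruction error
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
```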
t-SNE is good for visualization and for seeing class separation, but I haven’t found it to work well for dimensionality reduction per se (maybe I’m missing something). For me, it’s more of a visualization tool.
On that note, there’s a new algorithm that improves on t-SNE called PaCMAP, which preserves local and global structure better. https://github.com/YingfanWang/PaCMAP
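Usage looks much like UMAP's; a sketch loosely following the repository's README (double-check parameter names against the repo):

```python
# Sketch only: 2D embedding with PaCMAP, mirroring the UMAP example above.
import numpy as np
import pacmap

X = np.random.rand(1000, 50)
embedding = pacmap.PaCMAP(n_components=2).fit_transform(X, init="pca")
```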
https://www.biorxiv.org/content/10.1101/2025.05.08.652944v1....
https://www.pnas.org/doi/10.1073/pnas.1611138113
You summarized it very well!
You never quite know.
Right now, it's mostly the former. I fully expect the latter to become more and more common as the performance of AI systems improves.
Wouldn't that mean the fall of US pharmaceutical conglomerates, based on current laws about copyright and AI-generated content?
Similarly for physicists: I think there’s a very confusing/unconventional antenna called the “evolved antenna” which was used on a NASA spacecraft. The design came out of genetic programming. The science of why the antenna’s bends in different places increase gain is still not well understood today.
This all boils down to empirical reasoning, which underlies the vast majority of science (or science-adjacent fields like software engineering, the social sciences, etc.).
The question, I guess, is: do LLMs, “AI”, and ML give us better hypotheses or tests to run to support empirical, evidence-based scientific breakthroughs? The answer is yes.
Will these be substantial, meaningful or create significant improvements on today’s approaches?
I can’t wait to find out!
GPT-5 (and other LLMs) are by definition language models, and though they will happily spew tokens about whatever you ask, they don't necessarily have the training data to properly encode the latent space of (e.g.) drug interactions.
Confusing these two concepts could be deadly.
But yes, it's getting better and better.
On the other hand, I have a collection of unpublished results in less active fields that I’ve tested every frontier model on (publicly accessible and otherwise), and each time the models have failed to solve them. Some of these are simply reformulations of results in the literature that the models are unable to find/connect, which is what leads me to frame this as a search problem where the space isn't densely populated enough (in terms of activity in these subfields).
There are a few master's-level publishable research problems that I have tried with LLMs in thinking mode, and the model produced a nearly complete proof before we had a chance to publish. Like the problem stated here, these won't set the world on fire, but they do chip away at more meaningful things.
It often doesn't produce a completely correct proof (whether it nails one is a matter of luck), but it very often does enough that even a less competent student can fill in the blanks and fix up the errors. After all, the hardest part of a proof is knowing which tools to employ, especially when those tools can be esoteric.