piker•1h ago
That was dense but seemed nuanced. Anyone care to summarize for those of us who lack the mathematics nomenclature and context?
qsort•47m ago
I'm not claiming to be an expert, but more or less what the article says is this:
- Context: Terence Tao is one of the best mathematicians alive.
- Context: AlphaEvolve is an optimization tool from Google. It differs from traditional tools because the search is guided by an LLM, whose job is to mutate a program written in a normal programming language (they used Python). Hallucinations are not a problem because the LLM is only a part of the optimization loop. If the LLM fucks up, that branch is cut. (Toy sketch of the loop at the end of this comment.)
- They tested this over a set of 67 problems, including both solved and unsolved ones.
- They find that in many cases AlphaEvolve achieves similar results to what an expert human could do with a traditional optimization software package.
- The main advantages they find are: ability to work at scale; "robustness", i.e. no need to tune the algorithm to work on different problems; and better interpretability of results.
- Unsurprisingly, well-known problems likely to be in the training set quickly converged to the best known solution.
- Similarly unsurprisingly, the system was good at "exploiting bugs" in the problem specification. Imagine an underspecified unit test that the system would maliciously comply with. They note that it takes significant human effort to construct an objective function that can't be exploited in this way.
- They find the system doesn't perform as well on some areas of mathematics like analytic number theory. They conjecture that this is because those problems are less amenable to an evolutionary approach.
- In one case they could use the tool to very slightly beat an existing bound.
- In another case they took inspiration from an inferior solution produced by the tool to construct a better (entirely human-generated) one.
It's not doing the job of a mathematician by any stretch of the imagination, but to my (amateur) eye it's very impressive. Google is cooking.
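If it helps, here's a toy version of the loop I mean (entirely my own sketch, not Google's code): the "programs" just return a number, the objective is to land near pi, and the LLM is stubbed out with a random tweak so the whole thing actually runs.

    import random

    TARGET = 3.14159  # toy objective: evolve a program whose solve() returns ~pi

    def make_program(value):
        return f"def solve():\n    return {value!r}\n"

    def run_and_score(src):
        """Human-written scoring function: run the candidate, score its output.
        Anything that crashes or returns nonsense scores -inf, so that branch is cut."""
        try:
            ns = {}
            exec(src, ns)
            return -abs(ns["solve"]() - TARGET)
        except Exception:
            return float("-inf")

    def fake_llm_mutate(src):
        """Stand-in for the LLM. In AlphaEvolve the model rewrites real code;
        here a random numeric tweak keeps the sketch self-contained."""
        ns = {}
        exec(src, ns)
        return make_program(ns["solve"]() + random.gauss(0, 0.5))

    population = [make_program(0.0)]
    for _ in range(2000):
        parent = max(population, key=run_and_score)              # pick a strong parent
        population.append(fake_llm_mutate(parent))               # propose a mutated child
        population = sorted(population, key=run_and_score, reverse=True)[:10]  # cull the rest

    print(population[0])  # roughly "def solve():  return 3.14..."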
nsoonhui•35m ago
>> If the LLM fucks up, that branch is cut.
Can you explain more on this? How on earth are we supposed to know the LLM is hallucinating?
khafra•32m ago
Math is a verifiable domain. Translate a proof into Lean and you can check it in a non-hallucination-vulnerable way.
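For a toy illustration (my example, nothing from the paper): the Lean checker either accepts a statement's proof or it doesn't, regardless of how that proof was generated.

    -- A claim plus a proof the Lean kernel checks mechanically.
    theorem n_plus_zero (n : Nat) : n + 0 = n := rfl

    -- A "hallucinated" proof of a false claim simply fails to compile:
    -- theorem bad (n : Nat) : n + 1 = n := rfl   -- error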
tux3•26m ago
In this case AlphaEvolve doesn't write proofs, it uses the LLM to write Python code (or any language, really) that produces some numerical inputs to a problem.
They just try out the inputs on the problem they care about. If the code gives better results, they keep it around. They actually keep a few of the previous versions that worked well as inspiration for the LLM.
If the LLM is hallucinating nonsense, it will just produce broken code that gives horrible results, and that idea will be thrown away.
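The "previous versions as inspiration" part presumably looks something like this (the wording and names are my guess at the shape of it, not the actual AlphaEvolve prompt):

    def build_mutation_prompt(hall_of_fame):
        """hall_of_fame: list of (score, program_source) pairs kept from earlier rounds."""
        parts = ["Improve the program below. Earlier attempts and their scores:"]
        for score, src in sorted(hall_of_fame, reverse=True)[:3]:  # top few as inspiration
            parts.append(f"# score = {score}\n{src}")
        parts.append("Return a modified program that should score higher.")
        return "\n\n".join(parts)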
qsort•24m ago
We don't, but the point is that it's only one part of the entire system. If you have a (human-supplied) scoring function, then even completely random mutations can serve as a mechanism to optimize: you generate a bunch, keep the better ones according to the scoring function and repeat. That would be a very basic genetic algorithm.
The LLM serves to guide the search more "intelligently" so that mutations aren't actually random but can instead draw from what the LLM "knows".
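A bare-bones version of that idea, with completely random mutations and no LLM, just to show the skeleton:

    import random

    def fitness(xs):
        # human-supplied scoring function (toy: prefer coordinates close to 0.5)
        return -sum((x - 0.5) ** 2 for x in xs)

    def mutate(xs):
        # completely random mutation of one coordinate
        i = random.randrange(len(xs))
        ys = list(xs)
        ys[i] += random.gauss(0, 0.1)
        return ys

    population = [[random.random() for _ in range(8)] for _ in range(20)]
    for _ in range(500):
        children = [mutate(random.choice(population)) for _ in range(20)]
        population = sorted(population + children, key=fitness, reverse=True)[:20]

    # AlphaEvolve's twist, as I understand it: make the candidates programs rather than
    # lists of numbers, and replace mutate() with "ask an LLM to rewrite the candidate".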
energy123•6m ago
Google's system is like any other optimizer, where you have a scoring function, and you keep altering the function's inputs to make the scoring function return a big number.
The difference here is the function's inputs are code instead of numbers, which makes LLMs useful because LLMs are good at altering code. So the LLM will try different candidate solutions, then Google's system will keep the good ones and throw away the bad ones (colloquially, "branch is cut").
There seems to be zero reason for anyone to invest any time into learning anything besides trades anymore.
AI will be better than almost all mathematicians in a few years.
andrepd•25m ago
I'm very sorry for anyone with such a worldview.
quchao•8m ago
very nice~
tornikeo•3m ago
I love this. I think of mathematics as writing programs, but for brains. Not all programs are useful, and using AI to write the less useful ones would save us humans some of our limited time. Maybe someday AI will help make even more impactful discoveries?