piker•54m ago
That was dense but seemed nuanced. Anyone care to summarize for those of us who lack the mathematics nomenclature and context?
qsort•30m ago
I'm not claiming to be an expert, but more or less what the article says is this:
- Context: Terence Tao is one of the best mathematicians alive.
- Context: AlphaEvolve is an optimization tool from Google. It differs from traditional tools because the search is guided by an LLM, whose job is to mutate a program written in a normal programming language (they used Python). Hallucinations are not a problem because the LLM is only a part of the optimization loop. If the LLM fucks up, that branch is cut.
- They tested this over a set of 67 problems, including both solved and unsolved ones.
- They find that in many cases AlphaEvolve achieves similar results to what an expert human could do with a traditional optimization software package.
- The main advantages they find are: the ability to work at scale; "robustness", i.e. no need to tune the algorithm to work on different problems; and better interpretability of results.
- Unsurprisingly, well-known problems likely to be in the training set quickly converged to the best known solution.
- Similarly unsurprisingly, the system was good at "exploiting bugs" in the problem specification. Imagine an underspecified unit test that the system would maliciously comply with (there's a toy example at the end of this comment). They note that it takes significant human effort to construct an objective function that can't be exploited in this way.
- They find the system doesn't perform as well on some areas of mathematics like analytic number theory. They conjecture that this is because those problems are less amenable to an evolutionary approach.
- In one case they could use the tool to very slightly beat an existing bound.
- In another case they took inspiration from an inferior solution produced by the tool to construct a better (entirely human-generated) one.
It's not doing the job of a mathematician by any stretch of the imagination, but to my (amateur) eye it's very impressive. Google is cooking.
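A toy illustration of the "exploiting bugs" point above (made up by me, not from the paper): suppose the scoring function only checks that the output is in non-decreasing order, when what we actually wanted was a sort of the input.

    # Toy, underspecified objective: we *meant* "sort the input list",
    # but we only check that the output is non-decreasing.
    def underspecified_score(sort_fn):
        data = [5, 3, 1, 4, 2]
        out = sort_fn(data)
        is_non_decreasing = all(a <= b for a, b in zip(out, out[1:]))
        return 1.0 if is_non_decreasing else 0.0

    # An evolved "solution" that maliciously complies: it never sorts anything,
    # it just returns an empty list, which is trivially non-decreasing.
    exploit = lambda data: []
    print(underspecified_score(exploit))  # 1.0 -- full marks without solving the problem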
nsoonhui•18m ago
>> If the LLM fucks up, that branch is cut.
Can you explain more about this? How on earth are we supposed to know the LLM is hallucinating?
khafra•15m ago
Math is a verifiable domain. Translate a proof into Lean and you can check it in a way that isn't vulnerable to hallucination.
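To make "verifiable" concrete, here is a minimal Lean 4 example (a toy statement of my own, nothing from the article): the kernel either accepts the proof term or rejects it, so a plausible-sounding but wrong proof simply doesn't compile.

    -- Toy theorem: the proof term is checked mechanically by Lean's kernel.
    -- A hallucinated proof term would be rejected at compile time.
    theorem sum_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b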
tux3•9m ago
In this case AlphaEvolve doesn't write proofs, it uses the LLM to write Python code (or any language, really) that produces some numerical inputs to a problem.
They just try out the inputs on the problem they care about. If the code gives better results, they keep it around. They actually keep a few of the previous versions that worked well as inspiration for the LLM.
If the LLM is hallucinating nonsense, it will just produce broken code that gives horrible results, and that idea will be thrown away.
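A rough sketch of that loop, with everything problem-specific made up for illustration (the real objectives in the paper are far more involved): each candidate program is simply executed, broken code gets the worst possible score, and only the top few survive as inspiration for the next round.

    # Toy stand-in for the real objective: score a candidate list of numbers
    # by how close its sum is to a target value.
    def objective(numbers):
        return -abs(sum(numbers) - 100)

    def score(candidate_source):
        """Run a candidate program and score its output.
        Broken or nonsensical code just gets the worst possible score."""
        try:
            namespace = {}
            exec(candidate_source, namespace)  # assumed convention: the candidate defines solve()
            return objective(namespace["solve"]())
        except Exception:
            return float("-inf")  # hallucinated/broken code scores as badly as possible

    def keep_best(pool, candidate_source, k=4):
        """Keep only the k best-scoring programs as inspiration for the LLM."""
        pool = pool + [(score(candidate_source), candidate_source)]
        pool.sort(key=lambda entry: entry[0], reverse=True)
        return pool[:k]

    pool = keep_best([], "def solve(): return [40, 60]")  # scores 0, the best possible here
    pool = keep_best(pool, "def solve(): return oops")    # raises NameError when run -> -inf, ranked last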
qsort•7m ago
We don't, but the point is that it's only one part of the entire system. If you have a (human-supplied) scoring function, then even completely random mutations can serve as a mechanism to optimize: you generate a bunch, keep the better ones according to the scoring function and repeat. That would be a very basic genetic algorithm.
The LLM serves to guide the search more "intelligently" so that mutations aren't actually random but can instead draw from what the LLM "knows".
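For intuition, here's a minimal toy version of that basic genetic algorithm (a made-up string-matching problem, not from the paper): the scoring function is human-supplied, mutations are purely random, and the comment marks the spot where AlphaEvolve plugs in the LLM instead.

    import random

    TARGET = "optimize me"  # toy goal; in AlphaEvolve the "genome" is real program code

    def fitness(candidate):
        """Human-supplied scoring function: higher is better."""
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        """Completely random mutation. This is the part the LLM replaces,
        proposing edits informed by what it "knows" instead of blind changes."""
        i = random.randrange(len(candidate))
        letters = "abcdefghijklmnopqrstuvwxyz "
        return candidate[:i] + random.choice(letters) + candidate[i + 1:]

    def evolve(generations=2000, population_size=20):
        population = ["".join(random.choice("abcdefghijklmnopqrstuvwxyz ") for _ in TARGET)
                      for _ in range(population_size)]
        for _ in range(generations):
            # mutate copies of existing candidates, then keep only the best scorers
            children = [mutate(random.choice(population)) for _ in range(population_size)]
            population = sorted(population + children, key=fitness, reverse=True)[:population_size]
            if fitness(population[0]) == len(TARGET):
                break
        return population[0]

    print(evolve())  # typically prints "optimize me"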