> Starting from a dense or fully connected graph, PyTheus uses gradient descent combined with topological optimization to find minimal graphs corresponding to some target quantum experiment
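The quoted loop can be sketched in toy form: put continuous weights on every edge of a dense graph, run gradient descent on a loss with a sparsity penalty, then prune near-zero edges to get the minimal topology. The ring-graph target and the quadratic loss below are placeholders of mine, not PyTheus's actual quantum-fidelity objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                  # vertices
w = rng.normal(size=(n, n))            # dense graph: a weight on every edge
w = (w + w.T) / 2                      # undirected, so keep weights symmetric

# Toy target: a ring graph (each vertex linked to its two neighbours).
target = np.zeros((n, n))
for i in range(n):
    target[i, (i + 1) % n] = target[(i + 1) % n, i] = 1.0

lr, lam = 0.1, 0.05                    # step size, sparsity penalty
for _ in range(500):
    # gradient of ||w - target||^2 + lam * |w| (L1 term drives weights to 0)
    grad = 2 * (w - target) + lam * np.sign(w)
    w -= lr * grad

# "Topological optimization" step: drop edges whose weight collapsed to ~0.
kept = np.abs(w[np.triu_indices(n, k=1)]) > 0.1
print("edges kept:", int(kept.sum()), "of", n * (n - 1) // 2)  # → 6 of 15
```

The L1 penalty is what makes the dense graph shrink to a minimal one: only the six ring edges survive the threshold.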
The AI rediscovered an interferometer technique the Russians found decades ago, optimized a graph in an unusual way, and came up with a formula that better fits a dark matter plot.
It's like seeing things in clouds or tea leaves.
At least, that's the thinking.
Yes, the AI "resurfaced" the work, but it also incorporated the Russians' theory into the practical design. At least enough to say "hey, make sure you look at this". That means the system produced a workable something with X% improvement, or some benefit significant enough that the researchers took it seriously and investigated. That, in turn, yielded an actual design with a 10-15% improvement and a "wish we had this earlier" statement.
No one was paying attention to the work before.
If I've understood it right, calling this AI is a stretch and arguably even misleading. Gradient descent is the primary tool of machine learning, but this isn't really using it the way machine learning uses it. It's more just an application of gradient descent to an optimisation problem.
The article and headline make it sound like they asked an LLM to make an experiment and it used some obscure Russian technique to make a really cool one. That isn't true at all. The algorithm they used had no awareness of the Russian research, or of language, or experimental design. It wasn't "trained" in any sense. It was just a gradient descent program. It's the researchers that recognised the Russian technique when analyzing the experiment the optimiser chose.
There are a few things like that, where we can throw AI at a problem and it generates something better, even if we don't know exactly why it's better yet.
This description reminds me of NASA’s evolved antenna from a couple of decades ago. It was created by genetic algorithms:
> Just nothing that a human being would make, because it had no sense of symmetry, beauty, anything. It was just a mess.
NASA describing their antenna:
> It has an unusual organic looking structure, one that expert antenna designers would not likely produce.
— https://ntrs.nasa.gov/citations/20060024675
The parallel seems obvious to me.
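For anyone unfamiliar with the technique behind the evolved antenna, here is a minimal genetic algorithm (selection, crossover, mutation). The bitstring objective is a toy stand-in for the antenna's simulated radiation-pattern score.

```python
import random

random.seed(42)

BITS = 32

def fitness(bits):                     # toy objective: count the 1s
    return sum(bits)

def crossover(a, b):                   # single-point crossover of two parents
    p = random.randrange(1, BITS)
    return a[:p] + b[p:]

def mutate(bits, rate=0.02):           # flip each bit with small probability
    return [b ^ (random.random() < rate) for b in bits]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    elites = pop[:10]                  # truncation selection: keep the top 10
    pop = elites + [mutate(crossover(random.choice(elites),
                                     random.choice(elites)))
                    for _ in range(30)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "/", BITS)
```

No design intuition anywhere in the loop, which is exactly why the outputs tend to look "organic": the algorithm only ever sees the fitness score, never symmetry or aesthetics.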
The design seemed alien and somewhat organic, but I can’t seem to find it now.
But I'm not paid by the click, so different incentives.
AI for attempts at general intelligence. (Not just LLMs, which already have a name … “LLM”.)
ML for any iterative, inductive design of heuristic or approximate relationships from data.
AI would fall under ML, covering the most ambitious/general problems. And it would likely best be treated as time (year) relative, i.e. a moving target, as the quality of general models continues to improve in breadth and depth.
Crypto must now be called cryptography, and AI must now be called ML, to avoid giving the scammers and hypers good press.
I think image and video generation that aren't based on LLMs can also use the term AI without causing confusion.
You can have your own definition of words but it makes it harder to communicate.
For me, when someone says, "I'm working on AI", it's almost meaningless. What are you doing, actually?
https://github.com/artificial-scientist-lab/GWDetectorZoo/
Nothing remotely LLM-ish, but I'm glad they used the term AI here.
Isn't that a delay line? The benefit being that when the undelayed and delayed signals are mixed, the phase shift you're looking for is amplified.
We’ve been doing that for decades, it’s just more recently that it’s come with so much more funding.
anonym00se1•5h ago
"AI comes up with bizarre ___________________, but it works!"
ninetyninenine•5h ago
Imagine these headlines slowly mutating into “all software engineering performed by AI at a certain company”, and we just dismiss it as generic because being employed and programming with keyboards is old-fashioned. Give it twenty years and I bet this is the future.
somenameforme•4h ago
Of course, call something "AI" today and suddenly interest, and presumably grant opportunities, increase by a few orders of magnitude.
ordu•2h ago
OTOH, AI is very much about search in multidimensional spaces; so much so that it would probably make sense to call gradient descent an AI tool. Not because it is used to train neural networks, but because search in multidimensional spaces is AI's specialty. People probably wouldn't agree, much as they wouldn't agree that the Fundamental Theorem of Algebra isn't really about algebra (and isn't fundamental, btw). But the disagreement isn't about the deep meaning of the theorem or of gradient descent; it's about tradition and "we always did it this way".
omnicognate•2m ago
The researchers in this article didn't do that. They used gradient descent to choose from a set of experiments. The choice of experiment was the end result and the direct output of the optimisation. Nothing was "learned" or "trained".
Gradient descent and other optimisation tools are used in machine learning, but long predate machine learning and are used in many other fields. Taking "AI" to include "anything that uses gradient descent" would just render an already heavily abused term almost entirely meaningless.
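To make the point concrete, here is gradient descent used exactly the way the researchers apparently used it: as a plain optimiser of a fixed objective. Nothing is trained, no data is learned from; we just follow the gradient to the minimiser. The quadratic objective is my own minimal example.

```python
def grad(x):
    # objective f(x) = (x - 3)**2, so the gradient is f'(x) = 2*(x - 3)
    return 2 * (x - 3)

x, lr = 0.0, 0.1          # starting point and step size
for _ in range(100):
    x -= lr * grad(x)     # step downhill along the gradient

print(round(x, 6))        # converges to the minimiser x = 3
```

Calling this "AI" would make the term cover most of numerical optimisation, which is the abuse being objected to above.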
JimDabell•3h ago
https://en.wikipedia.org/wiki/AI_effect