It remains to be seen exactly how much climate models can be improved by AI. They're already based on woefully sparse data points.
Conventionally, they are. Classification, perception, learning, and vast data manipulation are problems associated with human intelligence; when a machine does them, it is AI. That is the currently accepted definition [0].
I think that once a model, such as an influence field for weather, can self-calibrate (so that a past forecast aligns with present conditions), that is already artificial intelligence: it meets the learning criterion and handles vast spatial data manipulation. We also have models in meteorology and climate science that learn to classify and recognize patterns, which goes well beyond that threshold.
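For concreteness, here is a minimal sketch of what that kind of self-calibration can mean. The one-dimensional "influence field" and the data are toy stand-ins, not a real weather model: the idea is just to fit a model parameter so that a hindcast from a past state matches what was actually observed.

```python
# Minimal self-calibration sketch: nudge a model parameter so a past
# forecast ("hindcast") better matches present observations.
# The forecast model and data are toy stand-ins, not a real weather model.
import numpy as np

def forecast(state: np.ndarray, decay: float) -> np.ndarray:
    """Toy 'influence field': each cell relaxes toward its neighbours' mean."""
    neighbour_mean = (np.roll(state, 1) + np.roll(state, -1)) / 2.0
    return state + decay * (neighbour_mean - state)

def calibrate(past_state, observed_now, decays=np.linspace(0.0, 1.0, 101)):
    """Pick the decay parameter whose hindcast best matches observations."""
    errors = [np.mean((forecast(past_state, d) - observed_now) ** 2) for d in decays]
    return decays[int(np.argmin(errors))]

rng = np.random.default_rng(0)
past = rng.normal(size=64)
observed = forecast(past, 0.3) + rng.normal(scale=0.01, size=64)
print(calibrate(past, observed))  # recovers a value near the true 0.3
```

Whether a simple grid-search fit like this counts as "learning" is exactly the definitional question above; a real system would use gradient-based optimization over far more parameters, but the loop is the same.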
It may not meet some more unusual definitions of AI, such as what OpenAI et al. have been advertising (chat, agents, bla bla bla). But it meets the ML-based definition that's currently accepted.
Certainly a good thing to try, but the article feels more like a PR piece than anything else: it doesn't answer anything, just gives a short overview of a few things they're trying, with no data on any of them.
It does fit in with the "Throw LLM spaghetti at a wall and see what sticks" trend these days though.
DeepVariant, Enformer, ParticleNet, DeepTau, etc. are some well-known individual models that have advanced branches of science. And then there are the very famous ones, like AlphaFold (Nobel Prize in Chemistry, 2024).
We need to think of AI not as a product (chats, agents, etc.), but as neural nets (e.g., AlexNet). Unfortunately, large companies are "chat-washing" these tremendously useful technologies.
ML is more of a bag of techniques that can be applied to many things than a pure domain. Of course you can study the properties of neural networks for their own sake but it’s more common as a means to an end.
I checked some of the nuclear fusion startups and didn’t see anything.
1. We have the sense that "science progresses one funeral at a time." An AI model could be used to recognize situations where a single viewpoint is getting a disproportionate amount of attention in the literature, and warn journals and funding agencies about disconnects between attention and quality of work.
2. We have the sense that "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." An AI model could identify the most high-profile results in the literature that have conflicting evidence and call for further, decisive study.
3. Interdisciplinary translation. There are many, many cases of different branches of science re-discovering each other's work. I believe I read an article a while ago about an academic in a somewhat softer science publishing a paper proudly claiming the discovery of linear regression. Obviously not all cases are so egregious, but an AI could advance a discipline just by pointing out areas where that discipline is using outdated or inferior methods compared to the state of the art. (A toy sketch of what such cross-field matching could look like follows below.)
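To make point 3 concrete, here is a minimal sketch. Plain TF-IDF text similarity stands in for a real embedding model, and the abstracts and field labels are invented toy data; the idea is to flag pairs of papers from different fields whose text is suspiciously similar.

```python
# Hedged sketch of point 3: flag possible cross-discipline rediscovery by
# finding abstracts from *different* fields that are unusually similar.
# The papers, fields, and similarity cutoff are invented toy values.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [
    ("statistics", "We fit a line minimizing squared error between predictor and response."),
    ("ecology",    "We introduce a novel method fitting a straight line by minimizing squared residuals."),
    ("physics",    "We measure neutrino oscillation parameters with a water Cherenkov detector."),
]

fields = [f for f, _ in papers]
texts = [t for _, t in papers]
sim = cosine_similarity(TfidfVectorizer(stop_words="english").fit_transform(texts))

THRESHOLD = 0.2  # arbitrary cutoff chosen for this toy data
for i in range(len(papers)):
    for j in range(i + 1, len(papers)):
        if fields[i] != fields[j] and sim[i, j] > THRESHOLD:
            print(f"possible rediscovery: {fields[i]} paper {i} vs {fields[j]} paper {j}")
```

A production version would need proper embeddings, stemming, and a baseline for how similar within-field papers normally are, but the shape of the tool is this simple.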
parpfish•5h ago
First, give it the abstract for a fresh paper that it couldn't have been trained on, then see whether it can come up with the same proofs, i.e., whether it can replicate the logic knowing the conclusion.
Second, you could give it all the papers cited in the intro and ask a series of leading questions like "Based on this work, what new results can you derive?"
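A rough sketch of how those two probes could be wired up; `ask_model`, `Paper`, and the prompts are hypothetical stand-ins, not any particular API:

```python
# Sketch of the two probes above. `ask_model` is a hypothetical stand-in
# for whatever chat/completions API is under test; Paper is a placeholder.
from dataclasses import dataclass

@dataclass
class Paper:
    abstract: str
    cited_works: list[str]
    known_results: list[str]  # what the real authors proved, for scoring

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in the model under test here")

def probe_replication(paper: Paper) -> str:
    # Probe 1: abstract only -- can the model reconstruct the proofs?
    return ask_model(f"Given only this abstract, derive the main results:\n{paper.abstract}")

def probe_derivation(paper: Paper) -> str:
    # Probe 2: cited works only -- can the model reach the results unprompted?
    sources = "\n".join(paper.cited_works)
    return ask_model(f"Based only on these prior works, what new results can you derive?\n{sources}")
```

The hard part, of course, is scoring the answers against `known_results` without a human expert in the loop.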
CJefferson•2h ago
Finding a set of papers whose results can be combined in a reasonable amount of time to make a new, interesting result is itself a hard problem. This is often something professors do for PhD students: give them a general area to research and some papers to start reading.
It's still a contribution, but so much easier than just being asked, "Hey, choose a set of papers from which you can derive new, interesting results."
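For what it's worth, there is a classical, pre-LLM formulation of this search problem: Swanson-style literature-based discovery, which chains A→B and B→C findings from disjoint literatures into an untested A→C hypothesis (famously linking fish oil to Raynaud's syndrome). A toy sketch with invented concept links:

```python
# Toy sketch of Swanson-style ABC linking: propose candidate paper
# combinations by chaining concept links that no single paper covers.
from itertools import product

# (concept_a, concept_b) pairs asserted by individual papers -- invented data.
links = {
    ("fish oil", "blood viscosity"),
    ("blood viscosity", "Raynaud's syndrome"),
    ("caffeine", "adenosine receptors"),
}

def candidate_bridges(links):
    """Yield (a, b, c) where a->b and b->c are published but a->c is not."""
    for (a, b1), (b2, c) in product(links, links):
        if b1 == b2 and a != c and (a, c) not in links:
            yield a, b1, c

for a, b, c in candidate_bridges(links):
    print(f"worth combining: papers on {a} -> {b} with papers on {b} -> {c}")
```

At literature scale, the bottleneck shifts from enumerating bridges to ranking the flood of candidates, which is where a learned model could plausibly help.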
thorum•4h ago