"Designing AI for Disruptive Science" is a bit market-y, but "AI Risks 'Hypernormal' Science" is just a trimmed version of the section heading "Current AI Training Risks Hypernormal Science".
Maybe they could be, but it seems pretty unlikely. The edges of a lot of scientific understanding are now past practical applicability. The edges are essentially models of things impossible to test. In fact, relativity was only recently fully backed up with experimental data.
I think also what's practically applicable changes constantly. Perhaps we're truly at the End of Science, but empirically we've been wrong every other time we've said that. My money is that there's more race to run.
But they do. Paradigm shifts happen because the new paradigm explains the unexplained and importantly also covers the old model. If prior data is unexplained with a paradigm shift, the shift will never be adopted.
> Perhaps we're truly at the End of Science
Who said that? Just because the core of our current models seems pretty rock steady doesn't mean there's no more science. It simply means we can mostly expect refinement rather than radical discovery.
There will be sub-paradigm shifts, but there's likely not going to be major "relativity" moments from here on out.
I'm also a little skeptical about the practical value of the bleeding edge of both experimental and theoretical physics. Interesting? Sure.
And the closer you get to physics, the less likely any sort of major paradigm shift will be discovered (though the article focuses pretty heavily on physics which is why I do as well).
But even in those fields, there are core parts that aren't likely to ever see any sort of paradigm shift. For example, in biology, I doubt we'll see a shift away from evolution, since any new model would also have to explain everything evolution already explains.
I agree that at the edges you'll possibly see more paradigm shifts and discovery, but those will all be working from things that will not see paradigm shifts. For example, biology can't escape things like single-celled organisms made up of atoms and chemical compounds.
But ultimately, what I disagree with in the article is the notion that discovery won't ultimately be a process of hypernormalization. In medicine, we are unlikely to see a new paradigm that isn't germ theory. When it comes to the research, it'll mostly be focused on finding new compounds and delivery mechanisms for treatment rather than finding a new paradigm for how to treat a disease.
The softer sciences are the only place where you might find new paradigms, but that's simply because the data itself is so squishy and poor anyways that it's easy to shift around. There it's less a question of the science and more of the utility of the model (regardless of whether or not it aligns with reality).
Alternatively: there's plenty of mainstream, accepted science that's plain, flat out, provably wrong. Yet, it is against good taste (job security, people's feelings, status quo bias, etc.) to point this out.
Hence, it can actually be tricky to catch wind of, or get a grasp on, such issues to begin with, much less pursue such issues toward meaningful, published, recognized change in understanding (that is to say: paradigm shift).
I'd name some examples, but you wouldn't believe me.
With respect to the article: current LLMs can (though obviously don't have to) return text that appears to reason, pretty reasonably, about paradigm shifts when given the required context and nudged quite forcefully in particular directions. But, as the article seems to indicate, LLMs don't tend to find, investigate, and report on paradigm shifts on their own very much. (Maybe part of that is intrinsic to how they are programmed and/or their context?)
I highly doubt that.
There are a lot of people who think they've proven the mainstream wrong. But more often than not, it's cranks relying on bad, unreplicated tests. Those bad tests get propped up, ironically, by people's feelings and job security rather than by a built-up body of evidence.
They also almost always have to ignore the mainstream body of evidence and just say it's wrong and bad because of a conspiracy.
For example, plenty of creationists believe they have irrefutable evidence that evolution is provably wrong. It's usually a few cherry-picked or poorly interpreted results, and often they simply flat out lie about the existing body of evidence that supports evolution.
Another example is the anti-vax movement. Wakefield and RFK both built careers, and made a lot of money, on claiming the mainstream was wrong. Even when the industry adopted some of their recommendations (abandoning Thimerosal), they simply ignored that further data didn't support their claims.
which contains Heathrow Terminals 1, 2, 3, 4 & 5 on the Piccadilly line. For about 15 seconds I imagined a world where Heathrow has had 5 terminals since 1933, then I read the map itself: "Recreated by Arthurs D". Phew.
Awesome example of improving information conveyance through abstractions though!
Worsen. LLMs discard, lose, and mix data in the statistical "compression" that builds their vector model. Over time, successive feedback will be analogous to creating a JPEG from a JPEG that was itself created from another JPEG, a lossy "Gaussian" loop.

Those faster (but worse) results will degrade real, valuable data and science, systematically discarding well-done science at a regular rate.
IMHO.
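The degradation loop described above can be sketched with a toy simulation (my own illustration, not from the article): fit a distribution to some data, keep only the "typical" outputs, and regenerate the next dataset from the fit. The tails, the rare but real values, are discarded on every pass and never come back:

```python
import random
import statistics

random.seed(1)

def next_generation(samples, n):
    """One round of the feedback loop: the 'model' keeps only its most
    typical outputs (within ~1.5 sigma), refits a Gaussian to what's
    left, and generates a fresh dataset from that fit. The truncation
    is a stand-in for any lossy re-encoding step (a JPEG re-save, an
    LLM trained on a previous LLM's output)."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    kept = [x for x in samples if abs(x - mu) <= 1.5 * sigma]
    mu2 = statistics.fmean(kept)
    sigma2 = statistics.stdev(kept)
    return [random.gauss(mu2, sigma2) for _ in range(n)]

# Start from "real" data with full diversity (std = 1.0).
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
spreads = [statistics.stdev(data)]
for _ in range(10):
    data = next_generation(data, 2000)
    spreads.append(statistics.stdev(data))

# The spread shrinks every generation: diversity is lost, never recovered.
print([round(s, 3) for s in spreads])
```

The 1.5-sigma cutoff and Gaussian fit are arbitrary choices for the sketch; the point is only that any loop of "sample, compress, refit" shrinks toward the mode of the previous generation.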
Taking away some complexity comes at a price, and for some people, it’s hard to see that it outweighs the practicality.