jiggawatts•1d ago
The reason this is big news is that modern physics theories such as quantum electrodynamics and the Standard Model can be used to calculate certain quantities, such as the anomalous magnetic moment of the electron, to absurdly high precision, with prediction and experiment differing by only about one part in ten billion.
Run the same calculations for the muon, and... err... not so good: prediction and experiment previously differed by 3.5 standard deviations.
Either the theory is wrong, or the experiments are wrong. The former is very interesting, because muons are easy to experiment on, and if we can find "new physics" in something so ordinary, then it's an "accessible" regime for conditions that can be reproduced in a lab (albeit a big one).
This paper is saying that the discrepancy has been resolved by a fancier set of computations and newer experiments at Fermilab.
In other words: No new exciting physics.
Still though, this is interesting because a mystery was solved, even if the answer is in some sense boring.
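For anyone curious where figures like "3.5 standard deviations" come from: it's the gap between the two central values, measured in their combined uncertainty. A minimal sketch, using the widely reported 2021 experiment-vs-theory numbers (the later ~4.2 sigma comparison; treat the exact values as illustrative):

```python
import math

# Anomalous magnetic moment a_mu = (g-2)/2, in units of 1e-11.
# Ballpark values as reported around the 2021 Fermilab announcement:
a_exp, sigma_exp = 116_592_061, 41   # experimental world average
a_th,  sigma_th  = 116_591_810, 43   # data-driven theory prediction

# Discrepancy in combined standard deviations (uncorrelated errors assumed)
z = abs(a_exp - a_th) / math.sqrt(sigma_exp**2 + sigma_th**2)
print(f"{z:.1f} sigma")  # prints "4.2 sigma"
```

The 3.5 sigma figure mentioned above came from the earlier Brookhaven-era comparison; the arithmetic is the same, just with older inputs.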
kayo_20211030•1d ago
Interesting. Quick, what's the correct response to the statement "the standard model is wrong"? Generally, it's "it's not". Maybe it will some day be wrong, but glad to know it still holds.
staunton•1d ago
At this point, "the standard model" basically means "state of the art particle physics without highly speculative stuff". People tell me that neutrino masses are part of the standard model now...
So if the standard model is wrong, long live the right standard model. At least, perhaps until it takes a completely new paradigm to go further.
JumpCrisscross•1d ago
> People tell me that neutrino masses are part of the standard model now
People say stupid things. It’s a bit silly to blame that on the model.
staunton•1d ago
Ok, how many free parameters are in the standard model?
Ask five random physics professors, insist on an answer while declining to answer questions for clarification. I guarantee you get at least two answers, maybe three (that's assuming you manage to get an answer from each...). See also people giving various possible answers on physics stack exchange...
JumpCrisscross•1d ago
You’re acting like the SM is an unfalsifiable string theory that just revises all of its parameters every time the last batch is disproven. It’s not.
SM is not fully developed. And we know where it is wrong or painfully silent, e.g. neutrinos and gravity. But it’s a rigorous theory, possibly the most rigorous our species has ever developed, with central tenets that have held to ridiculous levels of precision. Of course there are conflicting hypotheses at its frontier. That’s sort of what defines the frontier. But at its core, the SM is robust. So robust that we mostly don’t talk about it, obsessing—as science should—over the parts where it doesn’t fit together as perfectly.
staunton•1d ago
I'm saying that the standard model is evolving and there's no authority or institution that gets to define what "the standard model" means. All of this is completely reasonable.
Maybe it can evolve forever to accommodate new results (e.g. by adding new fields); likely it can't (it's hard to imagine a reasonable modification for breaking CPT symmetry, not that this is the best example). If at any point no one can figure out how to evolve it before a radically different theory emerges to explain current discrepancies, we'll have what Kuhn might call a "paradigm shift", say "the standard model is falsified", and invent a different name for the new theory. I even included this possibility in my original comment, so I'm not sure what gave you offense...
That's completely unrelated to any given instance of this evolution being falsifiable. Each of them is, they stick around for a while and there aren't that many. All very proper, Popper is happy.
Meanwhile, string theory hasn't produced a single prediction that was later observed experimentally and one can even argue whether it has produced any predictions at all...
JumpCrisscross•1d ago
> the standard model is evolving and there's no authority or institution that gets to definine what "the standard model" means
The standard model per se is not materially evolving. We're bolting extensions onto it. And there is disagreement over what those extensions mean, whether they're true, and what free parameters they may add.
> Maybe it can evolve forever to accommodate new results (e.g. by adding new fields)
The Higgs field was the last field to have been "added" to the model. Its existence wasn't prompted by experimentation, but vice versa.
avpix•1d ago
19, each corresponding to a physical quantity that can be measured.
https://en.wikipedia.org/wiki/Mathematical_formulation_of_th...
Could you help me with which variable we should focus on?
staunton•15h ago
> Could you help me with which variable we should focus on?
I'm not sure what you mean... I'm not asking anyone to focus on anything?!
The answers I linked suggest several different numbers, e.g. 19, 25, 26. I've heard people suggest adding even more "non-Higgs-caused masses", since "neutrinos have those and you should add all parameters you can't find theoretical reasons to rule out"...
JumpCrisscross•13h ago
> answers I linked suggest several different numbers, e.g. 19, 25, 26
But there is no actual ambiguity. 19 in the original formulation. 26 with the PMNS bolt-on. Other numbers with other bolt-ons. The argument is over the extensions.
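One common bookkeeping that lands on 19 and 26, sketched below as a sanity check. The exact grouping (which two Higgs-sector parameters you pick, whether Majorana phases count) is precisely where conventions differ, so treat this as one convention among several:

```python
# Free parameters of the Standard Model in one common counting.
original = {
    "quark masses": 6,
    "charged lepton masses": 3,
    "CKM mixing angles": 3,
    "CKM CP phase": 1,
    "gauge couplings": 3,   # one per gauge group
    "Higgs sector": 2,      # e.g. Higgs mass and vacuum expectation value
    "QCD theta angle": 1,
}
pmns_bolt_on = {
    "neutrino masses": 3,
    "PMNS mixing angles": 3,
    "PMNS CP phase": 1,     # Dirac phase; Majorana phases would add more
}

print(sum(original.values()))                              # 19
print(sum(original.values()) + sum(pmns_bolt_on.values())) # 26
```

Counts like 25 come from dropping or merging one of these entries (e.g. omitting the theta angle), which is the ambiguity being argued about.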
staunton•9h ago
It was about the phrase "the standard model", which doesn't specify what "extensions" are implied. You seem to think it means "no extensions" to a somewhat arbitrarily chosen step of the evolution, which has 19 free parameters. I don't think that's how most people use the phrase. As far as I can tell, we are in complete agreement on all other points...
I still think my original comment, which seems to have given you offense for some reason, was completely appropriate. "The standard model" means "state of the art particle physics without highly speculative stuff", at least until we have a paradigm shift.
AnimalMuppet•1d ago
If you decline to answer questions for clarification, it seems fair that they should decline to answer your question.
EA-3167•1d ago
"All models are wrong, but some are useful."
or
Granted, but good luck trying to find a new model that matches the tested predictions QM/SM makes, AND reveals new physics you can hope to test.
tmiku•1d ago
I agree that it's not wrong, but I would certainly call it incomplete too. It's your choice which of those two points you want to emphasize, but calling the Standard Model "not wrong" with no mention of its incompatibility with General Relativity would feel disingenuous to me.
vlovich123•1d ago
> Maybe it will be some day wrong, but glad to know it still holds.
Why? The last time we got relativity and quantum mechanics which completely upended our standard of living and technological progress in the 20th century. Wouldn’t you be excited for finding out exactly how the current model is wrong and a better more accurate explanation for the workings of the universe?
kayo_20211030•17h ago
I was just poking fun at the avalanche of breathless papers that claim some anomalous experimental result "breaks the standard model". Generally, it doesn't. Maybe someday it'll happen. If it does, I would still expect any new paradigm to contain within it the kernel of the standard model: as relativity contains within itself the Newtonian model, at certain scales and levels of accuracy; and as quantum mechanics contains within itself all the physics and chemistry of an earlier era. I hope the standard model isn't the ne plus ultra, the end, but it just keeps getting reaffirmed despite all the breaks-the-standard-model papers, which seem almost desperate in their pleading.
vlovich123•7h ago
Well we know it's wrong & incomplete in pretty observable ways since gravity isn't part of it (theoretically incomplete) and neutrinos in the model are massless even though we know they have mass (observationally incomplete).
But sure, a replacement model has to account for a lot of observations we know do hold for the SM, but I think it'll be closer to how the heliocentric model simplified and improved on the Ptolemaic one: it provided a simpler perspective that also simplified the math & got rid of various mathematical constructs as "illusions". Similarly, it took time for the heliocentric model to be developed enough to make predictions of higher accuracy, because for a long time the Ptolemaic model had much finer predictive power, since its math & the supporting data to tune constants were more mature.
I think the SM of particle physics is more likely to follow that path to fix the known problems than how special relativity degrades to Newtonian equations at non-relativistic speeds.
JumpCrisscross•1d ago
The promise of new science from a muon collider [1] is compelling.
[1] https://arxiv.org/abs/2303.08533
JumpCrisscross•1d ago
Emphasis on "kinda." These are mitigable concerns, not roadblocks.
bcoates•1d ago
The idea of a medically relevant neutrino dose is absolutely wild
PaulHoule•14h ago
It's not the neutrinos themselves that are dangerous, it is the showers that are produced when the neutrinos interact with 'shielding' material between the source and the victim.
It's related to a problem in space colonization: the optimal amount of shielding is maybe 2 meters of lunar soil. Less than that, and moderate-energy particles are harmful. High-energy particles mostly blast right through you, but if one is stopped by a nucleus in the shielding, that nucleus will be blasted apart, and particles from that can blow up more nuclei, so a large amount of dangerous and penetrating radiation (muons, pions) is produced. It seems that some very high-energy particles (neutrinos?) can pass through the earth and cause a shower, so it's not reasonable to expect you can stop all of them. The earth's atmosphere is spread out over a long distance, which gives particles a chance to decay, so a shield better than the 2-meter one would be difficult to construct; space colonists are going to accept a minimum level of radiation higher than we have on Earth.
jmyeet•1d ago
It's kind of amazing that the Standard Model, which by any objective measure has been a stunningly successful theory, can be both incredibly accurate and ludicrously inaccurate. The magnetic moment, which you mention, is an example of the former, accurate to 8-10 significant digits. An example of the latter is the so-called "vacuum catastrophe", where QFT predicts the energy of the vacuum and is off... by 120 orders of magnitude.
Like no one really knows why we have three generations of particles and what that means or why they're so massive.
I only found out about hyperons [1] last year, where (at least) one down quark is replaced with a strange quark. And this matter has weird properties. IIRC the nuclei get smaller.
Many years ago I'd assumed it was only a matter of time until we made significant progress merging quantum mechanics and gravity, but honestly, I'm starting to have doubts. The universe is under no obligation to make sense or give up its secrets. Just like in maths, some things may be unknowable.
[1]: https://en.wikipedia.org/wiki/Hyperon
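The "120 orders of magnitude" above is itself just the log of a ratio. A back-of-the-envelope sketch with commonly quoted ballpark densities (illustrative order-of-magnitude values, not precise figures):

```python
import math

# Naive QFT vacuum energy density with a Planck-scale cutoff (J/m^3),
# versus the dark-energy density inferred from cosmology (J/m^3).
# Both values are rough, commonly quoted ballparks.
rho_qft_naive = 1e113
rho_observed = 6e-10

orders = math.log10(rho_qft_naive / rho_observed)
print(round(orders))  # prints 122, usually rounded to "120 orders of magnitude"
```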
raincom•1d ago
Oftentimes, anomalies in existing theories bring "scientific revolutions". In this case, it is good to see the anomaly being resolved by fixing the lab and the computational apparatus.
magicalhippo•1d ago
> This paper is saying that the discrepancy has been solved by using a more fancy set of computations
That's underselling an important point I think. Here's my understanding of it:
There have been two methods to provide the theoretical value, one based on calculating loop corrections, the so-called data-driven approach, and one based on lattice QCD.
From what I can gather, the point in the paper is rather that they stopped using the data-driven prediction, and relied solely on the lattice QCD prediction. This is unlike the previous paper where they combined the two predictions into one.
The former method requires a bunch of measured data as inputs, like how frequently many different interactions happen in nature (their cross-sections). It requires more theoretical work, figuring out the loop correction formulas, and all those measurements, but once you have that you can relatively quickly compute the result.
What happened was that one very important cross-section was measured significantly better, but it disagreed with all former measurements. This despite very thorough cross-checking of the experiment. So when those new results were plugged in, the data-driven method gave a much worse prediction.
Meanwhile, the latter method is more brute-force, and relies on simulating QCD on a lattice[1]. A much more complex version of those water simulations used in movies and such. Due to how the physics works, it's very expensive to simulate, and thus the simulations haven't been quite good enough. Recent improvements to the method have changed that, making its predictions in line with the measured value.
The authors state that fixing the data-driven method is of high importance, so hopefully this will be fully resolved in the future.
At least that's my armchair understanding.
[1]: https://en.wikipedia.org/wiki/Lattice_QCD
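Real lattice QCD involves SU(3) gauge links, fermion determinants, and supercomputers, but the basic idea of "Monte Carlo on a discretized field" can be sketched with a toy 1D free scalar field. This is illustrative only; nothing here is QCD-specific:

```python
import math
import random

random.seed(1)

# Toy 1D lattice: a free scalar field phi on N sites with periodic
# boundaries. Euclidean action S = sum of kinetic + mass terms per site.
N, m2 = 64, 1.0
phi = [0.0] * N

def local_action(i: int, value: float) -> float:
    """Part of the action that depends on site i holding `value`."""
    left, right = phi[(i - 1) % N], phi[(i + 1) % N]
    kinetic = 0.5 * ((value - left) ** 2 + (right - value) ** 2)
    return kinetic + 0.5 * m2 * value ** 2

# Metropolis sweeps: propose a local change, accept with probability exp(-dS).
for sweep in range(2000):
    for i in range(N):
        old, new = phi[i], phi[i] + random.uniform(-1.0, 1.0)
        dS = local_action(i, new) - local_action(i, old)
        if dS < 0 or random.random() < math.exp(-dS):
            phi[i] = new

# A simple observable: the lattice average of phi^2.
print(sum(x * x for x in phi) / N)
```

The expensive parts of real lattice QCD (gauge fields living on links, quark determinants, continuum extrapolation) are exactly what the recent algorithmic improvements target; this sketch only shows the sampling skeleton.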
qnleigh•23h ago
This is a great summary. Is it plausible that the discrepancy between the two approaches points toward something else beyond the Standard Model? My impression from reading was "probably not, because it would seem very ad hoc and unusual," but I don't have an insider's perspective.
buffer1337•1d ago
I am looking for collaborators on exploring whether geometric principles might provide a foundation for understanding how aspects of the Standard Model could emerge from simpler underlying structures. This is obviously an incredibly challenging area that many brilliant physicists have worked on, so I'm approaching it as a learning exercise - happy to share details if anyone's interested in diving into theoretical physics rabbit holes. bufferoverflow (at) gmail.com