I'm not sure if that's exactly it.
Question: Is there any relationship between this and Axiomatic Thermodynamics? I recall that also uses differential geometry.
But the metric tensor of spacetime, in General Relativity, which is our best current classical theory of gravity, only explains gravity. Gravity, by itself, uses up all of the degrees of freedom in the metric tensor. There aren't any left for electromagnetism or anything else.
To get classical electromagnetism, you need to add another, different tensor to your model: a stress-energy tensor with the appropriate properties to describe an electromagnetic field. Of course, doing this in standard GR is straightforward and is discussed in GR textbooks; but it does not involve somehow extracting electromagnetism from the metric tensor. It involves describing electromagnetism with the stress-energy tensor, i.e., with different degrees of freedom from the ones that describe gravity. (And if you want to describe the sources of the field, you need to add even more degrees of freedom to the stress-energy tensor to describe charged matter.)
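For concreteness, here's the standard textbook split (a sketch in geometrized Gaussian units, G = c = 1; this is not from the paper): the metric's degrees of freedom sit on the left-hand side of the Einstein Field Equations, and the electromagnetic field enters only through the stress-energy tensor on the right, built out of the field strength F:

    % Einstein Field Equations: geometry on the left, matter/fields on the right
    G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = 8\pi\, T_{\mu\nu}

    % Stress-energy of a source-free electromagnetic field; the metric only
    % raises and lowers indices here, it supplies no EM degrees of freedom
    T_{\mu\nu} = \frac{1}{4\pi} \left( F_{\mu\alpha} F_{\nu}{}^{\alpha} - \tfrac{1}{4}\, g_{\mu\nu} F_{\alpha\beta} F^{\alpha\beta} \right)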
The paper does not, as far as I can see, address this issue; the authors don't even appear to be aware that it is an issue. Which makes me extremely skeptical of the paper's entire approach.
That is, the same deal as with gravity in GR.
This sort of "classic" line only applies to a 3+1 slicing or 1+3 threading of the fully-solved Einstein Field Equations, and only in a certain limit.
A complete solution of the Einstein Field Equations relates the field values (the curvature and matter tensors) at every point in the manifold, and therefore there's no moving or bending to speak of in the so-called block universe; there are just tensor values at every point, and the values of the various tensors stand in a strict relationship to one another.
The choice of any curve along which to slice is totally arbitrary (it doesn't have to be a timelike geodesic, even!), although it certainly helps if for every slice there is no everywhere-non-spacelike curve which passes through the slice more than once, and (where the full solution allows) every point in the manifold is in exactly one slice.
Once you've sliced spacetime, you can consider the evolution of a field quantity from one slice to its neighbours. This is 3+1 general relativity (3-dimensional space-filling fields exist on every spatial (or really "constant time coordinate") slice, the slices ordered by some choice of time axis and given coordinates: this is called a foliation), of which there are several formalisms. Taken along an infinite timelike geodesic, this approach gives you quantities that "look like" 3-d objects evolving in time, moving from point to point in spacetime, and interacting via (feeling and generating) the gravitational interaction.
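To make the foliation picture concrete, here's the ADM form of the line element (standard textbook material, not specific to this comment): \gamma_{ij} is the 3-metric living on each slice, while the lapse N and shift N^i encode how neighbouring slices are stacked and slid relative to one another:

    ds^2 = -N^2\, dt^2 + \gamma_{ij}\, (dx^i + N^i\, dt)(dx^j + N^j\, dt)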
Threading is similar, and is often done in a 1+3 formalism, but decomposes the tensors in terms of the "radar distance" among sets of initially parallel and initially close-to-each-other curves. Threading is a lot less famous; Landau & Lifshitz's Classical Theory of Fields goes into some detail. For the purposes of this HN comment, threading also lets one see the evolution of 3-d objects' trajectories as they interact with each other, even though it is just applying "radar coordinates" among a set of curves traced through the fully solved block universe. (Arguably one can trace actual physical objects that are coupled to these curves, and the physical objects could beam light to one another, so it could be a little less arbitrary than selecting one curve (with an object on it or not) as the basis for a foliation; but then one can get into ratholes involving real interacting matter fields where local interactions can delay "radar" signals... so one probably wants to add back in some arbitrariness by e.g. gauge fixing.)
Perplexingly, humans do not normally think of themselves or their parts (like those carbon atoms you ingested at lunch the other day but just exhaled) as coupled to curves through a 4-d manifold, and struggle with the consequences of taking constant-time slices based on e.g. their wristwatches (cf. practically any HN discussion dealing with recent detections of astronomical events, especially things like stellar deaths).
Also perplexingly, of course, humans do not remember their future as well as their past (although they're honestly not very good at remembering their past in detail, and tend to reconstruct it seemingly similarly to how they predict their future).
Finally, I guess you could think of the decomposition of the tensors in a 3+1 or 1+3 split as producing other tensors, just as you could think of scalar or vector quantities as being tensor quantities, but then they are not the metric tensor and the stress-energy tensor of the Einstein Field Equations.
Also, of course, there are other tensors and scalars on the LHS of the EFEs, and they inevitably enter into equations of motion (dissipation from tides raised on partners in relativistic few-body systems explains certain orbital characteristics!), so the metric and its encoding of changes in angles and displacements isn't the full story in the slogan anyway.
That's the problem with slogans. And there are rather a lot of them that get thrown about wrt Einsteinian gravitation.
P.S.: I brought up threading (which few working scientists will ever really run into) principally because presently the top comment in this discussion essentially wonders -- somewhat off-topic -- how the expansion of the universe could happen instead of everything collapsing into black holes. I was tempted to answer that in terms of the Raychaudhuri equations (two pre-Standard-Model particles of matter initially close together are taken away from each other by e-folds of inflation, and other factors (angular momentum, and inelasticities from phase changes / symmetry breaking that gives rise to electrons, photons, protons, and so forth) also tend to keep their trajectories from ever "focusing"). However, I didn't have the time or energy to ELI5 it, and scrolled along until deciding to think about your unusual wording of a usual slogan that I have repeated myself.
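For reference rather than ELI5: the standard Raychaudhuri equation for a timelike geodesic congruence (textbook form, nothing specific to this thread) makes the "focusing" bookkeeping explicit:

    % \theta: expansion, \sigma: shear, \omega: rotation, u^a: 4-velocity
    \frac{d\theta}{d\tau} = -\tfrac{1}{3}\theta^2 - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab} - R_{ab} u^a u^b

The \theta^2, shear, and (given the strong energy condition) Ricci terms drive focusing; rotation works against it, and a vacuum-like source during inflation flips the sign of R_{ab} u^a u^b, so initially close trajectories are driven apart instead.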
But it can't be quite "the same deal", because gravity obeys the equivalence principle, and electromagnetism does not. (Nor do the other known fundamental interactions.) The paper does not appear to address this at all.
From the conclusion:

> Charge is therefore to be understood as a local compression of the metric in the spacetime, which relates to longitudinal waves as described in [12]. This provides some aesthetical features into the model, as electromagnetism seems to be orthogonal to gravity in the sense that current theory of gravity is a theory based on metric compatible connections.
From what I can see, this is just a particularly obfuscatory way of saying the same thing I said in response to philipov upthread, where I described how there are different tensors in GR to model gravity and EM. It's not in any way a theory that derives EM from properties of the metric tensor. It adds additional degrees of freedom that aren't in the metric tensor, and then tries to obfuscate what it's doing.
Cool, because traditional QM wave-function waves are not electromagnetic waves, even though they can seem to be the same thing in a double-slit experiment.
What happens to the Higgs field excitation and the Higgs boson, given the experiments confirming their existence? If this paper explains phenomena more effectively, does it require us to reinterpret these findings?
Part of what made Einstein's theories so good is that they reinterpreted past theories while providing new explanations that fixed major unknowns in the science of the time. Both for GR and QFT.
What this paper appears to be doing (although I can't make complete sense of it) is somehow deriving Maxwell's Equations (or more precisely a nonlinear generalization of them, which seems to me to mean that they aren't actually deriving electromagnetism, but let that go) as a property of the geometry of spacetime alone, without any abstract spaces or extra dimensions or anything of that sort.
*typo
That phenotype is well-represented in mathematical physics.
You ignore the reality of nature at your own peril.
Besides, you can just use computers to automate the wrangling of this hell. It's what they're good at, after all.
I claim to be qualified in both disciplines. With this background, I disagree.
If you are very certain about what you want to model, abstractions are often very useful for shedding light on "what really happens in the system" (both in mathematics and computer science, but also in physics).
The problem with applying abstractions in computer programs (in this way) lies somewhere else: in business, users/customers are often very "volatile" about what they want from the computer program, instead of pondering deeply about this question (even though that would be a very good idea). Thus (certain kinds of) abstractions in computer code make it much harder to adjust the program if new, very different requirements come up.
In math (I am not a mathematician), abstractions are a base to build on. You define some concept (e.g. a group, a set, whatever), then you prove things about it, building ever more complexity around your abstraction.
This works great because in math your abstractions don't change. You are never going to redefine what a group is. If you need something different, maybe you define some related concept: a ring, a semigroup, or whatever; but you never change your original abstraction. It is the base you build on.
As a result you can pack in a lot of complexity. E.g. if something is a group, what are all the logical consequences of that? Probably so many you can't list them all, and that's OK. The whole point of math is to pick out some pattern and figure out what it entails.
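As a minimal illustration (completely standard math, nothing novel): the whole group abstraction is one operation and three axioms, and consequences start falling out immediately:

    % Group axioms for (G, \cdot):
    (a \cdot b) \cdot c = a \cdot (b \cdot c)                                  % associativity
    \exists\, e \in G:\; e \cdot a = a \cdot e = a \;\; \forall a \in G        % identity
    \forall\, a \in G\; \exists\, a^{-1} \in G:\; a \cdot a^{-1} = a^{-1} \cdot a = e   % inverses

    % A free consequence: the identity is unique.
    % If e and e' are both identities, then e = e \cdot e' = e'.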
In contrast, in computer programming the goal of abstraction is largely isolation. You want to be able to change something inside the abstraction without it affecting the system very much. The things the abstraction entails should be as limited as reasonably possible, as everything it entails is a potential dependency that will get messed up if anything changes. Ideally you should be able to understand what the abstraction does by only looking at the abstraction's code and not the whole system.
Just think about the traditional SOLID principles in OOP design. They're all about ensuring abstractions are as isolated as possible from each other.
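To make the isolation point concrete, a minimal TypeScript sketch (all names invented for illustration; this is one idiom, not the definition of abstraction): callers depend only on a narrow interface, so the implementation behind it can change without the rest of the system noticing.

    // The abstraction: a deliberately narrow surface for callers to depend on.
    interface UserStore {
      save(id: string, name: string): void;
      find(id: string): string | undefined;
    }

    // One implementation; swapping in a database-backed version later
    // requires no changes anywhere that depends only on UserStore.
    class InMemoryUserStore implements UserStore {
      private users = new Map<string, string>();
      save(id: string, name: string): void {
        this.users.set(id, name);
      }
      find(id: string): string | undefined {
        return this.users.get(id);
      }
    }

    // Client code can be understood by reading only the interface.
    function greet(store: UserStore, id: string): string {
      const name = store.find(id);
      return name === undefined ? "Hello, stranger" : "Hello, " + name;
    }

Note what greet cannot do: it can't reach into the Map, so changes to the storage details can't break it.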
To summarize, I think in math abstractions are the base of the complexity pyramid. All the complexity is built on top of them. In computers it's the opposite: they should be the tip of the complexity pyramid.
P.S. my controversial opinion is that this is the flaw in a lot of the reasoning Haskell fans use.
> This works great because in math your abstractions don't change.
This is just a different formulation of the "volatility" of a lot of the requirements placed on software by users/customers.
> In contrast, in computer programming the goal of abstraction is largely isolation. You want to be able to change something inside the abstraction without it affecting the system very much.
Here my opinion differs: isolation is at best just one aspect of abstraction (and I would even claim that the two concepts are only tangentially related). I claim that better isolation is rather a (very useful) side effect of some abstractions that are very commonly used in software development. On the other hand, I don't think it is really hard to come up with abstractions for software development that would be very useful but don't lead to better isolation.
The central purpose of abstraction in computer programs is to make it easier to reason about the code, and to avoid having to write "related" code multiple times. Similar to mathematics: you want to prove a general theorem (e.g. about groups) instead of having to prove one theorem about S_n, one theorem about Z_n, etc.
You actually partly write about this aspect of reasoning about the code yourself:
> Ideally you should be able to understand what the abstraction does by only looking at the abstraction's code and not the whole system.
In this sense using more abstractions is a particular optimization for the goals:
- you want to make it easier to reason about the code abstractly
- you want to avoid having to duplicate code (i.e. save money, since fewer lines have to be written); see the sketch just after this list
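A small sketch of that second goal in TypeScript (invented example, just to fix ideas): one generic routine written against a minimal requirement, rather than one near-duplicate per element type; the programming analogue of proving a theorem once about groups instead of once per group:

    // Deduplicate any array, given only a key function: written once,
    // reasoned about once, reused for every element type.
    function dedupeBy<T, K>(items: T[], key: (item: T) => K): T[] {
      const seen = new Set<K>();
      const result: T[] = [];
      for (const item of items) {
        const k = key(item);
        if (!seen.has(k)) {
          seen.add(k);
          result.push(item);
        }
      }
      return result;
    }

    const nums = dedupeBy([1, 2, 2, 3], (n) => n);  // [1, 2, 3]
    const users = dedupeBy(
      [{ id: "a", name: "Ann" }, { id: "a", name: "Ann again" }],
      (u) => u.id,
    );  // keeps only the first record with id "a"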
But this is not a panacea:
- If the abstraction turns out to be bad, you either have to re-engineer a lot, or you will have a maintenance nightmare (my "volatility of customer requirements" argument). Indeed, I claim that the question of "do we really use the best possible abstractions in our code for the problem that we want to solve" is nearly always neglected in software projects, because the answer is nearly always very inconvenient, necessitating lots of re-engineering of the code.
- low-level optimizations become harder, so making the code really fast gets much more complicated
- since abstractions are more "abstract", (depending on the abstraction) you might need "smarter" programmers (who can be more expensive). For an example consider some complicated metaprogramming libraries of Boost (C++): in the hands of really good programmers such abstractions can become "magic", but worse programmers will likely be overwhelmed by them.
- fighting about the "right" abstraction can become very political (for low-level code there is often less of such a fight, because here "what is more performant is typically right").
---
Concerning
> To summarize, I think in math abstractions are the base of the complexity pyramid. All the complexity is built on top of them.
This is actually not a bad way to organize code (under my stated specific edge conditions! When these conditions are violated, my judgment might change). :-)
---
> P.S. my controversial opinion is that this is the flaw in a lot of the reasoning Haskell fans use.
I am not a particular fan of Haskell, but I think the Haskell fans' flaw lies in a very different point: they emphasize very particular aspects of computer programming, and, admittedly, often come up with clever solutions for these.
The problem is: there exist, in my opinion, aspects of software development that are far more important, but don't fit into the kind of structures that Haskell fans appreciate. The difficulty, in my experience, is thus convincing Haskell fans that such aspects actually matter a lot, instead of being unimportant side aspects of software development.
> This is just a different formulation of the "volatility" of a lot of the requirements placed on software by users/customers.
Yes. I would say that this is a defining feature of computer programming: change. The complexity of change is what computer programmers primarily want to use abstractions to deal with (a concern that is really absent for a mathematician; everything is immutable to them).
And yes, reasoning about the code is part of that too, but in computer programming it's often in the form of being able to reason about a code base that is slowly shifting under you as other programmers make changes in other parts (in the context of a large project with many devs; I suppose it's a different story for a solo project).
To bring it back to the original start of the thread, I guess what I'm saying is that what makes an abstraction good for math is different from what makes it good for a computer program, so naturally they are going to look a little different.
I think abstractions in CS (Turing machines, etc.) or other building blocks in computer systems (OS interfaces, computer languages, etc.) are a different story, and much more similar to how abstractions are used in math.
https://en.wikipedia.org/wiki/Feynman_checkerboard
and the work of David Hestenes:
Zitterbewegung in Quantum Mechanics
https://davidhestenes.net/geocalc/pdf/ZBWinQM15**.pdf
Zitterbewegung structure in electrons and photons
https://arxiv.org/abs/1910.11085
Zitterbewegung Modeling
"Collective Electrodynamics: Quantum Foundations of Electromagnetism" https://www.amazon.com/Collective-Electrodynamics-Quantum-Fo...
The rest is just how magnetism emerges from this, and Einstein already figured it out. This guy explains it pretty well in layman's terms: https://www.youtube.com/watch?v=sDlZ-aY9GN4
Uh ...
https://youtu.be/fHG7qVNvR7w?t=27m4s
I love the entire rant/communication, but it ends with a classical view from a David Griffiths AIP interview, regarding a project by Jacob Barandes
A bit closer: https://youtu.be/fHG7qVNvR7w?t=29m46s
And then I got a bot check on ResearchGate, for the first time, and I download a lot of papers from them.
Nothing like that for me. I just clicked the big "article pdf" button at the bottom of the page.
Direct link to full pdf:
https://iopscience.iop.org/article/10.1088/1742-6596/2987/1/...