For years, the residual nano-second drift in GNSS satellites and NASA's Earth Flyby Anomalies have been treated as stochastic environmental noise or unknown hardware limitations.
I've been analyzing raw IGS data (SP3, CLK, SNX) and noticed something strange. If you stop assuming the Euclidean spatial metric is perfectly rigid and instead apply a local distortion factor based on altitude gravity (S_loc = pi^2 / g_loc), the "noise" disappears.
I built an open-source Python engine and a Streamlit app to visualize this. When you run global terrestrial station data through this calibration, the spatial/temporal residuals align into a deterministic geometric line with a 99.99997% Pearson correlation. Furthermore, applying this exact same baseline constant perfectly resolves the velocity changes of historical NASA flybys (NEAR, Galileo I, Rosetta I) without any retro-fitting.
The core thesis is that we are measuring the universe using a "curved ruler" (the Earth-calibrated SI meter).
You can test your own geodetic data on the app here: https://k-protocol.streamlit.app/
Or review the 250-line Python logic in the GitHub repo (linked inside the app). I'd love for the data scientists and physicists here to tear into the code, run some data, and tell me if I've stumbled onto a fundamental geometric truth, or if there's a flaw in my data processing.