The idea is that if you take flash photographs, you can solve backwards in your renderer to find which material inputs produce a render that closely matches the photographs at each point; it all turns into a nonlinear least squares problem per texel. There are two academic papers that do this, plus a startup, m-xr.com (they're calling their program "marso"); this online version I made happens to be a few times faster than both. They're clocking times in the hours range on an expensive server PC with four GPUs, so I figured faster is good.
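To make the "solve backwards" part concrete, here's a toy sketch of the per-texel fit in Python. The shading model, parameter names, and flash angles here are all made up for illustration; the real thing uses an actual BRDF, but the Gauss-Newton structure is the same: linearize the render model, solve the normal equations, repeat.

```python
import numpy as np

def render(p, c):
    # Toy flash-shading model (illustration only, not the real BRDF):
    # a diffuse term plus a crude specular lobe, c = cosine of flash angle.
    albedo, gloss = p
    return albedo * c + gloss * c**8

def jacobian(p, c):
    # Analytic Jacobian of render() w.r.t. (albedo, gloss).
    return np.stack([c, c**8], axis=1)

def fit_texel(observed, c, p0, iters=10):
    # Gauss-Newton nonlinear least squares for one texel:
    # repeatedly linearize and solve the normal equations J^T J dp = -J^T r.
    p = p0.copy()
    for _ in range(iters):
        r = render(p, c) - observed        # residuals vs. the photographs
        J = jacobian(p, c)
        p -= np.linalg.solve(J.T @ J, J.T @ r)
    return p

c = np.array([0.2, 0.4, 0.6, 0.8, 1.0])    # 5 hypothetical flash angles
true_p = np.array([0.7, 0.3])
observed = render(true_p, c)               # synthetic "photographs"
p = fit_texel(observed, c, np.array([0.5, 0.5]))
print(p)  # recovers roughly [0.7, 0.3]
```

In the real pipeline this fit runs independently for every texel, which is why it maps so well onto compute shaders.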
One neat thing I did for the solving: WGSL (the WebGPU shading language) doesn't have templates or function overloading, which is what solver libraries rely on to auto-generate derivatives of equations. So I wrote a Python script that generates the equivalent WGSL instead; the derivatives still get generated rather than written by hand, and the solving can then run in compute shaders.
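A minimal sketch of how that codegen trick can work (my reconstruction of the idea, not the actual script): carry each expression as a pair of WGSL strings, a value and its derivative, and apply the forward-mode autodiff rules while building the source. The overloading lives in Python, so the emitted WGSL is just plain functions.

```python
class Dual:
    """Symbolic dual number: WGSL source strings for a value and its derivative."""
    def __init__(self, val, dot):
        self.val, self.dot = val, dot

    def __add__(self, other):
        # sum rule: (f + g)' = f' + g'
        return Dual(f"({self.val} + {other.val})",
                    f"({self.dot} + {other.dot})")

    def __mul__(self, other):
        # product rule: (f * g)' = f * g' + f' * g
        return Dual(f"({self.val} * {other.val})",
                    f"({self.val} * {other.dot} + {self.dot} * {other.val})")

def emit_wgsl(name, build):
    # Seed the input variable with derivative 1.0, trace the expression,
    # and emit a WGSL function returning (value, derivative) as a vec2.
    x = Dual("x", "1.0")
    y = build(x)
    return (f"fn {name}(x: f32) -> vec2<f32> {{\n"
            f"    return vec2<f32>({y.val}, {y.dot});\n"
            f"}}\n")

# Example: f(x) = x*x + x, so f'(x) = 2x + 1, generated automatically.
src = emit_wgsl("f_and_df", lambda x: x * x + x)
print(src)
```

The real script presumably handles more operators, multiple unknowns, and common-subexpression cleanup, but this is the core of why you don't need templating in the shader language itself.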
I wrote about it here too: https://www.reddit.com/r/photogrammetry/comments/1rt06rm/i_m...
p.s. I'm looking for work rn, market's awful