Is there a similar flow for transforming a video, photos, or a NeRF of a scene into a tighter, minimal polygon approximation of it? The reason I ask is that it would make some things really cool. To make my baby monitor mount I had to break out the calipers and measure the pins and this and that, but if I could take a couple of photos and iterate in software, that would be sick.
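The closest flow I'm aware of is photogrammetry: photos -> point cloud (COLMAP, Meshroom) -> surface reconstruction -> decimation, though it gives you a noisy dense mesh rather than clean CAD-style geometry. A rough Open3D sketch of the mesh end, assuming you already have a point cloud (scan.ply is a placeholder name):

    # Rough sketch: point cloud -> mesh -> low-poly approximation.
    # Assumes a dense point cloud from COLMAP/Meshroom; scan.ply is a placeholder.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scan.ply")
    pcd.estimate_normals()  # Poisson reconstruction needs oriented normals

    # Dense surface from the point cloud
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)

    # Collapse to a "tighter, minimal" polygon approximation
    lowpoly = mesh.simplify_quadric_decimation(target_number_of_triangles=2000)
    lowpoly.compute_triangle_normals()  # STL export wants face normals
    o3d.io.write_triangle_mesh("mount_lowpoly.stl", lowpoly)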
Why no landscape or underwater scenes or something in space, etc.?
I believe this company is doing image (or text) -> off-the-shelf image model to generate more views -> some variant of Gaussian splatting.
So they aren't really "generating" the world as one might imagine.
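If that's right, the flow is roughly this (pure speculation; every function and variable name below is made up):

    # Speculative sketch of the suspected pipeline; every name here is hypothetical.
    views = image_model.generate_novel_views(input_image, n_views=32)  # off-the-shelf generator
    poses = estimate_camera_poses(views)                               # SfM-style pose recovery
    splats = fit_gaussian_splats(views, poses)                         # some 3DGS variant
    frame = render(splats, camera=some_new_pose)                       # "the world" is just a splat render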
It’s a website that collects people’s email addresses
1. Sky looks jank.
2. Blurry/warped behind the horse.
3. The head seems to move a lot more than the body. You could argue that this one is desirable.
4. Bit of warping and ghosting around the edges of the flowers. Particularly noticeable towards the top of the image.
5. Very minor, but the flowers move as if they aren't attached to the wall.
CUDA is needed to render the side-scrolling video, but there are many ways to do other things with the result.
(I am oversimplifying).
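For example, trained splats usually land in a .ply you can inspect on the CPU with plyfile; a sketch assuming the common INRIA-style attribute layout (names vary by implementation):

    # Inspect a trained Gaussian-splat .ply on the CPU, no CUDA needed.
    # Assumes the common INRIA-style layout; attribute names vary by implementation.
    from plyfile import PlyData

    ply = PlyData.read("point_cloud.ply")  # placeholder filename
    verts = ply["vertex"]
    print(verts.count, "gaussians")
    print(verts["x"][0], verts["y"][0], verts["z"][0])  # center of the first gaussian
    print(verts["opacity"][0])                          # raw (pre-sigmoid) opacity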
I just want to emphasize that this is not a NeRF, where the model magically produces an image from an angle and then you ask "ok but how did you get this?" and it throws up its hands and says "I dunno, I ran some math and I got this image" :D.
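With splatting the representation is an explicit list of primitives you can just print out; schematically something like this (illustrative only, not any particular implementation's layout):

    # Illustrative only: the explicit per-primitive state a 3DGS scene carries,
    # as opposed to a NeRF's opaque MLP weights.
    from dataclasses import dataclass

    @dataclass
    class GaussianSplat:
        position: tuple[float, float, float]          # 3D center
        scale: tuple[float, float, float]             # per-axis extent
        rotation: tuple[float, float, float, float]   # orientation quaternion
        opacity: float
        sh_coeffs: list[float]                        # view-dependent color (spherical harmonics)

    # A scene is literally a big list of these; rendering = project + alpha-blend.
    scene: list[GaussianSplat] = []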
I guess this is what they use for the portrait mode effects.
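Presumably depth as a matte: blur the frame, then composite by predicted depth. A toy OpenCV sketch, assuming a single-channel depth map where brighter = farther (filenames and the focus thresholds are placeholders):

    # Toy portrait-mode effect: blur pixels by how far behind the subject they are.
    # Assumes depth.png is a single-channel depth map, brighter = farther.
    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg").astype(np.float32)
    depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

    blurred = cv2.GaussianBlur(img, (31, 31), 0)
    # Arbitrary focus band: 0 = in focus, 1 = fully blurred
    alpha = np.clip((depth - 0.4) / 0.2, 0, 1)[..., None]
    out = img * (1 - alpha) + blurred * alpha
    cv2.imwrite("portrait.jpg", out.astype(np.uint8))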
Gaussian splatting is pretty awesome.
Imagine history documentaries where they take an old photo, free objects from the background, and then move them around to give the illusion of parallax.
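That effect (the 2.5D / Ken Burns parallax trick) is basically: segment the subject, inpaint the hole behind it, then scroll the layers at different speeds. A toy numpy sketch, assuming you already have the RGBA cut-out and the inpainted background as arrays:

    # Toy 2.5D parallax: shift the foreground more than the background each frame.
    # Assumes fg is an RGBA cut-out and bg an inpainted RGB background (numpy arrays).
    import numpy as np

    def parallax_frame(fg: np.ndarray, bg: np.ndarray, t: float) -> np.ndarray:
        # Background drifts slowly, foreground faster -> apparent depth.
        bg_shift = np.roll(bg, int(5 * t), axis=1)
        fg_shift = np.roll(fg, int(20 * t), axis=1)
        alpha = fg_shift[..., 3:4] / 255.0
        return (fg_shift[..., :3] * alpha + bg_shift * (1 - alpha)).astype(np.uint8)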
Even using commas, if you keep the ambiguous "free", I suggest prefixing "objects" with "the" or "any".
Or if you prefer Blade Runner: https://youtu.be/qHepKd38pr0?t=107
Without that, it's hard to tell how cherry-picked the NVS video samples are.
EDIT: I did it myself, if anyone wants to check out the result (caveat: n=1): https://github.com/avaer/ml-sharp-example
Photoshop's content-aware fill could do this just as well or better many years ago.
This is really interesting to me because the model would have to encode the reflection as both the depth of the reflecting surface (for texture, scattering, etc.) and the "real depth" of the reflected object. The examples in Figures 11 and 12 already look amazing.
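Concretely, for a planar reflector the reflected object appears as a virtual image mirrored across the surface, so its apparent depth is the camera-to-surface distance plus the surface-to-object distance. A small sketch of that mirroring (the plane given as a point plus unit normal; all values here are made up):

    # Where a reflected point *appears* to be: mirror it across the reflecting plane.
    # plane_point / plane_normal define the reflector; the normal must be unit length.
    import numpy as np

    def virtual_image(p: np.ndarray, plane_point: np.ndarray, plane_normal: np.ndarray) -> np.ndarray:
        d = np.dot(p - plane_point, plane_normal)  # signed distance to the plane
        return p - 2 * d * plane_normal            # reflected ("virtual") position

    obj = np.array([0.0, 1.0, 3.0])       # object 3 m in front of a mirror at z=0
    mirror_pt = np.array([0.0, 0.0, 0.0])
    mirror_n = np.array([0.0, 0.0, 1.0])
    print(virtual_image(obj, mirror_pt, mirror_n))  # -> [0. 1. -3.], i.e. 3 m *behind* the mirror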
Long tail problems indeed.