
Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
1•ravenical•2m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
1•rcarmo•3m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
1•gmays•4m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
1•andsoitis•4m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•5m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•7m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•8m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•10m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•11m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•11m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•12m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•12m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•12m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•13m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•13m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•15m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•20m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•21m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•21m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
25•bookofjoe•22m ago•9 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•23m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•23m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•24m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•24m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•25m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•25m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•25m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•26m ago•1 comments

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•27m ago•1 comments

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•27m ago•0 comments

SHARP, an approach to photorealistic view synthesis from a single image

https://apple.github.io/ml-sharp/
531•dvrp•1mo ago

Comments

brcmthrowaway•1mo ago
So this is the secret sauce behind Cinematic mode. The fake bokeh insanity has reached its climax!
duskwuff•1mo ago
As well as their "Spatial Scene" mode for lock screen images, which synthesizes a mild parallax effect as you move the phone.
Terretta•1mo ago
It's available for everyday photos, portraits, everything, not just lock screens.
spike021•1mo ago
you can also press the button while viewing a photo in the Photos app to see this.
calvinmorrison•1mo ago
I understand AI for reasoning, knowledge, etc. I haven't figured out how anyone wants to spend money for this visual and video stuff. It just seems like a bad idea.
accurrent•1mo ago
Simulation. It takes a lot of effort today to bring up simulations in various fields. 3D programming is very nontrivial and asset development is extremely expensive. If I can take a photo of a workspace and use it to generate a 3D scene, I can then use it in simulations to test ideas out. This is already particularly useful in robotics and industrial automation.
jijijijij•1mo ago
I don't see any examples of 3D scene information usable for simulation. If you want to simulate something hitting a table, you need the whole table (surface) in space, not just some spatial illusion effect extrapolated from an image of a table. I also think modelling the 3D objects for simulation is the least expensive part of a simulation... the simulation itself is the expensive thing.

I doubt this will be useful for robotics or industrial automation, where you need an actual spatial, or functional understanding of the object/environment.

accurrent•1mo ago
With research like this you need to start somewhere. The fact that we can get 3D information helps. There are people looking into making splats capture collision information [1].

I have worked on simulation and do a lot of it in my day job. While the physics is often hard and expensive, you only need to write the code once.

Assets? You need to commission 3D artists and then spend hours wrangling file formats. It's extremely tedious. If we could take a photo and extract meshes, I'm sure we'd have a much easier time.

[1] https://trianglesplatting.github.io/

re-thc•1mo ago
Do people not spend on entertainment? Commercials? It's probably less of a bad idea than knowledge. AI giving a bad visual has fewer negatives than giving the wrong knowledge and leading to the wrong decision.
rv3392•1mo ago
This specific paper is pretty different to the kind of photo/video generation that has been hyped up in recent years. In this case, I think this might be what they're using for the iOS spatial wallpaper feature, which is arguably useless but is definitely an aesthetic differentiator to Android devices. So, it's indirectly making money.
netsharc•1mo ago
Photo apps on phones (can you still call them cameras?) already have a lot of "AI" to enhance photos and videos taken. Some of it is technological necessity, since you're capturing something through a tiny hole, a lot of it is sexying it up to appeal to people, because hey, people would prefer a cinema-quality depiction of their memories rather than the reality...
yodon•1mo ago
> photorealistic 3D representation from a single photograph in less than a second
arjie•1mo ago
This is incredibly cool. It's interesting how it fails in the section where you need to in-paint. SVC seems to do that better than all the rest, though not anywhere close to the photorealism of this model.

Is there a similar flow to transform a video/photo/NeRF of a scene into a tighter, minimal-polygon approximation of it? The reason I ask is that it would make some things really cool. To make my baby monitor mount I had to knock out the calipers and measure the pins and this and that, but if I could take a couple of photos and iterate in software that would be sick.

necovek•1mo ago
You'd still need one real measurement at least: this might get proportions right if background can be clearly separated, but the absolute size of an object can be worlds apart.
arjie•1mo ago
That's true. And there's lens correction and all that, but it would be nice to accelerate the CAD modeling.
Geee•1mo ago
This is great for turning a photo into a dynamic-IPD stereo pair + allows some head movement in VR.
SequoiaHope•1mo ago
Ah and the dynamic IPD component preserves scale?
benatkin•1mo ago
That is really impressive. However, it was a bit confusing at first because in the koala example at the top, the zoomed in area is only slightly bigger than the source area. I wonder why they didn't make it 2-3x as big in both axes like they did with the others.
yodon•1mo ago
See also Spaitial[0], which today announced full 3D environment generation from a single image.

[0]https://www.spaitial.ai/

andsoitis•1mo ago
Why are all their examples of rooms?

Why no landscape or underwater scenes or something in space, etc.?

jaccola•1mo ago
Constrained environments are much simpler.

I believe this company is doing image (or text) -> off the shelf image model to generate more views -> some variant of gaussian splatting.

So they aren't really "generating" the world as one might imagine.

boguscoder•1mo ago
Requires email to view anything, that’s sad
dag11•1mo ago
I'm confused, does it actually generate environments from photographs? I can't view the galleries since I didn't sign up for emails but all of the gallery thumbnails are AI, not photos.
jrflowers•1mo ago
> I'm confused, does it actually generate environments from photographs?

It’s a website that collects people’s email addresses

avaer•1mo ago
The best I've seen so far is Marble from World Labs, though that gives you a full 360 environment and takes several minutes to do so.
superfish•1mo ago
"Unsplash > Gen3C > The fly video" is nightmare fuel. View at your own risk: https://apple.github.io/ml-sharp/video_selections/Unsplash/g...
ghurtado•1mo ago
Seth Brundle has entered the chat.
Traubenfuchs•1mo ago
Early AI „everything turns into dog heads“ vibes. Beautiful.
drcongo•1mo ago
I miss those. Anyone know if it's still possible to get the models etc. needed to generate them?
Traubenfuchs•1mo ago
I wish there was an archive of all those melty dreamscapes.

https://m.youtube.com/watch?v=DgPaCWJL7XI&t=1s&pp=2AEBkAIB0g...

https://www.youtube.com/watch?v=X0oSKFUnEXc

StilesCrisis•1mo ago
Google was using them as wall mural artwork in one of the Sunnyvale offices. Very trippy.
what-the-grump•1mo ago
All this work to recreate a WinAmp viz from 20 years ago :) ?
tecleandor•1mo ago
I also wanted to generate one of those this year, so I'll camp around here just in case anybody comments on it :)
kennyadam•1mo ago
https://github.com/kenjibailly/Deep_Dream_GUI

and

https://www.tensorflow.org/tutorials/generative/deepdream

oh and Google's original repo from 10 years ago with a Python notebook about running it: https://github.com/google/deepdream/blob/master/dream.ipynb

kennyadam•1mo ago
https://github.com/kenjibailly/Deep_Dream_GUI
schneehertz•1mo ago
san check, 1d10
uwela•1mo ago
Goading companies into improving image and video generation by showing them how terrible they are is only going to make them go faster, and personally I’d like to enjoy the few moments I have left thinking that maybe something I watch is real.

It will evolve into people hooked into entertainment suits most of the day, where no one has actual relationships or does anything of consequence, like some sad mashup of Wall-E and Ready Player One.

If we’re lucky, some will want to meatspace with augmented reality.

Maybe we’ll get really nice holovisions, where we can chat with virtual celebrities.

Who needs that?

We’re already having to shoot up weight-loss drugs because we binge watch streaming all the time. We’ve all given up, assuming AI will do everything. What good will come from having better technology when technology is already doing harm?

camgunz•1mo ago
It turns out the Great Filter is that any species with the technology to colonize space also has the technology to soma itself into annihilation.

https://en.wikipedia.org/wiki/Great_Filter

jodrellblank•1mo ago
There are many ways past this, from religion and Amish-style cultural approaches, to legal prohibition of making and selling and using it, to dictatorial control of the companies which could make it, to individuals being personally immune, to paying people money if they don't use it. Like there are people who avoid alcohol, opioids, heroin, and all the other wireheading-style drugs and experiences that exist already, and people who exercise and stay thin in a world of fast food and cars.

A great filter needs to apply to every civilisation imaginable, no exceptions, nerfing billions of species before they get to a higher Kardashev scale, not just something that "could happen" or the latest “Dunning-Kruger” mic-drop in every thread. In the 1960s "the great filter is nuclear war", in 1890 "the great filter is heroin", in 1918 "the great filter is world war, we are destined to destroy ourselves", in 2015 "the great filter is climate change, our emissions will end us like bacteria in a petri dish", in antiquity "the great filter is the punishment for crossing the will of the Gods".

It's got to be something you cannot get around even if you try really really hard and get very very lucky, because there are ~200,000,000,000 stars in the Milky Way and with those numbers there will be some species which lucks its way past almost any candidate, spreads out and in a mere 100k years is all over this galaxy leaving rocket trails and explosion signatures and radio signals and terraforming signs and megastructures.

Maybe when NASA, ESA, SpaceX, RosCOSMOS, CNSA, IRSA all collapse because of this effect… look how many countries have a space agency! https://en.wikipedia.org/wiki/List_of_government_space_agenc...

harhargange•1mo ago
TMPI looks just as good if not better.
jjcm•1mo ago
Disagree - look at the sky in the seaweed shot. It doesn't quite get the depth right in anything, and the edges of things look off.
shwaj•1mo ago
Agreed. The head of the fly also seems to have weird depth.
wfme•1mo ago
Have a look through the rest of the images. TMPI has some pretty obvious shortcomings in a lot of them.

1. Sky looks jank
2. Blurry/warped behind the horse
3. The head seems to move a lot more than the body. You could argue that this one is desirable
4. Bit of warping and ghosting around the edges of the flowers. Particularly noticeable towards the top of the image.
5. Very minor but the flowers move as if they aren't attached to the wall

tartoran•1mo ago
Impressive, but something doesn't feel right to me... possibly too much sharpness, possibly a mix of cliches, all amplified at once.
a3w•1mo ago
For me, TMPI and SHARP look great. TMPI is consistently brighter, though, with me having no clue which is more correct.
remh•1mo ago
Enhance! https://www.youtube.com/watch?v=LhF_56SxrGk
mvandermeulen•1mo ago
I thought this was going to be the Super Troopers version
moondev•1mo ago
cuda gpu only

https://github.com/apple/ml-sharp#rendering-trajectories-cud...

delis-thumbs-7e•1mo ago
Interestingly, Apple's own models don't work on MPS. Well, I guess you just have to wait a few years...
matthewmacleod•1mo ago
This is specifically only for video rendering. The model itself works across GPU, CPU, and MPS.
diimdeep•1mo ago
No, the model works without CUDA; then you have a .ply that you can drop into a Gaussian splat viewer like https://sparkjs.dev/examples/#editor

CUDA is needed to render the side-scrolling video, but there are many ways to do other things with the result.

rcarmo•1mo ago
Fixed that: https://github.com/rcarmo/ml-sharp
gs17•1mo ago
The gaussian splat output can be generated with CPU (this was honestly one of the easiest AI repos to get running).
Leptonmaniac•1mo ago
Can someone ELI5 what this does? I read the abstract and tried to find differences in the provided examples, but I don't understand (and don't see) what the "photorealistic" part is.
eloisius•1mo ago
From a single picture it infers a hidden 3D representation, from which you can produce photorealistic images from slightly different vantage points (novel views).
avaer•1mo ago
There's nothing "hidden" about the 3D representation. It's a point cloud (in meters) with colors, and a guess at the "camera" that produced it.

(I am oversimplifying).
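
A minimal sketch of what inspecting that output could look like, assuming the model exports a standard Gaussian-splat .ply (the filename and field names below follow the common 3DGS convention and are not confirmed from SHARP itself):

```python
# Peek inside an (assumed) Gaussian-splat .ply export and list its per-point attributes.
import numpy as np
from plyfile import PlyData

ply = PlyData.read("sharp_output.ply")   # hypothetical output filename
verts = ply["vertex"]

print(verts.count, "points")
print(verts.data.dtype.names)            # typically x, y, z, f_dc_*, opacity, scale_*, rot_*

# Positions are plain Cartesian coordinates (metric, if the model's claim holds).
xyz = np.stack([verts["x"], verts["y"], verts["z"]], axis=1)
print(xyz[:5])
```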

eloisius•1mo ago
Hidden in the sense of neural net layers. I mean intermediary representation.
avaer•1mo ago
Right.

I just want to emphasize that this is not a NERF where the model magically produces an image from an angle and then you ask "ok but how did you get this?" and it throws up its hands and says "I dunno, I ran some math and I got this image" :D.

uh_uh•1mo ago
"Hidden" or "latent" in a context like this just means variables that the algo is trying to infer because it doesn't have direct access to them.
ares623•1mo ago
Takes a 2D image and allows you to simulate moving the angle of the camera with correct-ish parallax effect and proper subject isolation (seems to be able to handle multiple subjects in the same scene as well)

I guess this is what they use for the portrait mode effects.

p-e-w•1mo ago
Agreed, this is a terrible presentation. The paper abstract is bordering on word salad, the demo images are meaningless and don’t show any clear difference to the previous SotA, the introduction talks about “nearby” views while the images appear to show zooming in, etc.
emsign•1mo ago
Imagine history documentaries where they take an old photo and free objects from the background and move them round giving the illusion of parallax movement. This software does that in less than a second, creating a 3D model that can be accurately moved (or the camera for that matter) in your video editor. It's not new, but this one is fast and "sharp".

Gaussian splatting is pretty awesome.

kurtis_reed•1mo ago
What are free objects?
ferriswil•1mo ago
The "free" in this case is a verb. The objects are freed from the background.
Retr0id•1mo ago
Until your comment I didn't realise I'd also read it wrong (despite getting the gist of it). Attempted rephrase of the original sentence:

Imagine history documentaries where they take an old photo, free objects from the background, and then move them round to give the illusion of parallax.

necovek•1mo ago
I'd suggest a different verb like "detach" or "unlink".
thenthenthen•1mo ago
isolate from the background?
necovek•1mo ago
Even better, agreed!
tzot•1mo ago
> Imagine history documentaries where they take an old photo, free objects from the background

Even using commas, if you leave the ambiguous “free” I suggest you prefix “objects” with “the” or “any”.

nashashmi•1mo ago
Free objects in the background.
Sharlin•1mo ago
No, free objects in the foreground, from the background.
crazygringo•1mo ago
Oh man. I never thought about how Ken Burns might use that.

Already you sometimes see cases where they manually cut out a foreground person from the background, enlarge them a little bit, and create a multi-layer 3D effect, but it's super-primitive and I find it gimmicky.

Bringing actual 3D to old photographs as the camera slowly pans or rotates slightly feels like it could be done really tastefully and well.

derleyici•1mo ago
It turns a single photo into a rough 3D scene so you can slightly move the camera and see new, realistic views. "Photorealistic" means it preserves real textures and lighting instead of a flat depth effect. Similar behavior can be seen with Apple's Spatial Scene feature in the Photos app: https://files.catbox.moe/93w7rw.mov
avaer•1mo ago
It makes your picture 3D. The "photorealistic" part is "it's better than these other ways".
carabiner•1mo ago
Black Mirror episode portraying what this could do: https://youtu.be/XJIq_Dy--VA?t=14. If Apple ran SHARP on this photo and compared it to the show, that would be incredible.

Or if you prefer Blade Runner: https://youtu.be/qHepKd38pr0?t=107

diimdeep•1mo ago
One more example from Star Trek Into Darkness https://youtu.be/p7Y4nXTANRQ?t=61
rasz•1mo ago
I was thinking Enemy of the State (1998) https://www.youtube.com/watch?v=3EwZQddc3kY
zipy124•1mo ago
Basically depth estimation to split the scene into various planes, then inpainting to fill in the obscured parts of those planes, and then moving them independently to allow for parallax. Think of 2D side-scrolling games that use several background layers at different depths to give the illusion of motion and depth.
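
A toy illustration of that plane-splitting idea, assuming a precomputed depth map; this is a generic sketch of the technique, not SHARP's actual pipeline, and it skips the inpainting step entirely (disoccluded pixels are left black):

```python
import numpy as np

def parallax_frame(image, depth, shift_px, n_layers=4):
    """Fake a small camera shift by slicing the scene into depth planes.

    image: (H, W, 3) uint8. depth: (H, W) float, larger = farther.
    shift_px: horizontal shift for the nearest layer; farther layers move less.
    """
    bins = np.quantile(depth, np.linspace(0, 1, n_layers + 1))
    out = np.zeros_like(image)
    # Composite far-to-near so closer layers overwrite farther ones.
    for i in reversed(range(n_layers)):
        mask = (depth >= bins[i]) & (depth <= bins[i + 1])
        dx = int(round(shift_px * (1 - i / (n_layers - 1))))  # near layers shift most
        shifted_img = np.roll(image, dx, axis=1)
        shifted_mask = np.roll(mask, dx, axis=1)
        out[shifted_mask] = shifted_img[shifted_mask]
    return out
```

Rendering this for a range of shift_px values gives the parallax wobble; the black gaps behind the foreground layers are exactly where the real systems have to inpaint.
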
skygazer•1mo ago
Apple does something similar right now in their photos app, generating spatial views from 2d photos, where parallax is visible by moving your phone. This paper’s technique seems to produce them faster. They also use this same tech in their Vision Pro headset to generate unique views per eye, likewise on spatialized images from Photos.
avaer•1mo ago
Is there a link with some sample gaussian splat files coming from this model? I couldn't find it.

Without that it's hard to tell how cherry-picked the NVS video samples are.

EDIT: I did it myself, if anyone wants to check out the result (caveat, n=1): https://github.com/avaer/ml-sharp-example

derleyici•1mo ago
Apple's Spatial Scene in the Photos app shows similar behavior, turning a single photo into a small 3D scene that you can view by tilting the phone. Demo here: https://files.catbox.moe/93w7rw.mov
Traubenfuchs•1mo ago
It's awful and often creates a blurry mess in the imagined space behind the object.

Photoshop's content-aware fill could do as well or better many years ago.

diimdeep•1mo ago
Works great. The model file is 2.8 GB; on an M2, rendering took a few seconds. The result is a Gaussian .ply file, but the repo implementation requires a CUDA card to render the video, so I used one of the live renderers listed here: https://github.com/scier/MetalSplatter?tab=readme-ov-file#re...
Dumbledumb•1mo ago
In Chapter D.7 they describe: "The complex reflection in water is interpreted by the network as a distant mountain, therefore the water surface is broken."

This is really interesting to me because the model would have to encode the reflection as both the depth of the reflecting surface (for texture, scattering etc) as well as the "real depth" of the reflected object. The examples in Figure 11 and 12 already look amazing.

Long tail problems indeed.

yieldcrv•1mo ago
I want to see with people
BoredPositron•1mo ago
The paper is just a word salad and it's not better than previous sota? I might be missing a key element here.
codebyprakash•1mo ago
Quite cool!
supermatt•1mo ago
I note the lack of human portraits in the example cases.

My experience with all these solutions to date (including whatever apple are currently using) is that when viewed stereoscopically the people end up looking like 2d cutouts against the background.

I haven't seen this particular model in use stereoscopically so I can't comment as to its effectiveness, but the lack of a human face in the example set is likely a bit of a tell.

Granted they do call it "Monocular View Synthesis", but I'm unclear as to what its accuracy or real-world use would be if you can't combine 2 views to form a convincing stereo pair.

sorenjan•1mo ago
They're using their Depth Pro model for depth estimation, and that seems to do faces really well.

https://github.com/apple/ml-depth-pro

https://learnopencv.com/depth-pro-monocular-metric-depth/

supermatt•1mo ago
I'm not sure how the depth estimation alone translates into the view synthesis, but the current on-device implementation is definitely not convincing for literally any portrait photograph I have seen.

True stereoscopic captures are convincing statically, but don't provide the parallax.

sorenjan•1mo ago
Good monocular depth estimation is crucial if you want to make a 3D representation from a single image. Ordinarily you have images from several camera poses and can create the Gaussian splats using triangulation; with a single image you have to guess their z positions.
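
For reference, that "guess the z position" step is just lifting each pixel along its camera ray by the estimated depth; a minimal pinhole back-projection sketch (the intrinsics fx, fy, cx, cy are whatever the depth model assumes, and this ignores lens distortion):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift an (H, W) metric depth map into an (H*W, 3) point cloud using the
    standard pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Any error in the estimated depth shows up directly as points placed at the wrong distance, which is why the quality of the monocular depth model matters so much here.
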
Someone•1mo ago
For selfies, I think iPhones with Face ID use the TrueDepth camera hardware to measure Z position. That’s not full camera resolution, but it will definitely help.
pmontra•1mo ago
So Deckard got lucky that the picture enhancement machine hallucinated the correct clue? But that was bound to happen 6 years ago, no AI yet.
nashashmi•1mo ago
I could not find any mention of it, but does this use generative AI? I can't imagine it being able to accomplish anything like this without using a large graphical model in the back.
rcarmo•1mo ago
Well, I got _something_ to work on Apple Silicon:

https://github.com/rcarmo/ml-sharp (has a little demo GIF)

I am looking at ways to approximate Gaussian splats without having to reinvent the wheel, but I'm a bit out of my depth since I haven't been paying a lot of attention to those in general.

7moritz7•1mo ago
The example doesn't look particularly impressive, to say the least. Look at the bottom 20%.
rcarmo•1mo ago
I just refactored the rendering and resampling approach. Took me a few tries to figure out how to remove the banding masks from the layers, but with more stacked layers and a bit of GPT-foo to figure out the API it sort of works now (updated the GIF)

Keep in mind that this is not Gaussian splat rendering but just a hacked approximation; on my NVIDIA machine it looks way smoother.

esperent•1mo ago
I'm quite delighted that the GIF banding artefacts make it look like the photo of a fire is flickering, and also highly impressed that the AI was able to recognize the fire as a photo within a photo and keep it in 2D.
orthoxerox•1mo ago
The resulting animations feel more like "Live2D" than 3D.
mhalle•1mo ago
It would be interesting to see how much better this algorithm would be with a stereo pair as input.

Not only do many VR and AR systems acquire stereo, we have historical collections of stereo views in many libraries and museums.

pluralmonad•1mo ago
This seems like what they have been doing with album covers on Apple Music for a couple of years.
reactordev•1mo ago
This would be really fun to create stereoscopic videos with. Take a video input, offset x+0.5 or some coefficient, take the output, put them side by side (or interlaced for shutter glasses) and voila! 3D movies.
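
The hard part, rendering the offset view, is what the model does; packing the two views into a stereo frame is trivial. A sketch under the assumption that you already have the original and offset renders on disk (file names are placeholders):

```python
import numpy as np
from PIL import Image

def side_by_side(left_path, right_path, out_path):
    """Pack two rendered views of the same frame into a side-by-side stereo image."""
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    assert left.shape == right.shape, "views must match in size"
    Image.fromarray(np.hstack([left, right])).save(out_path)

side_by_side("frame_000_left.png", "frame_000_right.png", "frame_000_sbs.png")
```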
alexgotoi•1mo ago
Apple dropping this is interesting. They've been quiet on the flashy AI stuff while everyone else is yelling about transformers, but 3D reconstruction from single images is actually useful hardware integration stuff.

What's weird is we're getting better at faking 3D from 2D than we are at just... capturing actual 3D data. Like we have LiDAR in phones already, but it's easier to neural-net your way around it than deal with the sensor data properly.

Five years from now we'll probably look back at this as the moment spatial computing stopped being about hardware and became mostly inference. Not sure if that's good or bad tbh.

Will include this one in my https://hackernewsai.com/ newsletter.

momojo•1mo ago
I wonder if humans are any different. We don't have LIDAR in our eyes but we approximate depth "enough" with only our 2D input
dTal•1mo ago
We also constantly move our heads and refocus our eyes. We can get a rough idea of depth from only a static stereo pair, but in reality we ingest vastly more information than that and constantly update our internal representation in real time.
jakefromstatecs•1mo ago
We don't have 2d input, we have 3d input.

We have two eyes that gives us depth by default.

stronglikedan•1mo ago
That's cool and all, but it seems like only the first step in this, where they go from 2D photo all the way to fully animated (animatable?) characters: https://www.youtube.com/watch?v=DSRrSO7QhXY
somethingsome•1mo ago
Last time I tried Depth Pro it was not really metric; I wonder if this one is, as they claim. If someone has some experience on that side, I would be interested.