Show HN: Real-Time Gaussian Splatting

https://github.com/axbycc/LiveSplat
144•markisus•1mo ago
LiveSplat is a system for turning RGBD camera streams into Gaussian splat scenes in real-time. The system works by passing all the RGBD frames into a feed forward neural net that outputs the current scene as Gaussian splats. These splats are then rendered in real-time. I've put together a demo video at the link above.
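To make the pipeline concrete, here is a minimal sketch of what such a per-frame feed-forward loop could look like. All names here are hypothetical stand-ins, not LiveSplat's actual API, which is not public:

    # Hypothetical sketch of the per-frame feed-forward loop described above.
    # `splat_net`, `render`, and the camera objects are stand-ins, not LiveSplat's API.
    def live_splat_loop(cameras, splat_net, render, get_viewer_pose):
        while True:
            # Grab a synchronized RGBD frame from every source camera.
            frames = [cam.read_rgbd() for cam in cameras]  # each frame: H x W x 4 (RGB + depth)
            # One feed-forward pass produces the whole splat set for this instant;
            # no per-scene optimization is run.
            splats = splat_net(frames)                     # means, covariances, colors, opacities
            # Rasterize the splats from wherever the viewer currently is.
            render(splats, get_viewer_pose())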

Comments

sreekotay•1mo ago
This is realtime capture/display? Presumably (at this stage) for local viewing? Is that right?
markisus•1mo ago
Yes, realtime capture and display. Locality is not required. You can send the source RGBD video streams over IP and in fact I have that component working in the larger codebase that this was split off from. For that use case, you need to do some sort of compression. The RGB stream compression is a pretty solved problem, but the depth channel needs special consideration since "perceptual loss" in the depth space is not a well-researched area.
patrick4urcloud•1mo ago
nice
echelon•1mo ago
OP, this is incredible. I worry that people might see a "glitchy 3D video" and might not understand the significance of this.

This is getting unreal. They're becoming fast and high fidelity. Once we get better editing capabilities and can shape the Gaussian fields, this will become the prevailing means of creating and distributing media.

Turning any source into something 4D volumetric that you can easily mold like clay, relight, reshape. A fully interactable and playable 4D canvas.

Imagine if the work being done with diffusion models could read and write from Gaussian fields instead of just pixels. It could look like anything: real life, Ghibli, Pixar, whatever.

I can't imagine where this tech will be in five years.

markisus•1mo ago
Thanks so much! Even when I was putting together the demo video I was getting a little self-critical about the visual glitches. But I agree the tech will get better over time. I imagine we will be able to have virtual front row seats at any live event, and many other applications we haven't thought of yet.
echelon•1mo ago
> I imagine we will be able to have virtual front row seats at any live event, and many other applications we haven't thought of yet.

100%. And style-transfer it into steam punk or H.R. Giger or cartoons or anime. Or dream up new fantasy worlds instantaneously. Explore them, play them, shape them like Minecraft-becomes-holodeck. With physics and tactile responses.

I'm so excited for everything happening in graphics right now.

Keep it up! You're at the forefront!

_verandaguy•1mo ago
I know enough about 3D rendering to know that Gaussian splatting's one of the Big New Things in high-performance rendering, so I understand that this is a big deal -- but I can't quantify why, or how big a deal it is.

Could you or someone else wise in the ways of graphics give me a layperson's rundown of how this works, why it's considered so important, and what the technical challenges are given that an RGB+D(epth?) stream is the input?

markisus•1mo ago
Gaussian Splatting allows you to create a photorealistic representation of an environment from just a collection of images. Philosophically, this is a form of geometric scene understanding from raw pixels, which has been a holy grail of computer vision since the beginning.

Usually creating a Gaussian splat representation takes a long time and uses an iterative gradient-based optimization procedure. Using RGBD helps me sidestep this optimization, as much of the geometry is already present in the depth channel, which is what enables the real-time aspect of my technique.

When you say "big deal", I imagine you are also asking about business or societal implications. I can't really speak on those, but I'm open to licensing this IP to any companies which know about big business applications :)

corysama•1mo ago
So, is there some amount of gradient-based optimization going on here? I see RGBD input, transmission, RGBD output. But, other than multi-camera registration, it's difficult to determine what processing took place between input and transmission. What makes this different from RGBD camera visualizations from 10 years ago?
markisus•1mo ago
There is no gradient-based optimization. It's (RGBD input, Current Camera Pose) -> Neural Net -> Gaussian Splat output.

I'm not aware of other live RGBD visualizations except for direct pointcloud rendering. Compared to pointclouds, splats are better able to render textures, view-dependent effects, and occlusions.

rsp1984•1mo ago
Except that no view-dependent effects that would benefit from multi-view consistency are present in your splats.

So yes, it's very much like the RGB-D visualizations from 10 years ago, just with splats instead of points.

markisus•1mo ago
Here is an example of a view dependent effect produced by LiveSplat [1]. Look closely at the wooden chair handle as the view changes.

I'll concede that ten years ago, someone could have done this. But no one did, as far as I know.

[1] https://imgur.com/a/2yA7eMU

_verandaguy•1mo ago
Thanks! That makes a lot of sense, I might dig into this after work some more.

By "big deal," I meant more for people specializing around computer graphics, computer vision, or even narrower subfields of either of those two -- a big deal from an academic interest perspective.

Sure, this might also have implications in society and business, but I'm a nerd, and I appreciate a good nerding out over something cool, niche, and technically impressive.

sendfoods•1mo ago
Please excuse my naive question - isn't Gaussian Splatting usually used to create 3D imagery from 2D? How does providing 3D input data make sense in this context?
ttoinou•1mo ago
Well if you have the D channel you might as well benefit from it and have better output
markisus•1mo ago
Yes, the normal case uses 2D input, but it can take hours to create the scene. Using the depth channel allows me to create the scene in 33 milliseconds, from scratch, every frame. You could conceptualize this as a compromise between raw pointcloud rendering and fully precomputed Gaussian splat rendering. With pointclouds, you have a lot of visual artifacts due to sparsity (low texture information, seeing "through" objects). With Gaussian splatting, you can transfer a lot more of the 2D texture information into 3D space and render occlusion and view-dependent effects better.
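For the "geometry is already in the depth channel" part, the standard pinhole unprojection is what turns each depth pixel into a candidate splat center. A small sketch of that step (generic math, not LiveSplat's code):

    import numpy as np

    def depth_to_splat_means(depth, fx, fy, cx, cy):
        """Unproject every depth pixel into a 3D point (a candidate splat center)."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        z = depth                                       # metric depth per pixel
        x = (u - cx) * z / fx                           # pinhole back-projection along each ray
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1)             # (H, W, 3) means in camera coordinates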
Retr0id•1mo ago
How do the view-dependent effects get "discovered" from only a single source camera angle?
markisus•1mo ago
Actually there are multiple source cameras. The neural net learns to interpolate the source camera colors based on where the virtual camera is. Under the hood it's hard to say exactly what's going on in the mind of the neural net, but I think it's something like "If I'm closer to camera A, take most of the color from camera A."
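What the net actually computes isn't inspectable, but the intuition described above is roughly a proximity-weighted blend of the source cameras. A toy sketch of that idea (an illustration, not LiveSplat's code):

    import numpy as np

    def blend_colors(virtual_cam_pos, source_cam_positions, source_colors, eps=1e-6):
        """Weight each source camera's color by its inverse distance to the virtual camera."""
        d = np.linalg.norm(source_cam_positions - virtual_cam_pos, axis=1)  # (N,) distances
        w = 1.0 / (d + eps)
        w /= w.sum()
        # source_colors: (N, 3) RGB samples of the same surface point seen by N cameras
        return (w[:, None] * source_colors).sum(axis=0)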
ttoinou•1mo ago
So we’re not sure how it works exactly?
markisus•1mo ago
Yup, this is the case for all neural nets.
BSVogler•1mo ago
Gaussian splatting does not use neural nets. It runs an optimizer on the Gaussian splat parameters. I think in your comment you are talking about Neural Radiance Fields (NeRFs).
Retr0id•1mo ago
Traditionally you'd use an optimizer, but OP isn't doing it traditionally, which is what makes it interesting. NeRFs work differently.
sendfoods•1mo ago
That makes things clearer, thanks!
jayd16•1mo ago
Splatting is about building a scene that supports synthetic view angles.

The depth is helpful to properly handle the parallaxing of the scene as the view angle changes. The system should then ideally "in-paint" the areas that are occluded from the input.

You can either guess the input depth from matching multiple RGB inputs or just use depth inputs along with RGB inputs if you have them. It's not fundamental to the process of building the splats either way.

yuchi•1mo ago
The output looks terribly similar to what sci-fi movies envisioned as 3D reconstruction of scenes. It is absolutely awesome. Now, if we could project them in 3D… :)
tough•1mo ago
Apple Vision maybe?
drac89•1mo ago
maybe
mandeepj•1mo ago
Another implementation of splat https://github.com/NVlabs/InstantSplat
jasonjmcghee•1mo ago
The quality is better, no doubt, but this method takes on the order of 10-45s depending on the input, per the table in their paper. Which is much better than 10 minutes, etc.

That being said, afaict OP's method is 1000x faster, at 33ms.

markisus•1mo ago
Note that the method you linked is "Splatting in Seconds" whereas real-time requires splatting in tens of milliseconds.

I'm also following this work https://guanjunwu.github.io/4dgs/ which produces temporal Gaussian splats but takes at least half an hour to learn the scene.

metalrain•1mo ago
How did you train this? I'm thinking there isn't reference output mapping live video frames to splats, so supervised learning wouldn't work.

Is there some temporal accumulation?

markisus•1mo ago
There is no temporal accumulation, but I think that's the next logical step.

Supervised learning actually does work. Suppose you have four cameras. You input three of them into the net and use the fourth as the ground truth. The live video aspect just emerges from re-running the neural net every frame.
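A rough sketch of that leave-one-out setup (hypothetical names; it assumes a differentiable splat renderer, which isn't detailed above):

    import torch.nn.functional as F

    def training_step(splat_net, renderer, frames, poses, optimizer, held_out=3):
        """Leave-one-out supervision: predict splats from three cameras, render the fourth."""
        inputs = [f for i, f in enumerate(frames) if i != held_out]
        splats = splat_net(inputs)                         # feed-forward prediction
        pred = renderer(splats, poses[held_out])           # differentiable render at held-out pose
        loss = F.l1_loss(pred, frames[held_out][..., :3])  # compare against held-out RGB
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()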

corysama•1mo ago
So, I see livesplat_realsense.py imports livesplat. Where’s livesplat?
IshKebab•1mo ago
The README says it's closed source.
markisus•1mo ago
I've tried to make it clear in the link that the actual application is closed source. I'm distributing it as a .whl full of binaries (see the installation instructions).

I've considered publishing the source, but it depends on some proprietary utility libraries from my bigger project that are hard to fully disentangle. I'm also not sure whether this project has business applications, and I'd like to keep that door open at this time.

armchairhacker•1mo ago
Gaussian Splatting looks pretty and realistic in a way unlike any other 3D render, except UE5 and some hyper-realistic not-realtime renders.

I wonder if one can go the opposite route and use gaussian splatting or (more likely) some other method to generate 3D/4D scenes from cartoons. Cartoons are famously hard to emulate in 3D even entirely manually; like with traditional realistic renders (polygons, shaders, lighting, post-processing) vs gaussian splats, maybe we need a fundamentally different approach.

spyder•1mo ago
Correct me if I'm wrong, but from the video this just looks like a 3D point cloud using equal-sized "Gaussians" (soft spheres) for each pixel, which is why it still looks pixelated, especially at the edges. Even at low resolution, real Gaussian splatting artifacts look different, with spikes and soft blobs in the lower-resolution parts. So this isn't really doing what real Gaussian splatting does, combining different-sized, view-dependent elliptical Gaussian splats to reconstruct the scene, and it also doesn't seem to reproduce the radiance field the way real Gaussian splatting does.
markisus•1mo ago
I had to make a lot of concessions to make this work in real-time. There is no way that I know to replicate the fidelity of "actual" Gaussian splatting training process within the 33ms frame budget.

However, I have not baked the size or orientation into the system. Those are "chosen" by the neural net based on the input RGBD frames. The view dependent effects are also "chosen" by the neural net, but not through an explicit radiance field. If you run the application and zoom in, you will be able to see the splats of different sizes pointing in different directions. The system has limited ability to readjust the positions and sizes due to the compute budget, which leads to the pixelated effect.

markisus•1mo ago
I've uploaded a screenshot from LiveSplat where I zoomed in a lot on a piece of fabric. You can see that there is actually a lot of diversity in the shape, orientation, and opacity of the Gaussians produced [1].

[1] https://imgur.com/a/QXxCakM

kookamamie•1mo ago
[flagged]
dang•1mo ago
Whoa—please don't be a jerk on HN and especially not when discussing other people's work.

You broke the site guidelines badly here (https://news.ycombinator.com/newsguidelines.html), and the Show HN guidelines even more so (https://news.ycombinator.com/showhn.html).

If you wouldn't mind reviewing those links and sticking to the rules when posting to HN, we'd appreciate it.

whywhywhywhy•1mo ago
Would be good to see how it's different from just the depth channel applied to the Z of the RGB pixels. Because it looks very similar to that.
markisus•1mo ago
The application has this feature and lets you switch back and forth. What you are talking about is the standard pointcloud rendering algorithm. I have an older video where I display the corresponding pointcloud [1] in a small picture in picture frame so you can compare.

I actually started with pointclouds for my VR teleoperation system but I hated how ugly it looked. You end up seeing through objects and objects becoming unparseable if you get too close. Textures present in the RGB frame also become very hard to make out because everything becomes "pointilized". In the linked video you can make out the wood grain direction in the splat rendering, but not in the pointcloud rendering.

[1] https://youtu.be/-u-e8YTt8R8?si=qBjYlvdOsUwAl5_r&t=14

badmonster•1mo ago
What is the expected frame rate and latency when running on a typical setup with one Realsense camera and an RTX 3060?
markisus•1mo ago
I don't have a 3060 at hand so I'm not sure. Ideally someone with that setup will try it out and report back. There is no noticeable latency when comparing visually with standard pointcloud rendering.

As for framerate, there are two different frame rates that matter. One is the splat construction framerate, which is the speed at which an entirely new set of Gaussians can be constructed. LiveSplat can usually maintain 30fps in this case.

The second is the splat rendering framerate. In VR this is important to prevent motion sickness. Even if you have a static set of splats, you need the rendering to react to the user's minor head movements at around 90fps for the best in-headset experience.

All these figures are on my setup with a 4090 but I have gotten close results with a 3080 (maybe 70fps splat rendering instead of 90fps).
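One common way to decouple those two rates is a latest-value handoff between a construction thread and the render loop. A sketch of that pattern (an assumption about the architecture, not a description of LiveSplat's internals):

    import threading

    class SplatBuffer:
        """Latest-splat handoff between a ~30fps construction thread and a ~90fps render loop."""
        def __init__(self):
            self._lock = threading.Lock()
            self._splats = None

        def publish(self, splats):   # called by the construction loop when a new set is ready
            with self._lock:
                self._splats = splats

        def latest(self):            # called by the render loop on every headset frame
            with self._lock:
                return self._splats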

smusamashah•1mo ago
The demo video does not show constructing 3D from input. Is it possible to do something like that with this? Take a continuous feed of a static scene and keep improving the 3D view?

This is what I thought from the title, but the demo video is just a continuously changing stream of points/splats with the video.

markisus•1mo ago
If the scene is static, the normal Gaussian splatting pipeline will give much better results. You take a bunch of photos and then let the optimizer run for a while to create the scene.
drewbeck•1mo ago
imo this is a key component of a successful VR future for live events. Many cameras at a venue, viewers strap on a headset at home and get to sit/stand anywhere in the room and see the show.

Also I love the example video. Folks could make some killer music videos with this tech.

asadm•1mo ago
This is amazing! Video calls of the future (this + vision pro) would be lovely.
hi_hi•1mo ago
While undoubtedly technically impressive, this left me a little confused. Let me explain.

What I think I'm seeing is like one of those social media posts where someone has physically printed out a tweet, taken a photo of themselves holding the printout, and then posted the photo in another social media post.

Is the video showing me a different camera perspective than what was originally captured, or is this taking a video feed, doing technical magic to convert to gaussian splats, and then converting it back into a (lower quality) video of the same view?

Again, congratulations, this is amazing from a technical perspective, I'm just trying to understand some of the potential applications it might have.

markisus•1mo ago
Yes, this converts the video stream (plus depth) into Gaussian splats on the fly. While the system is running you can move the camera around to view the splats at different angles.

I took a screen recording of this system as it was running and cut it into clips to make the demo video.

I hope that makes sense?

donclark•1mo ago
Holodeck coming soon?
