Just to clarify: ROSplat isn't generating the Gaussians; it's not a SLAM algorithm or a reconstruction tool. It's purely a visualizer that uses ROS for message passing. The idea is that if you already have a system producing Gaussians (either live or precomputed), ROSplat lets you stream and view them in real time as the ROS messages arrive.
So in your example, yes, you could upload a pre-baked GSplat, calibrate it to the robot’s frame, and use it for navigation or visualization. Or, if your ROS device is running something like SLAM, it can publish Gaussians as it goes. In both cases, ROSplat is just making them available for visualization, nothing more.
And I completely agree with you on your last comment. VR Gaussians are the way to go; I know the company Varjo is currently working on them. Not sure if there's anything else available tho :/
[1] https://www.gracia.ai/ [2] https://github.com/playcanvas/supersplat
shadygm•6h ago
The main idea behind ROSplat is to make it easier to send and visualize Gaussians over the network, especially in robotics applications. For instance, imagine you're running a SLAM algorithm on a mobile robot and generating Gaussians as part of the mapping or localization process. With ROSplat, you can stream those Gaussians via ROS messages and visualize them live on another machine. It's mostly a visualization tool that uses ROS for communication, making it accessible and convenient for robotics engineers and researchers already working within that ecosystem.
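To give a feel for what travels over the wire, here's a minimal sketch of a per-Gaussian payload. This is a hypothetical layout I made up for illustration, not ROSplat's actual message definition (check the repo for that); it assumes each Gaussian carries a position, a rotation quaternion, per-axis scales, an opacity, and an RGB color:

```python
import numpy as np

# Hypothetical flat layout for one Gaussian (NOT ROSplat's actual schema):
# 3 floats position + 4 floats rotation + 3 floats scale
# + 1 float opacity + 3 floats RGB = 14 float32s = 56 bytes per Gaussian.
GAUSSIAN_DTYPE = np.dtype([
    ("position", np.float32, 3),
    ("rotation", np.float32, 4),   # unit quaternion (w, x, y, z)
    ("scale",    np.float32, 3),   # per-axis extents of the Gaussian
    ("opacity",  np.float32),
    ("color",    np.float32, 3),   # RGB in [0, 1]
])

def pack_gaussians(n: int, rng: np.random.Generator) -> bytes:
    """Serialize n random Gaussians into the byte blob a message would carry."""
    g = np.zeros(n, dtype=GAUSSIAN_DTYPE)
    g["position"] = rng.normal(size=(n, 3))
    q = rng.normal(size=(n, 4))
    g["rotation"] = q / np.linalg.norm(q, axis=1, keepdims=True)
    g["scale"] = rng.uniform(0.01, 0.1, size=(n, 3))
    g["opacity"] = rng.uniform(0.0, 1.0, size=n)
    g["color"] = rng.uniform(0.0, 1.0, size=(n, 3))
    return g.tobytes()

def unpack_gaussians(blob: bytes) -> np.ndarray:
    """Recover the structured array on the receiving (visualizer) side."""
    return np.frombuffer(blob, dtype=GAUSSIAN_DTYPE)

blob = pack_gaussians(1000, np.random.default_rng(0))
print(len(blob))                                  # 56 bytes/Gaussian -> 56000
print(unpack_gaussians(blob)["position"].shape)   # (1000, 3)
```

At 56 bytes per Gaussian uncompressed, a million-splat scene is around 56 MB per full update, which is why people in this thread talk about incremental updates and compression.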
Just to clarify, ROSplat isn’t aiming to be faster than state-of-the-art rendering methods. The actual rendering is done with OpenGL, not ROS, so there’s no performance claim there. ROS is just used for the messaging, which does introduce a bit of overhead, but the benefit is in the ease of integration and live data sharing in robotics setups.
Also, I wrote a simple technical report explaining some things in more detail; you can find it in the repo!
Hope that clears things up a bit!
hirako2000•5h ago
Today, generating a static point cloud with Gaussians involves:
- an offline, far-from-realtime process to generate spatial information from 2D captures. LiDAR captures may help, but they don't drastically cut down this heavy step;
- a "training" step to generate Gaussian information from the 2D captures and geospatial data.
Unless I'm referring to an antique workflow, or my RTX GPU is too consumer-grade, how would all of this perform on embedded systems well enough to make fast communication of Gaussians relevant?
shadygm•5h ago
The offline method still generates significantly higher-resolution scenes, of course, but as time goes on, real-time Gaussian Splatting will become more common and will approach the quality of offline methods.
This means that in the near future, we will be able to generate highly realistic scenes using Gaussian Splats on a smart edge + mobile robot in real-time and pass the splats via ROS onto another device running ROSplat (or other) and perform the visualisation there.
hirako2000•4h ago
When I generate on a GPU, I can barely fit a large scene in 12GB of memory, and it takes many hours to produce Gaussians over 30k training steps.
I'm sure the tech will evolve, hardware too. We are just 5y away.
I respect you open-sourcing your work; it is innovative. But it feels like a trophy splash. I suggest putting a link to something substantial, perhaps a page explaining where the tech will land and how this project fits that future, rather than a link to a LinkedIn profile.
shadygm•3h ago
I did not put a LinkedIn link in the post or repo, but I totally get your point about wanting something more substantial to explain the bigger picture.
A lot of the motivation and reasoning behind the project is already included in the technical report PDF attached to the repository; I tried to make it as self-contained as possible for those curious about the background and use cases.
That said, if I find some time, I’ll definitely consider putting together a separate page to outline where I think this kind of tool fits into the broader future of GS and robotics.
Thanks again!
markisus•2h ago
I believe the quality of realtime Gaussian splatting will improve with time. The OP's project could help ROS2 users take advantage of those new techniques. Someone might need to make a Gaussian splat video codec to bring down the bandwidth cost of streaming Gaussians.
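As a toy illustration of how much even naive quantization could save (my own sketch, not any existing codec): packing positions into 16-bit fixed point over a known scene bounding box halves their size relative to float32, at roughly sub-millimeter error for a 20 m scene.

```python
import numpy as np

def quantize_positions(pos: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Map float32 positions in [lo, hi] to uint16 (2 bytes/axis vs 4)."""
    scaled = (pos - lo) / (hi - lo)              # normalize to [0, 1]
    return np.round(scaled * 65535).astype(np.uint16)

def dequantize_positions(q: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Recover approximate float32 positions on the receiving side."""
    return q.astype(np.float32) / 65535 * (hi - lo) + lo

rng = np.random.default_rng(0)
pos = rng.uniform(-10, 10, size=(1000, 3)).astype(np.float32)

q = quantize_positions(pos, -10.0, 10.0)
restored = dequantize_positions(q, -10.0, 10.0)

print(q.nbytes, pos.nbytes)           # 6000 vs 12000: half the bytes
print(np.abs(restored - pos).max())   # worst case ~0.15 mm over a 20 m range
```

A real codec would go much further (entropy coding, delta updates between frames, pruning low-opacity splats), but even this kind of per-attribute quantization is a big win for streaming over ROS topics.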
Another application could be for visualizing your robot inside a pre-built map, or for providing visual models for known objects that the robot needs to interact with. Photometric losses could then be used to optimize the poses of these known objects.
[1] https://www.reddit.com/r/GaussianSplatting/comments/1iyz4si/...