
Building a Procedural Hex Map with Wave Function Collapse

https://felixturner.github.io/hex-map-wfc/article/
266•imadr•4h ago•38 comments

JSLinux Now Supports x86_64

https://bellard.org/jslinux/
148•TechTechTech•4h ago•33 comments

Thomas Selfridge: The First Airplane Fatality

https://www.amusingplanet.com/2026/03/thomas-selfridge-first-airplane-fatality.html
13•Hooke•57m ago•2 comments

Is legal the same as legitimate: AI reimplementation and the erosion of copyleft

https://writings.hongminhee.org/2026/03/legal-vs-legitimate/
196•dahlia•6h ago•191 comments

Show HN: The Mog Programming Language

https://moglang.org
77•belisarius222•3h ago•34 comments

Things I've Done with AI

https://sjer.red/blog/2026/built-with-ai/
28•shepherdjerred•2h ago•18 comments

DARPA's new X-76

https://www.darpa.mil/news/2026/darpa-new-x-76-speed-of-jet-freedom-of-helicopter
98•newer_vienna•4h ago•94 comments

Bluesky CEO Jay Graber is stepping down

https://bsky.social/about/blog/03-09-2026-a-new-chapter-for-bluesky
167•minimaxir•2h ago•167 comments

Launch HN: Terminal Use (YC W26) – Vercel for filesystem-based agents

56•filipbalucha•4h ago•43 comments

Florida judge rules red light camera tickets are unconstitutional

https://cbs12.com/news/local/florida-news-judge-rules-red-light-camera-tickets-unconstitutional
186•1970-01-01•4h ago•284 comments

Fixfest is a global gathering of repairers, tinkerers, and activists

https://fixfest.therestartproject.org/
113•robtherobber•3h ago•11 comments

Fontcrafter: Turn Your Handwriting into a Real Font

https://arcade.pirillo.com/fontcrafter.html
383•rendx•12h ago•127 comments

Rethinking Syntax: Binding by Adjacency

https://github.com/manifold-systems/manifold/blob/master/docs/articles/binding_exprs.md
29•owlstuffing•1d ago•10 comments

Show HN: DenchClaw – Local CRM on Top of OpenClaw

https://github.com/DenchHQ/DenchClaw
62•kumar_abhirup•6h ago•63 comments

Oracle is building yesterday's data centers with tomorrow's debt

https://www.cnbc.com/2026/03/09/oracle-is-building-yesterdays-data-centers-with-tomorrows-debt.html
31•spenvo•53m ago•6 comments

Restoring a Sun SPARCstation IPX part 1: PSU and NVRAM (2020)

https://www.rs-online.com/designspark/restoring-a-sun-sparcstation-ipx-part-1-psu-and-nvram
77•ibobev•6h ago•44 comments

Velxio, Arduino Emulator

https://velxio.dev/
23•dmonterocrespo•1d ago•7 comments

Flash media longevity testing – 6 years later

https://old.reddit.com/r/DataHoarder/comments/1q6xnun/flash_media_longevity_testing_6_years_later/
107•1970-01-01•1d ago•52 comments

The Most Beautiful Freezer in the World: Notes on Baking at the South Pole

https://www.newyorker.com/culture/the-weekend-essay/the-most-beautiful-freezer-in-the-world
12•mitchbob•2h ago•2 comments

So you want to write an "app" (2025)

https://arcanenibble.github.io/so-you-want-to-write-an-app.html
3•jmusall•39m ago•0 comments

Workers report watching Ray-Ban Meta-shot footage of people using the bathroom

https://arstechnica.com/gadgets/2026/03/workers-report-watching-ray-ban-meta-shot-footage-of-peop...
103•randycupertino•2h ago•33 comments

Durdraw – ANSI art editor for Unix-like systems

https://durdraw.org/
17•caminanteblanco•2h ago•9 comments

An opinionated take on how to do important research that matters

https://nicholas.carlini.com/writing/2026/how-to-win-a-best-paper-award.html
46•mad•5h ago•6 comments

Ireland shuts last coal plant, becomes 15th coal-free country in Europe (2025)

https://www.pv-magazine.com/2025/06/20/ireland-coal-free-ends-coal-power-generation-moneypoint/
777•robin_reala•11h ago•486 comments

Reverse-engineering the UniFi inform protocol

https://tamarack.cloud/blog/reverse-engineering-unifi-inform-protocol
126•baconomatic•8h ago•54 comments

No leap second will be introduced at the end of June 2026

https://lists.iana.org/hyperkitty/list/tz@iana.org/thread/P6D36VZSZBUSSTSMZKFXKF4T4IXWN23P/
53•speckx•9h ago•59 comments

Jolla on track to ship new phone with Sailfish OS, user-replaceable battery

https://liliputing.com/the-new-jolla-phone-with-sailfish-os-is-on-track-to-start-shipping-in-the-...
152•heresie-dabord•4h ago•98 comments

FreeBSD Capsicum vs. Linux Seccomp Process Sandboxing

https://vivianvoss.net/blog/capsicum-vs-seccomp
102•vermaden•8h ago•39 comments

Teenagers report for duty as Croatia reinstates conscription

https://www.bbc.com/news/articles/c93j2l32lzgo
7•tartoran•2h ago•1 comment

US Court of Appeals: TOS may be updated by email, use can imply consent [pdf]

https://cdn.ca9.uscourts.gov/datastore/memoranda/2026/03/03/25-403.pdf
500•dryadin•15h ago•393 comments

Gaussian Splatting Meets ROS2

https://github.com/shadygm/ROSplat
61•shadygm•10mo ago

Comments

arijun•10mo ago
This page is pretty light on the what and why. I gather it’s using ROS (which I had to look up to confirm means robot operating system) to render Gaussian splatting. And that’s faster than a dedicated GPU renderer? Doesn’t ROS add overhead in the form of message passing?
inhumantsar•10mo ago
It's for visualizing a robot's camera data in 3D space.
shadygm•10mo ago
Hey! Great question, and thanks for taking a look!

The main idea behind ROSplat is to make it easier to send and visualize Gaussians over the network, especially in robotics applications. For instance, imagine you're running a SLAM algorithm on a mobile robot and generating Gaussians as part of the mapping or localization process. With ROSplat, you can stream those Gaussians via ROS messages and visualize them live on another machine. It's mostly a visualization tool that uses ROS for communication, making it accessible and convenient for robotics engineers and researchers already working within that ecosystem.

Just to clarify, ROSplat isn’t aiming to be faster than state-of-the-art rendering methods. The actual rendering is done with OpenGL, not ROS, so there’s no performance claim there. ROS is just used for the messaging, which does introduce a bit of overhead, but the benefit is in the ease of integration and live data sharing in robotics setups.

Also, I wrote a simple technical report explaining some things in more detail, you can find it in the repo!

Hope that clears things up a bit!
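As a rough sketch of what streaming Gaussians as messages can look like at the byte level, here is a minimal Python example that packs one Gaussian into a fixed-size payload. The field layout here is an assumption for illustration only; ROSplat defines its own ROS2 message types in the repository.

```python
import struct

# A single 3D Gaussian serialized into a hypothetical fixed-size message
# payload (little-endian float32). Layout is illustrative, not ROSplat's.
def pack_gaussian(position, rotation, scale, opacity, rgb):
    """Pack one Gaussian into a 56-byte binary blob."""
    fields = list(position) + list(rotation) + list(scale) + [opacity] + list(rgb)
    return struct.pack("<14f", *fields)  # 3 + 4 + 3 + 1 + 3 = 14 floats

def unpack_gaussian(blob):
    """Inverse of pack_gaussian: recover the named fields from the blob."""
    vals = struct.unpack("<14f", blob)
    return {
        "position": vals[0:3],   # x, y, z
        "rotation": vals[3:7],   # quaternion (w, x, y, z)
        "scale":    vals[7:10],  # per-axis extents
        "opacity":  vals[10],
        "rgb":      vals[11:14], # base color (no spherical harmonics here)
    }

blob = pack_gaussian((1.0, 2.0, 3.0), (1.0, 0.0, 0.0, 0.0),
                     (0.1, 0.1, 0.1), 0.8, (0.9, 0.2, 0.2))
g = unpack_gaussian(blob)
print(len(blob))       # 56
print(g["position"])   # (1.0, 2.0, 3.0)
```

In a real setup these blobs would travel inside a ROS2 message (e.g. in a byte array field) and be unpacked by the visualizer as they arrive.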

hirako2000•10mo ago
I'm confused here despite the detailed explanation of the use case.

Today, generating a static point cloud with Gaussians involves:

- an offline, far-from-realtime process to generate spatial information from 2D captures (LiDAR captures may help, but don't drastically cut down this heavy step)
- a "training" step to generate Gaussian information from the 2D captures and geospatial data

Unless I'm referring to an antique workflow, or my RTX GPU is too consumer-grade, how would all of this perform on embedded systems to make fast communication of Gaussians relevant?

shadygm•10mo ago
There are some algorithms, such as Photo-SLAM and Gaussian Splatting SLAM (though the latter is far heavier and slower), that show it is indeed possible to estimate position and generate Gaussians in real time. These are definitely still the early days for these techniques, though.

The offline method still generates significantly higher-resolution scenes, of course, but as time goes on, real-time Gaussian Splatting will become more common and will get close to offline methods.

This means that in the near future, we will be able to generate highly realistic scenes using Gaussian splats on a smart edge device or mobile robot in real time, pass the splats via ROS to another device running ROSplat (or similar), and perform the visualization there.

hirako2000•10mo ago
OK, thanks for your projections.

Generating on a GPU, I can barely fit a large scene in 12 GB of memory, and it takes many hours to produce 30k-step Gaussians.

I'm sure the tech will evolve, and hardware too. We are just 5 years away.

I respect you open-sourcing your work; it is innovative. But the page feels like a trophy splash. I'd suggest linking to something substantial, perhaps a page explaining where the tech will land and how this project fits that future, rather than a link to some LinkedIn.

shadygm•10mo ago
Hey, I appreciate the feedback.

I did not put a LinkedIn link in the post or repo, but I totally get your point about wanting something more substantial to explain the bigger picture.

A lot of the motivation and reasoning behind the project is already included in the technical report PDF attached in the repository; I tried to make it as self-contained as possible for those curious about the background and use cases.

That said, if I find some time, I’ll definitely consider putting together a separate page to outline where I think this kind of tool fits into the broader future of GS and robotics.

Thanks again!

somethingsome•10mo ago
I'm very curious about that. My typical training with ~25-30 high-quality cameras takes around 20 minutes and a few GB of memory on a single GPU. What is the size of your large-scale scenes? I see many possible optimizations to lower that GB count and training time.
markisus•10mo ago
I recently did a proof of concept to generate Gaussian splats from depth cameras in real time. The intended application is robotics and teleoperation. I made a post on reddit [1] a while back if you're interested.

I believe the quality of realtime Gaussian splatting will improve with time. The OP's project could help ROS2 users take advantage of those new techniques. Someone might need to make a Gaussian-splat video codec to bring down the bandwidth cost of streaming Gaussians.

Another application could be for visualizing your robot inside a pre-built map, or for providing visual models for known objects that the robot needs to interact with. Photometric losses could then be used to optimize the poses of these known objects.

[1] https://www.reddit.com/r/GaussianSplatting/comments/1iyz4si/...
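A quick back-of-envelope calculation shows why that bandwidth cost matters. The 59-float layout assumed below mirrors a common 3DGS export format (position, quaternion, per-axis scale, opacity, degree-3 spherical-harmonic color); actual ROS message sizes will differ.

```python
# Back-of-envelope bandwidth estimate for naively streaming raw splats.
FLOATS_PER_GAUSSIAN = 3 + 4 + 3 + 1 + 48      # position, quat, scale,
BYTES_PER_GAUSSIAN = FLOATS_PER_GAUSSIAN * 4  # opacity, SH color; float32

def stream_mbps(num_gaussians, updates_per_second):
    """Megabits per second to retransmit the full splat set each update."""
    return num_gaussians * BYTES_PER_GAUSSIAN * 8 * updates_per_second / 1e6

# A modest 100k-Gaussian scene, fully retransmitted once per second:
print(round(stream_mbps(100_000, 1), 1))  # 188.8
```

Even a modest scene retransmitted naively once per second approaches 200 Mbps, which is why incremental updates or a dedicated codec would help.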

jimmySixDOF•10mo ago
So I upload a pre-baked GSplat of the ground-state physical space (presumably there is some kind of calibration), then I can navigate the ROS device spatially using the GSplat to reflect position details instead of, or in addition to, actual camera feeds? Or are they producing the splats somehow on the ROS device with limited camera poses? Whatever the case may be, I still think the human-controller side is where splats are more useful, so add a VR headset into the loop and I think this could open up real opportunities, for example spatial minimaps, decoupled points of view, etc.
shadygm•10mo ago
Thanks for taking a look!

Just to clarify, ROSplat isn’t generating the Gaussians, it’s not a SLAM algorithm or a reconstruction tool. It’s purely a visualizer that uses ROS for message passing. The idea is that if you already have a system producing Gaussians (either live or precomputed), ROSplat lets you stream and view them in real time (as the ROS messages arrive).

So in your example, yes, you could upload a pre-baked GSplat, calibrate it to the robot’s frame, and use it for navigation or visualization. Or, if your ROS device is running something like SLAM, it can publish Gaussians as it goes. In both cases, ROSplat is just making them available for visualization, nothing more.

And I completely agree with your last point. VR Gaussians are the way to go; I know a company called Varjo is currently working on them. Not sure if there's anything else available, though :/

dheera•10mo ago
I've actually been pondering using Gaussian splats for localization; I think it could be done. The idea would be to look for the pose that minimizes the MSE in density (rather than feature points or RGB similarity, which are both vulnerable to lighting changes).
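A toy version of that idea, reduced to a 1D translation recovered by grid search over a density-MSE objective (all numbers made up, purely illustrative):

```python
import math

def density(x, centers, sigma=0.5):
    """Density of a map made of isotropic 1D Gaussians."""
    return sum(math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers)

MAP = [0.0, 1.5, 4.0]                    # known Gaussian centers, map frame
TRUE_OFFSET = 1.2                        # robot's actual translation
SAMPLES = [i * 0.1 for i in range(60)]   # sensor sampling positions

# "Observed" density: the map as seen from the (unknown) offset pose.
observed = [density(x + TRUE_OFFSET, MAP) for x in SAMPLES]

def mse(offset):
    """Mean squared error between map density at a candidate pose and obs."""
    return sum((density(x + offset, MAP) - o) ** 2
               for x, o in zip(SAMPLES, observed)) / len(SAMPLES)

# Grid search over candidate offsets in [-1.0, 3.0].
best = min((round(i * 0.01, 2) for i in range(-100, 301)), key=mse)
print(best)  # 1.2
```

A real system would optimize a full SE(3) pose with gradients rather than a grid search, but the objective is the same: match densities, not pixels.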
jimmySixDOF•10mo ago
Varjo are good at whatever they do, but also check out @gracia_vr [1]; they focus on splats in XR. PlayCanvas also has SuperSplat, which lets you view 3DGS in immersive mode [2].

[1] https://www.gracia.ai/ [2] https://github.com/playcanvas/supersplat

gitroom•10mo ago
Nice, these back-and-forths always remind me how much cool stuff is brewing behind the scenes. Tbh I'd love to see more live demos of things like this; it helps my brain get what's really happening.
shadygm•10mo ago
Yeah, I agree; a lack of visuals sometimes makes it hard to tell what's happening when a field moves as fast as GS does. There's a GitHub page called Awesome3DGS [1] that is updated whenever there is a new paper in GS. It helped me a lot when I was getting started.

Most papers also have their own project page that showcases their contributions or demo their project as well (:

[1] https://github.com/MrNeRF/awesome-3D-gaussian-splatting