frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


We Will Not Be Divided

https://notdivided.org
804•BloondAndDoom•3h ago•320 comments

Statement on the comments from Secretary of War Pete Hegseth

https://www.anthropic.com/news/statement-comments-secretary-war
587•surprisetalk•3h ago•202 comments

Don't use passkeys for encrypting user data

https://blog.timcappalli.me/p/passkeys-prf-warning/
39•zdw•1h ago•14 comments

OpenAI agrees with Dept. of War to deploy models in their classified network

https://twitter.com/sama/status/2027578652477821175
164•eoskx•1h ago•102 comments

Smallest transformer that can add two 10-digit numbers

https://github.com/anadim/AdderBoard
96•ks2048•1d ago•33 comments

OpenAI raises $110B on $730B pre-money valuation

https://techcrunch.com/2026/02/27/openai-raises-110b-in-one-of-the-largest-private-funding-rounds...
425•zlatkov•13h ago•484 comments

A new California law says all operating systems need to have age verification

https://www.pcgamer.com/software/operating-systems/a-new-california-law-says-all-operating-system...
463•WalterSobchak•13h ago•439 comments

Qt45: A small polymerase ribozyme that can synthesize itself

https://www.science.org/doi/10.1126/science.adt2760
62•ppnpm•4h ago•11 comments

President Trump bans Anthropic from use in government systems

https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban
136•pkress2•6h ago•156 comments

A Chinese official’s use of ChatGPT revealed an intimidation operation

https://www.cnn.com/2026/02/25/politics/chatgpt-china-intimidation-operation
173•cwwc•12h ago•108 comments

Eschewing Zshell for Emacs Shell (2014)

https://www.howardism.org/Technical/Emacs/eshell-fun.html
16•pvdebbe•3d ago•0 comments

NASA announces overhaul of Artemis program amid safety concerns, delays

https://www.cbsnews.com/news/nasa-artemis-moon-program-overhaul/
226•voxadam•11h ago•249 comments

GitHub Copilot CLI downloads and executes malware

https://www.promptarmor.com/resources/github-copilot-cli-downloads-and-executes-malware
14•sarelta•9h ago•2 comments

OpenAI reaches deal to deploy AI models on U.S. DoW classified network

https://www.reuters.com/business/openai-reaches-deal-deploy-ai-models-us-department-war-classifie...
27•erhuve•1h ago•9 comments

A better streams API is possible for JavaScript

https://blog.cloudflare.com/a-better-web-streams-api/
386•nnx•14h ago•134 comments

Emuko: Fast RISC-V emulator written in Rust, boots Linux

https://github.com/wkoszek/emuko
55•felipap•5h ago•4 comments

Croatia declared free of landmines after 31 years

https://glashrvatske.hrt.hr/en/domestic/croatia-declared-free-of-landmines-after-31-years-12593533
8•toomuchtodo•1h ago•1 comments

I am directing the Department of War to designate Anthropic a supply-chain risk

https://twitter.com/secwar/status/2027507717469049070
1176•jacobedawson•5h ago•970 comments

Show HN: Claude-File-Recovery, recover files from your ~/.claude sessions

https://github.com/hjtenklooster/claude-file-recovery
65•rikk3rt•11h ago•20 comments

Show HN: I ported Manim to TypeScript (run 3b1B math animations in the browser)

https://github.com/maloyan/manim-web
4•maloyan•2d ago•0 comments

Get free Claude max 20x for open-source maintainers

https://claude.com/contact-sales/claude-for-oss
489•zhisme•19h ago•205 comments

Open source calculator firmware DB48X forbids CA/CO use due to age verification

https://github.com/c3d/db48x/commit/7819972b641ac808d46c54d3f5d1df70d706d286
153•iamnothere•12h ago•75 comments

Let's discuss sandbox isolation

https://www.shayon.dev/post/2026/52/lets-discuss-sandbox-isolation/
115•shayonj•9h ago•33 comments

Show HN: Unfucked – version every change between commits - local-first

https://www.unfudged.io/
72•cyrusradfar•1d ago•41 comments

Inventing the Lisa user interface – Interactions

https://dl.acm.org/doi/10.1145/242388.242405
21•rbanffy•2d ago•2 comments

Kyber (YC W23) Is Hiring an Enterprise Account Executive

https://www.ycombinator.com/companies/kyber/jobs/59yPaCs-enterprise-account-executive-ae
1•asontha•9h ago

Writing a Guide to SDF Fonts

https://www.redblobgames.com/blog/2026-02-26-writing-a-guide-to-sdf-fonts/
85•chunkles•10h ago•5 comments

Implementing a Z80 / ZX Spectrum emulator with Claude Code

https://antirez.com/news/160
132•antirez•2d ago•64 comments

Distributed Systems for Fun and Profit

https://book.mixu.net/distsys/single-page.html
38•vinhnx•3d ago•0 comments

Allocating on the Stack

https://go.dev/blog/allocation-optimizations
132•spacey•11h ago•50 comments

Show HN: Real-Time Gaussian Splatting

https://github.com/axbycc/LiveSplat
144•markisus•9mo ago
LiveSplat is a system for turning RGBD camera streams into Gaussian splat scenes in real-time. The system works by passing all the RGBD frames into a feed forward neural net that outputs the current scene as Gaussian splats. These splats are then rendered in real-time. I've put together a demo video at the link above.
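The per-frame flow described above can be caricatured as follows. This is a toy illustration with invented names, not the actual LiveSplat code: the real system uses a learned feed-forward network to predict splat parameters, where this sketch seeds one fixed-size splat per pixel by back-projecting depth through a pinhole camera model.

```python
from dataclasses import dataclass

@dataclass
class Splat:
    position: tuple   # (x, y, z) center in camera space
    scale: tuple      # per-axis extent
    color: tuple      # RGB
    opacity: float

def unproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth using pinhole intrinsics."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)

def frame_to_splats(rgbd_pixels, intrinsics):
    """One splat per valid pixel; a real net would also predict scale,
    orientation, and opacity instead of these fixed heuristics."""
    fx, fy, cx, cy = intrinsics
    splats = []
    for (u, v, r, g, b, d) in rgbd_pixels:
        if d <= 0:  # skip invalid depth readings
            continue
        splats.append(Splat(unproject(u, v, d, fx, fy, cx, cy),
                            (d * 0.01,) * 3,  # crude depth-scaled size
                            (r, g, b), 1.0))
    return splats
```

Rendering the resulting splats each frame is what distinguishes this from a one-off reconstruction: the scene is rebuilt from scratch on every incoming RGBD frame.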

Comments

sreekotay•9mo ago
This is realtime capture/display? Presumably (at this stage) for local viewing? Is that right?
markisus•9mo ago
Yes realtime capture and display. Locality is not required. You can send the source RGBD video streams over IP and in fact I have that component working in the larger codebase that this was split off from. For that use case, you need to do some sort of compression. The RGB stream compression is a pretty solved problem, but the depth channel needs special consideration since "perceptual loss" in the depth space is not a well researched area.
patrick4urcloud•9mo ago
nice
echelon•9mo ago
OP, this is incredible. I worry that people might see a "glitchy 3D video" and not understand the significance of this.

This is getting unreal. They're becoming fast and high fidelity. Once we get better editing capabilities and can shape the Gaussian fields, this will become the prevailing means of creating and distributing media.

Turning any source into something 4D volumetric that you can easily mold as clay, relight, reshape. A fully interactable and playable 4D canvas.

Imagine if the work being done with diffusion models could read and write from Gaussian fields instead of just pixels. It could look like anything: real life, Ghibli, Pixar, whatever.

I can't imagine where this tech will be in five years.

markisus•9mo ago
Thanks so much! Even when I was putting together the demo video I was getting a little self-critical about the visual glitches. But I agree the tech will get better over time. I imagine we will be able to have virtual front row seats at any live event, and many other applications we haven't thought of yet.
echelon•9mo ago
> I imagine we will be able to have virtual front row seats at any live event, and many other applications we haven't thought of yet.

100%. And style-transfer it into steam punk or H.R. Giger or cartoons or anime. Or dream up new fantasy worlds instantaneously. Explore them, play them, shape them like Minecraft-becomes-holodeck. With physics and tactile responses.

I'm so excited for everything happening in graphics right now.

Keep it up! You're at the forefront!

_verandaguy•9mo ago
I know enough about 3D rendering to know that Gaussian splatting's one of the Big New Things in high-performance rendering, so I understand that this is a big deal -- but I can't quantify why, or how big a deal it is.

Could you or someone else wise in the ways of graphics give me a layperson's rundown of how this works, why it's considered so important, and what the technical challenges are given that an RGB+D(epth?) stream is the input?

markisus•9mo ago
Gaussian Splatting allows you to create a photorealistic representation of an environment from just a collection of images. Philosophically, this is a form of geometric scene understanding from raw pixels, which has been a holy grail of computer vision since the beginning.

Usually, creating a Gaussian splat representation takes a long time and uses an iterative gradient-based optimization procedure. Using RGBD lets me sidestep this optimization, since much of the geometry is already present in the depth channel; that is what enables the real-time aspect of my technique.

When you say "big deal", I imagine you are also asking about business or societal implications. I can't really speak on those, but I'm open to licensing this IP to any companies which know about big business applications :)
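For contrast, the iterative procedure being sidestepped looks roughly like this toy gradient loop over splat parameters. Finite differences stand in for backprop, and `render_fn` is a hypothetical differentiable renderer; nothing here reflects any real implementation.

```python
def optimize_splats(params, render_fn, target, lr=0.1, steps=100, h=1e-4):
    """Toy gradient descent: nudge each splat parameter, re-render,
    and step downhill on the squared image error against target views."""
    def loss(p):
        img = render_fn(p)
        return sum((a - b) ** 2 for a, b in zip(img, target))
    p = list(params)
    for _ in range(steps):
        for i in range(len(p)):
            p_hi = p[:]
            p_hi[i] += h
            grad = (loss(p_hi) - loss(p)) / h  # finite-difference gradient
            p[i] -= lr * grad
    return p
```

The standard pipeline runs thousands of such steps over all splat parameters, which is why it takes minutes to hours; skipping it entirely is the point of the depth-seeded approach.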

corysama•9mo ago
So, is there some amount of gradient-based optimization going on here? I see RGBD input, transmission, RGBD output. But, other than multi-camera registration, it's difficult to determine what processing took place between input and transmission. What makes this different from RGBD camera visualizations from 10 years ago?
markisus•9mo ago
There is no gradient-based optimization. It's (RGBD input, Current Camera Pose) -> Neural Net -> Gaussian Splat output.

I'm not aware of other live RGBD visualizations except for direct pointcloud rendering. Compared to pointclouds, splats are better able to render textures, view-dependent effects, and occlusions.

rsp1984•9mo ago
Except that no view-dependent effects that would benefit multi-view consistency are present in your splats.

So yes, it's very much like the RGB-D visualizations from 10 years ago, just with splats instead of points.

markisus•9mo ago
Here is an example of a view dependent effect produced by LiveSplat [1]. Look closely at the wooden chair handle as the view changes.

I'll concede that ten years ago, someone could have done this. But no one did, as far as I know.

[1] https://imgur.com/a/2yA7eMU

_verandaguy•9mo ago
Thanks! That makes a lot of sense, I might dig into this after work some more.

By "big deal," I meant more for people specializing around computer graphics, computer vision, or even narrower subfields of either of those two -- a big deal from an academic interest perspective.

Sure, this might also have implications in society and business, but I'm a nerd, and I appreciate a good nerding out over something cool, niche, and technically impressive.

sendfoods•9mo ago
Please excuse my naive question - isn't Gaussian Splatting usually used to create 3D imagery from 2D? How does providing 3D input data make sense in this context?
ttoinou•9mo ago
Well if you have the D channel you might as well benefit from it and have better output
markisus•9mo ago
Yes, the normal case uses 2D input, but it can take hours to create the scene. Using the depth channel allows me to create the scene in 33 milliseconds, from scratch, every frame. You could conceptualize this as a compromise between raw pointcloud rendering and fully precomputed Gaussian splat rendering. With pointclouds, you have a lot of visual artifacts due to sparsity (low texture information, seeing "through" objects). With Gaussian splatting, you can transfer a lot more of the 2D texture information into 3D space and render occlusion and view-dependent effects better.
Retr0id•9mo ago
How do the view-dependent effects get "discovered" from only a single source camera angle?
markisus•9mo ago
Actually there are multiple source cameras. The neural net learns to interpolate the source camera colors based on where the virtual camera is. Under the hood it's hard to say exactly what's going on in the mind of the neural net, but I think it's something like "If I'm closer to camera A, take most of the color from camera A."
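One plausible reading of that intuition, as a toy: weight each source camera's color by its inverse distance to the virtual viewpoint. This is purely illustrative; the actual network learns whatever weighting it likes.

```python
import math

def blend_colors(virtual_cam, source_cams, colors, eps=1e-6):
    """Inverse-distance weighting: source cameras nearer the virtual
    viewpoint contribute more to the rendered color."""
    weights = [1.0 / (math.dist(virtual_cam, cam) + eps)
               for cam in source_cams]
    total = sum(weights)
    channels = len(colors[0])
    return tuple(sum(w * c[i] for w, c in zip(weights, colors)) / total
                 for i in range(channels))
```

With the virtual camera sitting on top of camera A, the blend collapses to A's color, matching the "take most of the color from the nearest camera" intuition above.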
ttoinou•9mo ago
So we’re not sure how it works exactly ?
markisus•9mo ago
Yup, this is the case for all neural nets.
BSVogler•9mo ago
Gaussian splatting does not use neural nets. It runs an optimizer on the Gaussian splattering parameters. I think in your comment you are talking about Neural Radiance Fields (Nerfs).
Retr0id•9mo ago
Traditionally you'd use an optimizer, but OP isn't doing it traditionally, which is what makes it interesting. NeRFs work differently.
sendfoods•9mo ago
That makes things clearer, thanks!
jayd16•9mo ago
Splatting is about building a scene that supports synthetic view angles.

The depth is helpful to properly handle the parallaxing of the scene as the view angle changes. The system should then ideally "in-paint" the areas that are occluded from the input.

You can either guess the input depth from matching multiple RGB inputs or just use depth inputs along with RGB inputs if you have them. It's not fundamental to the process of building the splats either way.

yuchi•9mo ago
The output looks terribly similar to what sci-fi movies envisioned as 3D reconstruction of scenes. It is absolutely awesome. Now, if we could project them in 3D… :)
tough•9mo ago
Apple Vision maybe?
drac89•9mo ago
maybe
mandeepj•9mo ago
Another implementation of splat https://github.com/NVlabs/InstantSplat
jasonjmcghee•9mo ago
The quality is better, no doubt, but this method (from the paper) takes on the order of 10-45s depending on input, per their table. That's much better than 10 minutes, etc.

That being said, afaict OP's method is 1000x faster, at 33ms.

markisus•9mo ago
Note that the method you linked is "splatting in seconds", whereas real-time requires splatting in tens of milliseconds.

I'm also following this work https://guanjunwu.github.io/4dgs/ which produces temporal Gaussian splats but takes at least half an hour to learn the scene.

metalrain•9mo ago
How did you train this? I'm thinking there isn't reference output for live video frame to splats so supervised learning doesn't work.

Is there some temporal accumulation?

markisus•9mo ago
There is no temporal accumulation, but I think that's the next logical step.

Supervised learning actually does work. Suppose you have four cameras. You input the three of them into the net and use the fourth as the ground truth. The live video aspect just emerges from re-running the neural net every frame.
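The hold-one-out supervision described here can be sketched as below. The function names are hypothetical; `net_fn` and `render_fn` stand in for the actual network and splat renderer, and the loss here is plain L2 over pixels.

```python
def heldout_loss(render_fn, net_fn, cams, images, holdout_idx):
    """Feed all views except one to the net, render the held-out
    viewpoint from the predicted splats, and score against the
    real held-out image."""
    inputs = [(c, im) for i, (c, im) in enumerate(zip(cams, images))
              if i != holdout_idx]
    splats = net_fn(inputs)
    pred = render_fn(splats, cams[holdout_idx])
    truth = images[holdout_idx]
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)
```

Cycling `holdout_idx` over the cameras gives a self-supervised signal from multi-view capture alone, with no ground-truth splats ever needed.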

corysama•9mo ago
So, I see livesplat_realsense.py imports livesplat. Where’s livesplat?
IshKebab•9mo ago
The README says it's closed source.
markisus•9mo ago
I've tried to make it clear in the link that the actual application is closed source. I'm distributing it as a .whl full of binaries (see the installation instructions).

I've considered publishing the source, but it depends on some proprietary utility libraries from my bigger project that are hard to fully disentangle. I'm also not sure whether this project has some business applications, and I'd like to keep that door open at this time.

armchairhacker•9mo ago
Gaussian Splatting looks pretty and realistic in a way unlike any other 3D render, except UE5 and some hyper-realistic not-realtime renders.

I wonder if one can go the opposite route and use gaussian splatting or (more likely) some other method to generate 3D/4D scenes from cartoons. Cartoons are famously hard to emulate in 3D even entirely manually; like with traditional realistic renders (polygons, shaders, lighting, post-processing) vs gaussian splats, maybe we need a fundamentally different approach.

spyder•9mo ago
Correct me if I'm wrong, but looking at the video this just looks like a 3D pointcloud using equal-sized "gaussians" (soft spheres) for each pixel, which is why it still looks pixelated, especially at the edges. Even at low resolution, real Gaussian splatting artifacts look different, with spikes and soft blobs in the lower-resolution parts. So this isn't really doing what real Gaussian splatting does: combining different-sized, view-dependent elliptical Gaussian splats to reconstruct the scene. It also doesn't seem to reproduce the radiance field the way real Gaussian splatting does.
markisus•9mo ago
I had to make a lot of concessions to make this work in real-time. There is no way that I know to replicate the fidelity of "actual" Gaussian splatting training process within the 33ms frame budget.

However, I have not baked the size or orientation into the system. Those are "chosen" by the neural net based on the input RGBD frames. The view-dependent effects are also "chosen" by the neural net, but not through an explicit radiance field. If you run the application and zoom in, you will be able to see splats of different sizes pointing in different directions. The system has limited ability to re-adjust the positions and sizes due to the compute budget, which leads to the pixelated effect.

markisus•9mo ago
I've uploaded a screenshot from LiveSplat where I zoomed in a lot on a piece of fabric. You can see that there is actually a lot of diversity in the shape, orientation, and opacity of the Gaussians produced [1].

[1] https://imgur.com/a/QXxCakM

kookamamie•9mo ago
[flagged]
dang•9mo ago
Whoa—please don't be a jerk on HN and especially not when discussing other people's work.

You broke the site guidelines badly here (https://news.ycombinator.com/newsguidelines.html), and the Show HN guidelines even more so (https://news.ycombinator.com/showhn.html).

If you wouldn't mind reviewing those links and sticking to the rules when posting to HN, we'd appreciate it.

whywhywhywhy•9mo ago
Would be good to see how it's different from just the depth channel applied to the Z of the RGB pixels. Because it looks very similar to that.
markisus•9mo ago
The application has this feature and lets you switch back and forth. What you are talking about is the standard pointcloud rendering algorithm. I have an older video where I display the corresponding pointcloud [1] in a small picture in picture frame so you can compare.

I actually started with pointclouds for my VR teleoperation system but I hated how ugly it looked. You end up seeing through objects and objects becoming unparseable if you get too close. Textures present in the RGB frame also become very hard to make out because everything becomes "pointilized". In the linked video you can make out the wood grain direction in the splat rendering, but not in the pointcloud rendering.

[1] https://youtu.be/-u-e8YTt8R8?si=qBjYlvdOsUwAl5_r&t=14

badmonster•9mo ago
What is the expected frame rate and latency when running on a typical setup with one Realsense camera and an RTX 3060?
markisus•9mo ago
I don't have a 3060 at hand so I'm not sure. Ideally someone with that setup will try it out and report back. There is no noticeable latency when comparing visually with standard pointcloud rendering.

Regarding framerate, there are two different frame rates that matter. One is the splat construction framerate, which is the speed at which an entirely new set of Gaussians can be constructed. LiveSplat can usually maintain 30fps in this case.

The second is the splat rendering framerate. In VR this is important to prevent motion sickness. Even if you have a static set of splats, the rendering needs to react to the user's minor head movements at around 90fps for the best in-headset experience.

All these figures are on my setup with a 4090 but I have gotten close results with a 3080 (maybe 70fps splat rendering instead of 90fps).

smusamashah•9mo ago
The demo video does not show constructing 3D from the input. Is it possible to do something like that with this? Take a continuous feed of a static scene and keep improving the 3D view?

This is what I thought from the title, but the demo video is just a continuously changing stream of points/splats alongside the video.

markisus•9mo ago
If the scene is static, the normal Gaussian splatting pipeline will give much better results. You take a bunch of photos and then let the optimizer run for a while to create the scene.
drewbeck•9mo ago
imo this is a key component of a successful VR future for live events. Many cameras at a venue, viewers strap on a headset at home and get to sit/stand anywhere in the room and see the show.

Also I love the example video. Folks could make some killer music videos with this tech.

asadm•9mo ago
This is amazing! Video calls of the future (this + vision pro) would be lovely.
hi_hi•9mo ago
While undoubtedly technically impressive, this left me a little confused. Let me explain.

What I think I'm seeing is like one of those social media posts where someone has physically printed out a tweet, taken a photo of themselves holding the printout, and then posted another social media post of the photo.

Is the video showing me a different camera perspective than what was originally captured, or is this taking a video feed, doing technical magic to convert to gaussian splats, and then converting it back into a (lower quality) video of the same view?

Again, congratulations, this is amazing from a technical perspective, I'm just trying to understand some of the potential applications it might have.

markisus•9mo ago
Yes this converts video stream (plus depth) into Gaussian splats on the fly. While the system is running you can move the camera around to view the splats at different angles.

I took a screen recording of this system as it was running and cut it into clips to make the demo video.

I hope that makes sense?

donclark•9mo ago
Holodeck coming soon?