frontpage.

Your Agents can now run Ralph using skills!

https://github.com/davidkimai/ralph-zero
1•davidkimai•1m ago•1 comment

Nvidia's CUDA libraries can be generic and not optimized for LLM inference

https://github.com/Venkat2811/yali
1•venkat_2811•3m ago•1 comment

Evolution Unleashed (2018)

https://aeon.co/essays/science-in-flux-is-a-revolution-brewing-in-evolutionary-theory
2•DiabloD3•9m ago•0 comments

Show HN: Zpace – See which node_modules, venvs, and caches are eating your disk

https://github.com/AzisK/Zpace
1•azisk1•10m ago•0 comments

Digg.com Is Back

https://about.digg.com/
4•howToTestFE•10m ago•1 comment

Breaking the Zimmermann Telegram (2018)

https://medium.com/lapsed-historian/breaking-the-zimmermann-telegram-b34ed1d73614
3•tony-allan•11m ago•0 comments

ttl: traceroute with MTU discovery, NAT/IX detection, route flap alerts & more

https://github.com/lance0/ttl
1•indigodaddy•11m ago•0 comments

Crow: Crobots robotic combat for training World Model AIs

https://github.com/dcgrigsby/crow
1•todsacerdoti•11m ago•0 comments

Show HN: I wrote an implementation of the game Hitori using Claude Code

https://senthil.learntosolveit.com/posts/2026/01/18/hitori.html
1•orsenthil•11m ago•1 comment

They Quit Their Day Jobs to Bet on Current Events

https://www.npr.org/2026/01/17/nx-s1-5672615/kalshi-polymarket-prediction-market-boom-traders-sla...
2•backpackerBMW•15m ago•0 comments

Experimenting with 4D Gaussian Splatting via one-click Python Plugins

https://github.com/shadygm/Lichtfeld-ml-sharp-Plugin
1•shadygm•19m ago•0 comments

Around 1,500 soldiers on standby for deployment to Minneapolis

https://www.bbc.co.uk/news/articles/c74v0pxg2nvo
1•treadump•21m ago•0 comments

Show HN: TalkThrough – Screen annotation and voice capture for Linear (macOS)

https://talkthrough.app/
1•leek•23m ago•0 comments

Building a whiteboard out of glass

https://nisa.la/glass-whiteboard/
2•nkalupahana•24m ago•2 comments

Show HN: Sinew – infrastructure patterns I got tired of re-writing

https://sinew.marquis.codes
1•greatnessinabox•24m ago•0 comments

Tired of AI, people are committing to the analog lifestyle in 2026

https://www.cnn.com/2026/01/18/business/crafting-soars-ai-analog-wellness
4•andy99•26m ago•0 comments

Show HN: I built a cash flow forecasting tool because YNAB is a rearview mirror

https://bountisphere.com/blog/traditional-budgeting-fails-cash-flow-forecasting
1•jondavidhague•27m ago•3 comments

The dos and don'ts of cooking with a cast-iron skillet

https://www.ft.com/content/edc50e83-2e91-42bb-ba4e-766e2cc698bd
1•mikhael•28m ago•0 comments

I built a package manager for WordPress

1•thelovekesh•30m ago•0 comments

Rust's Culture of Semantic Precision

https://www.alilleybrinker.com/mini/rusts-culture-of-semantic-precision/
1•birdculture•33m ago•0 comments

I Tried to Be the Government. It Did Not Go Well

https://www.theatlantic.com/magazine/2026/02/individual-federal-services-replacement/685333/
1•janandonly•35m ago•0 comments

QB options for Broncos after Bo Nix injury

https://www.cbssports.com/nfl/news/broncos-qb-options-bo-nix-injury-tom-brady-cant-play-but-drew-...
1•mooreds•35m ago•0 comments

The Age of Academic Slop Is Upon Us

https://hegemon.substack.com/p/the-age-of-academic-slop-is-upon
2•twapi•35m ago•0 comments

Five Foreign Policy Trends to Watch

https://www.cfr.org/article/visualizing-2026-five-foreign-policy-trends-watch
2•mooreds•35m ago•0 comments

Claude Agent Skill for Terraform and OpenTofu

https://github.com/antonbabenko/terraform-skill
1•mooreds•36m ago•0 comments

AI is everywhere, but nowhere in recent productivity data

https://www.theregister.com/2026/01/15/forrester_ai_jobs_impact/
4•gmays•37m ago•0 comments

Copilot Studio Extension for Visual Studio Code Is Now Generally Available

https://devblogs.microsoft.com/microsoft365dev/copilot-studio-extension-for-visual-studio-code-is...
1•rbanffy•37m ago•0 comments

Benchmarking my parser generator against LLVM: I have a new target

https://modulovalue.com/blog/benchmarking-against-llvm-parser/
2•modulovalue•38m ago•0 comments

Shouting In The Data Center (2009) [video]

https://www.youtube.com/watch?v=tDacjrSCeq4
1•nice_byte•38m ago•0 comments

Show HN: Jensen – Deus Ex cyberpunk aesthetic for your dev tools

https://tomaytotomato.github.io/jensen/
1•tomaytotomato•39m ago•2 comments

Gaussian Splatting – A$AP Rocky Helicopter Music Video

https://radiancefields.com/a-ap-rocky-releases-helicopter-music-video-featuring-gaussian-splatting
124•ChrisArchitect•1h ago

Comments

nodra•1h ago
Never did I think I'd see anything even remotely related to A$AP on HN. I love this place.
keiferski•49m ago
Hah, for the past day, I've been trying to somehow submit the Helicopter music video / album as a whole to HN. Glad someone figured out the angle was Gaussian.
MuffinFlavored•43m ago
How did Rihanna look him in the eyes and say "yes babe, good album, release it, this is what the people wanted after 7 years, it is pleasing to listen to and enjoyable"?
larsmaxfield•37m ago
I prefer when artists make music they intrinsically want to make — not what others want them to make.
b00ty4breakfast•14m ago
the real question is how much of the art is their own and how much is outside expectations and their reactions to it.

And it's not always giving in to those voices, sometimes it's going in the opposite direction specifically to subvert those voices and expectations even if that ends up going against your initial instincts as an artist.

With someone like A$AP Rocky, there is a lot of money on the line wrt the record execs but even small indie artists playing to only a hundred people a night have to contend with audience expectation and how that can exert an influence on their creativity.

stickfigure•32m ago
Is he wearing... hair curlers?
nodra•22m ago
That's what one does when they want some fiyah curls.
wahnfrieden•12m ago
And nearly a Carti post at the top of HN
daveofiveo•1h ago
Direct link to the music video: https://www.youtube.com/watch?v=g1-46Nu3HxQ
roughly•1h ago
Be sure to watch the video itself* - it’s really a great piece of work. The energy is frenetic and it’s got this beautiful balance of surrealism from the effects and groundedness from the human performances.

* (Mute it if you don’t like the music, just like the rest of us will if you complain about the music)

yieldcrv•1h ago
so basically, despite the higher resource requirements (like 10TB of data for 30 minutes of footage), the compositing is so much faster and more flexible, and those resources can be deleted or moved to long-term cloud storage quickly so the project can move on

fascinating

I wouldn't have normally read this and watched the video, but my Claude sessions were already executing a plan

the tl;dr is that all the actors were scanned in a 3D point cloud system and then "NeRF"'d which means to extrapolate any missing data about their transposed 3D model

this was then more easily placed into the video than trying to compose and place 2D actors layer by layer

andybak•34m ago
> and then "NeRF"'d which means to extrapolate any missing data about their transposed 3D model

Not sure if it's you or the original article but that's a slightly misleading summary of NeRFs.

yieldcrv•19m ago
I'm all for the better summary
darhodester•27m ago
Gaussian splatting is not NeRF (neural radiance field), but it is a type of radiance field, and it supports novel view synthesis. The difference is an explicit point-cloud representation (Gaussian splatting) versus a scene that has to be inferred by querying a neural network (NeRF).
rjh29•1h ago
To be honest it looks like it was rendered in an old version of Unreal Engine. That may be an intentional choice - I wonder how realistic Gaussian splatting can look? Can you redo lights, shadows, remove or move parts of the scene, while preserving the original fidelity and realism?

The way TV/movie production is going (record 100s of hours of footage from multiple angles and edit it all in post) I wonder if this is the end state. Gaussian splatting for the humans and green screens for the rest?

moi2388•47m ago
Yes, they talk about this in the article and that’s exactly what they did.
darhodester•32m ago
The aesthetic here is at least partially an intentional choice to lean into the artifacts produced by Gaussian splatting, particularly dynamic (4DGS) splatting. There is temporal inconsistency when capturing performances like this, which is exacerbated by relighting.

That said, the technology is rapidly advancing and this type of volumetric capture is definitely sticking around.

The quality can also be really good, especially for static environments: https://www.linkedin.com/posts/christoph-schindelar-79515351....

noman-land•58m ago
Really amazing video. Unfortunately this article is like 60% over my head. Regardless, I actually love reading jargon-filled statements like this that are totally normal to the initiated but are completely inscrutable to outsiders.

    "That data was then brought into Houdini, where the post production team used CG Nomads GSOPs for manipulation and sequencing, and OTOY’s OctaneRender for final rendering. Thanks to this combination, the production team was also able to relight the splats."
darhodester•38m ago
Hi, I'm one of the creators of GSOPs for SideFX Houdini.

The gist is that Gaussian splats can replicate reality quite effectively with many 3D ellipsoids (stored as a type of point cloud). Houdini is software that excels at manipulating vast numbers of points, and renderers (such as Octane) can now leverage this type of data to integrate with traditional computer graphics primitives, lights, and techniques.

suzzer99•17m ago
Can you put "Gaussian splats" in some kind of real-world metaphor so I can understand what it means? Either that or explain why "Gaussian" and why "splat".

I am vaguely aware of stuff like Gaussian blur on Photoshop. But I never really knew what it does.

darhodester•12m ago
Sure!

Gaussian splatting is a bit like photogrammetry. That is, you can record video or take photos of an object or environment from many angles and reproduce it in 3D. Gaussians have the capability to "fade" their opacity based on a Gaussian distribution. This allows them to blend together in a seamless fashion.

The splatting process is achieved by using gradient descent from each camera/image pair to optimize these ellipsoids (Gaussians) such that they reproduce the original inputs as closely as possible. Given enough imagery and sufficient camera alignment, performed using Structure from Motion, you can faithfully reproduce the entire space.

Read more here: https://towardsdatascience.com/a-comprehensive-overview-of-g....
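
If you want intuition for that optimization loop, here is a toy 2D analogue in Python/JAX. It is purely illustrative (made-up names, one grayscale image, isotropic blobs), not the actual 3DGS training code: fit 2D Gaussians to a target image by gradient descent on their centers, sizes, and intensities.

    # Toy 2D analogue of splat fitting (illustrative, not the real 3DGS code).
    import jax
    import jax.numpy as jnp

    H = W = 32   # image size in pixels
    N = 50       # number of Gaussians

    ys, xs = jnp.meshgrid(jnp.arange(H), jnp.arange(W), indexing="ij")

    def render(params):
        # params: centers (N, 2), log_sigmas (N,), intensities (N,)
        centers, log_sigmas, intensities = params
        d2 = ((ys[None] - centers[:, 0, None, None]) ** 2
              + (xs[None] - centers[:, 1, None, None]) ** 2)
        sigma2 = jnp.exp(log_sigmas)[:, None, None] ** 2
        gaussians = jnp.exp(-d2 / (2 * sigma2))            # (N, H, W)
        return jnp.sum(intensities[:, None, None] * gaussians, axis=0)

    def loss(params, target):
        return jnp.mean((render(params) - target) ** 2)

    key = jax.random.PRNGKey(0)
    params = (jax.random.uniform(key, (N, 2)) * (H - 1),   # random centers
              jnp.zeros(N),                                # sigma = 1 px to start
              jnp.full(N, 0.1))                            # dim to start

    # Synthetic target: one big soft blob in the middle.
    target = jnp.exp(-((ys - H / 2) ** 2 + (xs - W / 2) ** 2) / 50.0)

    grad_fn = jax.jit(jax.grad(loss))
    for step in range(500):
        grads = grad_fn(params, target)
        params = jax.tree_util.tree_map(lambda p, g: p - 0.5 * g, params, grads)

The real pipeline runs the same loop with millions of anisotropic 3D Gaussians, opacity, view-dependent color, and many camera/image pairs at once, plus heuristics for splitting and pruning splats.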

shwaj•1m ago
How can you expect someone to tailor a custom explanation when they don't know your level of mathematical understanding, or even your level of curiosity? You don't know what a Gaussian blur does; do you know what a Gaussian is? How deeply do you want to understand?

If you’re curious start with the Wikipedia article and use an LLM to help you understand the parts that don’t make sense. Or just ask the LLM to provide a summary at the desired level of detail.

pants2•34m ago
Corridor has done some great stuff with Gaussian splats; I recommend this video for a primer!

https://youtube.com/watch?v=cetf0qTZ04Y

pleurotus•57m ago
Super cool to read, but can someone ELI5 what Gaussian splatting is (and/or radiance fields), specifically with respect to how the article talks about it finally being "mature enough"? What's changed that this is now possible?
rkuykendall-com•51m ago
I found this VFX breakdown of the recent Superman movie to have a great explanation of what it is and what it makes possible: https://youtu.be/eyAVWH61R8E?t=232

tl;dr eli5: Instead of capturing spots of color as they would appear to a camera, they capture spots of color and where they exist in the world. By combining multiple cameras doing this, you can make a 3D world from footage that you can then move a virtual camera around.

arcfour•28m ago
So it's the 3D equivalent of a camera (using many cameras) that you can edit in 3D space too?
djeastm•51m ago
For the ELI5, Gaussian splatting represents the scene as millions of tiny, blurry colored blobs in 3D space and renders by quickly "splatting" them onto the screen, making it much faster than computing an image by querying a neural net model like radiance fields.

I'm not up on how things have changed recently

tel•40m ago
Gaussian splatting is a way to record 3-dimensional video. You capture a scene from many angles simultaneously and then combine all of those into a single representation. Ideally, that representation is good enough that you can then, in post-production, simulate camera angles you didn't originally record.

For example, the camera orbits around the performers in this music video would be difficult to achieve in real space. Even if you could pull it off using robotic motion-control arms, it would require that the entire choreography be fixed in place before filming. This video clearly takes advantage of being able to direct whatever camera motion the artist wanted in the 3D virtual space of the final composed scene.

To do this, the representation needs to estimate the radiance field, i.e. the amount and color of light visible at every point in your 3D volume, viewed from every angle. It's not possible to do this at high resolution by breaking that space up into voxels; those scale badly, O(n^3). You could attempt to guess at some mesh geometry and paint textures onto it compatible with the camera views, but that's difficult to automate.

Gaussian splatting estimates these radiance fields by assuming that the radiance is built from millions of fuzzy, colored balls positioned, stretched, and rotated in space. These are the Gaussian splats.

Once you have that representation, constructing a novel camera angle is as simple as positioning and angling your virtual camera and then recording the colors and positions of all the splats that are visible.

It turns out that this approach is pretty amenable to techniques similar to modern deep learning. You basically train the positions/shapes/rotations of the splats via gradient descent. It's mostly been explored in research labs, but lately production-oriented tools have been built for popular 3D motion-graphics packages like Houdini, making it more accessible.
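
To make the "recording the colors and positions of all the splats that are visible" step concrete: the renderer depth-sorts the splats covering each pixel and alpha-blends them front to back. A toy per-pixel version in Python (hypothetical inputs; the real renderer is a tiled CUDA rasterizer):

    # Front-to-back alpha compositing of the splats covering one pixel.
    # Inputs are hypothetical: per-splat color, opacity, and camera-space depth.
    import numpy as np

    def composite_pixel(colors, alphas, depths):
        # colors: (N, 3), alphas: (N,), depths: (N,)
        order = np.argsort(depths)        # nearest splat first
        out = np.zeros(3)
        transmittance = 1.0
        for i in order:
            out += transmittance * alphas[i] * colors[i]
            transmittance *= 1.0 - alphas[i]
            if transmittance < 1e-4:      # early exit once the pixel is opaque
                break
        return out

Because the blend is just a sorted weighted sum, it is cheap to evaluate from any virtual camera position, which is what makes the free camera moves in the video practical.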

dmarcos•24m ago
It’s a point cloud where each point is a semitransparent blob that can have a view-dependent color: the color changes depending on the direction you look at it from, which allows capturing reflections, iridescence…

You generate the point cloud from multiple images of a scene or an object, plus some machine-learning magic.
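
That view-dependent color is typically stored per splat as spherical-harmonics coefficients. A minimal degree-1 evaluation in Python (the constants are the standard real SH basis, and the sign/offset convention follows the common 3DGS layout; treat this as a sketch, not the exact production code):

    # Evaluate view-dependent color from degree-1 spherical harmonics.
    import numpy as np

    SH_C0 = 0.28209479177387814   # Y_0^0
    SH_C1 = 0.4886025119029199    # scale for Y_1^{-1,0,1}

    def sh_color(sh, view_dir):
        # sh: (4, 3) array; DC term plus three degree-1 terms, one RGB triple each.
        # view_dir: unit vector from the camera toward the splat center.
        x, y, z = view_dir
        color = SH_C0 * sh[0]
        color += -SH_C1 * y * sh[1]
        color += SH_C1 * z * sh[2]
        color += -SH_C1 * x * sh[3]
        return np.clip(color + 0.5, 0.0, 1.0)  # offset by 0.5, then clamp

Higher SH degrees add more coefficients and sharper angular variation, which is what lets a splat shift color as the camera moves and approximate reflections.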

londons_explore•55m ago
Pretty sure most of this could be filmed with a camera drone and preprogrammed flight path...

Did the Gaussian splatting actually make it any cheaper? Especially considering that it needed 50+ fixed camera angles to splat properly, plus extensive post-processing in both computation and human labour, a camera drone just seems easier.

echelon•53m ago
It's fucking cool. That's why.

This tech is moving along at breakneck pace and now we're all talking about it. A drone video wouldn't have done that.

ThouYS•52m ago
it gives you flexibility, options
larsmaxfield•43m ago
Flying a camera drone with such proximity and acceleration would be a safety nightmare.
nebezb•40m ago
If it were achievable, cheaper, and of equal quality, it would have been done that way, and surely a long time ago too. Drone paths have been around a lot longer than this technology.

There’s no proof of your claim and this video is proof of the opposite.

ex-aws-dude•37m ago
I think you’re missing the point

Volumetric capture like this allows you to decide on the camera angles in post-production

darhodester•29m ago
A drone path would not allow for such seamless transitions, never mind the planning required to nail all that choreography, effects, etc.

This approach is 100% flexible, and I'm sure at least part of the magic came from the process of play and experimentation in post.

hamburglar•28m ago
> Pretty sure most of this could be filmed with a camera drone and preprogrammed flight path

This is a “Dropbox is just ftp and rsync” level comment. There’s a shot in there where Rocky is sitting on top of the spinning blades of a helicopter and the camera smoothly transitions from flying around the room to solidly rotating along with the blades, so it’s fixed relative to Rocky. Not only would programming a camera drone to follow this path be extremely difficult (and wouldn’t look as good), but just setting up the stunt would be cost prohibitive.

This is just one example of the hundreds you could come up with.

drdirk•51m ago
Can somebody explain to me what was actually scanned? Only the actors doing movements like push ups, or whole scenes / rooms?
sneak•41m ago
> One recurring reaction to the video has been confusion. Viewers assume the imagery is AI-generated. According to Evercoast, that couldn’t be further from the truth. Every stunt, every swing, every fall was physically performed and captured in real space. What makes it feel synthetic is the freedom volumetric capture affords.

No, it’s simply the framerate.

moribvndvs•27m ago
In another setting, it looks like ass, but lo-fi, glitchy shit is perfectly compatible with hip-hop aesthetic. Good track though.
darhodester•16m ago
Hi,

I'm David Rhodes, Co-founder of CG Nomads, developer of GSOPs (Gaussian Splatting Operators) for SideFX Houdini. GSOPs was used in combination with OTOY OctaneRender to produce this music video.

If you're interested in the technology and its capabilities, learn more at https://www.cgnomads.com/ or AMA.

Try GSOPs yourself: https://github.com/cgnomads/GSOPs (example content included).

sbierwagen•9m ago
From the article:

>Evercoast deployed a 56 camera RGB-D array

Do you know which depth cameras they used?

brcmthrowaway•7m ago
Kinect Azure
darhodester•7m ago
I was not involved in the capture process with Evercoast, but I may have heard somewhere they used RealSense cameras.

I recommend asking https://www.linkedin.com/in/benschwartzxr/ for accuracy.

darhodester•5m ago
Aha: https://www.red.com/stories/evercoast-komodo-rig

So likely RealSense D455.

Fiveplus•2m ago
Fantastic work, David. Exposing splat attributes to Houdini’s procedural context feels like the critical step to move this tech beyond static fly-throughs.

I'm curious about the relighting pipeline with Octane...are you deriving surface normals from the splat covariance/densities to drive a standard BRDF or are you mathematically manipulating the spherical harmonics coefficients directly to "fake" the lighting changes?

Also, given the massive 1TB footprint mentioned, how heavy is the attribute overhead when passing those PLY sequences through the solver?

narrator•9m ago
This reminds me of how Soulja Boy just used a cracked copy of Fruity Loops and a cheap microphone to record all the songs that made him millions.[1] No big studio or much of anything required, unlike the pre-digital music days. Now we've got the same thing for music videos, and soon movies. The people with no money who make culture are going to be some of the biggest beneficiaries of AI.

[1] https://www.youtube.com/watch?v=f1rjhVe59ek

jtolmar•2m ago
Dang, it's been cool watching Gaussian splats go from tech demo to real workflow.