
Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•20s ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•2m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•3m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•3m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•3m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
1•juujian•5m ago•0 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•6m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•9m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
1•DEntisT_•11m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•11m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•12m ago•1 comment

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•15m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
4•sakanakana00•18m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•20m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•21m ago•1 comment

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•22m ago•1 comment

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•22m ago•6 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•26m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•29m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•32m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•33m ago•1 comment

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•38m ago•1 comment

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•40m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•42m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•42m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•43m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•48m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•54m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•56m ago•1 comment

Slop News - The Front Page right now but it's only Slop

https://slop-news.pages.dev/slop-news
1•keepamovin•1h ago•1 comment

Macro Splats 2025

https://danybittel.ch/macro.html
425•danybittel•3mo ago

Comments

hmry•3mo ago
Amazing work, I especially love that you put all of them online to view. The bumblebee is my favorite, so fuzzy
kreelman•3mo ago
I agree. The fine detail on the insects' skin/shell is amazing.

I'd love to know the compute hardware he used and the time it took to produce these.

danybittel•3mo ago
Nothing fancy. Postshot does need an Nvidia card though; I have a 3060 Ti. A single insect, with around 5 million splats, takes about 3 hours to train at high quality.
smokel•3mo ago
That's quite the improvement over Stars/NoooN [1] showing off real-time rendering of (supposedly) 23,806 triangles on a 486.

[1] https://youtu.be/wEiBxHOGYps

pbronez•3mo ago
When was that made? The YouTube video is 14 years old but it feels at least a decade older than that.
ku1ik•3mo ago
1995
mkl•3mo ago
The results are incredibly clean! Feathers and flowers could be interesting.

Black text on a dark grey background is nearly unreadable - I used Reader Mode.

1gn15•3mo ago
This looks amazing, and I never would have thought to combine macro photography and Gaussian splatting.

I'd also like to show my gratitude for you releasing this as a free culture file! (CC BY)

Scene_Cast2•3mo ago
I wonder if there's research into fitting gaussian splats that are dependent on focus distance? Basically as a way of modeling bokeh - you'd feed the raw, unstacked shots and get a sharp-everywhere model back.
yorwba•3mo ago
Multiple groups are working on this:

https://dof-gs.github.io/

https://dof-gaussian.github.io/

danybittel•3mo ago
Thanks for the links, that is great to know. I'm not quite sold that it's the better approach. You'd need to do SfM (tracking) on the out-of-focus images, which with macro subjects can be really blurry; I don't know how well that works.. and you'd need a lot more images too. You'd have to group them somehow or preprocess.. then you're back to focus stacking first :-)
pbronez•3mo ago
The linked paper describes a pipeline that starts with “point cloud from SfM” so they’re assuming away this problem at the moment.

Is it possible to handle SfM out of band? For example, by precisely measuring the location and orientation of the camera?

The paper’s pipeline includes a stage that identifies the in-focus area of an image. Perhaps you could use that to partition the input images. Exclusively use the in-focus areas for SfM, perhaps supplemented by out of band POV information, then leverage the whole image for training the splat.

Overall this seems like a slow journey to building end-to-end model pipelines. We’ve seen that in a few other domains, such as translation. It’s interesting to see when specialized algorithms are appropriate and when a unified neural pipeline works better. I think the main determinant is how much benefit there is to sharing information between stages.
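
A cheap way to get that in-focus partition is the classic variance-of-Laplacian focus measure. A minimal sketch (assuming OpenCV; the patch size and threshold are made-up placeholders):

    import cv2
    import numpy as np

    def in_focus_mask(img_bgr, patch=32, thresh=50.0):
        # Flag patches with a strong local Laplacian response as sharp;
        # only these regions would feed SfM feature extraction.
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(gray, cv2.CV_64F)
        mask = np.zeros(gray.shape, dtype=bool)
        for y in range(0, gray.shape[0], patch):
            for x in range(0, gray.shape[1], patch):
                if lap[y:y+patch, x:x+patch].var() > thresh:
                    mask[y:y+patch, x:x+patch] = True
        return mask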

danybittel•3mo ago
You can definitely feed camera intrinsics (lens, sensor size..) and extrinsics (position, rotation..) into the SfM. While the intrinsics are very useful, the extrinsics are actually not that much. There's no way to measure the rotation accurately enough to get subpixel accuracy. The position can be useful as an initial guess, but I found it more hassle than it's worth. If the images track well and have enough overlap, you can get exact tracking out of them without dealing with extrinsics. If they don't track well, extrinsics won't save you. That was at least my experience.
etskinner•3mo ago
How does it capture the reflection (the iridescence of the fly's body)? It's almost as if I can see the background through the reflection.

I would have thought that since that reflection has a different color in different directions, gaussian splat generation would have a hard time coming to a solution that satisfies all of the rays. Or at the very least, that a reflective surface would turn out muddy rather than properly reflective-looking.

Is there some clever trickery that's happening here, or am I misunderstanding something about gaussian splats?

abainbridge•3mo ago
FTA, "A Gaussian splat is essentially a bunch of blurry ellipsoids. Each one has a view-dependent color". Does that explain it?
Klaus23•3mo ago
Gaussian splats can have colour components that depend on the viewing direction. As far as I know, they are implemented as spherical harmonics. The angular resolution is determined by the number of spherical harmonic components. If this is too low, all reflection changes will be slow and smooth, and any reflection will be blurred.
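Concretely, evaluating a splat's colour for a given viewing direction is just a small dot product over the SH basis. A minimal sketch (hypothetical splat data; degree-1 SH only, where real models typically go up to degree 3):

    import numpy as np

    # Constants for the first two SH bands (1 constant + 3 linear terms).
    SH_C0 = 0.28209479177387814
    SH_C1 = 0.4886025119029199

    def splat_color(sh_coeffs, view_dir):
        # sh_coeffs: (4, 3) array, one RGB triple per basis function.
        # view_dir: unit viewing direction (camera-to-splat).
        x, y, z = view_dir
        basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
        return np.clip(basis @ sh_coeffs + 0.5, 0.0, 1.0)

More bands mean finer angular resolution, and hence sharper reflections.
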
ricardobeat•3mo ago
The color is view-dependent, which also means the lighting is baked in and results in them not being usable directly for 3D animation/environments (though I’m sure there must be research happening on dynamic lighting).

Sometimes it will “go wrong”: you can see in some of the fly models that if you get too close, body parts start looking a bit transparent, as some of the specular highlights are actually splats on the back of an internal surface. This is very evident with mirrors - they are just an inverted projection which you can walk right into.

blincoln•3mo ago
Feels like there must be some way to use "variability of colour by viewing angle" for tiny clusters of volumes in the object as a way to generate material settings when converting the Gaussian splat model to a traditional 3D model.

E.g. if you have a cluster of tiny adjacent volumes that have high variability based on viewing angle, but the difference between each of those volumes is small, handle it as a smooth, reflective surface, like chrome.
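
A rough sketch of that heuristic (hypothetical data layout; taking "view dependence" as the energy in the non-constant SH bands):

    import numpy as np

    def looks_reflective(cluster_sh, var_thresh=0.1, agree_thresh=0.05):
        # cluster_sh: (n_splats, n_coeffs, 3) SH colours of adjacent splats.
        # Reflective-ish if each splat is strongly view-dependent while
        # the cluster agrees on *how* the colour varies with angle.
        view_dep = cluster_sh[:, 1:, :]             # drop the constant band
        energy = (view_dep ** 2).sum(axis=(1, 2))   # per-splat view dependence
        spread = view_dep.std(axis=0).mean()        # disagreement across cluster
        return energy.mean() > var_thresh and spread < agree_thresh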

ricardobeat•3mo ago
You can’t easily convert a Gaussian splat to a polygon-based model; the representation through blurry splats is the breakthrough.
meindnoch•3mo ago
See the section titled "View-dependant colors with SH" here: https://towardsdatascience.com/a-comprehensive-overview-of-g...
Feuilles_Mortes•3mo ago
Wow this would be lovely for my Drosophila lab.
zokier•3mo ago
It is remarkable that this is accomplished with a relatively modest setup and effort, and the results are already great. Makes me wonder what you could get with high-end gear (e.g. a 61 MP Sony a7R V and the new 100mm 1.4x macro) and by capturing more frames. I also imagine that the web versions lose some detail to reduce size.

I presume these would look great on a good VR headset?

cssinate•3mo ago
Cool! It looks awesome. I did see some "ghost legs" on the bumblebee. How does that sort of artifact happen?
danybittel•3mo ago
The bumblebee was my first attempt; the tracking didn't quite work, so you get ghosting. Others have ghosting too. It usually happens when part of the insect moves while shooting (which takes 4 hours). They dry out and crumble after a while.
iamflimflam1•3mo ago
Looks amazing. Some feedback on the website - black text on a dark grey background? I had to use reader mode.
kaptainscarlet•3mo ago
I have the opposite experience to you. This website is one of the few websites I can read clearly without any blurred edges with my glasses on.
Alejandro9R•3mo ago
Same, I love it
crazygringo•3mo ago
Then you need to turn down the brightness of your screen. You obviously have it set way too high.

This is objectively violating accessibility guidelines for contrast.

wittjeff•3mo ago
Right. Now try background color #767676 on the body element and see how much better it is.
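
For reference, the WCAG 2.x contrast math: black text on that suggested #767676 background comes out around 4.6:1, just over the 4.5:1 AA threshold.

    def linear(c):  # sRGB channel (0-255) -> linear, per WCAG 2.x
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def contrast(rgb1, rgb2):
        def lum(rgb):
            r, g, b = (linear(c) for c in rgb)
            return 0.2126 * r + 0.7152 * g + 0.0722 * b
        hi, lo = sorted((lum(rgb1), lum(rgb2)), reverse=True)
        return (hi + 0.05) / (lo + 0.05)

    print(contrast((0x76, 0x76, 0x76), (0, 0, 0)))  # ~4.6
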
Waterluvian•3mo ago
Yeah. Even with very low brightness it works well for me.

The best thing about reader mode is that there’s now always an escape hatch for those it doesn’t work for.

sethammons•3mo ago
The page's saturation made me think something was highlighted in the foreground that I simply couldn't see, leaving the whole page shaded "in the background."
Aardwolf•3mo ago
The interactive rotatable demos work in real time on my phone in the browser! I guess Gaussian splats aren't that expensive to render then, only to compute.
gdubs•3mo ago
The file sizes are impressive (as in small). I don't have the link right now but there are recent 4D splats that include motion (like videos but you can move around the scene) and they're in the megabytes.
fidotron•3mo ago
That wasp is one of the single most impressive pieces of computer graphics I have ever seen, and seemingly in contradiction also a fantastic piece of macro photography. The fact it renders in real time is amazing.

There was a discussion on here the other day about the PS6, and honestly, were I still involved in console/games production, I'd be looking seriously at how to incorporate assets like this.

redox99•3mo ago
Gaussian splats don't offer the flexibility required for your typical videogame. Since it isn't true PBR, the lighting is kind of hardcoded. Rigging doesn't work well with it. And editing would be very hard.

It's good for visualizing something by itself, but not for building a scene out of it.

jayd16•3mo ago
Yeah, no animation is a pretty big blocker. The tech can handle video clips though.

I wonder if it's possible to do some kind of blendshape-style animation, where you blend between multiple recorded poses.

kridsdale1•3mo ago
Early 3D engines, and of course all the 16-bit 2D games, had “canned animation”. Half-Life was an early example I can think of that used real IK rigging. Unreal 1 did not.
redox99•3mo ago
For Half-Life it would be FK (forward kinematics). IK, I assume, was introduced in HL2 (but I don't know that for a fact).
account42•3mo ago
Even HL2 is mostly just normal (FK) animations. IK is just used for limited cases, namely making sure feet touch the ground on sloped surfaces.
redox99•3mo ago
Yeah that applies to any modern game as well. IK is used for touchups and procedural stuff. Everything else is FK (obviously during authoring IK is used).
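
The feet-on-slopes case is usually a closed-form two-bone solve layered on top of the FK pose. A minimal 2D sketch (law of cosines; purely illustrative, not any engine's actual code):

    import math

    def two_bone_ik(l1, l2, tx, ty):
        # Hip at the origin; return (hip, knee) angles in radians so a
        # two-segment leg of lengths l1, l2 reaches the target (tx, ty).
        d = max(min(math.hypot(tx, ty), l1 + l2 - 1e-6),
                abs(l1 - l2) + 1e-6)                 # clamp to reachable range
        # Law of cosines: knee bend relative to a straight leg.
        cos_knee = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
        knee = math.pi - math.acos(max(-1.0, min(1.0, cos_knee)))
        # Hip aims at the target, offset by the triangle's inner angle.
        cos_inner = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
        hip = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
        return hip, knee
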
fidotron•3mo ago
It would need extensions and extra parameters, but plenty of AAA assets have had their shaders produced by cameras with fancy lighting rigs for many years.
jtolmar•3mo ago
People are working on recovering PBR properties, rigging, and editing. I think those are all solvable over time. I wouldn't start a big project with it today, but maybe in a couple of years.

If you want a real cursed problem for Gaussian splats though: global illumination. People have decomposed splat models into separate global and PBR colors, but I have no clue how you'd figure out where that global illumination came from, let alone recompute it for a new lighting situation.

btown•3mo ago
Some intrepid souls are trying to tackle the global illumination problem! https://arxiv.org/abs/2410.02619
vanderZwan•3mo ago
Wow!

Also, since it's slightly hidden in a comment underneath the abstract and easy to miss, here's the link to the paper's project page: https://stopaimme.github.io/GI-GS-site/

tobwen•3mo ago
It has recently been used to visit "The Matrix" again: https://www.youtube.com/watch?v=iq5JaG53dho&t=1412
bix6•3mo ago
Love it!

https://superspl.at/view?id=ac0acb0e

I believe this one is misnamed

danybittel•3mo ago
Thanks for pointing that out, fixed it.
blincoln•3mo ago
Really amazing results.

I wonder if one could capture each angle in a single shot with a Lytro Illum instead of focus-stacking? Or is the output of an Illum not of sufficient resolution?

danybittel•3mo ago
That would be awesome if it worked; from a cursory look I can't say why not. I'll have to investigate a bit more. Thanks for bringing it up.
two_handfuls•3mo ago
This is awesome, thank you for sharing!
arduinomancer•3mo ago
Educational visualization seems like a really good use case for GS
iandanforth•3mo ago
Very cool. Unfortunately, I find the 3D completely unusable on mobile. The moment I touch it in orbit mode, it locks to a southern-pole view and whips about like crazy however I try to rotate it.
slimbuck•3mo ago
Hello, playcanvas developer here. May I ask what phone/device you're on? Might be a bug. (No pun intended).
miclill•3mo ago
I experience the same thing on Fennec F-Droid 143.0.3 (Firefox) on Android 14.
slimbuck•3mo ago
Right, thanks for confirming. It seems Firefox-related. We'll get this patched ASAP!
uneekname•3mo ago
Also experiencing this issue in Fennec F-Droid
Moosdijk•3mo ago
No issues here on iPhone 12 running iOS 18.6.2 and Firefox 143.2 (62218)
Moosdijk•3mo ago
The orbiting sensitivity is a bit high when zoomed in a lot, which can lead to the model spinning out of control, as the other user mentioned.

Still manageable though, just very sensitive.
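
A common fix (a sketch of the usual approach, not PlayCanvas's actual code) is to scale the orbit step by the camera-to-target distance so close-up drags stay gentle:

    def orbit_step_deg(drag_px, cam_dist, ref_dist=1.0, base_deg_per_px=0.25):
        # Degrees of rotation per dragged pixel, damped as you zoom in.
        return drag_px * base_deg_per_px * min(1.0, cam_dist / ref_dist)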

OrangeMusic•3mo ago
Hey I'm on Firefox too so I can't see how it's supposed to work without the bug.

But I just wanted to say the best way to interact with Gaussian Splats on mobile I've seen is with the Scaniverse app. Really, the UX is great there.

jchanimal•3mo ago
It’d be amazing to see a collab with the Exquisite Creatures Revealed artist. He preserves all kinds of insects and presents them in a way that highlights the color and iridescent effects nature offers. I was so blown away by the exhibit I went back. Artist: https://christophermarley.com/
petters•3mo ago
> Unfortunately, the extremely shallow depth of field in macro photography completely throws this process off. If you feed unsharp photos into it, the resulting model will contain unsharp areas as well.

It should be possible to model the focal depth of the camera directly, but perhaps that is not done in standard software. You'd still want several images with different focus settings.
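
The defocus model here is essentially the thin-lens circle of confusion, which could in principle be folded into the splat projection step. A sketch with illustrative numbers (not any particular software's model):

    def coc_diameter_mm(f_mm, n_stop, focus_mm, depth_mm):
        # Blur-circle diameter on the sensor for a point at depth_mm,
        # with the lens focused at focus_mm (thin-lens approximation).
        aperture = f_mm / n_stop
        return abs(aperture * f_mm * (depth_mm - focus_mm)
                   / (depth_mm * (focus_mm - f_mm)))

    # 100 mm macro at f/8, focused at 300 mm, point 5 mm behind focus:
    print(coc_diameter_mm(100, 8, 300, 305))  # ~0.1 mm: visibly soft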

stuckkeys•3mo ago
Your fluid simulation was pretty rad.
whiterook6•3mo ago
I still don't get the point of Gaussian Splats. How are they better than triangles?
patcon•3mo ago
It's just a simpler primitive I assume. Blurs and colors and angles are simpler than 3D geometries, so it's probably more aligned with working/thinking with other very low-level primitives with minimal dimensions (like the math of neural networks). I dunno, I'm kinda vibing a response here -- maybe someone else can give you a more authoritative answer
poslathian•3mo ago
They are differentiable, which allows for image-based rendering by solving the inverse of the rendering function via gradient descent.
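
That differentiability is the whole trick: every splat parameter affects the rendered pixels smoothly, so a pixel loss can be backpropagated straight into the splats. A toy 1D analogue (PyTorch; two Gaussians fitted to a target signal by gradient descent):

    import torch

    xs = torch.linspace(0, 1, 256)
    target = (torch.exp(-((xs - 0.3) / 0.05) ** 2)
              + 0.5 * torch.exp(-((xs - 0.7) / 0.10) ** 2))

    # Learnable "splats": position, log-scale, amplitude per Gaussian.
    mu = torch.tensor([0.4, 0.6], requires_grad=True)
    log_s = torch.tensor([-2.0, -2.0], requires_grad=True)
    amp = torch.tensor([1.0, 1.0], requires_grad=True)
    opt = torch.optim.Adam([mu, log_s, amp], lr=0.02)

    for _ in range(500):
        render = (amp[:, None] * torch.exp(
            -((xs[None, :] - mu[:, None]) / log_s.exp()[:, None]) ** 2)).sum(0)
        loss = ((render - target) ** 2).mean()  # differentiable image loss
        opt.zero_grad(); loss.backward(); opt.step()

Real 3DGS does the same thing with millions of anisotropic 3D Gaussians and a differentiable rasterizer.
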
jayd16•3mo ago
It's really not a splat vs. triangle thing. You're basically comparing point-cloud data to triangles.

Likely triangles are used to render the image in a traditional pipeline.

danwills•3mo ago
I'm not an expert and have not yet worked with splats, but I understand that, unlike super-sharp-edged triangles, they can represent complicatedly-transparent 'soft' phenomena like fur or clouds that would ordinarily need to be rendered using (possibly semi-transparent) curves/sheaths for fur/grass, or voxels for cloudy things like steam/mist. I gather splats can also represent and reproduce a limited amount of view-dependent specularity; as other commenters have said, this is not dynamic and cannot easily deal with changing scene geometry or light sources. Still sounds like a fun research project to make it do more in terms of illumination, though!
cma•3mo ago
A pinhole lens plus strong light/long exposures to get sharp focus may help avoid some of the extra processing steps. He does mention he shot at a small aperture, which can cause diffraction effects, and I guess that might be worse with a pinhole though.
danybittel•3mo ago
It all kind of depends on each other. More light means longer recycle times on the speedlights, or higher ISO and more noise. Longer exposure isn't an option with speedlights, and using continuous light also has its downsides: things may start to shake..
singularity2001•3mo ago
You can view the models in your browser:

https://superspl.at/view?id=1eacd61c wasp!

https://superspl.at/view?id=23a16d0e fly!

singularity2001•3mo ago
Does anyone know if triangle splatting will revolutionize the field? https://trianglesplatting2.github.io/trianglesplatting2/