
Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
1•ykdojo•1m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
1•gmays•1m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
1•dhruv3006•3m ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
1•mariuz•3m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
1•RyanMu•6m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
1•ravenical•10m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
2•rcarmo•11m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
1•gmays•11m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
1•andsoitis•11m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•12m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•15m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•15m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•17m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•18m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•19m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•19m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•19m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•19m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•20m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•21m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•22m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•28m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•29m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•29m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
40•bookofjoe•29m ago•13 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•30m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•31m ago•1 comment

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•32m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•32m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•32m ago•0 comments

GPU-Driven Clustered Forward Renderer

https://logdahl.net/p/gpu-driven
116•logdahl•8mo ago

Comments

zeristor•8mo ago
Apostrophe as a number separator?

Where’s that from?

dahart•8mo ago
Switzerland and Italy for two. https://en.wikipedia.org/wiki/Decimal_separator#

Also note C++14 introduced the apostrophe in numeric literals! https://en.cppreference.com/w/cpp/language/integer_literal
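A quick illustration of both points, in C++14 (a hypothetical snippet, not from the article):

    #include <cstdint>
    #include <iostream>

    int main() {
        // Since C++14, ' may be used as a digit separator in numeric
        // literals. It is purely cosmetic; both constants are equal.
        constexpr std::int64_t plain     = 1000000000;
        constexpr std::int64_t separated = 1'000'000'000;
        static_assert(plain == separated, "separator doesn't change the value");

        // It also works in hex, binary, and floating-point literals.
        constexpr unsigned mask = 0xFF'FF'00'00;
        constexpr double   x    = 3.141'592;

        std::cout << separated << " " << mask << " " << x << "\n";
    }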

lacoolj•8mo ago
Learn something new every day.

And I would never have known this existed without hackernews.

logdahl•8mo ago
Interesting that Sweden explicitly does NOT use it... Not sure where I picked it up! :-)

qingcharles•8mo ago
I've started using the underscore in my code, since that is becoming the (non-localized) standard, and it's trendy:

https://en.wikipedia.org/wiki/Integer_literal#Digit_separato...

m-schuetz•8mo ago
Apostrophes are nice because they're unambiguous. I started using them myself after getting used to them in C++ and learning that they're used in Switzerland.

unclad5968•8mo ago
This is awesome! At the end you mention that the 27k dragons and 10k lights just barely fit in 16ms. Do you see any paths to improving performance? I've seen some demos with tens or hundreds of thousands of moving lights, but it's hard to tell if they're legit or highly constrained. I'm not a graphics programmer by trade.

I need a renderer for a personal project and after some research decided I'll implement a forward clustered renderer as well.

logdahl•8mo ago
Well, the core issue is still drawing. I took another look at some profiles, and it seems it's not the renderer limiting this to 27k! I still had some stupid scene-graph traversal... Clustering and culling take 53µs and 33µs respectively, but the draw is 7ms. So a frame (on the GPU side) is about 7ms, plus some 100-200µs on the CPU side.

Should really dive deeper and update the measurements for final results...

godelski•8mo ago
I haven't looked at the post in the detail it deserves, but given your graphs the workload looks pretty bursty. I'd suspect there are some good I/O optimizations or some opportunities for predication; that last void main block definitely looks ripe for it. But I'd listen to Knuth, premature optimization and all, so grab a profiler. I wouldn't be surprised if you're nearing peak performance. Also, NVIDIA GPUs have a lot of special tricks that can be exploited but are buried in documentation... if you haven't already seen it (I suspect you have), you'd be interested in "GPU Gems". Gems 2 has some good stuff on predication.

But also, really good work! You should be proud of this! Squeezing that much out of that hardware is no easy feat.
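For readers who haven't run into the term: predication replaces a branch with unconditional work plus a select, so GPU lanes in a warp never diverge. A toy sketch (invented for illustration, not from the post):

    // Branchy version: lanes that disagree on `lit` serialize both paths.
    float shadeBranchy(bool lit, float diffuse, float ambient) {
        if (lit) return diffuse + ambient;
        return ambient;
    }

    // Predicated version: every lane runs the same instructions; the
    // ternary typically compiles to a conditional select, not a jump.
    float shadePredicated(bool lit, float diffuse, float ambient) {
        return ambient + (lit ? diffuse : 0.0f);
    }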

gmueckl•8mo ago
This seems fairly well optimized. There's probably room to squeeze out some more perf, but no dramatic improvements. Maybe preventing overdraw of shaded pixels by doing a depth prepass would help.

Without digging into the detailed breakdown, I would assume that the sheer number of teeny tiny triangles is the main bottleneck in this benchmark scene. When triangles become smaller than about 4x4 pixels, GPU utilization for rasterization starts to diminish. And with the scaled-down dragons, there are a lot of them in the frame.
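In OpenGL terms, a depth prepass is only a few state changes around the draw loop. A minimal sketch, assuming a hypothetical drawScene() that issues all opaque draw calls:

    // Pass 1: depth only. No color writes; just populate the depth buffer.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawScene();

    // Pass 2: full shading. Only fragments exactly matching the prepass
    // depth survive, so each pixel runs the expensive shader at most once.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);   // depth buffer is already final
    glDepthFunc(GL_EQUAL);
    drawScene();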

spookie•8mo ago
This is by far the biggest culprit, OP; look into this.

You can try to come up with impostors representing these faraway dragons, or simple LOD levels. Some games use particles to represent distant, repeated "meshes" (Ghost of Tsushima does this for faraway soldiers).

Lots of techniques in this area, ranging from simple to bananas. LOD levels alone can get you pretty far! Of course, this comes at the cost of more distinct draw calls, so it is a balancing act (see the sketch after the links below).

Think about the topology too; hope these old gems help you get a grasp on the cost of this:

https://www.humus.name/index.php?page=Comments&ID=228

https://www.g-truc.net/post-0662.html
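To make the LOD trade-off concrete, here's a sketch of the simplest distance-based selection (all names and thresholds invented for illustration; real engines usually derive the cutoffs from projected screen-space size instead, so triangles stay above that ~4x4-pixel floor):

    #include <cstddef>

    // Hypothetical: each mesh ships lodCount progressively simplified
    // versions, with index 0 being the full-resolution one.
    std::size_t pickLod(float distance, std::size_t lodCount) {
        const float firstCutoff = 20.0f;  // distance where LOD 1 kicks in
        std::size_t lod = 0;
        // Drop one level of detail per doubling of distance.
        for (float d = firstCutoff; distance > d && lod + 1 < lodCount; d *= 2.0f)
            ++lod;
        return lod;
    }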

logdahl•8mo ago
Yeah, I use LODs already, but as you say, even my lowest LOD is too many vertices at a distance. Impostor rendering seems very interesting but also completely bonkers (viewing angle, lighting)!

corysama•8mo ago
I've not sat down and watched this yet, but you might appreciate it. https://www.youtube.com/watch?v=DZfhbMc9w0Q Apparently Doom: The Dark Ages switched to Visibility Buffer rendering. Likely because it reduces issues with quad utilization. http://filmicworlds.com/blog/visibility-buffer-rendering-wit...

undefuser•8mo ago
Have you considered using a meshlet technique like Unreal Engine's Nanite or Assassin's Creed's? It could potentially open the door to better culling and a more effective depth prepass.

logdahl•8mo ago
Absolutely! I think this would likely be the next step.

zokier•8mo ago
Worth noting that the GTX 1070 is a nearly 10-year-old "mainstream" GPU. I'd imagine a 5090 or something could push the numbers a fair bit higher.

cullingculling•8mo ago
(GPU-driven) occlusion culling with meshlet rendering would help a lot while being relatively straightforward to implement if you already have a GPU-driven engine like OP does. Occlusion culling techniques cull objects that are completely hidden behind other objects. Meshlets break up objects (at asset build time) into tiny meshlets of around 64 to 128 triangles, such that these meshlets can be individually occlusion culled. This would help a lot by allowing the renderer to skip not just individual parts of the dragons that are hidden behind other dragons, but even parts of each dragon that are occluded by the rest of the dragon itself! There's a talk on YouTube about the Alan Wake 2 team implementing these techniques and being able to cull complex outdoor scenes of (iirc) hundreds of millions of triangles down to around 10-20 million.

The basic idea is to first render as normal some meshes that you either know are visible, or are likely to occlude objects in the scene (say the N closest objects, or some large terrain feature in a real game). Then you can take the resulting depth buffer and downsample it into something resembling a mipmap chain, but with each level holding the max depth of the contributing pixels, rather than the average. This is called a hierarchical Z (depth) buffer, or HZB for short. This can be used to very quickly, with just a few samples of the HZB, test if an object's bounding box is behind all the pixels in a given area and thus definitely not visible. The hierarchical nature of the HZB allows both small and large meshes to be tested at the same performance cost.

Typically, a game would track which meshlets were known to be visible last frame, and start by rendering all of those (with updated positions and camera orientation, of course). This will make up most of what is drawn to the scene, because typically objects and the camera change very little from frame to frame. Then all the meshlets that weren't known to be visible get tested against the HZB, and just the few that were revealed by changes in the scene will need to be rendered. Lastly, at some point the known visible meshlet set should be re-tested, so that it does not grow indefinitely with meshlets that are no longer visible.

The result is that the first frame rendered after a major camera change (like the player respawning) will be slow, as all the meshlets in the frustum need to be rendered. But after that, the scene can be narrowed down to just the meshlets that actually contributed to the frame, and performance improves significantly. I think this would be more than enough for a demo, but for a real game you would probably want to explore methods to speed up that first frame's rendering, like sorting objects and picking the N closest/largest ones so you can at least get some occlusion culling working.
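A compressed sketch of the HZB test described above, written on the CPU for readability (every type and name here is invented for illustration; a real engine runs this in a compute shader against a depth-pyramid texture):

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Hypothetical depth pyramid: mip 0 is full resolution; every higher
    // mip stores the MAX (farthest) depth of the 2x2 texels beneath it.
    struct DepthPyramid {
        std::vector<std::vector<float>> mips;   // row-major texels per level
        std::vector<std::uint32_t> widths, heights;

        float sample(std::size_t level, std::uint32_t x, std::uint32_t y) const {
            x = std::min(x, widths[level] - 1);
            y = std::min(y, heights[level] - 1);
            return mips[level][y * widths[level] + x];
        }
    };

    // Screen-space bounds of an object's bounding box ([0,1] UVs) and the
    // depth of its closest point.
    struct ScreenRect { float minX, minY, maxX, maxY, minDepth; };

    // True if the object is definitely hidden. We pick the mip where the
    // rect spans roughly one texel, so four corner samples cover it all.
    bool isOccluded(const DepthPyramid& hzb, const ScreenRect& r) {
        float w = (r.maxX - r.minX) * hzb.widths[0];
        float h = (r.maxY - r.minY) * hzb.heights[0];
        std::size_t level =
            (std::size_t)std::ceil(std::log2(std::max({w, h, 1.0f})));
        level = std::min(level, hzb.mips.size() - 1);

        auto tx = [&](float u) { return (std::uint32_t)(u * hzb.widths[level]); };
        auto ty = [&](float v) { return (std::uint32_t)(v * hzb.heights[level]); };

        float farthestOccluder = std::max(
            std::max(hzb.sample(level, tx(r.minX), ty(r.minY)),
                     hzb.sample(level, tx(r.maxX), ty(r.minY))),
            std::max(hzb.sample(level, tx(r.minX), ty(r.maxY)),
                     hzb.sample(level, tx(r.maxX), ty(r.maxY))));

        // Hidden only if even the object's closest point lies behind the
        // farthest depth already written anywhere in that region.
        return r.minDepth > farthestOccluder;
    }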

fabiensanglard•8mo ago
This website has a beautiful layout ;) !

logdahl•8mo ago
Fun to see you ;) Love your site!

rezmason•8mo ago
Ten thousand lights! Your utility bill must be enormous

Flex247A•8mo ago
Lights in games use real electricity :)

amelius•8mo ago
Even the stars use real electricity.

cluckindan•8mo ago
Not really, nuclear fusion doesn’t run on electrons.

DiabloD3•8mo ago
So where does the magnetic field come from? ;) ;) ;)

cluckindan•8mo ago
Nuclear fusion produces a million times more energy from proton and neutron collisions than is produced by electron shells during the same event.

amelius•8mo ago
The energy leaves the star in the form of EM energy. This is also the energy that is responsible for electricity.
monster_truck•8mo ago
Am I missing a link somewhere, or is there no way to build/run this myself? Interested to see what a modern flagship GPU is good for.

wizzwizz4•8mo ago
> As some other renderers do, we share a single GPU buffer for all vertex data. Instead, we use a simple allocator which manages this contigous buffer automatically.

I'm not sure what this part is supposed to say, but it doesn't look right. "Instead" usually follows differences, not similarities.