frontpage.

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
3•sakanakana00•3m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•5m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•6m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•7m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
3•Nive11•8m ago•4 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•11m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•14m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•17m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•18m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•23m ago•0 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•25m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•27m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•27m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•28m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•34m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•39m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•41m ago•1 comments

Slop News - The Front Page right now but it's only Slop

https://slop-news.pages.dev/slop-news
1•keepamovin•45m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•47m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
4•tosh•53m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•57m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•57m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
4•goranmoomin•1h ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

4•throwaw12•1h ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
3•senekor•1h ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
2•myk-e•1h ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
4•myk-e•1h ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•1h ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
5•1vuio0pswjnm7•1h ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
5•1vuio0pswjnm7•1h ago•0 comments

Simulating a Planet on the GPU: Part 1 (2022)

https://www.patrickcelentano.com/blog/planet-sim-part-1
128•Doches•2mo ago

Comments

montebicyclelo•2mo ago
As a hobbyist, shaders are up there among the most fun types of programming: a low-level, relatively simple language, often tied to a satisfying visual result. Once it clicks, it's a cool paradigm to be working in, e.g. "I am coding from the perspective of a single pixel".
lukan•2mo ago
I found them fun once they worked, but when something didn't work, I didn't enjoy debugging them very much.
jangxx•2mo ago
Nothing like outputting specific colors to see which branch the current pixel is running through. It's like printf debugging, but colorful and with only three floats of output.
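For illustration, a minimal GLSL sketch of this color-coding trick (the threshold uniform and the debugged quantity are hypothetical):

```glsl
#version 450
out vec4 fragColor;

uniform float threshold; // hypothetical value under test

void main() {
    // Stand-in for whatever quantity is being debugged.
    float value = gl_FragCoord.x / 800.0;
    if (value < threshold) {
        fragColor = vec4(1.0, 0.0, 0.0, 1.0); // red: first branch taken
    } else {
        fragColor = vec4(0.0, 1.0, 0.0, 1.0); // green: second branch taken
    }
}
```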
FormFollowsFunc•2mo ago
I agree it's very difficult to debug them. I sometimes rewrite my shaders in VEX and debug them there; it's a shader language that runs on the CPU in Houdini. You can output a value at each pixel, which is useful for values outside the range of 0 to 1, or you can use printf(). I'm still looking for something that will transpile shaders into JavaScript.
thegrim33•2mo ago
Well, in the GL/Vulkan world there is now functionality, added in recent years, for printf output from shaders, which finally fixes the issue. I'd assume DirectX probably has something similar, but I don't work with it, so I don't know.
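In Vulkan GLSL this is the GL_EXT_debug_printf extension; a minimal sketch (the framebuffer size is an illustrative assumption), with output captured by the VK_LAYER_KHRONOS_validation layer:

```glsl
#version 450
#extension GL_EXT_debug_printf : enable

layout(location = 0) out vec4 fragColor;

void main() {
    vec2 uv = gl_FragCoord.xy / vec2(800.0, 600.0); // assumed framebuffer size
    // Printed to the host through the validation layers.
    debugPrintfEXT("uv = %f, %f", uv.x, uv.y);
    fragColor = vec4(uv, 0.0, 1.0);
}
```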
lukan•2mo ago
Hm... I only have limited experience with WebGPU so far, but since it is still highly unstable, and I would really like printf functionality and all the performance possible, I guess I should rather invest my learning efforts in Vulkan. Thanks for the hint.
bathtub365•2mo ago
It’s interesting that this hasn’t been solved for pixel shaders. With HIP in the GPGPU world I’m able to set breakpoints in a GPU kernel and step through line by line. I can also add printf statements to output values to the console.
0xf00ff00f•2mo ago
You can do all that with Vulkan and RenderDoc.
lukan•2mo ago
Ah yes, I dreamed about doing something like this, just with even more detail, ages ago, but concluded I wouldn't get even close to what I wanted without a big team at my disposal and a supercomputer, and/or a couple of universities collaborating across disciplines. But so far I was busy with other things, and reading about his experience unsurprisingly kind of confirms the main challenge there is: performance. But GPUs are on the rise and I am optimistic about the future. If the AI bubble bursts, I suppose lots of cheap GPU power will be available for experiments like these and more elaborate ones. And if not, compute power per dollar will likely rise anyway.
janpmz•2mo ago
I wish I had an intuitive understanding of how much I can do with a GPU. E.g. how many points can I move around? A simulation like this would be great for that.
lukan•2mo ago
Well, to get that intuition, I guess you have to start experimenting. WebGPU is quite an easy way to get started with the concepts. But in general it obviously depends on what kind of GPU you have.
GistNoesis•2mo ago
TL;DR: 1B particles ≈ 3 s per iteration.

For examples like particle simulations on a single node with a 4090 GPU, with everything running on the GPU and no memory transfers to the CPU:

-The main bottleneck is memory capacity: 24 GB available. Storing each particle's 3 position coordinates + 3 velocity coordinates at 4 bytes per number (float32) gives 24 bytes per particle, so a max of ~1B particles (spelled out after these bullets).

-Then GPU memory bandwidth: if everything stays on the GPU, you get between 1000 GB/s for global memory accesses and 10000 GB/s when shared-memory caches are hit. The number of memory accesses is roughly proportional to the number of effective collisions between your particles, which is proportional to the number of particles, so around 12-30 accesses per particle (see the optimal sphere-packing neighbor count in 3D, multiplied by your overlap factor). All in all, for 1B particles you can collide them all and move them in 1 to 10 s.
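The memory budget, spelled out (assuming the tightly packed float32 layout above):

```latex
\underbrace{10^9}_{\text{particles}} \times \underbrace{6}_{x,y,z,\,v_x,v_y,v_z} \times \underbrace{4\,\text{B}}_{\text{float32}} = 24\,\text{GB} = \text{all of a 4090's VRAM}
```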

If you have to transfer things to the CPU, you become limited by the PCIe 4.0 bandwidth of 16 GB/s. So you can move 1B particles (24 GB) to or from the GPU at most ~0.7 times per second.

Then, if your system is bigger and you want to store the particles on disk instead of in RAM, you can either use an M.2 SSD (but you will burn them out quickly), which has a theoretical bandwidth of 20 GB/s, so not a bottleneck, or use network storage over 100 Gb/s (= 12.5 GB/s) Ethernet, via two interfaces, to a parameter server that can be as big as you can afford.

So to summarize so far: 1B particles take 1 to 10 s per iteration per GPU. If you want a smarter integration scheme like RK4, divide by 6. If you need 64-bit precision, divide by 2. If you only need 16-bit precision, multiply by 2.

The number of particles you need: (volume of the box) / h^3, with h the particle diameter = the finest detail you want to be able to resolve.
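A hypothetical worked example of that formula, for a 10 m box resolved down to 1 cm detail:

```latex
N = \frac{V_{\text{box}}}{h^3} = \frac{(10\,\text{m})^3}{(0.01\,\text{m})^3} = 10^9 \ \text{particles}
```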

If you use an adaptive scheme, most of your particles are close to the surfaces of objects, so O(surface area of objects / h^2), with h = the average resolution of the surface mesh. But an adaptive scheme is ~10 times slower.

The precision of the approximation can be bounded by Taylor's formula. SPH is typically order 2, but it has issues with boundaries, so to represent a sharp boundary h must be small.

If you want higher order and sharp boundaries, you can use the Finite Element Method instead. But you'll need to tessellate space with something like Delaunay/Voronoi, and update the tessellation as the particles move.

dahart•2mo ago
Might be worth starting with a baseline where there's no collision, only advection, and assuming higher than 1 fps, just because this gives more particles per second while still fitting in 24 GB? I wouldn't be too surprised if you can advect 100M particles at interactive rates.
GistNoesis•2mo ago
The theoretical maximum rate for 1B-particle advection (just doing p[] += v[]*dt) is 1000 GB/s / 24 GB = 42 iterations per second. If you only have 100M particles you can have 10 times more iterations.

But that's without any rendering, and with non-interacting particles, which are extremely boring unless you like fireworks. (You can add a term like v[] += g*dt for free.) And you don't need to store colors for your particles if you can compute the colors from the particle number with a function.
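A minimal sketch of that advection step as a GLSL compute shader, assuming OpenGL-style uniforms and tightly packed float32 SSBOs (bindings, workgroup size, and the gravity constant are illustrative):

```glsl
#version 450
layout(local_size_x = 256) in;

// Tightly packed xyz triples: 12 bytes position + 12 bytes velocity
// per particle, matching the 24-byte budget discussed above.
layout(std430, binding = 0) buffer Positions  { float p[]; };
layout(std430, binding = 1) buffer Velocities { float v[]; };

uniform float dt;           // timestep
uniform uint particleCount; // number of particles

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= particleCount) return;
    uint b = 3u * i;
    v[b + 1u] += -9.81 * dt;     // the "free" gravity term: v += g*dt
    p[b]      += v[b]      * dt; // advection: p += v*dt
    p[b + 1u] += v[b + 1u] * dt;
    p[b + 2u] += v[b + 2u] * dt;
}
```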

Rasterizing is slower, because each pixel of the image might get touched by multiple particles (which means concurrent accesses to the same memory address, which GPUs don't like).

Obtaining the screen coordinates is just a matrix multiply, but rendering the particles in the correct depth order requires multiple passes, atomic operations, or z-sorting. Alternatively you can slice your point cloud, mixing the slices with a peak-shaped weight function around the desired depth value, and use an order-independent reduction like sum, but memory accesses are still concurrent.

For the rasterizing, you can also use the space-partitioning indices of the particles to render parts of the screen independently, without concurrent-access problems. That's called "tiled rendering": each tile renders the subset of particles that may fall in it. (There's plenty of literature on this in the Gaussian-splatting community.)

dahart•2mo ago
> The theoretical maximum rate for 1B-particle advection (just doing p[] += v[]*dt) is 1000 GB/s / 24 GB = 42 iterations per second.

Just to clarify: the 24 GB comes from multiplying 1B particles by 24 bytes? Why 24 bytes? If we used float3 particle positions, the rate would presumably be mem_bandwidth / particle_footprint. If we used a 5090, the rate would be 1790 GB/s / 12 bytes = ~146B particles/second (or ~146 fps of 1B particles).

> non interacting particles which are extremely boring

You assumed particle-particle collision above, which is expensive and might be overkill. The top comment asked simply about the maximum rate of moving particles. Since interesting things take time and space, the strictly accurate answer to that question is likely to be less interesting than trading away some time to get the features you proposed; your first answer is definitely interesting, but it didn't quite answer the question asked, right?

Anyway, I'm talking about other possibilities, for example interaction with a field, or collision against large objects. Those are still physically interesting, and when you have a field or large objects (as long as their footprint is significantly smaller than the particle data's), they can be engineered to have high cache coherency, and thus not count significantly against your bandwidth budget. You can get significantly more interesting than pure advection for a small fraction of the cost of particle-particle collisions.

Yes, if you need rendering, that will take time out of your budget; true, and a good point. Getting into the billions of primitives is where ray tracing can sometimes pay off over raster. The BVH update is an O(N) algorithm that replaces the O(N) raster algorithm, but the BVH update is simpler than the rasterization process you described, and it doesn't have the scatter problem (writing to multiple pixels) that you mentioned; it's write-once. BVH update on clustered triangles can now be done at pretty close to memory bandwidth. Particles aren't quite as fast yet, AFAIK, but we might get there soon.

yeoyeo42•2mo ago
the answer is a big "it depends", but I can give you some ballpark intuition.

perhaps it's easiest to think about regular image processing, because it uses the same hardware. you can think of each pixel as a particle.

a typical 4K (3840 x 2160 at 16:9) image contains about 8 million pixels. a trivial compute shader that just writes 4 bytes per pixel of some trivial value (e.g. the compute shader thread IDs) will take you anywhere from roughly 0.05 ms to 0.5 ms on modern-ish GPUs. this is a wide spread, to represent a wide hardware spread. on current high-end GPUs you will be very close to the 0.05 ms, or maybe even a bit faster.
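a minimal sketch of such a trivial compute shader in GLSL (image format, binding, and workgroup size are illustrative):

```glsl
#version 450
layout(local_size_x = 16, local_size_y = 16) in;

// One rgba8 texel = 4 bytes written per pixel.
layout(rgba8, binding = 0) uniform writeonly image2D outImage;

void main() {
    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);
    if (coord.x >= 3840 || coord.y >= 2160) return; // 4K bounds
    // Write a trivial value derived from the thread id.
    imageStore(outImage, coord, vec4(float(coord.x % 256) / 255.0,
                                     float(coord.y % 256) / 255.0,
                                     0.0, 1.0));
}
```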

but real-world programs like video games do a whole lot more than just write a trivial value. they read and write a lot more data (there are usually many passes, so it's not done just once; in the end maybe a few hundred bytes per pixel), and usually run many thousands of instructions per pixel. I work on a video game everyone's probably heard of, and one of the main material shaders is too large to fit into my work GPU's 32 KB instruction cache, to give you an idea of how many instructions are in there (not all executed, of course; some branching is involved).

and you can still easily do this all at 100+ frames per second on high end GPUs.

so you can in principle simulate a lot of particles. of course, the algorithm's scaling matters. most of rendering is somewhere in O(n). anything involving physics will probably involve some kind of interaction between objects, which immediately implies O(n log n) at the very least, but usually more.

dahart•2mo ago
Here's a datapoint: this project simulates ~100K rigid-body blocks with full collision in 10 milliseconds (or roughly ~10M blocks per second). https://graphics.cs.utah.edu/research/projects/avbd/

They mention it’s 3x faster when turning collision off. I don’t know what the memory footprint of a block is, but I’d speculate that small round particles (sphere plus radius) are an order of magnitude faster.

Modern GPUs are insanely fast. A higher-end consumer GPU like a 5090 can do over 100 teraflops of fp32 computation if your cache is perfectly utilized and memory access isn't the bottleneck. Normally, memory is the bottleneck, and at a minimum you need to read and write your particles every frame of a sim, which is why the sibling comments are using memory bandwidth to estimate the number of particles per second. I'd guess that if you were only advecting particles without collision, or colliding against only a small number of big objects (say, the particles collide against the planet and not each other), then you could move multiple billions of particles per second, which you might divide by your desired frame rate to see how many particles per frame you can do.

jkhdigital•2mo ago
The tectonics.js blog has some really incredible write-ups on how to do proper simulation of plate tectonics: https://davidson16807.github.io/tectonics.js/blog/news.html
indigoabstract•2mo ago
This looks very ambitious; it really starts from the basics, simulating tectonic plates.

Sadly, there never was a Part 2, was there?

I guess life just got in the way, as usual.