frontpage.

We built another object storage

https://fractalbits.com/blog/why-we-built-another-object-storage/
60•fractalbits•2h ago•10 comments

Java FFM zero-copy transport using io_uring

https://www.mvp.express/
25•mands•5d ago•6 comments

How exchanges turn order books into distributed logs

https://quant.engineering/exchange-order-book-distributed-logs.html
49•rundef•5d ago•17 comments

macOS 26.2 enables fast AI clusters with RDMA over Thunderbolt

https://developer.apple.com/documentation/macos-release-notes/macos-26_2-release-notes#RDMA-over-...
467•guiand•18h ago•237 comments

AI is bringing old nuclear plants out of retirement

https://www.wbur.org/hereandnow/2025/12/09/nuclear-power-ai
34•geox•1h ago•26 comments

Sick of smart TVs? Here are your best options

https://arstechnica.com/gadgets/2025/12/the-ars-technica-guide-to-dumb-tvs/
434•fleahunter•1d ago•362 comments

Photographer built a medium-format rangefinder, and so can you

https://petapixel.com/2025/12/06/this-photographer-built-an-awesome-medium-format-rangefinder-and...
78•shinryuu•6d ago•10 comments

Apple has locked my Apple ID, and I have no recourse. A plea for help

https://hey.paris/posts/appleid/
867•parisidau•10h ago•445 comments

GNU Unifont

https://unifoundry.com/unifont/index.html
287•remywang•18h ago•68 comments

A 'toaster with a lens': The story behind the first handheld digital camera

https://www.bbc.com/future/article/20251205-how-the-handheld-digital-camera-was-born
42•selvan•5d ago•19 comments

Beautiful Abelian Sandpiles

https://eavan.blog/posts/beautiful-sandpiles.html
83•eavan0•3d ago•16 comments

Rats Play DOOM

https://ratsplaydoom.com/
334•ano-ther•18h ago•123 comments

Show HN: Tiny VM sandbox in C with apps in Rust, C and Zig

https://github.com/ringtailsoftware/uvm32
167•trj•17h ago•11 comments

OpenAI are quietly adopting skills, now available in ChatGPT and Codex CLI

https://simonwillison.net/2025/Dec/12/openai-skills/
481•simonw•15h ago•272 comments

Computer Animator and Amiga fanatic Dick Van Dyke turns 100

110•ggm•6h ago•23 comments

Will West Coast Jazz Get Some Respect?

https://www.honest-broker.com/p/will-west-coast-jazz-finally-get
10•paulpauper•6d ago•2 comments

Formula One Handovers and Handovers From Surgery to Intensive Care (2008) [pdf]

https://gwern.net/doc/technology/2008-sower.pdf
82•bookofjoe•6d ago•33 comments

Show HN: I made a spreadsheet where formulas also update backwards

https://victorpoughon.github.io/bidicalc/
179•fouronnes3•1d ago•85 comments

Freeing a Xiaomi humidifier from the cloud

https://0l.de/blog/2025/11/xiaomi-humidifier/
126•stv0g•1d ago•51 comments

Obscuring P2P Nodes with Dandelion

https://www.johndcook.com/blog/2025/12/08/dandelion/
57•ColinWright•4d ago•1 comment

Go is portable, until it isn't

https://simpleobservability.com/blog/go-portable-until-isnt
119•khazit•6d ago•101 comments

Ensuring a National Policy Framework for Artificial Intelligence

https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-nati...
169•andsoitis•1d ago•217 comments

Poor Johnny still won't encrypt

https://bfswa.substack.com/p/poor-johnny-still-wont-encrypt
52•zdw•10h ago•65 comments

YouTube's CEO limits his kids' social media use – other tech bosses do the same

https://www.cnbc.com/2025/12/13/youtubes-ceo-is-latest-tech-boss-limiting-his-kids-social-media-u...
85•pseudolus•3h ago•67 comments

Slax: Live Pocket Linux

https://www.slax.org/
41•Ulf950•5d ago•5 comments

50 years of proof assistants

https://lawrencecpaulson.github.io//2025/12/05/History_of_Proof_Assistants.html
107•baruchel•15h ago•17 comments

Gild Just One Lily

https://www.smashingmagazine.com/2025/04/gild-just-one-lily/
29•serialx•5d ago•5 comments

Capsudo: Rethinking sudo with object capabilities

https://ariadne.space/2025/12/12/rethinking-sudo-with-object-capabilities.html
75•fanf2•17h ago•44 comments

Google removes Sci-Hub domains from U.S. search results due to dated court order

https://torrentfreak.com/google-removes-sci-hub-domains-from-u-s-search-results-due-to-dated-cour...
193•t-3•11h ago•34 comments

String theory inspires a brilliant, baffling new math proof

https://www.quantamagazine.org/string-theory-inspires-a-brilliant-baffling-new-math-proof-20251212/
167•ArmageddonIt•22h ago•154 comments

Writing N-body gravity simulations code in Python

https://alvinng4.github.io/grav_sim/5_steps_to_n_body_simulation/
153•dargscisyhp•7mo ago

Comments

mclau157•7mo ago
Very well done!
antognini•7mo ago
Once you have the matrix implementation in Step 2 (Implementation 3), it's rather straightforward to extend your N-body simulator to run on a GPU with Jax: you can just add `import jax.numpy as jnp` and replace all the `np.`s with `jnp.`s.

For a few-body system (e.g., the Solar System) this probably won't provide any speedup. But once you get to ~100 bodies you should start to see substantial speedups by running the simulator on a GPU.
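
For the curious, a minimal sketch of what that swap might look like, assuming the article's (N, 3) matrix formulation; the function name, signature, and `eps` guard here are mine, not the article's:

    import jax.numpy as jnp  # was: import numpy as np

    def accelerations(x, m, G=1.0, eps=1e-12):
        # x: (N, 3) positions, m: (N,) masses
        r = x[None, :, :] - x[:, None, :]  # (N, N, 3) displacements r_ij = x_j - x_i
        d2 = (r ** 2).sum(axis=-1) + eps   # squared distances; eps keeps the diagonal finite
        # a_i = G * sum_j m_j * r_ij / |r_ij|^3 (the self-term vanishes since r_ii = 0)
        return G * (m[None, :, None] * r * d2[:, :, None] ** -1.5).sum(axis=1)

Wrapping the function in `jax.jit` is what actually fuses the arithmetic into GPU kernels.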

tantalor•7mo ago
> rather straightforward

For the programmer, yes it is easy enough.

But there is a lot of complexity hidden behind that change. If you care about how your tools work that might be a problem (I'm not judging).

almostgotcaught•7mo ago
People think high-level libraries are magic fairy dust - "abstractions! abstractions! abstractions!". But there's no such thing as an abstraction in real life, only heuristics.
antognini•7mo ago
Oh for sure there is a ton that is going on behind the scenes when you substitute `np` --> `jnp`. But as someone who worked on gravitational dynamics a decade ago, it's really incredible that so much work has been done to make running numerical calculations on a GPU so straightforward for the programmer.

Obviously if you want maximum performance you'll probably still have to roll up your sleeves and write CUDA yourself, but there are a lot of situations where you can still get most of the benefit of using a GPU and never have to worry yourself over how to write CUDA.

munch0r•7mo ago
Not true for regular GPUs like the RTX 5090, which have atrocious float64 performance compared to a CPU. You need a special GPU designed for scientific computation (one with many float64 cores).
the__alchemist•7mo ago
This is confusing: GPUs seem great for scientific computations. You often want f64 for scientific computation. GPUs aren't good with f64.

I'm trying to evaluate if I can get away with f32 for GPU use for my molecular docking software. Might be OK, but I've hit cases broadly where f64 is fine, but f32 is not.

I suppose this is because the dominant uses, games and AI/ML, use f32, or for AI even less‽
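
A tiny illustration (mine, not from the thread) of one way f32 breaks for orbital mechanics: with coordinates far from the origin, the separation between two bodies can vanish entirely in f32.

    import numpy as np

    x1, x2 = 1.0e8, 1.0e8 + 1.0             # two bodies 1 unit apart, far from the origin
    print(np.float64(x2) - np.float64(x1))  # 1.0 -- f64 keeps the separation
    print(np.float32(x2) - np.float32(x1))  # 0.0 -- f32 rounds both to the same value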

oivey•7mo ago
You need cards like the A100 / H100 / H200 / B200. GPUs aren’t fundamentally worse at f64. Nvidia just makes it worse in many cards for market segmentation.
umpalumpaaa•7mo ago
Every time I see anything on the N-body problem I am reminded of my final high school project... I had 2-3 weeks to write an n-body simulation. Back then I used C++ and my hardware was really bad (a 2 GHz single-core CPU or so…). The result was not really impressive because it did not really work. :D But I was able to show that my code correctly predicted that the moon and the earth would eventually collide if neither body was given any initial velocity. I went into this totally naive and blind, but it was a ton of fun.
taneq•7mo ago
> my hardware was really bad (2 GHz single core CPU or so…)

laughs in 32kb of RAM

Sounds like a great project, though. There's a lot of fundamental concepts involved in something like this, so it's a great learning exercise (and fun as well!) :)

munchler•7mo ago
In Step 2 (Gravity), why are we summing over the cube of the distance between the bodies in the denominator?

Edit: To answer myself, I think this is because one factor of the distance normalizes the vector between the two bodies to length 1, and the other two factors are the standard inverse-square relationship.

itishappy•7mo ago
You got it. The familiar inverse square formula uses a unit vector:

    a = G * m1 * m2 / |r|^2 * r_unit

    r_unit = r / |r|

    a = G * m1 * m2 / |r|^3 * r
quantadev•7mo ago
The force itself is `G * m1 * m2 / (r^2)`. That's a pure magnitude. The direction of the force is just the unit vector going from m1 to m2. You need it to be a unit vector or else you're multiplying up to something higher than that force. However, I don't get why you'd ever cube the 'r'. Never seen that. I don't think it's right, tbh.
itishappy•7mo ago
> I don't get why you'd ever cube the 'r'.

It's pulled out of the unit vector. Might be clearer if I notate the vector bits a bit:

    old    : new
    r      : r_vec
    |r|    : r_mag
    r_unit : r_dir
As you know, a vector is a magnitude and direction:

    r_dir = r_vec / r_mag
So the formulas from before become (also correctly labeled as `F` per my other comment):

    F = G * m1 * m2 / r_mag^2 * r_dir
    F = G * m1 * m2 / r_mag^2 * r_vec / r_mag
    F = G * m1 * m2 / r_mag^3 * r_vec
quantadev•7mo ago
Ok, I see what you're doing. You're multiplying the force vector by a non-unit vector, and then dividing the extra magnitude back out to correct for it. You never see this in a physics book because it's a computational hack, probably used because it saves the CPU cost of the three division operations it takes to get each component (X, Y, Z) of the unit vector.

This also makes sense in computer code because if you were going to raise r_mag to a power anyway, you might as well raise it to 3 instead of 2: it costs nothing extra, and you avoid the three divisions by never calculating a unit vector. When I was doing this work, decades ago, I had no idea about the cost of floating-point operations. Thanks for explaining!

itishappy•7mo ago
Glad I could help!

Also fun is that taking the magnitude involves a square root that can sometimes be avoided, but that doesn't really help us here because of the power of three. If the denominator were squared we could just use `r_mag^2 = r_x^2 + r_y^2`, but we still need the root to get the direction. It is kinda interesting though that in 2d it expands to a power of `3/2`:

    F_vec = G * m1 * m2 / (r_x^2 + r_y^2) ^ (3/2) * r_vec
quantadev•7mo ago
Yeah, on paper (or in mathematical symbols) it comes down to what's clearer and what best represents reality. That's why I initially said I knew there was no cubic relation in the physics of this, which was correct.

But that doesn't mean there are no correct physics equations (for gravity) involving the cube of a distance, even when there are only squares in these "laws" of physics.

In both cases the power of 3, as well as the 3/2, is there merely to "cancel out" the fact that you didn't use a unit vector in the numerator and therefore need to divide that factor out in the denominator, so that you end up scaling the force magnitude by a unit vector.

itishappy•7mo ago
Oops, pretty big mistake on my part. I gave the formula for force and labeled it acceleration. Way too late to edit, but the correctly labeled formulas should have been:

    F = G * m1 * m2 / |r|^2 * r_unit
    F = G * m1 * m2 / |r|^3 * r
Or as acceleration, by dividing out an `m` using `F=ma`:

    a = G * m / |r|^2 * r_unit
    a = G * m / |r|^3 * r
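
In code, the cubed form is one line; a minimal NumPy sketch of the corrected acceleration formula (the function name is mine):

    import numpy as np

    def accel_from(x_self, x_other, m_other, G=1.0):
        r = x_other - x_self  # vector from this body toward the other
        r_mag = np.sqrt(r @ r)
        # unit-vector form: G * m_other / r_mag**2 * (r / r_mag)
        # cubed form (same value, normalization folded into the denominator):
        return G * m_other / r_mag**3 * r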
gitroom•7mo ago
Man, I tried doing this once and my brain nearly melted, lol. Props for actually making it work.
mkoubaa•7mo ago
My favorite thing about this kind of code is that people are constantly inventing new techniques to do time integration. It's the sort of thing you'd think was a solved problem, but when you read about time integration strategies you realize how rich the space of solutions is. And they are more like engineering problems than pure math, with tradeoffs and better fits depending on the kind of system.
JKCalhoun•7mo ago
Decades ago ... I think it was the Computer Recreations column in "Scientific American" ... the strategy, since computers were less capable then, was to advance all the bodies by some amount of time — call it a "tick". When bodies got close, the "tick" got smaller, and the calculations therefore got more nuanced and precise. Further apart, and you could run the solar system on generalities.
halfcat•7mo ago
So in this kind of simulation, as we look closer, uncertainty is reduced.

But in our simulation (reality, or whatever), uncertainty increases the closer we look.

Does that suggest we don’t live in a simulation? Or is “looking closer increases uncertainty” something that would emerge from a nested simulation?

quantadev•7mo ago
The reason uncertainty goes up the closer we try to observe something in physics is that everything is ultimately waves of specific wavelengths. So if you try to "zoom in" on one of the 'humps' of a sine wave, for example, you don't see anything but a line that gets straighter and straighter, which is basically a loss of information. This is what the Heisenberg principle is about. Not the "Say my Name" one, the other one. hahaha.

And yes, that's a dramatic oversimplification of the uncertainty principle, but it conveys the concept perfectly.

leumassuehtam•7mo ago
One way of doing that is, at each time step delta, to perform two integrations with different accuracies, one giving you x_1 and another x_2. The error is estimated by the difference |x_1 - x_2|, and you make the error match your tolerance by adjusting the time step.

Naturally, this difference becomes big when two objects are close together, since the acceleration induces a large change in velocity, and a smaller time step is needed to keep the error under control.
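
A minimal step-doubling sketch of that idea (all names are mine; `step` stands in for any one-step integrator returning new positions and velocities, and the 1/5 exponent in the growth heuristic assumes a 4th-order method):

    import numpy as np

    def adaptive_step(x, v, dt, step, tol=1e-9):
        while True:
            x1, v1 = step(x, v, dt)        # one full step -> x_1
            xh, vh = step(x, v, dt / 2)    # two half steps -> x_2
            x2, v2 = step(xh, vh, dt / 2)
            err = np.max(np.abs(x1 - x2))  # the |x_1 - x_2| error estimate
            if err <= tol:
                # accept, and cautiously grow dt for the next step
                return x2, v2, dt * min(2.0, 0.9 * (tol / max(err, 1e-300)) ** 0.2)
            dt /= 2                        # close encounter: retry with a smaller step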

hermitcrab•7mo ago
I have run into the problem where a constant time step can suddenly result in bodies getting flung out of the simulation because they pass very close to each other.

Your solution sounds interesting, but isn't it only practical when you have a small number of bodies?

markstock•7mo ago
Yes, the author uses a globally-adaptive time stepper, which is only efficient for very small N. There are adaptive time step methods that are local, and those are used for large systems.

If you see bodies flung out after close passes, three solutions are available: reduce the time step, use a higher-order time integrator, or (the most common method) add regularization. Regularization (often called "softening") removes the singularity by adding a constant to the squared distance. So one over zero becomes one over a smallish, finite number.
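
In code the change is tiny; a sketch with an assumed softening length `eps` (the name and value are a modeling choice, not from the thread):

    import numpy as np

    def softened_accel(r_vec, m_other, G=1.0, eps=0.05):
        d2 = r_vec @ r_vec + eps**2           # squared distance can no longer reach zero
        return G * m_other * r_vec / d2**1.5  # i.e. (r^2 + eps^2)^(3/2) in the denominator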

hermitcrab•7mo ago
>Regularization (often called "softening") removes the singularity by adding a constant to the squared distance. So 1 over zero becomes one over a small-ish and finite number.

IIRC that is what I did in the end. It is a fudge, but it works.

markstock•7mo ago
It is a fudge if you really are trying to simulate true point masses. Mathematically, it's solving for the force between fuzzy blobs of mass.
mkoubaa•7mo ago
You are never simulating pure anything. All computational models are wrong; some are useful.
HelloNurse•7mo ago
The suggested method of trying two time steps and comparing the accelerations does about twice as many calculations per simulation step, which don't require twice the time thanks to coherent memory access: a reasonable price to pay to ensure that every time step is small enough.

Optimistic fixed time steps are just going to work well almost always and almost everywhere, accumulating errors behind your back at every episode of close approach.

sampo•7mo ago
The adaptive time step RKF algorithm is explained in section 5:

https://alvinng4.github.io/grav_sim/5_steps_to_n_body_simula...

kbelder•7mo ago
Cut the time interval shorter and shorter, and then BAM you've independently discovered calculus.
mkoubaa•7mo ago
You need to invent calculus to get partial differential equations. You need to invent PDEs to get approximate solutions to PDEs. You need to invent approximate solutions to PDEs to get computational methods. You need to invent computational methods to explore time integration techniques.
odyssey7•7mo ago
They're more like engineering than pure math because pure math hasn't solved the n-body problem.
mkoubaa•7mo ago
There are countless applications other than the n-body problem for which my point holds
the__alchemist•7mo ago
Next logical optimization: Barnes-Hut? It groups source bodies using a recursive tree of cubes and gives huge speedups at high body counts. FMM is a step beyond that, which also groups target bodies, but it is much more complicated to implement.
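
A compact sketch of the Barnes-Hut idea (a 2D quadtree for brevity; real codes use octrees of cubes in 3D, and all names here are mine). A far-away cell is approximated by its total mass at its center of mass whenever the cell's size-to-distance ratio drops below an opening angle `theta`; this toy assumes distinct body positions:

    import numpy as np

    class Node:
        """One square cell: total mass, center of mass, and 4 child cells."""
        def __init__(self, center, size):
            self.center, self.size = center, size
            self.mass, self.com = 0.0, np.zeros(2)
            self.children, self.body = None, None

        def insert(self, x, m):
            if self.children is None and self.mass == 0.0:  # empty leaf: store body
                self.body, self.com, self.mass = (x, m), x.astype(float), m
                return
            if self.children is None:                       # occupied leaf: split it
                self.children = [Node(self.center + 0.25 * self.size * np.array([sx, sy]),
                                      self.size / 2.0)
                                 for sx in (-1.0, 1.0) for sy in (-1.0, 1.0)]
                old_x, old_m = self.body
                self.body = None
                self._child_for(old_x).insert(old_x, old_m)
            self._child_for(x).insert(x, m)
            self.com = (self.com * self.mass + x * m) / (self.mass + m)
            self.mass += m

        def _child_for(self, x):
            i = 2 * int(x[0] > self.center[0]) + int(x[1] > self.center[1])
            return self.children[i]

    def accel(node, x, G=1.0, theta=0.5, eps=1e-3):
        """Acceleration at x: treat sufficiently far cells as single point masses."""
        if node.mass == 0.0:
            return np.zeros(2)
        r = node.com - x
        d = np.sqrt(r @ r) + eps                 # eps avoids 0/0 at the body's own leaf
        if node.children is None or node.size / d < theta:
            return G * node.mass * r / d**3      # far enough (or a leaf): one point mass
        return sum(accel(c, x, G, theta, eps) for c in node.children)

Building the tree is O(N log N), and `theta` trades accuracy for speed: `theta = 0` degenerates to the exact O(N²) pairwise sum.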
RpFLCL•7mo ago
That's mentioned on the "Conclusions" page of TFA:

> Large-scale simulation: So far, we have only focused on systems with a few objects. What about large-scale systems with thousands or millions of objects? Turns out it is not so easy because the computation of gravity scales as O(N²). Have a look at the Barnes-Hut algorithm to see how to speed up the simulation. In fact, we have documentation about it on this website as well. You may try to implement it in some low-level language like C or C++.

kkylin•7mo ago
C or C++? Ha! I've implemented FMM in Fortran 77. (Not my choice; it was a summer internship and the "boss" wanted it that way.) It was a little painful.
the__alchemist•7mo ago
Oh wow! I implemented BH in Rust not long ago. It was straightforward. I set it up with graphics so I could see the cubes, etc. Then I looked into FMM... I couldn't figure out where to start! It looked very formidable. Multipole methods seem to be coming up in everything scientific I look at these days...
markstock•7mo ago
Supercomputers will simulate trillions of masses. The HACC code, commonly used to verify the performance of these machines, uses a uniform grid (interpolation and a 3D FFT) and local corrections to compute the motion of ~8 trillion bodies.
quantadev•7mo ago
I was fascinated with doing particle simulations in "C" language as a teenager in the late 1980s on my VGA monitor! I would do both gravity and charged particles.

Once particles accelerate, they'll just 'jump past' each other rather than collide, of course, if you have no repulsive forces. I realized you had to take smaller and smaller time slices to slow things down and keep the simulation running when the singularities "got near". I had a theory even back then that this is what nature is doing with General Relativity, making time slow down near masses: slowing down to avoid singularities. It's a legitimate theory. Maybe this difficulty is why "God" (if there's a creator) decided to make everything actually "waves" as much as particles, because waves don't have the problem of singularities.

kristel100•7mo ago
N-body problems are the gateway drug into numerical physics. Writing one from scratch is a rite of passage. Bonus points if you hit that sweet spot where performance doesn’t completely tank.
tanepiper•7mo ago
I built https://teskooano.space/ - so far I've not seen any big performance issues, and it's multiple bodies running happily at 120fps.

Now you're making me worry I'm not "physicsing" hard enough

eesmith•7mo ago
Back in the early 1990s I was going through books in the college library and found one on numerical results of different 3-body starting configurations.

I vividly remember the Pythagorean three-body problem example, and how it required special treatment for the close interactions.

Which made me very pleased to see that configuration used as the example here.

bgwalter•7mo ago
For comparison, the famous Debian language benchmark competition:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

igouy•7mo ago
And for those who have access:

2022 "N-Body Performance With a kD-Tree: Comparing Rust to Other Languages"

https://ieeexplore.ieee.org/document/10216574

2025 "Parallel N-Body Performance Comparison: Julia, Rust, and More"

https://link.springer.com/chapter/10.1007/978-3-031-85638-9_...

igouy•7mo ago
Also:

ParallelNBodyPerformance

https://github.com/MarkCLewis/ParallelNBodyPerformance