
We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
125•ColinWright•1h ago•93 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
24•surprisetalk•1h ago•26 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
121•AlexeyBrin•7h ago•24 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
62•vinhnx•5h ago•7 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
124•alephnerd•2h ago•81 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
829•klaussilveira•21h ago•249 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
55•thelok•3h ago•8 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
109•1vuio0pswjnm7•8h ago•139 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
4•gnufx•41m ago•1 comment

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1060•xnx•1d ago•611 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
76•onurkanbkrc•6h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
484•theblazehen•2d ago•175 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
10•valyala•2h ago•1 comment

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
210•jesperordrup•12h ago•70 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
9•valyala•2h ago•0 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
559•nar001•6h ago•257 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
222•alainrk•6h ago•343 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
37•rbanffy•4d ago•7 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
29•marklit•5d ago•2 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•31 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
76•speckx•4d ago•75 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
6•momciloo•2h ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
273•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•111 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•11 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
286•dmpetrov•22h ago•153 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
71•mellosouls•4h ago•75 comments

Batmobile: 10-20x Faster CUDA Kernels for Equivariant Graph Neural Networks

https://elliotarledge.com/blog/batmobile
93•ipnon•2w ago

Comments

shihab•2w ago
Hi, I just wanted to note that e3nn is more of an academic package that's a bit high-level by design. A better baseline for comparison would be Nvidia's cuEquivariance, which does pretty much the same thing as you did: take e3nn and optimize it for GPU.

As an HPC developer, it breaks my heart how much worse academic software performs compared to vendor libraries (from Intel or Nvidia). We need to start aiming much higher.

bee_rider•2w ago
I took a lot longer than I should have to finish my PhD because I wanted to beat well written/properly used vendor code. I wouldn’t recommend it, TBH.

It did make my defense a lot easier because I could just point at the graphs and say “see I beat MKL, whatever I did must work.” But I did a lot of little MPI tricks and tuning, which doesn’t add much to the scientific record. It was fun though.

I don’t know. Mixed feelings. To some extent I don’t really see how somebody could put all the effort into getting a PhD and not go on a little “I want to tune the heck out of these MPI routines” jaunt.

shihab•2w ago
To be practically useful, we don't need to beat vendors; just getting close would be enough, by virtue of being open source (and often portable). But I found, as an example, PETSc to be ~10x slower than MKL on CPU and CUDA on GPU; it still doesn't have native shared-memory parallelism support on CPU, etc.
bee_rider•2w ago
Oh dang, thanks for the heads up. I was looking at them for the “next version” of my code.

The lack of a “blas/lapack/sparse equivalents that can dispatch to GPU or CPU” is really annoying. You’d think this would be somewhat “easy” (lol, nothing is easy), in the sense that we’ve got a bunch of big chunky operations…

shihab•2w ago
I should note PETSc is a big piece of software that does a lot of things. It also wraps many libraries, and those might ultimately dictate actual performance depending on what you plan on doing.
PerryStyle•2w ago
I would love to do this in the future, but knowing me I'd get caught up making sure I'm benchmarking properly rather than actually writing code.
geremiiah•2w ago
cuEquivariance is unfortunately closed source (the actual .cu kernels), and OP's work targets a consumer GPU and a very small particle system, so it's hard to compare anyway.
teddykoker•2w ago
OpenEquivariance [1] is another good baseline, with kernels for the Clebsch-Gordan tensor product and convolution, and it is fully open source. Both kernel implementations have been successfully integrated into existing machine learning interatomic potentials, e.g. [2,3].

[1] https://github.com/PASSIONLab/OpenEquivariance

[2] https://arxiv.org/abs/2504.16068

[3] https://arxiv.org/abs/2508.16067
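For readers who haven't met the operation these kernels accelerate, here is a rough numpy sketch of the core contraction in a Clebsch-Gordan tensor product: two irrep feature vectors are combined through a fixed coefficient tensor into an output irrep. The coefficient tensor below is a random stand-in, not real Clebsch-Gordan values, and the shapes are only illustrative.

```python
import numpy as np

# Irrep degrees: inputs of degree l1 and l2 couple into an output of
# degree l3. An irrep of degree l has dimension 2l + 1.
l1, l2, l3 = 1, 1, 2

rng = np.random.default_rng(0)
# Stand-in for the (sparse, precomputed) Clebsch-Gordan coefficients.
C = rng.standard_normal((2*l3 + 1, 2*l1 + 1, 2*l2 + 1))
u = rng.standard_normal(2*l1 + 1)   # input feature, degree l1
v = rng.standard_normal(2*l2 + 1)   # input feature, degree l2

# out_k = sum_ij C[k,i,j] * u[i] * v[j] -- the contraction that
# optimized kernels fuse and batch over many channels and edges.
out = np.einsum('kij,i,j->k', C, u, v)
assert out.shape == (2*l3 + 1,)
```

In real models this contraction runs over many (l1, l2, l3) paths, channels, and graph edges at once, which is where fused GPU kernels pay off; the real C is sparse and fixed by symmetry, so good kernels exploit that structure rather than doing the dense einsum above.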

rapatel0•2w ago
I think this is the difference between research and industry. Industry should grind out obvious improvements through brute-force iteration. I really wish the culture of academia were more aimed at moonshots (high risk, high reward).
physicsguy•2w ago
> As an HPC developer, it breaks my heart how much worse academic software performs compared to vendor libraries (from Intel or Nvidia). We need to start aiming much higher.

They're optimising for different things really.

Intel/Nvidia have the resources to (a) optimise across a wide range of hardware in their libraries and (b) use less well-documented features, and (c) they don't have to make their source code publicly accessible.

Take MKL, for example: it's a great library, but implementing dynamic dispatch for all the different processor types is why it gets such good performance across x86-64 machines; it's not running the same code on each processor. No academic team can really compete with that.
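The dispatch pattern itself is easy to sketch. Here is a toy Python version, purely illustrative: libraries like MKL do this natively at load time, keyed on CPUID feature flags, and the fast path would be hand-tuned SIMD rather than the stand-in below.

```python
# Toy runtime dispatch: choose a code path once, based on what the CPU
# reports, instead of shipping one lowest-common-denominator binary.

def dot_generic(a, b):
    # Portable fallback path.
    return sum(x * y for x, y in zip(a, b))

def dot_avx2(a, b):
    # Stand-in for a hand-tuned SIMD code path; same result either way.
    return sum(x * y for x, y in zip(a, b))

def select_dot(cpu_features):
    # Pick the best implementation the hardware supports.
    return dot_avx2 if "avx2" in cpu_features else dot_generic

dot = select_dot({"sse2", "avx2"})
print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

The point is that the branch happens once, at startup; every later call goes straight to the selected routine, so per-call overhead is a single indirect call.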

shihab•2w ago
I'm not asking an academic program first published 8 years ago (e3nn) to beat the actively developed cuEquivariance library. An academic proposing new algorithms doesn't need to worry too much about performance. But any new work that focuses on performance (and that includes this blog post and a huge number of academic papers published every year) should absolutely use the latest vendor libraries as a baseline.