frontpage.

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
85•valyala•4h ago•16 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•14 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
35•zdw•3d ago•4 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
89•mellosouls•6h ago•166 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
131•valyala•4h ago•99 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
47•surprisetalk•3h ago•52 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
143•AlexeyBrin•9h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
96•vinhnx•7h ago•13 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
850•klaussilveira•23h ago•256 comments

First Proof

https://arxiv.org/abs/2602.05192
66•samasblack•6h ago•51 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1092•xnx•1d ago•618 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
64•thelok•5h ago•9 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
4•mbitsnbites•3d ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
232•jesperordrup•14h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
516•theblazehen•3d ago•191 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
93•onurkanbkrc•8h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
13•languid-photic•3d ago•4 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
333•ColinWright•3h ago•400 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
254•alainrk•8h ago•412 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
182•1vuio0pswjnm7•10h ago•251 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
611•nar001•8h ago•269 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
35•marklit•5d ago•6 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
27•momciloo•4h ago•5 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
47•rbanffy•4d ago•9 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
124•videotopia•4d ago•39 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
96•speckx•4d ago•108 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•5 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
211•limoce•4d ago•117 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
32•sandGorgon•2d ago•15 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
287•isitcontent•1d ago•38 comments

Show HN: Autograd.c – A tiny ML framework built from scratch

https://github.com/sueszli/autograd.c
85•sueszli•1mo ago
built a tiny PyTorch clone in C after going through Prof. Vijay Janapa Reddi's MLSys book: mlsysbook.ai/tinytorch/

perfect for learning how ML frameworks work under the hood :)
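
For readers curious what "under the hood" looks like in practice, here is a minimal, hypothetical sketch of scalar reverse-mode autograd in C. It is not the repo's actual API (the project works on tensors and has its own ownership rules); names like value_t, leaf, and backward_mul are invented for illustration.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical scalar autograd node: value, gradient, parents, local rule. */
    typedef struct value {
        double data;
        double grad;
        struct value *a, *b;             /* parent nodes (NULL for leaves)   */
        void (*backward)(struct value*); /* propagates this->grad to parents */
    } value_t;

    static value_t *leaf(double x) {
        value_t *v = calloc(1, sizeof *v);
        v->data = x;
        return v;
    }

    static void backward_add(value_t *v) {   /* d(a+b)/da = 1, d(a+b)/db = 1 */
        v->a->grad += v->grad;
        v->b->grad += v->grad;
    }

    static void backward_mul(value_t *v) {   /* d(a*b)/da = b, d(a*b)/db = a */
        v->a->grad += v->b->data * v->grad;
        v->b->grad += v->a->data * v->grad;
    }

    static value_t *binop(value_t *a, value_t *b, double data,
                          void (*bw)(value_t*)) {
        value_t *v = leaf(data);
        v->a = a; v->b = b; v->backward = bw;
        return v;
    }

    static value_t *add(value_t *a, value_t *b) {
        return binop(a, b, a->data + b->data, backward_add);
    }
    static value_t *mul(value_t *a, value_t *b) {
        return binop(a, b, a->data * b->data, backward_mul);
    }

    int main(void) {
        /* y = w*x + b, then dy/dw, dy/dx, dy/db via a reverse sweep. */
        value_t *w = leaf(3.0), *x = leaf(2.0), *b = leaf(1.0);
        value_t *wx = mul(w, x);
        value_t *y  = add(wx, b);

        y->grad = 1.0;      /* seed dy/dy = 1                                */
        y->backward(y);     /* sweep in reverse order of construction        */
        wx->backward(wx);

        printf("y = %g, dy/dw = %g, dy/dx = %g, dy/db = %g\n",
               y->data, w->grad, x->grad, b->grad);   /* 7, 2, 3, 1 */

        free(wx); free(y); free(w); free(x); free(b);
        return 0;
    }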

Comments

sueszli•1mo ago
woah, this got way more attention than i expected. thanks a lot.

if you are interested in the technical details, the design specs are here: https://github.com/sueszli/autograd.c/blob/main/docs/design....

if you are working on similar mlsys or compiler-style projects and think there could be overlap, please reach out: https://sueszli.github.io/

spwa4•1mo ago
Cool. But this makes me wonder. This negates most of the advantages of C. Is there a compiler-autograd "library"? Something that would compile into C specifically to execute as fast as possible on CPUs with no indirection at all.
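
To make "compile into C with no indirection" concrete: the idea would be to emit straight-line primal and gradient code for a fixed expression instead of walking a graph at runtime. A hand-written, hypothetical sketch of what such generated code might look like for f(w, x, b) = relu(w*x + b); no actual tool produced this.

    #include <stdio.h>

    /* Hand-written stand-in for what a compile-time autograd tool might emit:
     * the primal and its reverse-mode gradient as straight-line code, with no
     * graph, no tape, and no function pointers. */
    static double f(double w, double x, double b) {
        double z = w * x + b;
        return z > 0.0 ? z : 0.0;                 /* relu(w*x + b) */
    }

    static void f_grad(double w, double x, double b,
                       double *dw, double *dx, double *db) {
        double z  = w * x + b;
        double dz = (z > 0.0) ? 1.0 : 0.0;        /* d relu(z)/dz  */
        *dw = dz * x;                             /* dz/dw = x     */
        *dx = dz * w;                             /* dz/dx = w     */
        *db = dz;                                 /* dz/db = 1     */
    }

    int main(void) {
        double dw, dx, db;
        f_grad(3.0, 2.0, 1.0, &dw, &dx, &db);
        printf("f = %g, df/dw = %g, df/dx = %g, df/db = %g\n",
               f(3.0, 2.0, 1.0), dw, dx, db);     /* 7, 2, 3, 1 */
        return 0;
    }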
thechao•1mo ago
At best you'd be restricted to forward mode, which would still double stack pressure. If you needed reverse mode you'd need 2x the stack, and the back sweep over the stack-based tape would have the nearly perfectly unoptimal "grain". If you allow the higher-order operators (both push out and pull back), you're going to end up with Jacobians & Hessians over nontrivial blocks. That's going to need the heap. It's still better than an unbounded loop tape, though.

We had all these issues back in 2006 when my group was implementing autograd for C++ and, later, a computer algebra system called Axiom. We knew it'd be ideal for NN; I was trying to build this out for my brother who was porting AI models to GPUs. (This did not work in 2006 for both HW & math reasons.)
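
The point above about forward mode doubling stack pressure can be seen with dual numbers: every scalar on the stack becomes a (value, derivative) pair. A minimal, hypothetical C sketch, assuming a single seeded input:

    #include <stdio.h>

    /* Forward-mode AD via dual numbers: each stack value carries its
     * derivative with respect to one chosen input, so stack usage for
     * scalars roughly doubles. */
    typedef struct { double val, dot; } dual_t;

    static dual_t dmul(dual_t a, dual_t b) {
        return (dual_t){ a.val * b.val, a.dot * b.val + a.val * b.dot };
    }
    static dual_t dadd(dual_t a, dual_t b) {
        return (dual_t){ a.val + b.val, a.dot + b.dot };
    }

    int main(void) {
        /* Differentiate y = w*x + b with respect to w: seed w.dot = 1. */
        dual_t w = {3.0, 1.0}, x = {2.0, 0.0}, b = {1.0, 0.0};
        dual_t y = dadd(dmul(w, x), b);
        printf("y = %g, dy/dw = %g\n", y.val, y.dot);   /* 7, 2 */
        return 0;
    }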

spwa4•1mo ago
Why not recompile every iteration? Weights are only updated at the end of a batch at the earliest (for distributed training, every n batches at the fastest), and generally only at the end of an iteration. In either case the cost of recompiling would be negligible, no?
thechao•1mo ago
You'd pay the cost of the core computation O(n) times. Matrix products under the derivative fibration (jet; whatever your algebra calls it) are just more matrix products. A good-sized NN is already in the heap. Also, the hard part is finding the ideal combination of fwd vs. rev transforms (it's NP-hard). This is similar to the complexity of finding the ideal sub-block matrix-multiply orchestration.

So, the killer cost is at compile time, not runtime, which is fundamental to the underlying autograd operation.

On the flip side, it's 2025, not 2006, so modern algorithms & heuristics can change this story quite a bit.

All of this is spelled out in Griewank's work (the book).
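
For the claim that matrix products under the derivative are just more matrix products, the standard forward (pushforward) and reverse (pullback) rules for C = AB can be written as:

    % Forward mode (pushforward) for C = A B:
    \dot{C} = \dot{A}\,B + A\,\dot{B}
    % Reverse mode (pullback), with \bar{X} = \partial L / \partial X:
    \bar{A} = \bar{C}\,B^{\top}, \qquad \bar{B} = A^{\top}\,\bar{C}

Both rules are themselves matrix products of the same shapes, which is why the derivative transformation multiplies the amount of matmul work rather than replacing it.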

spwa4•1mo ago
This one? https://epubs.siam.org/doi/book/10.1137/1.9780898717761
thechao•1mo ago
Yep. You can find used copies at some online places; Powell's in Portland (online store) sometimes has it for $25 or $30.
sueszli•1mo ago
a heap-free implementation could be a really cool direction to explore. thanks!

i think you might be interested in MLIR/IREE: https://github.com/openxla/iree

attractivechaos•1mo ago
> Is there a compiler-autograd "library"?

Do you mean the method Theano uses? Anyway, the performance bottleneck often lies in matrix multiplication or 2D CNNs (which can be reduced to matmul). Compiler autograd wouldn't save much time.
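
On reducing a 2D CNN to matmul: the usual trick is im2col, which copies each receptive field into a column so the whole convolution becomes one matrix product. A minimal, hypothetical sketch (single channel, stride 1, no padding; this is the generic technique, not code from this repo):

    #include <stdio.h>

    /* im2col for a single-channel H x W input and a KH x KW kernel, stride 1,
     * no padding. The column matrix is (KH*KW) x (OH*OW), so the convolution
     * becomes a (1 x KH*KW) * (KH*KW x OH*OW) matrix product. */
    #define H 4
    #define W 4
    #define KH 3
    #define KW 3
    #define OH (H - KH + 1)
    #define OW (W - KW + 1)

    static void im2col(const double in[H][W], double cols[KH * KW][OH * OW]) {
        for (int oy = 0; oy < OH; oy++)
            for (int ox = 0; ox < OW; ox++)
                for (int ky = 0; ky < KH; ky++)
                    for (int kx = 0; kx < KW; kx++)
                        cols[ky * KW + kx][oy * OW + ox] = in[oy + ky][ox + kx];
    }

    int main(void) {
        double in[H][W], kernel[KH * KW], cols[KH * KW][OH * OW], out[OH * OW];
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                in[y][x] = y * W + x;
        for (int i = 0; i < KH * KW; i++)
            kernel[i] = 1.0 / (KH * KW);          /* 3x3 averaging kernel */

        im2col(in, cols);
        /* conv as matmul: out = kernel (1 x 9) * cols (9 x 4) */
        for (int j = 0; j < OH * OW; j++) {
            out[j] = 0.0;
            for (int i = 0; i < KH * KW; i++)
                out[j] += kernel[i] * cols[i][j];
        }
        for (int j = 0; j < OH * OW; j++)
            printf("%g ", out[j]);                /* prints: 5 6 9 10 */
        printf("\n");
        return 0;
    }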

marcthe12•1mo ago
We would need to mirror JAX's architecture more, since JAX is essentially JIT-based, architecture-wise. Basically you need a good way to convert the computational graph to machine code while also performing a set of operations on the graph at compile time.
justinnk•1mo ago
I believe Enzyme comes close to what you describe. It works on the LLVM IR level.

https://enzyme.mit.edu
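
For reference, the basic usage pattern, adapted from the getting-started example on enzyme.mit.edu (it requires compiling with Clang and the Enzyme LLVM plugin), looks roughly like this:

    #include <stdio.h>

    /* Primal function to differentiate. */
    double square(double x) { return x * x; }

    /* Resolved by the Enzyme compiler pass; declaring it like this is the
     * conventional way to call it from C. */
    double __enzyme_autodiff(void *, double);

    int main(void) {
        double x  = 3.0;
        double dx = __enzyme_autodiff((void *) square, x);
        printf("square(%g) = %g, d/dx = %g\n", x, square(x), dx);  /* 9, 6 */
        return 0;
    }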

PartiallyTyped•1mo ago
Any reason for creating a new tensor when accumulating grads, rather than updating the existing one?

Edit: I asked this before I read the design decisions. The reasoning, as far as I understand, is that for simplicity there are no in-place operations, hence accumulation is done on a new tensor.

sueszli•1mo ago
yeah, exactly. it's for explicit ownership transfer. you always own what you receive, sum it, release both inputs, done. no mutation tracking, no aliasing concerns.

https://github.com/sueszli/autograd.c/blob/main/src/autograd...

i wonder whether there is a more clever way to do this without sacrificing simplicity.
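
A hypothetical illustration of that ownership convention (the tensor_t and grad_accumulate names are invented, not the repo's actual code; see the linked source for the real implementation): the accumulator consumes both inputs and hands back a freshly allocated sum, so nothing is ever mutated or aliased.

    #include <stdlib.h>

    /* Hypothetical tensor type, for illustration only. */
    typedef struct {
        double *data;
        size_t  len;
    } tensor_t;

    static tensor_t *tensor_new(size_t len) {
        tensor_t *t = malloc(sizeof *t);
        t->data = calloc(len, sizeof *t->data);
        t->len  = len;
        return t;
    }

    static void tensor_free(tensor_t *t) {
        free(t->data);
        free(t);
    }

    /* Ownership-transfer accumulation: the caller gives up both `acc` and
     * `grad` and afterwards owns only the returned tensor. No in-place update
     * means no mutation tracking and no aliasing to reason about. */
    static tensor_t *grad_accumulate(tensor_t *acc, tensor_t *grad) {
        tensor_t *sum = tensor_new(acc->len);
        for (size_t i = 0; i < acc->len; i++)
            sum->data[i] = acc->data[i] + grad->data[i];
        tensor_free(acc);
        tensor_free(grad);
        return sum;
    }

    int main(void) {
        tensor_t *acc = tensor_new(3), *g = tensor_new(3);
        for (size_t i = 0; i < 3; i++) { acc->data[i] = 1.0; g->data[i] = 0.5; }
        acc = grad_accumulate(acc, g);   /* old acc and g are consumed */
        tensor_free(acc);
        return 0;
    }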