frontpage.

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
115•valyala•4h ago•19 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
52•zdw•3d ago•17 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
28•gnufx•3h ago•22 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
62•surprisetalk•4h ago•72 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
103•mellosouls•7h ago•186 comments

Tiny C Compiler

https://bellard.org/tcc/
3•guerrilla•36m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
146•AlexeyBrin•10h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
104•vinhnx•7h ago•14 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
855•klaussilveira•1d ago•261 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1097•xnx•1d ago•620 comments

First Proof

https://arxiv.org/abs/2602.05192
71•samasblack•6h ago•51 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
9•mbitsnbites•3d ago•0 comments

Italy Railways Sabotaged

https://www.bbc.co.uk/news/articles/czr4rx04xjpo
16•vedantnair•38m ago•9 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
65•thelok•6h ago•12 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
143•valyala•4h ago•119 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
242•jesperordrup•14h ago•81 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
522•theblazehen•3d ago•194 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
34•momciloo•4h ago•5 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
95•onurkanbkrc•9h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
15•languid-photic•3d ago•5 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
39•marklit•5d ago•6 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
51•rbanffy•4d ago•10 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
193•1vuio0pswjnm7•11h ago•282 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
261•alainrk•9h ago•434 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
619•nar001•8h ago•277 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
125•videotopia•4d ago•40 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
102•speckx•4d ago•124 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
35•sandGorgon•2d ago•16 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
213•limoce•4d ago•119 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
361•ColinWright•3h ago•436 comments

Writing an eigenvalue solver in Rust for WebAssembly

https://abstractnonsense.xyz/blog/2025-12-31-eigenvalue-solver-in-rust-for-webassembly/
30•subset•1mo ago

Comments

threeducks•1mo ago
A while ago, I also implemented a dense eigenvalue solver in Python following a similar approach, but found that it did not converge in O(n^3) as is sometimes claimed in the literature. I then read about the Divide-and-conquer eigenvalue algorithm, which did the trick. It seems to have a reasonable Wikipedia page these days: https://en.wikipedia.org/wiki/Divide-and-conquer_eigenvalue_...
subset•1mo ago
Ooh, thanks for sharing that algorithm! Somehow, I didn't come across this and jumped straight into using the QR algorithm cited everywhere.

I found it hard to find a good reference that had a clean implementation end to end (without calling BLAS/LAPACK subroutines under the hood). It also wasn't easy to find proper convergence properties for different classes of matrices, but I fear I likely wasn't looking in the right places.
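
Since the QR algorithm keeps coming up: below is a minimal, unshifted sketch of it in Rust (classical Gram-Schmidt for the factorization), just to make the discussion concrete. This is not the article's code; practical solvers first reduce to Hessenberg/tridiagonal form and add shifts and deflation, which is where the convergence behaviour discussed above comes from.

    type Mat = Vec<Vec<f64>>;

    fn matmul(a: &Mat, b: &Mat) -> Mat {
        let n = a.len();
        let mut c = vec![vec![0.0; n]; n];
        for i in 0..n {
            for k in 0..n {
                for j in 0..n {
                    c[i][j] += a[i][k] * b[k][j];
                }
            }
        }
        c
    }

    // Classical Gram-Schmidt on the columns of A: returns (Q, R) with A = Q * R.
    fn qr(a: &Mat) -> (Mat, Mat) {
        let n = a.len();
        let mut q = vec![vec![0.0; n]; n];
        let mut r = vec![vec![0.0; n]; n];
        for j in 0..n {
            // Start from the j-th column of A and remove its projections onto
            // the already-orthonormalized columns.
            let mut v: Vec<f64> = (0..n).map(|i| a[i][j]).collect();
            for k in 0..j {
                r[k][j] = (0..n).map(|i| q[i][k] * a[i][j]).sum();
                for i in 0..n {
                    v[i] -= r[k][j] * q[i][k];
                }
            }
            r[j][j] = v.iter().map(|x| x * x).sum::<f64>().sqrt();
            for i in 0..n {
                q[i][j] = v[i] / r[j][j];
            }
        }
        (q, r)
    }

    fn main() {
        // Symmetric tridiagonal test matrix; its eigenvalues are 2 - sqrt(2), 2, 2 + sqrt(2).
        let mut a: Mat = vec![
            vec![2.0, 1.0, 0.0],
            vec![1.0, 2.0, 1.0],
            vec![0.0, 1.0, 2.0],
        ];
        // Unshifted QR iteration: A <- R * Q is a similarity transform, so the
        // eigenvalues are preserved while A drifts toward triangular form.
        for _ in 0..500 {
            let (q, r) = qr(&a);
            a = matmul(&r, &q);
        }
        for i in 0..3 {
            println!("lambda_{} ~ {:.6}", i, a[i][i]);
        }
    }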

jcranmer•1mo ago
> I found the hypot method on f64 interesting. It computes the complex norm in a numerically stable way. Supposedly. The docstring gives some very hand-wavy caveats around the numerical precision and platform-dependence.

What it does is call the libm hypot function. The reason everything is hand-wavy is that everything about the results of libm functions is incredibly hand-wavy, and there's nothing Rust can do about that except maybe provide its own implementation, which is generally inadvisable for any numerical function. (It's not even a case of "generates the LLVM intrinsic", because LLVM doesn't have an intrinsic for hypot, although LLVM intrinsics are even more hand-wavy than external libm calls, because oh boy is that a rabbit hole.)
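
To make the stability point concrete, here is a minimal sketch of the classic scaling trick behind hypot-style functions. This is not what any particular libm actually ships (real implementations layer extra correction steps on top), just an illustration of why the naive formula is the thing being avoided:

    /// The naive formula overflows for large inputs: x*x exceeds f64::MAX even
    /// though the true hypotenuse is perfectly representable.
    fn hypot_naive(x: f64, y: f64) -> f64 {
        (x * x + y * y).sqrt()
    }

    /// Classic scaling trick: factor out the larger magnitude so the squared
    /// ratio stays in [0, 1], then multiply back at the end.
    fn hypot_scaled(x: f64, y: f64) -> f64 {
        let (a, b) = (x.abs(), y.abs());
        let (hi, lo) = if a >= b { (a, b) } else { (b, a) };
        if hi == 0.0 {
            return 0.0;
        }
        let r = lo / hi;
        hi * (1.0 + r * r).sqrt()
    }

    fn main() {
        let (x, y) = (3.0e200, 4.0e200);
        println!("{}", hypot_naive(x, y));  // inf: x*x overflows
        println!("{}", hypot_scaled(x, y)); // 5e200
        println!("{}", x.hypot(y));         // std's libm-backed version
    }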

> But methods like scalar_mul are just screaming for a functional refactor. I assume there’s a trait similar to Haskell’s Functor typeclass that would allow me to fmap over the entries of the Matrix and apply an arbitrary function.

The way I've solved this in my own attempts at writing sparse BLAS routines is to boil everything down, at the very bottom, to an Entry-style class where the hard part is implementing set:

    pub fn set(&mut self, value: Scalar) {
        self.entry = match (self.entry, is_zero(value)) {
            // Change zero entry to zero entry -> do nothing.
            (None, true) => { None },
            // Nonzero to nonzero entry -> set the value.
            (Some(e), false) => {
                self.vector.values[e] = value;
                Some(e)
            },
            // Nonzero to zero entry -> delete the entry.
            (Some(e), true) => {
                self.vector.soa_vecs_mut().swap_remove(e);
                None
            },
            // Zero to nonzero entry -> add the entry.
            (None, false) => {
                self.vector.soa_vecs_mut().push((self.index, value));
                self.vector.indexes.max_index()
            }
        };
    }
All of the helper methods on vectors and matrices boil down to calling that kind of method, and the end result is that in my sparse LU factorization routine, the core update for a pivot is actually dead simple (eliding the updates to all of the ancillary data structures for figuring out which element to pivot on next, which is actually most of the code):

        // When you pivot a step of LU, the matrix looks like this:
        // [1 0] * [p ǔ] = [p u]
        // [ľ I]   [0 ǎ]   [l a]
        // Solving for the unknowns in L and U, we have:
        // ľ = l / p, ǔ = u, ǎ = a - outer(ľ, ǔ)
        let pivot_val = self.values.get(row, column);
        let pivot_row = self.values.get_row(row)
            .filter(|idx, _| self.is_column_unsolved(idx))
            .to_sparse_vector();
        let pivot_column = self.values.get_column(column)
            .filter(|idx, _| self.is_row_unsolved(idx))
            .scale(pivot_val.recip())
            .to_sparse_vector();

        // Compute the outer product of l and u and update A.
        for (row, li) in pivot_column.view() {
            // We didn't update A yet, so update the requisite value in A for
            // the column.
            self.values.set(row, column, li);
            for (column, uj) in pivot_row.view() {
                self.values.entry(row, column).map(|a| li.mul_add(-uj, a));
            }
        }
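
Going back to the fmap question quoted above: for a dense matrix like the article's, no Functor-style trait is needed; a hand-rolled map over the backing storage does the job. A hypothetical sketch (this Matrix type is illustrative, not the article's or the parent's library):

    #[derive(Debug, Clone, PartialEq)]
    struct Matrix {
        rows: usize,
        cols: usize,
        data: Vec<f64>, // row-major storage
    }

    impl Matrix {
        /// Apply `f` to every entry, returning a new matrix (the fmap analogue).
        fn map(&self, f: impl Fn(f64) -> f64) -> Matrix {
            Matrix {
                rows: self.rows,
                cols: self.cols,
                data: self.data.iter().map(|&x| f(x)).collect(),
            }
        }

        /// scalar_mul then falls out as a one-liner.
        fn scalar_mul(&self, s: f64) -> Matrix {
            self.map(|x| s * x)
        }
    }

    fn main() {
        let m = Matrix { rows: 2, cols: 2, data: vec![1.0, 2.0, 3.0, 4.0] };
        assert_eq!(m.scalar_mul(2.0).data, vec![2.0, 4.0, 6.0, 8.0]);
        println!("{:?}", m.map(|x| x * x));
    }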
> I’m sure there are good technical reasons why it’s not the case, but I was surprised there was no #[derive(AddAssign)] macro that could be added to an implementation of std::ops::Add to automatically derive the AddAssign trait.

You'd want the reverse, implementing std::ops::Add on top of std::ops::AddAssign + Clone (or maybe Copy?). The advantage of op-assignment is that you don't have to do any extra allocation, so it's usually the form you want if you can do it. In my sparse BLAS lib, I only implemented AddAssign and MulAssign for sparse vectors (although I did do an impl Mul<SparseVectorView> for SparseVectorView to implement dot product). Although truth be told, I'm usually trying to do a += b * c in my code anyways, and I want to generate FMAs, so I can't use the math expressions anyways.
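
As a minimal sketch of that pattern (with a hypothetical Dense type, not the commenter's sparse library): implement AddAssign once, then build Add on top of it by cloning the left operand.

    use std::ops::{Add, AddAssign};

    #[derive(Clone, Debug, PartialEq)]
    struct Dense(Vec<f64>);

    impl AddAssign<&Dense> for Dense {
        fn add_assign(&mut self, rhs: &Dense) {
            // In-place update: no allocation beyond what `self` already owns.
            for (a, b) in self.0.iter_mut().zip(&rhs.0) {
                *a += *b;
            }
        }
    }

    // Add written on top of AddAssign + Clone: clone the left operand, then
    // mutate it in place. This is the direction the comment argues for.
    impl Add<&Dense> for &Dense {
        type Output = Dense;
        fn add(self, rhs: &Dense) -> Dense {
            let mut out = self.clone();
            out += rhs;
            out
        }
    }

    fn main() {
        let x = Dense(vec![1.0, 2.0]);
        let y = Dense(vec![3.0, 4.0]);
        assert_eq!(&x + &y, Dense(vec![4.0, 6.0]));
    }

For the a += b * c case the comment ends on, f64::mul_add is the per-element building block that maps to an FMA where the hardware supports it, but as noted it doesn't fall out of the operator expressions.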