frontpage.

Life at the Edge

https://asadk.com/p/edge
1•tosh•1m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•5m ago•0 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•6m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•9m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•10m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•12m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•15m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•17m ago•3 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•18m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
2•1vuio0pswjnm7•20m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•22m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•24m ago•1 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•27m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•31m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•33m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•36m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•48m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•50m ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•51m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
4•throwaw12•1h ago•3 comments

Writing an eigenvalue solver in Rust for WebAssembly

https://abstractnonsense.xyz/blog/2025-12-31-eigenvalue-solver-in-rust-for-webassembly/
30•subset•1mo ago

Comments

threeducks•1mo ago
A while ago, I also implemented a dense eigenvalue solver in Python following a similar approach, but found that it did not converge in O(n^3) as is sometimes claimed in the literature. I then read about the Divide-and-conquer eigenvalue algorithm, which did the trick. It seems to have a reasonable Wikipedia page these days: https://en.wikipedia.org/wiki/Divide-and-conquer_eigenvalue_...
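
For reference, the plain unshifted QR iteration at the heart of that approach is just this loop (a toy Rust sketch to match the article; no Hessenberg reduction, shifts, or deflation, which is a big part of why the naive version converges much more slowly than the O(n^3) folklore suggests):

    // Toy unshifted QR iteration for a small dense symmetric matrix.
    // Row-major Vec<Vec<f64>>; not remotely production code.
    type Mat = Vec<Vec<f64>>;

    fn matmul(a: &Mat, b: &Mat) -> Mat {
        let n = a.len();
        let mut c = vec![vec![0.0; n]; n];
        for i in 0..n {
            for k in 0..n {
                for j in 0..n {
                    c[i][j] += a[i][k] * b[k][j];
                }
            }
        }
        c
    }

    // Classical Gram-Schmidt QR factorization: A = Q * R.
    fn qr(a: &Mat) -> (Mat, Mat) {
        let n = a.len();
        let mut q = vec![vec![0.0; n]; n];
        let mut r = vec![vec![0.0; n]; n];
        for j in 0..n {
            // Start from column j of A and orthogonalize it against the
            // columns of Q we already have.
            let mut v: Vec<f64> = (0..n).map(|i| a[i][j]).collect();
            for k in 0..j {
                r[k][j] = (0..n).map(|i| q[i][k] * a[i][j]).sum();
                for i in 0..n {
                    v[i] -= r[k][j] * q[i][k];
                }
            }
            r[j][j] = v.iter().map(|x| x * x).sum::<f64>().sqrt();
            for i in 0..n {
                q[i][j] = v[i] / r[j][j];
            }
        }
        (q, r)
    }

    // A_{k+1} = R_k * Q_k is similar to A_k, so the eigenvalues are preserved
    // while the iterates (slowly, without shifts) approach a diagonal matrix.
    fn qr_iteration(mut a: Mat, iters: usize) -> Vec<f64> {
        for _ in 0..iters {
            let (q, r) = qr(&a);
            a = matmul(&r, &q);
        }
        (0..a.len()).map(|i| a[i][i]).collect()
    }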
subset•1mo ago
Ooh, thanks for sharing that algorithm! Somehow I didn't come across it and jumped straight into using the QR algorithm that's cited everywhere.

I found it hard to find a good reference that had a clean implementation end to end (without calling BLAS/LAPACK subroutines under the hood). It also wasn't easy to find proper convergence properties for different classes of matrices, though I suspect I just wasn't looking in the right places.

jcranmer•1mo ago
> I found the hypot method on f64 interesting. It computes the complex norm in a numerically stable way. Supposedly. The docstring gives some very hand-wavy caveats around the numerical precision and platform-dependence.

What it does is call the libm hypot function. The reason everything is hand-wavy is that everything about the results of libm functions is incredibly hand-wavy, and there's nothing Rust can do about that except maybe provide its own implementation, which is generally inadvisable for any numerical function. (It's not even a case of "generates the LLVM intrinsic", because LLVM doesn't have an intrinsic for it; and LLVM intrinsics are even more hand-wavy than external libm calls anyway, because oh boy is that a rabbit hole.)
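
For reference, the standard trick is to factor out the larger magnitude so the intermediate square can't overflow or underflow. A minimal sketch of the idea, not what any particular libm actually does; real implementations add special-case handling and extra precision, and differ by platform:

    // Naive version: overflows for inputs around 1e200 even though the true
    // result is perfectly representable.
    fn naive_hypot(a: f64, b: f64) -> f64 {
        (a * a + b * b).sqrt()
    }

    // Scaled version: hypot(a, b) = big * sqrt(1 + (small/big)^2), so the
    // squared term stays in [0, 1] and nothing overflows.
    fn scaled_hypot(a: f64, b: f64) -> f64 {
        let (a, b) = (a.abs(), b.abs());
        let (big, small) = if a > b { (a, b) } else { (b, a) };
        if big == 0.0 {
            return 0.0;
        }
        let r = small / big;
        big * (1.0 + r * r).sqrt()
    }

For example, scaled_hypot(3e200, 4e200) gives 5e200 where the naive version overflows to infinity.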

> But methods like scalar_mul are just screaming for a functional refactor. I assume there’s a trait similar to Haskell’s Functor typeclass that would allow me to fmap over the entries of the Matrix and apply an arbitrary function.
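
(As an aside, you don't strictly need a Functor-style trait for that in Rust; a plain generic inherent method covers the fmap use case. A minimal sketch with a made-up dense Matrix type, not the one from the post:)

    // Hypothetical row-major dense matrix, purely for illustration.
    struct Matrix {
        rows: usize,
        cols: usize,
        data: Vec<f64>,
    }

    impl Matrix {
        // fmap-equivalent: apply f to every entry, producing a new matrix.
        fn map<F: Fn(f64) -> f64>(&self, f: F) -> Matrix {
            Matrix {
                rows: self.rows,
                cols: self.cols,
                data: self.data.iter().map(|&x| f(x)).collect(),
            }
        }

        // scalar_mul then falls out of map for free.
        fn scalar_mul(&self, s: f64) -> Matrix {
            self.map(|x| s * x)
        }
    }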

The way I've solved this in my own attempts at writing sparse BLAS routines is to boil everything down, at the very bottom, to an Entry-style class where the hard part is implementing set:

    pub fn set(&mut self, value: Scalar) {
        self.entry = match (self.entry, is_zero(value)) {
            // Change zero entry to zero entry -> do nothing.
            (None, true) => { None },
            // Nonzero to nonzero entry -> set the value.
            (Some(e), false) => {
                self.vector.values[e] = value;
                Some(e)
            },
            // Nonzero to zero entry -> delete the entry.
            (Some(e), true) => {
                self.vector.soa_vecs_mut().swap_remove(e);
                None
            },
            // Zero to nonzero entry -> add the entry.
            (None, false) => {
                self.vector.soa_vecs_mut().push((self.index, value));
                self.vector.indexes.max_index()
            }
        };
    }
All of the helper methods on vectors and matrices boil down to calling that kind of method, and the end result is that in my sparse LU factorization routine, the core pivot update is dead simple (eliding the updates to all of the ancillary data structures for figuring out which element to pivot on next, which is actually most of the code):

        // When you pivot a step of LU, the matrix looks like this:
        // [1 0] * [p ǔ] = [p u]
        // [ľ I]   [0 ǎ]   [l a]
        // Solving for the unknowns in L and U, we have:
        // ľ = l / p, ǔ = u, ǎ = a - outer(ľ, ǔ)
        let pivot_val = self.values.get(row, column);
        let pivot_row = self.values.get_row(row)
            .filter(|idx, _| self.is_column_unsolved(idx))
            .to_sparse_vector();
        let pivot_column = self.values.get_column(column)
            .filter(|idx, _| self.is_row_unsolved(idx))
            .scale(pivot_val.recip())
            .to_sparse_vector();

        // Compute the outer product of l and u and update A.
        for (row, li) in pivot_column.view() {
            // We didn't update A yet, so update the requisite value in A for
            // the column.
            self.values.set(row, column, li);
            for (column, uj) in pivot_row.view() {
                self.values.entry(row, column).map(|a| li.mul_add(-uj, a));
            }
        }
> I’m sure there are good technical reasons why it’s not the case, but I was surprised there was no #[derive(AddAssign)] macro that could be added to an implementation of std::ops::Add to automatically derive the AddAssign trait.

You'd want the reverse, implementing std::ops::Add on top of std::ops::AddAssign + Clone (or maybe Copy?). The advantage of op-assignment is that you don't have to do any extra allocation, so it's usually the form you want if you can do it. In my sparse BLAS lib, I only implemented AddAssign and MulAssign for sparse vectors (although I did do an impl Mul<SparseVectorView> for SparseVectorView to implement dot product). Although truth be told, I'm usually trying to do a += b * c in my code anyways, and I want to generate FMAs, so I can't use the math expressions anyways.
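
To make that direction concrete, here's a sketch with a toy dense vector rather than my sparse types; the pattern is to implement the assigning op once and then build the value-returning op by cloning:

    use std::ops::{Add, AddAssign};

    // Toy vector type, purely to illustrate the pattern; real sparse types
    // are more involved.
    #[derive(Clone)]
    struct Vector(Vec<f64>);

    impl AddAssign<&Vector> for Vector {
        fn add_assign(&mut self, rhs: &Vector) {
            // Elementwise in-place add; no new allocation.
            for (a, b) in self.0.iter_mut().zip(&rhs.0) {
                *a += *b;
            }
        }
    }

    // Add in terms of AddAssign + Clone: clone the left operand, mutate it
    // in place, hand it back.
    impl<'a, 'b> Add<&'b Vector> for &'a Vector {
        type Output = Vector;
        fn add(self, rhs: &'b Vector) -> Vector {
            let mut out = self.clone();
            out += rhs;
            out
        }
    }

With that, &a + &b allocates exactly one new vector and a += &b allocates nothing extra, which is why the assigning form is the one worth writing by hand.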