frontpage.

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•40s ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
1•Osiris30•1m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•7m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•7m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
1•irreducible•7m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•9m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•14m ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•26m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•30m ago•1 comment

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•37m ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•38m ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
1•akagusu•38m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•40m ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•45m ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•49m ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
5•DesoPK•53m ago•2 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•54m ago•1 comment

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
33•mfiguiere•1h ago•20 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•1h ago•0 comments

Metaphor + Metonymy: "To love that well which thou must leave ere long" (Sonnet 73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•1h ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•1h ago•1 comment

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•1h ago•1 comment

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
4•gmays•1h ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•1h ago•1 comment

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•1h ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comment

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•1h ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•1h ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•1h ago•0 comments

Understanding neural networks through sparse circuits

https://openai.com/index/understanding-neural-networks-through-sparse-circuits/
13•gmays•2mo ago

Comments

mike_hearn•2mo ago
Must admit, I found the circuit diagram harder to interpret than the textual description of what the circuit is doing.

It's an interesting approach. I can see it being really useful for networks that are inherently smaller than an LLM: maybe recommendation systems, fraud detection models, etc. For LLMs, I guess the most important follow-up line of research would be to ask whether a network trained in this special manner can then be distilled or densified in some way that retains the underlying decision-making of the interpretable network in a more efficient runtime representation. Or, alternatively, whether super-sparse networks can be made efficient at inference time.

There's also a question of expected outcomes. Mechanistic interpretability seems hard not only because of the density and superposition but also because a lot of the deep concepts being represented are just inherently difficult to express in words. There are going to be a lot of groups of neurons encoding fuzzy intuitions that might take an entire essay to crudely put into words, at best.

Starting from product goals and working backwards definitely seems like the best way to keep this stuff focused, but the product goal is going to depend heavily on the network being analyzed. Like, the goal of interpretability for a recommender is going to look very different from the interpretability goal for a general chat-focused LLM.

yorwba•2mo ago
In theory, multiplying a matrix and a highly sparse vector should be much faster than the dense equivalent, because you only need to read the columns of the matrix that correspond to nonzero elements of the vector. But in this paper, the vectors are much less sparse than the matrices: "Our sparsest models have approximately 1 in 1000 nonzero weights. We also enforce mild activation sparsity at all node locations, with 1 in 4 nonzero activations. Note that this does not directly enforce sparsity of the residual stream, only of residual reads and writes." In addition, they're comparing to highly optimized dense matrix multiplication kernels on GPUs, which have dedicated hardware support (Tensor Cores) that isn't useful for sparse matmul.
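
To make the column-skipping concrete, here's a minimal NumPy sketch (toy sizes, the quoted 1-in-4 activation sparsity; not the paper's actual kernels):

    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_out = 1024, 1024
    W = rng.standard_normal((d_out, d_in))

    # Activation vector with roughly 1 in 4 nonzeros, per the quote.
    x = rng.standard_normal(d_in)
    x[rng.random(d_in) > 0.25] = 0.0

    nz = np.flatnonzero(x)        # indices of nonzero activations
    y_sparse = W[:, nz] @ x[nz]   # read only the columns that matter
    y_dense = W @ x               # baseline dense product

    assert np.allclose(y_sparse, y_dense)
    print(f"columns read: {len(nz)} of {d_in}")

The gather touches only about a quarter of W's columns, but that irregular access pattern is exactly what dense Tensor Core kernels don't exploit, so on a GPU the dense baseline can still win.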
mike_hearn•2mo ago
Right. It's super interesting to me because some years ago I got dinner with a director of AI research at Google and he told me the LLMs at that time were super sparse. Not sure if something got lost in translation or stuff just changed, but it doesn't seem to be true anymore.

In theory, NVIDIA and others could optimize for sparse matrices, right? If the operands are that sparse, I wonder whether whole tiles could be trivially zeroed without ever executing a matmul at all. The problem feels more like RAM: how do you efficiently encode such a sparse entity without wasting lots of memory and bandwidth transferring zeros around? You could use RLE, but if you have to unpack into memory to use the hardware anyway, maybe it's not a win in the end.
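
For a feel of the encoding side, a rough back-of-the-envelope using SciPy's CSR format (hypothetical 4096x4096 layer; fp16 assumed for both the dense baseline and the stored values):

    import numpy as np
    from scipy.sparse import random as sparse_random

    n = 4096
    density = 1e-3  # "1 in 1000 nonzero weights" from the paper

    W = sparse_random(n, n, density=density, format="csr", dtype=np.float32)

    dense_bytes = n * n * 2              # dense layer stored as fp16
    csr_bytes = (W.data.nbytes // 2      # nonzero values, as fp16
                 + W.indices.nbytes      # one column index per nonzero
                 + W.indptr.nbytes)      # one row pointer per row

    print(f"dense fp16: {dense_bytes / 1e6:.1f} MB")
    print(f"CSR fp16 values + int32 indices: {csr_bytes / 1e6:.2f} MB")

At 1-in-1000 density the 4-byte column indices cost more than the 2-byte values themselves, which is the kind of index overhead that makes naive sparse encodings less of a win than the raw nonzero count suggests.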