frontpage.

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
1•ArtemZ•58s ago•0 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•1m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
1•LiamPowell•3m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
2•duxup•6m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•7m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•19m ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•21m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
2•savrajsingh•22m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•24m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•28m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•32m ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
1•g1raffe•35m ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•40m ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
2•rolph•45m ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•47m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•52m ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•52m ago•1 comments

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
1•cedel2k1•56m ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
34•chwtutha•56m ago•6 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•57m ago•1 comments

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•58m ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comments

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•1h ago•0 comments

Vectors and HNSW for Dummies

https://anvitra.ai/blog/vectors-and-hnsw/
1•melvinodsa•1h ago•0 comments

Sanskrit AI beats CleanRL SOTA by 125%

https://huggingface.co/ParamTatva/sanskrit-ppo-hopper-v5/blob/main/docs/blog.md
1•prabhatkr•1h ago•1 comments

'Washington Post' CEO resigns after going AWOL during job cuts

https://www.npr.org/2026/02/07/nx-s1-5705413/washington-post-ceo-resigns-will-lewis
4•thread_id•1h ago•1 comments

Claude Opus 4.6 Fast Mode: 2.5× faster, ~6× more expensive

https://twitter.com/claudeai/status/2020207322124132504
1•geeknews•1h ago•0 comments

TSMC to produce 3-nanometer chips in Japan

https://www3.nhk.or.jp/nhkworld/en/news/20260205_B4/
3•cwwc•1h ago•0 comments

Quantization-Aware Distillation

http://ternarysearch.blogspot.com/2026/02/quantization-aware-distillation.html
2•paladin314159•1h ago•0 comments

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•1h ago•0 comments

Graphical Linear Algebra

https://graphicallinearalgebra.net/
304•hyperbrainer•7mo ago

Comments

lorenzo_medici•7mo ago
Appreciate the Claude Makelele praise
rurban•7mo ago
But nowadays we are calling him the 6. And everybody praises a good 6
Xmd5a•7mo ago
Generalized Transformers from Applicative Functors

>Transformers are a machine-learning model at the foundation of many state-of-the-art systems in modern AI, originally proposed in [arXiv:1706.03762]. In this post, we are going to build a generalization of Transformer models that can operate on (almost) arbitrary structures such as functions, graphs, probability distributions, not just matrices and vectors.

>[...]

>This work is part of a series of similar ideas exploring machine learning through abstract diagrammatical means.

https://cybercat.institute/2025/02/12/transformers-applicati...
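
(A loose toy illustration of the "arbitrary structures" idea - attention parameterized over the container type rather than fixed to vectors and matrices. This is my sketch, not the applicative-functor construction from the post; the names softmax and attend are made up:)

    -- Toy Haskell sketch: attention over keys/values stored in any
    -- Foldable structure (list, tree, Map, ...), not just a vector.
    softmax :: [Double] -> [Double]
    softmax xs = map (/ z) es
      where
        es = map exp xs
        z  = sum es

    -- Score each key against the query, softmax the scores,
    -- and return the weighted sum of the values.
    attend :: Foldable t => (q -> k -> Double) -> q -> t (k, Double) -> Double
    attend score q kvs = sum (zipWith (*) ws vs)
      where
        pairs = foldr (:) [] kvs
        ws    = softmax (map (score q . fst) pairs)
        vs    = map snd pairs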

MarkusQ•7mo ago
I really enjoyed that when it was coming out, and used to follow it with some students. It's a shame it seems to have been abandoned.
Iwan-Zotow•7mo ago
Who wrote that? Do you know?

pawel ... ?

mattkrause•7mo ago
Pawel Sobocinski, in collaboration with Filippo Bonchi and Fabio Zanasi

https://graphicallinearalgebra.net/about/

theZilber•7mo ago
When I read the first meaty chapter about graphs and commutativity, I initially thought he just spent too long explaining simple concepts.

But then I realized I would always forget the names for all the mathy words - commutativity, associativity... - and for the first time I could actually remember commutativity and what it means, just because he tied it to a graphical representation (which actually made me laugh out loud because, initially, I thought it was a joke). So the concept of "x + y = y + x" always made sense to me but never really stuck the way the graphical representation did, which also made me remember its name for the first time.

I am sold.
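
(For reference, the two identities being named, written out in the ordinary notation that the chapter's diagrams encode - this spelling-out is mine, not the book's:)

    x + y = y + x                 (commutativity)
    (x + y) + z = x + (y + z)     (associativity)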

gowld•7mo ago
Which chapter is that? It's not in the ToC
memoryfault•7mo ago
3!
HappMacDonald•7mo ago
Chapter 6, got it
vanderZwan•6mo ago
It's because the graphs are visual metaphors that encode privileged information[0]. Which is an often overlooked aspect of teaching imo. Your own initial dismissive reaction kind of shows why: people don't really get the point until they realize it works, and even then they're not sure why.

[0] https://web.archive.org/web/20140402025221/http://m.nautil.u...

phforms•7mo ago
Years ago when I was reading this (just a couple of chapters, not all of it), it opened my eyes to the power of diagrammatic representation in formal reasoning unlike anything before. I never did anything useful with string diagrams, but it was so fun to see what is possible with this system!
elric•7mo ago
I had a similar revelation when watching 3Blue1Brown's Calculus series. Had they included those kinds of visual representations in school when I was first learning about Calculus, my understanding (and interest) would have been greatly expanded.

Very impressive how some people can create visual representations that enhance understanding.

dclowd9901•7mo ago
> If the internet has taught us anything, it’s that humans + anonymity = unpleasantness.

Aka one of my favorite axioms: https://www.penny-arcade.com/comic/2004/03/19/green-blackboa...

marvinborner•7mo ago
It's interesting how some of these diagrams are almost equivalent in the context of encoding computation in interaction nets using symmetric interaction combinators [1].

From the perspective of the lambda calculus, for example, the duplication of the addition node in "When Adding met Copying" [2] mirrors exactly the iterative duplication of lambda terms - i.e. something like (λx.x x) M!

[1]: https://ezb.io/thoughts/interaction_nets/lambda_calculus/202...

[2]: https://graphicallinearalgebra.net/2015/05/12/when-adding-me...
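
(A minimal sketch of the duplication being described - a toy untyped lambda-calculus reducer in Haskell, not code from either link; Term, subst and step are names I made up:)

    -- Beta-reducing (\x. x x) M copies M, mirroring the duplicated
    -- "adding" node in the linked post.
    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    -- Capture-naive substitution; fine for this closed example.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)   | x == y    = s
                        | otherwise = Var y
    subst x s (Lam y b) | x == y    = Lam y b
                        | otherwise = Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- One beta-reduction step at the root.
    step :: Term -> Term
    step (App (Lam x b) a) = subst x a b
    step t                 = t

    -- step (App (Lam "x" (App (Var "x") (Var "x"))) (Var "M"))
    --   ==> App (Var "M") (Var "M")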

russfink•7mo ago
It reads as if Chuck Lorre (The Big Bang Theory) wrote it. Especially chapter two. I love the humor!
webprofusion•7mo ago
This is nice; my main criticism would be that it regularly uses language like "easy" and "simple", which is a classic mistake in any instructive text (including docs, etc.).

If the reader is feeling a bit dumb and/or embarrassed that they don't yet get the concept being explained, then this will only make them feel worse and give up.

Language like that is often used to make things feel approachable and worry-free, but can have the opposite effect.

And never, ever write "obvious" in a doc explaining something, because if obviousness were at play they wouldn't be reading your doc.

Nevermark•7mo ago
Excellent point.

I think of wording like that as akin to the extraneously explicit meta-content that dumbs down so many story plots: a character explicitly says "That makes me angry", when a better-written story would make the anger implicitly obvious.

Stories should show not tell.

Make a point, make it clear, make it concise, and it will be simple for most readers. Don't talk about making a point, or say that a point is clear.

That is projecting attributes or experiences onto readers. But even a very well written point may not appear simple for some readers. Assume (optimistically!) that there will always be some unusually under-prepared but motivated reader. Hooray if you get them! They can handle a challenge every so often.

"Simple" communication is a high priority target, but rarely completely achievable for the total self-selected, beyond intended, audience.

RamblingCTO•7mo ago
The good ol' "this proof is trivial so we'll skip it" move.
rurban•7mo ago
He should have really used the good ol' QED instead, lol
seanhunter•7mo ago
Oh man. The variant I see so infuriatingly often at the moment is “It is clear that these form a Lie algebra/finite abelian group/Hilbert space/bijective map/<whatever other thing that is long-winded or complex to prove> and I encourage the reader to satisfy themselves that this is the case”.
programjames•7mo ago
This looks pretty similar to interaction combinators:

1. https://en.wikipedia.org/wiki/Interaction_nets#Interaction_c...

2. https://github.com/HigherOrderCO/Bend

vismit2000•7mo ago
Immersive Linear Algebra: https://immersivemath.com/ila/index.html
HN: https://news.ycombinator.com/item?id=19264048
dtj1123•7mo ago
I was never able to get my head around it, but this reminds me somewhat of the zx-calculus:

https://en.wikipedia.org/wiki/ZX-calculus

AntonioL•7mo ago
Reminds me of the work from Bob Coecke at the University of Oxford. He came up with a pictorial language for quantum processes.
gowld•7mo ago
ZX-calculus mentioned in https://news.ycombinator.com/item?id=44532535

https://en.wikipedia.org/wiki/ZX-calculus