
Watch Ukraine's Minigun-Firing, Drone-Hunting Turboprop in Action

https://www.twz.com/air/watch-ukraines-minigun-firing-drone-hunting-turboprop-in-action
1•breve•37s ago•0 comments

Free Trial: AI Interviewer

https://ai-interviewer.nuvoice.ai/
1•sijain2•41s ago•0 comments

FDA Intends to Take Action Against Non-FDA-Approved GLP-1 Drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
1•randycupertino•2m ago•0 comments

Supernote e-ink devices for writing like paper

https://supernote.eu/choose-your-product/
1•janandonly•4m ago•0 comments

We are QA Engineers now

https://serce.me/posts/2026-02-05-we-are-qa-engineers-now
1•SerCe•4m ago•0 comments

Show HN: Measuring how AI agent teams improve issue resolution on SWE-Verified

https://arxiv.org/abs/2602.01465
2•NBenkovich•4m ago•0 comments

Adversarial Reasoning: Multiagent World Models for Closing the Simulation Gap

https://www.latent.space/p/adversarial-reasoning
1•swyx•5m ago•0 comments

Show HN: Poddley.com – Follow people, not podcasts

https://poddley.com/guests/ana-kasparian/episodes
1•onesandofgrain•13m ago•0 comments

Layoffs Surge 118% in January – The Highest Since 2009

https://www.cnbc.com/2026/02/05/layoff-and-hiring-announcements-hit-their-worst-january-levels-si...
7•karakoram•13m ago•0 comments

Papyrus 114: Homer's Iliad

https://p114.homemade.systems/
1•mwenge•13m ago•1 comments

DicePit – Real-time multiplayer Knucklebones in the browser

https://dicepit.pages.dev/
1•r1z4•13m ago•1 comments

Turn-Based Structural Triggers: Prompt-Free Backdoors in Multi-Turn LLMs

https://arxiv.org/abs/2601.14340
2•PaulHoule•15m ago•0 comments

Show HN: AI Agent Tool That Keeps You in the Loop

https://github.com/dshearer/misatay
2•dshearer•16m ago•0 comments

Why Every R Package Wrapping External Tools Needs a Sitrep() Function

https://drmowinckels.io/blog/2026/sitrep-functions/
1•todsacerdoti•16m ago•0 comments

Achieving Ultra-Fast AI Chat Widgets

https://www.cjroth.com/blog/2026-02-06-chat-widgets
1•thoughtfulchris•18m ago•0 comments

Show HN: Runtime Fence – Kill switch for AI agents

https://github.com/RunTimeAdmin/ai-agent-killswitch
1•ccie14019•21m ago•1 comments

Researchers surprised by the brain benefits of cannabis usage in adults over 40

https://nypost.com/2026/02/07/health/cannabis-may-benefit-aging-brains-study-finds/
1•SirLJ•22m ago•0 comments

Peter Thiel warns the Antichrist, apocalypse linked to the 'end of modernity'

https://fortune.com/2026/02/04/peter-thiel-antichrist-greta-thunberg-end-of-modernity-billionaires/
3•randycupertino•23m ago•2 comments

USS Preble Used Helios Laser to Zap Four Drones in Expanding Testing

https://www.twz.com/sea/uss-preble-used-helios-laser-to-zap-four-drones-in-expanding-testing
3•breve•29m ago•0 comments

Show HN: Animated beach scene, made with CSS

https://ahmed-machine.github.io/beach-scene/
1•ahmedoo•29m ago•0 comments

An update on unredacting select Epstein files – DBC12.pdf liberated

https://neosmart.net/blog/efta00400459-has-been-cracked-dbc12-pdf-liberated/
3•ks2048•29m ago•0 comments

Was going to share my work

1•hiddenarchitect•33m ago•0 comments

Pitchfork: A devilishly good process manager for developers

https://pitchfork.jdx.dev/
1•ahamez•33m ago•0 comments

You Are Here

https://brooker.co.za/blog/2026/02/07/you-are-here.html
3•mltvc•37m ago•1 comments

Why social apps need to become proactive, not reactive

https://www.heyflare.app/blog/from-reactive-to-proactive-how-ai-agents-will-reshape-social-apps
1•JoanMDuarte•38m ago•1 comments

How patient are AI scrapers, anyway? – Random Thoughts

https://lars.ingebrigtsen.no/2026/02/07/how-patient-are-ai-scrapers-anyway/
1•samtrack2019•38m ago•0 comments

Vouch: A contributor trust management system

https://github.com/mitchellh/vouch
3•SchwKatze•38m ago•0 comments

I built a terminal monitoring app and custom firmware for a clock with Claude

https://duggan.ie/posts/i-built-a-terminal-monitoring-app-and-custom-firmware-for-a-desktop-clock...
1•duggan•39m ago•0 comments

Tiny C Compiler

https://bellard.org/tcc/
6•guerrilla•41m ago•1 comments

Y Combinator Founder Organizes 'March for Billionaires'

https://mlq.ai/news/ai-startup-founder-organizes-march-for-billionaires-protest-against-californi...
4•hidden80•41m ago•4 comments

TransMLA: Multi-head latent attention is all you need

https://arxiv.org/abs/2502.07864
123•ocean_moist•9mo ago

Comments

olq_plo•9mo ago
Very cool idea. Can't wait for converted models on HF.
MichaelMoser123•9mo ago
DeepSeek-V2, V3, and R1 all use multi-head latent attention.
kavalg•9mo ago
My (possibly wrong) TL;DR: TransMLA is a method to "compress" an already trained GQA model, with the additional option to further fine-tune it. It should make inference faster.
freeqaz•9mo ago
It also makes models smarter (more "expressive").
yorwba•9mo ago
It is not a method to compress a Grouped-Query Attention model, but to expand it into an equivalent Multi-head Latent Attention model with the same key-value cache size but larger effective key/value vectors and a correspondingly larger number of trainable parameters. With additional training, you can then obtain a better model that only uses a little bit more memory.
kavalg•8mo ago
Thanks for the clarification.
wiz21c•9mo ago
Not quite related, but are the Mamba models gaining ground?

Answering my own question: https://www.reddit.com/r/MachineLearning/comments/1hpg91o/d_...

EGreg•9mo ago
All you need is to stop posting titles like that!
jbellis•9mo ago
[abstract] This approach significantly reduces the KV cache size relative to traditional multi-head attention

[3.3] For saving the KV cache, only the intermediate latent representations need to be stored: a T × r matrix, where r is much smaller than n_h · d_h

[background] In traditional multi-head attention you must cache full key and value matrices of size T × (n_h · d_h), where T is the token length, n_h is the number of attention heads, and d_h is the dimensionality of each individual head

Sounds like a big win for memory-constrained environments like local inference.
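To put rough numbers on the sizes quoted above (all model dimensions below are hypothetical, not taken from the paper):

```python
# Rough KV-cache arithmetic for the quantities named above.
# MHA caches keys and values, each of shape T x (n_h * d_h), per layer;
# MLA caches only the latent representations, of shape T x r.

def mha_kv_entries(T, n_h, d_h):
    return 2 * T * n_h * d_h  # keys + values

def mla_kv_entries(T, r):
    return T * r

# Hypothetical sizes: 4096-token context, 32 heads of dimension 128,
# and a latent dimension r = 512 (so r << n_h * d_h = 4096).
T, n_h, d_h, r = 4096, 32, 128, 512
print(mha_kv_entries(T, n_h, d_h) // mla_kv_entries(T, r))  # 16x fewer entries
```

With these made-up sizes the latent cache is 16x smaller per layer, which is the kind of saving that matters for local inference.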

magicalhippo•9mo ago
I'm just following the field from the sidelines, but this looks interesting to me. Especially the increase in expressiveness that the new model allows for over GQA, at the cost of just ~10% more memory, and the fact that you can convert existing GQA models like LLaMA, Qwen, etc. with just a bit of fine-tuning.

Perhaps a trivial insight, but I feel a lot of progress often comes in the form of generalizations, where existing approaches can be seen as special cases. Here the authors show that Grouped-Query Attention (GQA) and Multi-Query Attention (MQA) fall out as special cases of their new model.

edit:

Adding my own summary, as I understand it.

The key to what they're doing, no pun intended, is to rely on the fact that large, high-dimensional matrices may contain a lot of redundant information. Thus one may be able to find a good approximation with less redundant information, by going through an intermediate stage that has fewer dimensions.

An n-by-m matrix M takes n-dimensional vectors and transforms them into m-dimensional vectors. The trick here is to replace the matrix M by two matrices, L and R, which are n-by-r and r-by-m respectively, where r is smaller than n and m. This is called a low-rank approximation.

In a sense you're "straining the matrix", by forcing the information to pass through an intermediary, low-dimensional vector.

The memory savings come from the fact that the matrix M has n*m entries, while L and R have n*r and r*m entries respectively. Say n = m = 100 and r = 20: then M has 100*100 = 10k entries, while L and R have just 100*20 + 20*100 = 4k entries in total.
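The entry counts in that example can be checked in a few lines of NumPy; the sketch below (with the same illustrative sizes) also shows a truncated SVD recovering a genuinely rank-r matrix from its dense form:

```python
import numpy as np

n, m, r = 100, 100, 20
assert n * m == 10_000            # entries in the full matrix M
assert n * r + r * m == 4_000     # entries in the factors L and R

# Build an exactly rank-r matrix from random factors, then reconstruct it.
rng = np.random.default_rng(0)
L = rng.standard_normal((n, r))
R = rng.standard_normal((r, m))
M = L @ R

# Truncated SVD yields the best rank-r approximation; since M really has
# rank r, the reconstruction here is exact up to floating-point error.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
M_approx = (U[:, :r] * s[:r]) @ Vt[:r, :]
print(np.allclose(M, M_approx))   # True
```

Real weight matrices are only approximately low-rank, so in practice the truncation discards a small residual rather than reproducing the matrix exactly.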

The trick itself is not new; for example, it is also used in LoRA, where an additional low-rank matrix is used to tweak the output of an existing model. The low rank means there are far fewer matrix entries, aka parameters, to train than if one had used a regular fully dense matrix.
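A minimal sketch of that LoRA-style use of the trick (shapes and variable names are illustrative, not from any particular library):

```python
import numpy as np

d_in, d_out, r = 100, 100, 4
rng = np.random.default_rng(1)

W = rng.standard_normal((d_in, d_out))  # frozen pretrained weight
A = rng.standard_normal((d_in, r))      # trainable low-rank factors;
B = np.zeros((r, d_out))                # B starts at zero, so the adapter
                                        # is initially a no-op

def forward(x):
    # Original output plus a low-rank correction; only A and B are trained.
    return x @ W + x @ A @ B

x = rng.standard_normal((1, d_in))
assert np.allclose(forward(x), x @ W)   # no-op at initialization

# Far fewer trainable entries than a dense d_in x d_out update:
print(d_in * r + r * d_out, "vs", d_in * d_out)  # 800 vs 10000
```

Only A and B are updated during fine-tuning, so the trainable-parameter count scales with r rather than with the full weight shape.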

The extra expressiveness of MLA comes from the fact that in GQA, in order to save memory, some of the matrices are actually built by gluing copies of a narrower matrix together. This means the information in the glued-up matrices is highly redundant and fixed in a particular way, so they are restricted in how they can transform the inputs.

By using the low-rank approximation instead, the information in the full, reconstructed matrices is not fixed in the same way as in the glued-up result. Thus the inputs can be transformed in a less restrictive way, leading to the increase in expressiveness.

The GQA method saves a bit more memory than MLA, as the narrow matrices are even smaller than the low-rank factors in MLA, but at the cost of expressiveness.
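The "gluing" argument above can be demonstrated directly (a toy example with made-up sizes): replicating a narrow projection, as GQA effectively does, cannot raise its rank, while a generic low-rank factorization of the same shape carries no copy constraint:

```python
import numpy as np

d_model, d_head, groups = 64, 8, 4
rng = np.random.default_rng(2)

# GQA: one narrow key projection, replicated ("glued") across a group
# of query heads so they all see the same keys.
W_narrow = rng.standard_normal((d_model, d_head))
W_glued = np.tile(W_narrow, (1, groups))  # shape (d_model, d_head * groups)

# Replication adds no new directions: the rank is still d_head.
print(np.linalg.matrix_rank(W_glued))     # 8

# An MLA-style low-rank factorization of the same shape has the same rank
# budget, but its columns are not forced to be copies of one another.
L = rng.standard_normal((d_model, d_head))
R = rng.standard_normal((d_head, d_head * groups))
print(np.linalg.matrix_rank(L @ R))       # 8
```

Both matrices have rank d_head, but the factorized one can place that rank budget anywhere, which is one way to picture the expressiveness gain at equal cache size.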

killerstorm•9mo ago
Another paper related to attention distillation, although doing something far more radical: transformer attention is distilled onto an RWKV-like model: https://huggingface.co/papers/2505.03005
karmakaze•9mo ago
I'm not "in the field", though I like to read about and use LLMs. This video, "How DeepSeek Rewrote the Transformer [MLA]" [0], is really good at explaining MHA, MQA, GQA, and MLA with clear visuals/animations, and how DeepSeek's MLA is 57x more efficient.

[0] https://www.youtube.com/watch?v=0VLAoVGf_74&t=960s