frontpage.

Voxtral Transcribe 2

https://mistral.ai/news/voxtral-transcribe-2
155•meetpateltech•1h ago•38 comments

Attention at Constant Cost per Token via Symmetry-Aware Taylor Approximation

https://arxiv.org/abs/2602.00294
74•fheinsen•2h ago•33 comments

A sane but bull case on Clawdbot / OpenClaw

https://brandon.wang/2026/clawdbot
126•brdd•1d ago•201 comments

Tractor

https://incoherency.co.uk/blog/stories/tractor.html
36•surprisetalk•19h ago•11 comments

A case study in PDF forensics: The Epstein PDFs

https://pdfa.org/a-case-study-in-pdf-forensics-the-epstein-pdfs/
107•DuffJohnson•2h ago•33 comments

Converge (YC S23) Is Hiring Product Engineers (NYC, In-Person)

https://www.runconverge.com/careers/product-engineer
1•thomashlvt•1m ago

Data centers in space makes no sense

https://civai.org/blog/space-data-centers
923•ajyoon•21h ago•1044 comments

Guinea worm on track to be 2nd eradicated human disease; only 10 cases in 2025

https://arstechnica.com/health/2026/02/guinea-worm-on-track-to-be-2nd-eradicated-human-disease-on...
95•bookofjoe•2h ago•36 comments

Procedures for Repair of Potholes in Asphalt-Surfaced Pavements

https://highways.dot.gov/media/7941
14•treebrained•3d ago•13 comments

Lessons learned shipping 500 units of my first hardware product

https://www.simonberens.com/p/lessons-learned-shipping-500-units
737•sberens•2d ago•354 comments

Old Insurance Maps – Georeferencing Sanborn Fire Insurance Maps on Modern Maps

https://oldinsurancemaps.net/
46•lapetitejort•1w ago•11 comments

FBI couldn't get into WaPo reporter's iPhone because Lockdown Mode enabled

https://www.404media.co/fbi-couldnt-get-into-wapo-reporters-iphone-because-it-had-lockdown-mode-e...
325•robin_reala•2h ago•268 comments

Coding Agent VMs on NixOS with Microvm.nix

https://michael.stapelberg.ch/posts/2026-02-01-coding-agent-microvm-nix/
27•secure•3d ago•11 comments

Show HN: Ghidra MCP Server – 110 tools for AI-assisted reverse engineering

https://github.com/bethington/ghidra-mcp
205•xerzes•10h ago•52 comments

Show HN: Craftplan – I built my wife a production management tool for her bakery

https://github.com/puemos/craftplan
478•deofoo•2d ago•142 comments

Brazilian Micro-SaaS Map

https://saas-map.ssr.trapiche.cloud/
68•acfilho•3d ago•3 comments

I miss thinking hard

https://www.jernesto.com/articles/thinking_hard
1051•jernestomg•13h ago•575 comments

New York’s budget bill would require “blocking technology” on all 3D printers

https://blog.adafruit.com/2026/02/03/new-york-wants-to-ctrlaltdelete-your-3d-printer/
594•ptorrone•1d ago•691 comments

Microsoft's Pivotal AI Product Is Running into Big Problems

https://www.wsj.com/tech/ai/microsofts-pivotal-ai-product-is-running-into-big-problems-ce235b28
17•fortran77•54m ago•5 comments

Thatcher Effect – Optical Illusion and Explanation

https://optical.toys/thatcher-effect/
34•robin_reala•3h ago•10 comments

Deno Sandbox

https://deno.com/blog/introducing-deno-sandbox
497•johnspurlock•23h ago•152 comments

Agent Skills

https://agentskills.io/home
501•mooreds•1d ago•241 comments

The fax numbers of the beast, and other mathematical sports

https://cabinetmagazine.org/issues/57/wertheim.php
20•marysminefnuf•1d ago•8 comments

Broken Proofs and Broken Provers

https://lawrencecpaulson.github.io/2026/01/15/Broken_proofs.html
44•RebelPotato•8h ago•8 comments

X offices raided in France as UK opens fresh investigation into Grok

https://www.bbc.com/news/articles/ce3ex92557jo
525•vikaveri•1d ago•997 comments

High-Altitude Adventure with a DIY Pico Balloon

https://spectrum.ieee.org/explore-stratosphere-diy-pico-balloon
85•jnord•3d ago•42 comments

Goblins: Distributed, Transactional Programming with Racket and Guile

https://spritely.institute/goblins/
96•alhazrod•4d ago•15 comments

AliSQL: Alibaba's open-source MySQL with vector and DuckDB engines

https://github.com/alibaba/AliSQL
269•baotiao•22h ago•40 comments

Xcode 26.3 – Developers can leverage coding agents directly in Xcode

https://www.apple.com/newsroom/2026/02/xcode-26-point-3-unlocks-the-power-of-agentic-coding/
351•davidbarker•22h ago•301 comments

The Mathematics of Tuning Systems

https://math.ucr.edu/home/baez/tuning_talk/
65•u1hcw9nx•4d ago•12 comments

Attention at Constant Cost per Token via Symmetry-Aware Taylor Approximation

https://arxiv.org/abs/2602.00294
74•fheinsen•2h ago

Comments

spacewhales•1h ago
GitHub repo here: https://github.com/glassroom/sata_attention
yanosh_kunsh•1h ago
So does that mean that LLM inference could go down significantly in price and/or context length would dramatically increase?
bluecoconut•1h ago
I almost feel like this goes opposite to what attention is good at. This would be good at approximating all the places where attention is low / not sharp. Where attention / the exponential is key is when it selects out / needle-in-haystack / winner-takes-all focus (the word "attention" itself sorta implies this), and this is where the Taylor expansion would fail to represent the values well. This just... softens attention's ability to attend?

(I'm imagining that if the context contains ~4-8 "similar" attention targets that should be sharp, and regular attention learns to select the correct one, this Taylor-approximated version would wash out any difference and they'd all be loosely attended to, failing to isolate the correct signal.)

Really wish this had some downstream tests -- apply it to a pretrained model and see how performance degrades, train a fresh one, etc. The tests are worth doing, but I somehow don't feel that hopeful this is the unlock required for sub-quadratic attention. It's possible that a freshly trained model with this learns to attend without the sharp attention signals, but that seems a bit dubious to me.

But also, maybe this combined with some other selective (sparse attention) trick, means that the hybrid model gets the "fuzzy long tail" of attention well represented as well as the sharpness well represented, and all together it could actually be a part of the larger solution.

energy123•1h ago
> this is where the taylor expression would fail to represent the values well.

"In practice, we find that four Taylor terms (P = 4) suffice for recovering conventional attention with elementwise errors of approximately the same magnitude as Float16 resolution"

seanhunter•1h ago
I read that too, but I wondered whether elementwise error is the right metric. Surely the actual error metric should be to evaluate model performance for a conventional transformer model and then the same model with the attention mechanism replaced by this 4th order Taylor approximation?
vlovich123•35m ago
A bounded elementwise error on the weights is, by definition, a stricter evaluation criterion than "performance" metrics from running the model.
mapontosevenths•1h ago
> This just... softens attentions ability to attend?

I think this does soften, but not linearly. That is to say the fixed state size limitation means that it softens more as it gets further into the past.

tehsauce•1h ago
Right, and when they compare to floating-point accuracy they seem to be using the number of digits supported by the mantissa, but the exponent is important too, no?
seanhunter•1h ago
When someone says the error is of a certain magnitude, they mean the absolute value of the difference between the two things. So what they're saying is that the values produced by their approximation are about as wrong as the difference between the exact values and those values cast to float16. The exponent is most definitely important and would be included in that.
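To make that concrete (a small numpy sketch; the sample values are arbitrary):

```python
import numpy as np

# Float16 round-trip error scales with the value's exponent (relative error
# up to ~2^-11), not with a fixed number of decimal places.
errs = []
for v in (0.001234, 1.234, 123.4):
    x = np.float64(v)
    err = float(abs(x - np.float64(np.float16(x))))
    errs.append(err)
    print(v, err)  # absolute error grows with the magnitude of the value
```

Comparable relative precision, very different absolute error: "about float16 magnitude" is a statement about relative, exponent-aware accuracy.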
mapontosevenths•1h ago
This uses a Taylor approximation of softmax, but that IS only an approximation. I wonder exactly how much the trade-off costs in accuracy vs. performance? I note that they say it's close to float16 with four Taylor terms.

My other concern is that evaluating the Taylor series is itself fairly complex. I wonder how well GPUs handle it compared with good old-fashioned softmax? The last time I used a Taylor series in a custom Triton kernel it was still very slow. That could just have been my own jank vibe-coded implementation, though.

rvz•1h ago
> Our work enables unbounded token generation at modest fixed cost, substantially reducing the infrastructure and energy demands of large-scale Transformer models. The mathematical techniques we introduce are of independent interest.

Now this is a very interesting paper. Hopefully it helps address AI's chronic inefficiency: the field lacks efficient methods for reining in computational and energy demands that are off the charts.

> These factors penalize performance relative to what a fused, hardware-optimized implementation could achieve, and the reported runtime results should therefore be interpreted conservatively.

It's still early, with several limitations, but the need to spend billions on GPUs may soon stop making sense.

thomasahle•1h ago
There's a graveyard of hundreds of papers on "approximate near-linear-time attention."

They always hope the speed increase makes up for the lower quality, but it never does. The quadratic time seems inherent to the problem.

Indeed, there are lower bounds showing that sub n^2 algorithms can't work: https://arxiv.org/pdf/2302.13214

cubefox•1h ago
I think DeepSeek V3.2 is sub n^2, but it clearly performs quite well, refuting the alleged lower bounds in the paper.
andy12_•49m ago
It really isn't sub-N^2. The main attention is O(Nk), but only thanks to a lightning indexer that still has complexity O(N^2). So overall it has the same complexity, just with a smaller constant factor. [1]

> DSA reduces the core attention complexity of the main model from O(L^2) to O(Lk), where k (<< L) is the number of selected tokens. Although the lightning indexer still has a complexity of O(L^2), it requires much less computation compared with MLA in DeepSeek-V3.1-Terminus

[1] https://arxiv.org/pdf/2512.02556
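Back-of-the-envelope version of that point (the relative per-pair costs here are made-up illustration, not numbers from the paper):

```python
# An O(L^2) indexer with a small per-pair cost can still be a big win over
# full attention, even though the asymptotic complexity is unchanged.
L, k = 128_000, 2_048          # context length; selected tokens, k << L
c_attn, c_idx = 1.0, 0.05      # ASSUMED relative per-pair costs
full_attention = c_attn * L * L
dsa = c_idx * L * L + c_attn * L * k   # lightning indexer + sparse attention
ratio = dsa / full_attention
print(ratio)  # well below 1, but still quadratic in L as L grows
```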

fheinsen•1h ago
As the linear approximation's error approaches the same magnitude as quadratic attention's numerical error, don't the two start becoming comparable in practice?

I ask because in practice, for inference, attention is typically computed with low-precision (4-bit, 8-bit, 16-bit) floats.

Numerical error, in fact, may be a key factor as to why quadratic attention, in practice, exhibits context rot as context gets longer, analogous to an RNN:

https://www.anthropic.com/engineering/effective-context-engi...

cobolexpert•57m ago
Dumb question: is the quadratic time complexity for training, inference, or both?
omneity•55m ago
Attention is calculated during the forward pass of the model, which happens in both inference (forward only) and training (forward & backward).
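For reference, a minimal single-head version in numpy (names and shapes are mine) showing where the quadratic term lives:

```python
import numpy as np

# Minimal scaled dot-product attention: the (N, N) score matrix is formed
# on every forward pass, whether that pass is part of training or inference.
def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (N, N): the O(N^2) part
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # softmax over keys
    return w @ V                                  # (N, d)

rng = np.random.default_rng(0)
N, d = 16, 8
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
print(attention(Q, K, V).shape)  # (16, 8)
```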
SubiculumCode•21m ago
Dumb question: Can inference be done in a reverse pass? Outputs predicting inputs?
root_axis•15m ago
Sounds like a great premise for a sci-fi short story.
kristjansson•47m ago
> self-attention is efficiently computable to arbitrary precision with constant cost per token

This paper at least aspires to reproduce 'true' attention, which distinguishes it from many of the others. TBD whether it's successful at that.

energy123•41m ago
It's like claims of room-temperature superconductors or Millennium Prize solutions. Earth-shattering if true. It'd be such a black swan. Terrible for Nvidia.
SeanAnderson•13m ago
Well, we solved one of the Millennium Prize problems (honestly kinda quickly) so maybe there's hope :)
logicchains•15m ago
It can't be successful at that, any more than 1+1 can equal 3. Fundamentally, if every token is to be able to look at every previous token without loss of information, the cost must be O(n^2): N tokens looking at N tokens is quadratic. Any sub-quadratic attention must therefore lose some information and be unable to support perfect recall on longer sequences.
naasking•43m ago
I think any kind of innovation here will have to take advantage of some structure inherent to the problem, like eliminating attention in favour of geometric structures like Grassman flows [1].

[1] Attention Is Not What You Need, https://arxiv.org/abs/2512.19428

findalex•5m ago
Right - e.g., if you're modeling a physical system it makes sense to bake in some physics - like symmetry.
jcarreiro•15m ago
The paper says that:

> In practice, we find that four Taylor terms (P = 4) suffice for recovering conventional attention with elementwise errors of approximately the same magnitude as Float16 resolution, acceptable for many AI applications.

ie., the claim is that this method reproduces the results of conventional attention, up to float16 numerical precision.

observationist•1h ago
This could turbocharge ByT5 and other tokenless architectures, whose big downside was the increase in compute over longer sequences. It's easy to imagine a bunch of strategies with variable levels of "focus" and so on with a fixed compute budget assigned on the fly with learned optimizers informing the distribution.
andes314•1h ago
Linear-time attention doesn't work, as a matter of principle. It's a dead-end pursuit. There's much great research on more efficient quadratic-time inference instead.
smokel•5m ago
What about n log n?
abeppu•1h ago
I haven't tried to follow the math closely, but shouldn't there be some concern about the region of convergence? They don't seem to discuss it specifically. Or is there some reason it isn't a problem in this context?
reactordev•1h ago
I fear they have completely overlooked it.
alyxya•52m ago
The best and proven linear attention is Gated DeltaNet or variations of it, used by Kimi and Qwen. Anyone who thinks linear attention can't work is forgetting that models are a fixed size, so attention should always be compressible to linear. Another way to see the feasibility of linear attention: the standard attention mechanism becomes linear simply by removing the softmax, so that the KV cache stores the KV product as a constant-size matrix instead. Softmax just normalizes attention; it isn't theoretically required.
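That last identity is easy to check in numpy (a sketch with made-up shapes; real linear-attention variants add feature maps and gating on top):

```python
import numpy as np

# Without the softmax: sum_j (q . k_j) v_j == q @ (sum_j outer(k_j, v_j)),
# so a constant-size (d, d) state replaces the growing KV cache.
rng = np.random.default_rng(0)
N, d = 32, 8
K, V = rng.normal(size=(N, d)), rng.normal(size=(N, d))
q = rng.normal(size=d)

quadratic = (K @ q) @ V   # score every key explicitly, memory grows with N
state = K.T @ V           # (d, d) state, can be accumulated token by token
linear = q @ state        # same output, cost independent of N
print(np.allclose(quadratic, linear))  # True
```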