frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
56•theblazehen•2d ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
637•klaussilveira•13h ago•188 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
935•xnx•18h ago•549 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
35•helloplanets•4d ago•30 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
113•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
13•kaonwarb•3d ago•11 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
45•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
214•dmpetrov•13h ago•106 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
324•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
374•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
478•todsacerdoti•21h ago•237 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
278•eljojo•16h ago•165 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
407•lstoll•19h ago•273 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
17•jesperordrup•3h ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
57•kmm•5d ago•4 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
27•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•193 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
14•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•11h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
143•vmatsiiako•18h ago•64 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1061•cdrnsf•22h ago•438 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
179•limoce•3d ago•96 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
284•surprisetalk•3d ago•38 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
137•SerCe•9h ago•125 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•21h ago•23 comments

Breaking Quadratic Barriers: A Non-Attention LLM for Ultra-Long Context Horizons

https://arxiv.org/abs/2506.01963
70•PaulHoule•7mo ago

Comments

zoklet-enjoyer•7mo ago
I don't know what those words mean, but I am excited for the possibilities.
PaulHoule•7mo ago
LLMs can look back over a certain number (N) of tokens, which roughly correspond to words. For instance, if you want to summarize or answer questions about a document accurately, the length of the document has to be less than N.

Conventionally they use an attention mechanism that compares every token to every other token, which has a cost of N*N, or N squared, which is quadratic. If you want LLMs to chew over a huge amount of context (all the source code for your project), it's a problem, so people are looking for ways around this.
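
For concreteness, a toy NumPy sketch of that N*N comparison (illustrative only, nothing from the paper):

    import numpy as np

    N, d = 1024, 64                   # sequence length, head dimension
    Q = np.random.randn(N, d)
    K = np.random.randn(N, d)
    V = np.random.randn(N, d)

    scores = Q @ K.T / np.sqrt(d)     # shape (N, N): every token scored against every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ V                 # shape (N, d); that (N, N) matrix is the quadratic cost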

zoklet-enjoyer•7mo ago
Thank you for that explanation
rybosome•7mo ago
Adding to that excellent high-level explanation of what the attention mechanism is, I'd add (from my reading of the abstract of this paper):

This work builds a model that has the ability to “remember” parts of its previous input when generating and processing new input, and has part of its intelligence devoted to determining what is relevant to remember.

This is in lieu of, in effect, saying "I need to keep re-reading what I've already read and said in order to keep going".
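
To make that concrete, here is a generic gated-memory toy (my own illustration of the idea, not the paper's actual architecture): a gate decides, per dimension, how much of the old memory to keep versus overwrite with a summary of the new chunk.

    import numpy as np

    d = 64
    rng = np.random.default_rng(0)
    W_g = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)   # stand-in for a learned gate projection

    def update_memory(memory, chunk):
        """memory: (d,) running state; chunk: (T, d) vectors of new input."""
        summary = chunk.mean(axis=0)                          # crude summary of the new chunk
        gate = 1.0 / (1.0 + np.exp(-np.concatenate([memory, summary]) @ W_g))
        return gate * memory + (1.0 - gate) * summary         # per-dimension keep vs. overwrite

    memory = np.zeros(d)
    for chunk in rng.standard_normal((10, 16, d)):            # 10 chunks of 16 "token" vectors
        memory = update_memory(memory, chunk)                 # fixed-size state, no N x N attention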

I’d welcome better explanations. :)

Icko_•7mo ago
Not even that. With KV-caching, it's linear with the size of the context; and if someone figured out a way to have e.g. N log N complexity, I imagine with KV-caching it may go down to log N complexity (if the new algorithm permits that).
yorwba•7mo ago
When people say that attention is quadratic, they mean that the cost to process n tokens is O(n²), so the amortized cost per token is indeed O(n). KV-caching is a way to maintain that amortized cost when appending tokens one at a time instead of ingesting the whole sequence at once. But in the end people want to be able to generate multiple tokens, so we're back at O(n²) total time again.

IIRC there are some FFT-based attention alternatives where encoding has complexity O(n log n), but there's no feasible way to cache anything and after appending a single token it costs O(n log n) again, so if you generate n tokens in sequence, the cost is actually O(n² log n).
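
A toy decode loop with a KV cache makes that bookkeeping explicit (an illustrative sketch, not any particular implementation): step t compares the new token against all t cached keys, so generating n tokens is still O(n²) work in total even though each step is only O(n).

    import numpy as np

    d = 64
    k_cache, v_cache = [], []

    def decode_step(x):                  # x: (d,) vector for the newly generated token
        q = k = v = x                    # stand-in for the learned Q/K/V projections
        k_cache.append(k)
        v_cache.append(v)
        K = np.stack(k_cache)            # (t, d): the cache grows by one row per step
        V = np.stack(v_cache)
        scores = K @ q / np.sqrt(d)      # (t,): O(t*d) work at step t
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ V                     # caching avoids recomputing K/V, not the O(t) comparison

    for _ in range(8):
        out = decode_step(np.random.randn(d))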

imranq•7mo ago
I like the idea of removing quadratic scaling for attention, but this paper has thin experimental support. No real tasks are tested beyond perplexity: nothing on reasoning, retrieval QA, or summarization quality. Even in perplexity the gains are marginal.

However, it does remove attention, so I think it's worth watching that space of non-attention models.

yorwba•7mo ago
This paper seems rather unfocused, explaining their architecture three times with slight variations while managing to omit crucial details like how exactly they compute gradients for their "External Retrieval Memory."

Also, the section on DeepSeek is really weird: "While the precise architectural details of DeepSeek LLM are still emerging, early discussions suggest that it relies on an extended Transformer backbone or a "hybrid" approach that likely incorporates some form of attention-based mechanism, potentially at specific layers or across chunk boundaries, to facilitate information flow across large contexts." It makes it sound like a mystery, even though there have been multiple papers published on it (they cite the R1 one) so that there's really no need to guess whether attention is involved.

Overall I'm not convinced the authors know what they're doing.

roxolotl•7mo ago
Would you say they aren’t paying attention?
cubefox•7mo ago
I think it's fair to say they are explicitly avoiding attention.
NitpickLawyer•7mo ago
Hate to be that guy, but this screams LLM-generated to me. Between the titles, the vague explanations, the vague concepts, and the overall ratio of fluff to data, I'd bet good money that this was generated with an LLM.

It's not inherently bad to use an LLM for consistency, language, and overall sprucing up, but this is taking it a bit too far. It seems like they've prompted it to explain some notes, but it's unclear how well it did, since the notes themselves (i.e. data, experiments, etc.) are missing. And it seems poorly prompted, in that it consists of lots of fluff paragraphs, devoid of core knowledge, going round and round explaining the same concepts with different words.

In the end, the responsibility for the end product is always on the submitter. This whole paper could have been a prompt, and it's worrying that this is accepted at such a prestigious school.

albertzeyer•7mo ago
"hundreds of thousands to potentially millions of tokens" - that's the same order as current commercial LLMs.

Also note, if the sequence length is not really much larger than the model dimension (at least two orders of magnitude more), the quadratic complexity of the self-attention is really not such a big issue - the matrix multiplication in the feed-forward layers will usually be 8x the model dimension squared, and thus that part will usually dominate.
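
A quick back-of-the-envelope comparison of per-token multiply-adds (rough, purely illustrative numbers):

    d = 4096                                    # model dimension
    ffn = 8 * d * d                             # d -> 4d -> d feed-forward matmuls, per token
    for n in (4_096, 16_384, 65_536, 262_144):  # sequence length
        attn = 2 * n * d                        # QK^T plus AV matmuls, per token
        print(f"n={n:>7,}  ffn/attn = {ffn / attn:.2f}")
    # At n on the order of d the feed-forward matmuls are still the larger term;
    # the attention matmuls grow linearly in n and take over at longer contexts.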

Also note that there has been so much research on this already. While this particular approach might be novel, there have been attempts to avoid the O(n^2) complexity in self-attention basically ever since the original transformer paper came out in 2017. I wonder a bit that this paper does not cite xLSTM or Block-Recurrent Transformers.

Also, this paper comes up very short on experiments. There is basically only Table 2. There is no study on length extrapolation (which is very relevant for the topic), no needle-in-a-haystack experiments, no scaling studies, no larger-scale experiments, etc. Also, even in this main Table 2, I see a couple of typos. And looking at the results in Table 2, the improvements seem to be quite minor.

So I would conclude, this needs a lot more work.

cubefox•7mo ago
> "hundreds of thousands to potentially millions of tokens" - that's the same order as current commercial LLMs.

Yes, but those are all relying on proprietary company secrets, while this is an open research paper. Besides, only Gemini so far has a context window of more than a million tokens.

littlestymaar•7mo ago
Llama 4 Scout also has it, and it is an open-weight LLM; unfortunately, it is also disappointing at pretty much any context length…
3abiton•7mo ago
> Unlike traditional Transformer designs, which suffer from quadratic memory and computation overload due to the nature of the self attention mechanism, our model avoids token to token attention entirely.

I skimmed the paper, and unlike transformers they can basically scale much more efficiently with longer context. While it's possible to fit 1M tokens in a transformer, you need a significant amount of memory. They only benchmark against GPT-2, though, so I would say it's quite preliminary work so far, although a promising architecture.

boroboro4•7mo ago
> Also note, if the sequence length is not really much larger than the model dimension (at least two orders of magnitude more), the quadratic complexity of the self-attention is really not such a big issue - the matrix multiplication in the feed-forward layers will usually be 8x the model dimension squared, and thus that part will usually dominate.

This is incorrect in the case of batched inference. There are two bottlenecks at play, compute and memory, and your reasoning applies to compute. In the case of memory it gets trickier: for the MLP layers you'll read the same set of weights for all elements of your batch, while the KV cache entries for attention are different for every element. That's why, in practice, the real length where attention dominates would be closer to model dimension / batch size, rather than just model dimension. And that number isn't as high anymore.
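
To illustrate the memory side with rough, made-up numbers (fp16, plain multi-head attention, not any specific model):

    layers, d, bytes_per = 32, 4096, 2          # layer count, model dimension, fp16
    batch, n = 8, 4096                          # decode batch size, context length per sequence

    ffn_weights = layers * 8 * d * d * bytes_per            # read once, shared by the whole batch
    kv_cache    = layers * batch * n * 2 * d * bytes_per    # K and V read per sequence, per step
    print(f"FFN weights: {ffn_weights / 1e9:.1f} GB, KV cache: {kv_cache / 1e9:.1f} GB per decode step")
    # The cache term scales with batch * n, so attention's memory traffic overtakes the
    # shared weight reads at much shorter contexts than the pure-compute estimate suggests.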

daxfohl•7mo ago
Partially related: is charging by token sustainable for LLM shops? If the compute requirements go up quadratically, doesn't that mean cost should as well?
sakras•7mo ago
Typically requests are binned by context length so that they can be batched together. So you might have a 10k bin and a 50k bin and a 500k bin, and then you drop context past 500k. So the costs are fixed per-bin.
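
A toy version of that binning logic (hypothetical bin sizes, not any provider's actual policy):

    BINS = [10_000, 50_000, 500_000]            # hypothetical context-length bins

    def bin_for(context_len):
        for b in BINS:                          # smallest bin that fits the request
            if context_len <= b:
                return b
        return None                             # past the largest bin, the context gets dropped

    assert bin_for(7_000) == 10_000
    assert bin_for(120_000) == 500_000
    assert bin_for(600_000) is None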
daxfohl•7mo ago
Makes sense. And each model has a max context length, so they could charge per token assuming full context for each model if they wanted to assume the worst case.
maxrmk•7mo ago
> While the specific internal workings of DeepSeek LLM are still being elucidated, it appears to maintain or approximate the self-attention paradigm to some extent.

Totally nonsensical. DeepSeek's architecture is well documented; multiple implementations are available online.

gsf_emergency•7mo ago
https://github.com/andrew-jeremy/nonAttentionLLM
ljlolel•7mo ago
This needs way more charts and graphs
juank10•7mo ago
Funnily enough, the code was deleted in the repo, but can still be seen in the commits. It's what you would expect from the paper :D

On the general topic of non-attention LLMs, I recommend checking out MesaNet [1], Rodimus [2], Gated DeltaNet [3], or Mamba2 [4]. They are currently SOTA.

However, I have yet to see a compelling non-attention-based model that achieves good performance on code, math, reasoning, or multi-turn QA tasks. I do not think we are getting rid of attention soon; I believe the ability to look back is crucial for certain tasks.

[1] https://arxiv.org/abs/2506.05233
[2] https://arxiv.org/abs/2410.06577
[3] https://arxiv.org/abs/2412.06464
[4] https://arxiv.org/abs/2405.21060