frontpage.

John Ternus to become Apple CEO

https://www.apple.com/newsroom/2026/04/tim-cook-to-become-apple-executive-chairman-john-ternus-to...
1401•schappim•7h ago•711 comments

How to make a fast dynamic language interpreter

https://zef-lang.dev/implementation
87•pizlonator•3h ago•8 comments

Jujutsu megamerges for fun and profit

https://isaaccorbrey.com/notes/jujutsu-megamerges-for-fun-and-profit
181•icorbrey•6h ago•61 comments

Qwen3.6-Max-Preview: Smarter, Sharper, Still Evolving

https://qwen.ai/blog?id=qwen3.6-max-preview
572•mfiguiere•14h ago•299 comments

Kimi vendor verifier – verify accuracy of inference providers

https://www.kimi.com/blog/kimi-vendor-verifier
199•Alifatisk•9h ago•17 comments

Ternary Bonsai: Top Intelligence at 1.58 Bits

https://prismml.com/news/ternary-bonsai
88•nnx•3d ago•20 comments

Soul Player C64 – A real transformer running on a 1 MHz Commodore 64

https://github.com/gizmo64k/soulplayer-c64
93•adunk•8h ago•24 comments

How a subsea cable is repaired

https://www.onesteppower.com/post/subsea-cable-repair
13•slicktux•4d ago•1 comment

Japan's Cherry Blossom Database, 1,200 Years Old, Has a New Keeper

https://www.nytimes.com/2026/04/17/climate/japan-cherry-blossom-database-scientist.html
48•caycep•3d ago•5 comments

ggsql: A Grammar of Graphics for SQL

https://opensource.posit.co/blog/2026-04-20_ggsql_alpha_release/
381•thomasp85•15h ago•76 comments

Quantum Computers Are Not a Threat to 128-Bit Symmetric Keys

https://words.filippo.io/128-bits/
168•hasheddan•11h ago•67 comments

All phones sold in the EU to have replaceable batteries from 2027

https://www.theolivepress.es/spain-news/2026/04/20/eu-to-force-replaceable-batteries-in-phones-an...
1056•ramonga•14h ago•875 comments

Brussels launched an age checking app. Hackers took 2 minutes to break it

https://www.politico.eu/article/eu-brussels-launched-age-checking-app-hackers-say-took-them-2-min...
170•axbyte•19h ago•76 comments

Modern Rendering Culling Techniques

https://krupitskas.com/posts/modern_culling_techniques/
109•krupitskas•1d ago•19 comments

Year of the IPv6 Overlay Network

https://www.defined.net/blog/year-of-the-ipv6-overlay-network/
27•stock_toaster•3d ago•3 comments

OpenAI ad partner now selling ChatGPT ad placements based on “prompt relevance”

https://www.adweek.com/media/exclusive-leaked-deck-reveals-stackadapts-playbook-for-chatgpt-ads/
219•jlark77777•7h ago•104 comments

Kefir C17/C23 Compiler

https://sr.ht/~jprotopopov/kefir/
135•conductor•3d ago•8 comments

WebUSB Extension for Firefox

https://github.com/ArcaneNibble/awawausb
217•tuananh•16h ago•190 comments

Deezer says 44% of songs uploaded to its platform daily are AI-generated

https://techcrunch.com/2026/04/20/deezer-says-44-of-songs-uploaded-to-its-platform-daily-are-ai-g...
324•FiddlerClamp•12h ago•312 comments

M 7.4 earthquake – 100 km ENE of Miyako, Japan

https://earthquake.usgs.gov/earthquakes/eventpage/us6000sri7/
269•Someone•18h ago•123 comments

Zero-Copy Pages in Rust: Or How I Learned to Stop Worrying and Love Lifetimes

https://redixhumayun.github.io/databases/2026/04/14/zero-copy-pages-in-rust.html
55•ingve•4d ago•5 comments

10 years ago, someone wrote a test for Servo that included an expiry in 2026

https://mastodon.social/@jdm_/116429380667467307
203•luu•1d ago•107 comments

Atlassian enables default data collection to train AI

https://letsdatascience.com/news/atlassian-enables-default-data-collection-to-train-ai-f71343d8
529•kevcampb•16h ago•122 comments

F-35 is built for the wrong war

https://warontherocks.com/cogs-of-war/the-f-35-is-a-masterpiece-built-for-the-wrong-war/
224•anjel•8h ago•466 comments

Writing string.h functions using string instructions in asm x86-64 (2025)

https://pmasschelier.github.io/x86_64_strings/
56•thaisstein•3d ago•6 comments

Sauna effect on heart rate

https://tryterra.co/research/sauna-effect-on-heart-rate
394•kyriakosel•14h ago•210 comments

Bloom (YC P26) Is Hiring

https://www.ycombinator.com/companies/trybloom/jobs
1•RayFitzgerald•11h ago

I learned Unity the wrong way

https://darkounity.com/blog/how-i-learned-unity-the-wrong-way
145•lelanthran•4d ago•92 comments

Kimi K2.6: Advancing open-source coding

https://www.kimi.com/blog/kimi-k2-6
613•meetpateltech•12h ago•322 comments

OpenClaw isn't fooling me. I remember MS-DOS

https://www.flyingpenguin.com/build-an-openclaw-free-secure-always-on-local-ai-agent/
278•feigewalnuss•20h ago•307 comments

KV Cache Compression 900000x Beyond TurboQuant and Per-Vector Shannon Limit

https://arxiv.org/abs/2604.15356
43•EGreg•2h ago

Comments

tomrod•1h ago
Extraordinary claims! I don't follow the argument though.
EGreg•1h ago
Author here. Since I started teaching AI at IENYC, I've begun publishing my papers on arXiv, and I'm considering submitting them to a journal.

This is based on my original "PLT" paper: Probabilistic Language Tries (https://news.ycombinator.com/item?id=47743585). A "Trie" is basically a tree of prefixes. While working on https://safebots.ai I became obsessed with caching generated artifacts as a means to do a lot of things: extremely cheap inference, near-optimal compression, modeling decision trees for strategies, and so on.
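For readers unfamiliar with the data structure, a "tree of prefixes" can be sketched in a few lines. This is only an illustrative toy (class and field names are mine, not from the paper); it also shows why lookup is linear in the sequence length:

```python
# Minimal sketch of a prefix trie over token sequences (illustrative only;
# TrieNode/PrefixTrie and the count field are hypothetical names, not the
# paper's actual implementation).
class TrieNode:
    def __init__(self):
        self.children = {}  # token -> TrieNode
        self.count = 0      # how often this prefix has been observed

class PrefixTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, tokens):
        """Walk the sequence, one node per prefix; O(len(tokens))."""
        node = self.root
        for tok in tokens:
            node = node.children.setdefault(tok, TrieNode())
            node.count += 1

    def lookup(self, tokens):
        """Length and count of the longest stored prefix match; O(len(tokens))."""
        node, matched = self.root, 0
        for tok in tokens:
            if tok not in node.children:
                break
            node = node.children[tok]
            matched += 1
        return matched, node.count

trie = PrefixTrie()
trie.insert(["the", "cat", "sat"])
trie.insert(["the", "cat", "ran"])
```

Frequently seen prefixes accumulate high counts, which is what makes the structure usable as an index over cached generations.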

The PLT model was about compression in general. My main insight there was that the LLM's own weights actually contain an incredibly detailed probability distribution of "the next token" in any sequence, which can therefore be very useful to supercharge statistical compression. Sequences which occur frequently in the domain of the model receive short codes. The other insight is that if we allowed lossy compression, we could compress well below the Shannon information limit, and just have an "overflow" bag for surprising sequences.
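The "short codes for frequent sequences" intuition is just the Shannon code length: a token the model assigns probability p costs about -log2(p) bits under an entropy coder. A tiny sketch (the probability numbers are made up for illustration):

```python
import math

def ideal_code_length_bits(token_probs):
    """Shannon code length in bits for a sequence, given the model's
    probability for each successive token."""
    return sum(-math.log2(p) for p in token_probs)

# A fluent sequence the model predicts well -> a very short code:
fluent = [0.9, 0.8, 0.95]
# A surprising sequence -> a long code (a candidate for the "overflow" bag):
surprising = [0.01, 0.005, 0.02]
```

Under these hypothetical numbers the fluent sequence costs well under one bit total, while the surprising one costs nearly twenty, which is the gap the overflow-bag idea exploits.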

When TurboQuant came out, I realized we can also go way below the Shannon limit in the same way, and take advantage of PLT. In fact, I'm working on publishing a paper that generalizes this to robotics (which needs to do cheap fast on-board inference "in the field"). I also believe this is how animals actually learn. In other words, over time they learn overall "sequences" of actions and then can check whether they are "good enough" to solve the problem, or whether to switch to a full analysis -- this corresponds to System 1 and 2 of Daniel Kahneman's "Thinking Fast and Slow".

If you want more specific information, or see the code for a working prototype, you can write me at the email in the paper.

Rekindle8090•1h ago
I see you using a double dash instead of an em or en dash to get around bot detection extensions and I'm not fooled.
EGreg•1h ago
Haha, yes I always used -- when I typed an em-dash manually. What bot detection extensions? :-P
cristoperb•1h ago
I can't speak for the person you're replying to, but I use -- for an emdash for two reasons: I never remember how to type an actual emdash in linux/X11, and more importantly, I do most of my writing in Asciidoc, which converts -- to an emdash automatically. It's nothing to do with bot detection or whatever.

But it does get me confused sometimes because in LaTeX (and other markup languages) -- gets converted to an endash whereas it takes three hyphens --- to make an emdash.

rhet0rica•1h ago
you are hereby sentenced by the council of dashers to type "—" ten million times using Windows-1252 alt codes

you have 5 seconds to comply before your planet will be demolished to make room for a giant space-typographer's punctuation case

stingraycharles•1h ago
Dropping a grand theory of animal cognition into a defense of a KV cache compression bound is not something I was anticipating. I don’t think it’s a great argument.
wholinator2•1h ago
At least some random pseudo-crackpottery like that points in the direction of it being a human. There are some strange human tendencies that AI just doesn't usually replicate.
usernametaken29•1h ago
Kahneman's book is considered outdated by modern neuroscience.
himata4113•1h ago
The reasoning around the 900000x claim isn't sound and violates way too many information density principles.

I was incredibly curious, since I had a pet theory in mind about something extremely similar, but I concluded that the time complexity of such a cache would make it extremely slow.

This is like saying you've achieved single-token compression when you're passing a single token into a model and letting it regenerate the entire output, since at the end of the day models are probabilistic stateless devices. At that point you don't have a cache: you're either just replaying the tokens, or you have a caching algorithm with complexity similar to that of the model itself, defeating the purpose of the cache.

I've never considered that arXiv had a problem, now I do.

EGreg•59m ago
No, the 914,000x in the paper is the ratio between two entropy floors; it's not a claim about practical compression. The point is that per-vector quantization has been chasing the wrong theoretical limit: the sequential entropy bound is just fundamentally lower, by that factor, because KV vectors aren't independent samples!

On complexity, that's a fair concern, and the paper doesn't fully resolve it. But the analogy to "replaying tokens through the model" isn't exactly right. The delta-coding layer uses the model's own next-token prediction, which is already happening during normal autoregressive inference. You're not adding a forward pass; you're using the one already running and storing only the residual, which is much smaller than the raw vector -- precisely because the model is a good predictor of its own next state.
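The store-only-the-residual idea can be sketched numerically. This toy assumes (as the comment claims) that the predictor is good, so the residual is small and survives coarse int8 quantization with little loss; the vectors here are random stand-ins, not real transformer states:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
# Hypothetical stand-ins for the model's predicted next KV vector and the
# actual one; by assumption they differ only by a small residual.
predicted_kv = rng.normal(size=d)
actual_kv = predicted_kv + 0.01 * rng.normal(size=d)  # good predictor -> tiny delta

residual = actual_kv - predicted_kv  # store this instead of actual_kv

# Because the residual is small, a coarse 8-bit code reconstructs it well:
scale = np.abs(residual).max() / 127
quantized = np.round(residual / scale).astype(np.int8)   # what gets cached
reconstructed = predicted_kv + quantized.astype(np.float64) * scale
```

The same quantizer applied to `actual_kv` directly would need a much larger scale, which is where the claimed entropy gap comes from.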

The trie index lookup is O(sequence length), not O(model forward pass). Whether that's fast enough in practice at scale is actually a legitimate open question and I'd be the first to admit the paper doesn't settle it. But the contribution here is simply establishing that the bound exists and is dramatically lower than what the field has been targeting. That's what I wanted to put out. The engineering question of how close you can get is the natural next step.

Your pet theory about time complexity sounds interesting actually, did you write it up anywhere?

mbernstein•1h ago
This is a compute-memory trade, not compression vs. TurboQuant? Lemma 1 is something like "the forward pass is deterministic because it's deterministic", which means the input tokens were always the lower bound... which isn't caching? Smells tautological. What am I missing?
EGreg•53m ago
Well yeah, I just wrote it as a lemma, but it's basically close to tautological. Its only job is to formally ground the entropy argument that follows it. The interesting claim is what comes after: because KV vectors are deterministic functions of tokens, and because the model is a near-optimal predictor of its own distribution, the conditional entropy of each new KV vector given all previous ones is bounded by token-level perplexity. TurboQuant compresses against the marginal distribution of each vector in isolation -- that's the gap.
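The entropy step being leaned on here is the standard fact that a deterministic function cannot add uncertainty: H(f(X)) <= H(X). A toy empirical check (the "KV" functions below are arbitrary deterministic maps, not an actual transformer):

```python
import math
from collections import Counter

def entropy(samples):
    """Empirical Shannon entropy in bits of a list of hashable samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Toy token stream over a 4-symbol alphabet.
tokens = [0, 1, 2, 3, 0, 1, 0, 2, 1, 0]

# A deterministic, injective "KV" map: entropy is preserved exactly.
kv_injective = [(37 * t + 5) % 11 for t in tokens]
# A deterministic, many-to-one map: entropy can only go down.
kv_lossy = [t % 2 for t in tokens]
```

So if KV vectors are a deterministic function of the token sequence, their joint entropy is bounded by the tokens' entropy, which is the bound the lemma is grounding.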

And yes, it's a compute/memory tradeoff, all caching is. The claim is just that the memory floor is much lower than anyone had formally established. Whether the compute cost of getting there is worth it is a fair open question the paper doesn't settle. But what if it is? Caching is the thread running through most of my work, and I intend to find out.

gaze•1h ago
this paper looks AI generated to me... I mean, there's no experiments to go along with it.
ddtaylor•1h ago
Very interesting. A compression strategy that uses the model itself as the dictionary.
thethirdone•1h ago
> the ratio remains approximately 914x over TurboQuant, with compression improving rather than degrading as context length grows.

This line from the abstract got me really suspicious. Obviously a compression scheme that incorporates the entire sequence shouldn't get worse compared to a per element one as the length increases.

It is important to note that this paper is PURELY theoretical. I couldn't find much meat on the bone from a quick skim.

The single author, Gregory Magarshak, has only published one paper on arxiv before and appears to be a professor of business / music. I don't plan to give it more of a read hoping for something of value.

stingraycharles•1h ago
Me neither. There are no actual experiments / data, no peer reviews, and the innovation relies almost entirely on citations from the author’s other paper.

The author is not an ML researcher but rather an AI startup CTO / founder. Previously worked on “social operating systems” for the web, blockchain of course. And now an AI innovator. I’m suspicious. This was part of the author’s reply in another thread:

> When TurboQuant came out, I realized we can also go way below the Shannon limit in the same way, and take advantage of PLT. In fact, I'm working on publishing a paper that generalizes this to robotics (which needs to do cheap fast on-board inference "in the field"). I also believe this is how animals actually learn. In other words, over time they learn overall "sequences" of actions and then can check whether they are "good enough" to solve the problem, or whether to switch to a full analysis -- this corresponds to System 1 and 2 of Daniel Kahneman's "Thinking Fast and Slow".

Which doesn’t exactly inspire confidence and makes me wonder who they think their audience is. ML researchers or LinkedIn.

gaze•1h ago
the irritating thing about LLM generated papers like these is that they're wrong but are generated using LLMs that are capable enough to bury the absurd claim pretty deep in there.
stingraycharles•1h ago
Analyze it using an LLM. Claude was pretty ruthless about this one.
gaze•1h ago
sure but it seems spiritually wrong to use an LLM to debug a slop paper. Who knows, maybe claude generated it in the first place?
thethirdone•1h ago
Yeah, for me Claude identified the phrase "this holds with probability 1 over random weight matrices since the null space has dimension"

Treating trained weights as random for the purpose of a proof is immediately discrediting for a paper to me.

EGreg•44m ago
"This holds for almost all matrices" is actually something you'd want to know if we're talking about probabilities, no?
EGreg•1h ago
You're right, I'm not a well-known researcher, simply an entrepreneur who started to publish academic papers.

However, I do have a long history of diving deep into fields and building practical, open-source solutions to major problems I perceive in the fields.

15 years ago I started with social networks and PHP: https://github.com/Qbix http://laweekly.com/restoring-healthy-communities/

8 years ago I got into smart contracts on EVM, which was the SOTA at the time: https://github.com/Intercoin https://intercoin.org/applications

About a year and a half ago I started teaching a course on AI at a university not far from NYU where I studied... and that's what got me into this: https://vimeo.com/1063008765/c7ef3abcc5

I try to document everything on GitHub and popular articles, but only recently started publishing academic papers on arXiv and plan to actually start submitting them for real publications. While I build, I realized that I should start publishing any novel theoretical results that underpin my work.

I plan to publish actual code in a few weeks. To be fair, TurboQuant is also a purely theoretical paper. I just wanted to get this out and share.

thethirdone•56m ago
> To be fair, TurboQuant is also a purely theoretical paper. I just wanted to get this out and share.

TurboQuant is not a purely theoretical paper. Section 4 "Experiments" (page 15) [0] has a bunch of figures based on actual GPU computations.

[0]: https://arxiv.org/abs/2504.19874

stingraycharles•42m ago
TurboQuant went through ICLR review, has multiple Google Research co-authors, open-source implementations, CUDA kernels, and LongBench benchmarks.

Contrast that with your paper: no experiments, no implementation, no empirical validation of any kind.

Did you try engaging with LLM researchers and get their feedback on your paper?

sabareesh•1h ago
Sounds like speculative decoding but for KV cache
aesthesia•1h ago
> The second layer, predictive delta coding, stores only the residual of each new KV vector from the model's own prediction of it

I don't understand this. The key and value vectors for any given layer + token are created by the model. By definition, they are exactly equal to the model's prediction of them!

Extreme KV cache compression is easy to achieve---you can get an infinite compression ratio by just regenerating the key and value vectors on every forward pass. The point of a KV cache is to reduce the amount of repeated computation during generation, though. Compression only helps if you have an efficient decompression algorithm.
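The point about why the cache exists can be made concrete with a toy single-head attention step. Both loops below produce identical outputs; the difference is that the uncached version redoes t+1 projections at step t, while the cached one appends a single row (all matrices here are random stand-ins, and softmax details are simplified):

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 8, 5
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
xs = rng.normal(size=(T, d))  # toy token embeddings

def attend(q, K, V):
    w = np.exp(q @ K.T / np.sqrt(d))
    return (w / w.sum()) @ V

# Without a cache: recompute every K and V row from scratch at each step.
out_nocache = []
for t in range(T):
    K = xs[: t + 1] @ Wk
    V = xs[: t + 1] @ Wv
    out_nocache.append(attend(xs[t] @ Wq, K, V))

# With a cache: append one new K, V row per step -- the whole point of a KV cache.
K_cache, V_cache = np.empty((0, d)), np.empty((0, d))
out_cache = []
for t in range(T):
    K_cache = np.vstack([K_cache, xs[t] @ Wk])
    V_cache = np.vstack([V_cache, xs[t] @ Wv])
    out_cache.append(attend(xs[t] @ Wq, K_cache, V_cache))
```

"Regenerate everything" is thus lossless and free in memory but quadratic-ish in compute, which is exactly the trade the comment says compression schemes must not quietly reintroduce.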

binsquare•1h ago
Incredulous claims and unreviewed paper.

Attention really is all you need.

CyberDildonics•1h ago
I think you mean incredible claims. You would be incredulous about them.
binsquare•12m ago
You're right, thanks for the correction
EGreg•55m ago
The prediction being used is the model's prediction of the next token's KV vector, given all previous KV vectors. Because the model was trained on language, it has strong priors about what comes next. The residual, i.e. the difference between the predicted next KV vector and the actual one, is much smaller in entropy than the raw vector, for the same reason language-model perplexity is low on fluent text.
aesthesia•42m ago
What model is doing this prediction? The only way a transformer predicts the "next KV vector" is by sampling the next token and then running a forward pass with that token.
EGreg•14m ago
The predicted KV vector is the expected KV vector under the model's distribution over next tokens, i.e. a weighted average over the vocabulary, not an actual sampled token. So no forward pass with a sampled token is involved. Yes, the exact computation is expensive (one forward pass per vocabulary token), which the paper acknowledges, and the practical section covers top-k approximations that capture most of the probability mass cheaply. The entropy bound holds regardless of approximation scheme -- it's a statement about the theoretical floor. The residual is small whenever the model assigns high probability to the actual next token, which is exactly what low perplexity means.
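The "expected KV vector" and its top-k approximation can be sketched with synthetic numbers. Everything here is a hypothetical stand-in: a made-up next-token distribution concentrated on a few tokens, and a random per-token KV table in place of real forward passes:

```python
import numpy as np

rng = np.random.default_rng(2)
vocab, d = 1000, 16
# Hypothetical next-token distribution, with mass concentrated on 5 tokens.
logits = rng.normal(size=vocab)
logits[:5] += 10
p = np.exp(logits - logits.max())
p /= p.sum()
# Hypothetical KV vector each candidate token would produce.
kv_per_token = rng.normal(size=(vocab, d))

# Exact predicted KV vector: expectation over the full vocabulary.
exact = p @ kv_per_token

# Top-k approximation: keep the k most probable tokens and renormalize.
k = 5
top = np.argsort(p)[-k:]
approx = (p[top] / p[top].sum()) @ kv_per_token[top]
```

When the distribution is peaked (low perplexity), the top-k tokens carry most of the probability mass, so the cheap approximation lands close to the full expectation; on a flat distribution it would not.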
aesthesia•7m ago
A top-k approximation still requires k forward passes; that's k times as expensive as just computing the exact value. Unless you're doing a prefix-unconditional prediction, in which case you still likely need quite a large token -> vector dictionary, and, particularly for inner layers, a significant amount of information will be left in the residual.