
Sebastian Galiani on the Marginal Revolution

https://marginalrevolution.com/marginalrevolution/2026/02/sebastian-galiani-on-the-marginal-revol...
1•paulpauper•1m ago•0 comments

Ask HN: Are we at the point where software can improve itself?

1•ManuelKiessling•2m ago•0 comments

Binance Gives Trump Family's Crypto Firm a Leg Up

https://www.nytimes.com/2026/02/07/business/binance-trump-crypto.html
1•paulpauper•2m ago•0 comments

Reverse engineering Chinese 'shit-program' for absolute glory: R/ClaudeCode

https://old.reddit.com/r/ClaudeCode/comments/1qy5l0n/reverse_engineering_chinese_shitprogram_for/
1•edward•2m ago•0 comments

Indian Culture

https://indianculture.gov.in/
1•saikatsg•5m ago•0 comments

Show HN: Maravel-Framework 10.61 prevents circular dependency

https://marius-ciclistu.medium.com/maravel-framework-10-61-0-prevents-circular-dependency-cdb5d25...
1•marius-ciclistu•5m ago•0 comments

The age of a treacherous, falling dollar

https://www.economist.com/leaders/2026/02/05/the-age-of-a-treacherous-falling-dollar
2•stopbulying•5m ago•0 comments

Ask HN: AI Generated Diagrams

1•voidhorse•8m ago•0 comments

Microsoft Account bugs locked me out of Notepad – are Thin Clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
2•josephcsible•8m ago•0 comments

Show HN: A delightful Mac app to vibe code beautiful iOS apps

https://milq.ai/hacker-news
2•jdjuwadi•11m ago•1 comments

Show HN: Gemini Station – A local Chrome extension to organize AI chats

https://github.com/rajeshkumarblr/gemini_station
1•rajeshkumar_dev•11m ago•0 comments

Welfare states build financial markets through social policy design

https://theloop.ecpr.eu/its-not-finance-its-your-pensions/
2•kome•15m ago•0 comments

Market orientation and national homicide rates

https://onlinelibrary.wiley.com/doi/10.1111/1745-9125.70023
3•PaulHoule•15m ago•0 comments

California urges people avoid wild mushrooms after 4 deaths, 3 liver transplants

https://www.cbsnews.com/news/california-death-cap-mushrooms-poisonings-liver-transplants/
1•rolph•16m ago•0 comments

Matthew Shulman, co-creator of Intellisense, died 2019 March 22

https://www.capenews.net/falmouth/obituaries/matthew-a-shulman/article_33af6330-4f52-5f69-a9ff-58...
3•canucker2016•17m ago•1 comments

Show HN: SuperLocalMemory – AI memory that stays on your machine, forever free

https://github.com/varun369/SuperLocalMemoryV2
1•varunpratap369•18m ago•0 comments

Show HN: Pyrig – One command to set up a production-ready Python project

https://github.com/Winipedia/pyrig
1•Winipedia•20m ago•0 comments

Fast Response or Silence: Conversation Persistence in an AI-Agent Social Network [pdf]

https://github.com/AysajanE/moltbook-persistence/blob/main/paper/main.pdf
1•EagleEdge•20m ago•0 comments

C and C++ dependencies: don't dream it, be it

https://nibblestew.blogspot.com/2026/02/c-and-c-dependencies-dont-dream-it-be-it.html
1•ingve•21m ago•0 comments

Show HN: Vbuckets – Infinite virtual S3 buckets

https://github.com/danthegoodman1/vbuckets
1•dangoodmanUT•21m ago•0 comments

Open Molten Claw: Post-Eval as a Service

https://idiallo.com/blog/open-molten-claw
1•watchful_moose•21m ago•0 comments

New York Budget Bill Mandates File Scans for 3D Printers

https://reclaimthenet.org/new-york-3d-printer-law-mandates-firearm-file-blocking
2•bilsbie•22m ago•1 comments

The End of Software as a Business?

https://www.thatwastheweek.com/p/ai-is-growing-up-its-ceos-arent
1•kteare•23m ago•0 comments

Exploring 1,400 reusable skills for AI coding tools

https://ai-devkit.com/skills/
1•hoangnnguyen•24m ago•0 comments

Show HN: A unique twist on Tetris and block puzzle

https://playdropstack.com/
1•lastodyssey•27m ago•1 comments

The logs I never read

https://pydantic.dev/articles/the-logs-i-never-read
1•nojito•29m ago•0 comments

How to use AI with expressive writing without generating AI slop

https://idratherbewriting.com/blog/bakhtin-collapse-ai-expressive-writing
1•cnunciato•30m ago•0 comments

Show HN: LinkScope – Real-Time UART Analyzer Using ESP32-S3 and PC GUI

https://github.com/choihimchan/linkscope-bpu-uart-analyzer
1•octablock•30m ago•0 comments

Cppsp v1.4.5–custom pattern-driven, nested, namespace-scoped templates

https://github.com/user19870/cppsp
1•user19870•31m ago•1 comments

The next frontier in weight-loss drugs: one-time gene therapy

https://www.washingtonpost.com/health/2026/01/24/fractyl-glp1-gene-therapy/
2•bookofjoe•34m ago•1 comments

Understanding Transformers via N-gram Statistics

https://arxiv.org/abs/2407.12034
139•pona-a•8mo ago

Comments

justanotherjoe•8mo ago
Sounds regressive, and feeds into the weird unintellectual narrative that LLMs are just n-gram models (lol, lmao even).

The author submitted like 10 papers this May alone. Is that weird?

ninjin•8mo ago
These are different people:

https://arxiv.org/search/cs?searchtype=author&query=Nguyen,+...

Wikipedia mentions that up to ~40% of the Vietnamese population (~40,000,000 people) carries the name Nguyen:

https://en.wikipedia.org/wiki/Nguyen

For the paper itself, as someone working in the field, I find it interesting enough to consider reading at some point (I have not read many analysis papers recently, but this one looks better than most). As for your accusation that it claims large language models are simply n-gram models: read the abstract until you realise that the accusation is very much unfair to the work.

ayhanfuat•8mo ago
> Thr author submitted like 10 papers this May alone. Is that weird?

Chances are, you just assumed all the search results for 'Nguyen, T' refer to the same author.

justanotherjoe•8mo ago
I did. My bad.

maz1b•8mo ago
How does this have 74 points and only one comment?

On topic: couldn't one, in theory, re-publish this kind of paper for different kinds of LLMs, since the textual corpus on which LLMs are built is ultimately, at some level, the product of human effort and human input, whether writing or typing?

nickpsecurity•8mo ago
"How does this have 74 points and only one comment?"

I think one cause is hobbyists upvoting submissions that might be valuable to people in a specific field. We understand just enough to think it could be important but defer to subject matter experts on the rest. That's why I upvoted it.

gwern•8mo ago
https://en.wikipedia.org/wiki/Warnock%27s_dilemma

montebicyclelo•8mo ago
> The results we obtained in Section 7 imply that, at least on simple datasets like TinyStories and Wikipedia, LLM predictions contain much quantifiable structure insofar that they often can be described in terms of our simple statistical rules

> we find that for 79% and 68% of LLM next-token distributions on TinyStories and Wikipedia, respectively, their top-1 predictions agree with those provided by our N-gram rulesets

Two prediction methods may have completely different mechanisms, but agree sometimes, because they are both predicting the same thing.

It seems a fairly large proportion of language can be predicted by a simpler model. But it's the remaining fraction that's the difficult part, which simple `n-gram` models are bad at and transformers are really good at.
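
To make that concrete, here is a minimal sketch of measuring top-1 agreement between an n-gram rule and any next-token predictor. The `llm_top1` callable, the abstention rule, and the interfaces are illustrative assumptions, not the paper's actual ruleset:

  from collections import Counter, defaultdict

  def build_ngram(tokens, n):
      # Map each (n-1)-token context to a Counter of observed next tokens.
      table = defaultdict(Counter)
      for i in range(len(tokens) - n + 1):
          table[tuple(tokens[i:i + n - 1])][tokens[i + n - 1]] += 1
      return table

  def ngram_top1(table, ctx):
      # Most frequent continuation for this context, or None if unseen.
      counts = table.get(tuple(ctx))
      return counts.most_common(1)[0][0] if counts else None

  def top1_agreement(tokens, table, llm_top1, n):
      # Fraction of positions where the n-gram rule fires and its top-1
      # pick matches the LLM's top-1 pick (llm_top1 is any model API).
      hits = total = 0
      for i in range(n - 1, len(tokens)):
          guess = ngram_top1(table, tokens[i - n + 1:i])
          if guess is None:
              continue  # rule abstains on unseen contexts
          total += 1
          hits += guess == llm_top1(tokens[:i])
      return hits / max(total, 1)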

fennecbutt•8mo ago
I've always thought that LLMs are still just statistical machines, and that their output is very similar to the superpermutation problem, though not exactly the same.

I just like to think of it as a high-dimensional view of the relationships between various words, where the output is the result of continuing a path taken through that high-dimensional space, and each point's probability of selection changes with each token in the sequence.

Unfortunately, there's no thought or logic really going on there in the simplest cases, as far as I can understand it. Though for more complex models or different architectures, anything that fundamentally changes the way the model explores a path through that space could be implementing thought or logic, I suppose.

It's why they need to outsource mathematics for the most part.
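
That "path through a high-dimensional space" picture corresponds to the ordinary autoregressive sampling loop. A toy sketch, where `model_logits` is a stand-in for any model that scores the sequence so far:

  import numpy as np

  def sample_next(logits, rng):
      # Softmax over the vocabulary, then sample one token id.
      p = np.exp(logits - logits.max())
      p /= p.sum()
      return int(rng.choice(len(p), p=p))

  def generate(model_logits, prompt_ids, steps, seed=0):
      # Each appended token changes the conditioning, so the next-token
      # distribution shifts at every step along the path.
      rng = np.random.default_rng(seed)
      seq = list(prompt_ids)
      for _ in range(steps):
          seq.append(sample_next(model_logits(seq), rng))
      return seq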

pona-a•8mo ago
I wonder if these N-gram reduced models, augmented with confidence measures, can act as a very fast speculative decoder. Or maybe the sheer number of explicit rules unfolded from the compressed latent representation will make it impractical.
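
A rough sketch of the control flow (greedy acceptance and one verification call per token for clarity; a real speculative decoder verifies the whole draft in a single batched forward pass, and `ngram_draft` / `target_top1` are assumed interfaces):

  def speculative_step(ngram_draft, target_top1, seq, k):
      # Draft up to k tokens cheaply from n-gram rules.
      draft, ctx = [], list(seq)
      for _ in range(k):
          token = ngram_draft(ctx)
          if token is None:  # low confidence: stop drafting
              break
          draft.append(token)
          ctx.append(token)
      # Verify against the target model; keep the longest agreeing prefix.
      accepted = []
      for token in draft:
          if target_top1(seq + accepted) != token:
              break
          accepted.append(token)
      return accepted
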
nickpsecurity•8mo ago
I'd also like to see a list of similarly simple techniques for extracting rules, so that ML researchers could automatically try them all. In this case, the N-gram rules would be the starting point. For whatever predictions failed, they'd throw in the other techniques. Eventually most or all of the predictions should be captured by one or more simple rules. Some might be compound rules mixing techniques.

I think there would also be benefits both in interpretability and in hardware acceleration. In time, maybe cheaper pretraining of useful models.
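
A sketch of such a cascade, with made-up interfaces; each technique either returns a prediction with a confidence or abstains:

  def cascade_predict(rulesets, ctx):
      # rulesets: (predict, threshold) pairs ordered simplest-first,
      # e.g. n-gram rules, then skip-gram rules, then fancier extractors.
      for predict, threshold in rulesets:
          result = predict(ctx)  # -> (token, confidence) or None
          if result is not None and result[1] >= threshold:
              return result[0]
      return None  # no simple rule fired; defer to the full model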

pona-a•8mo ago
I don't have a list, but another popular one was this [0]. They trained a one layer attention-only transformer and could extract its weights as bigrams and skip-trigrams ("A… B C").

[0] https://transformer-circuits.pub/2021/framework/index.html
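
For illustration, a skip-trigram "A… B C" is some token A anywhere in a recent window, followed by an adjacent pair B C. The sketch below merely counts them in a corpus; the linked work instead reads them off the one-layer model's weights:

  from collections import Counter

  def skip_trigram_counts(tokens, window=8):
      # Count "A ... B C": token A occurs within `window` positions
      # before the adjacent pair (B, C).
      counts = Counter()
      for i in range(1, len(tokens) - 1):
          b, c = tokens[i], tokens[i + 1]
          for a in tokens[max(0, i - window):i]:
              counts[a, b, c] += 1
      return counts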

ggamecrazy•8mo ago
They literally can! The exact speculative method is supported on vLLM using `speculative_model="[ngram]"`[1]

1: https://docs.vllm.ai/en/latest/features/spec_decode.html#spe...
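
For reference, usage looked roughly like this around the time of the thread (the model name is a placeholder and the argument names have moved between vLLM releases, so defer to the linked docs):

  from vllm import LLM, SamplingParams

  # N-gram speculative decoding: draft tokens come from prompt lookup,
  # so no separate draft model is loaded.
  llm = LLM(
      model="facebook/opt-6.7b",  # placeholder model
      speculative_model="[ngram]",
      num_speculative_tokens=5,
      ngram_prompt_lookup_max=4,
  )
  outputs = llm.generate(["The quick brown fox"], SamplingParams(max_tokens=32))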

pona-a•8mo ago
Not quite. The paper uses its own N-gram rules with positive/negative/invariant weights as a rudimentary attention, and these rules are distilled from the model itself.

This, as I found out from this repo [0] linked from the Twitter thread in the documentation (which for some reason they didn't just link to directly), seems to be a plain Markov chain over the context, if it even builds a stochastic matrix. See the algorithm below.

  Current prompt
  "Article: (CNN)French striker Bafetimbi Gomis, who has a history of [...]
  Summary: French stri"

  Prompt lookup algorithm
  1. Get last few tokens from prompt -"French stri"
  2. Search for "French stri" in prompt
  3. Match found - return next k tokens after match as candidate completion -"ker Bafetimbi Gomis, who has"

  Candidate tokens
  "ker Bafetimbi Gomis, who has"
[0] https://github.com/apoorvumang/prompt-lookup-decoding
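
The described lookup condenses to a few lines; a sketch with illustrative parameter names:

  def prompt_lookup(tokens, max_ngram=3, k=8):
      # 1. Take the last n tokens of the context as the search pattern.
      # 2. Look for an earlier occurrence of that pattern in the context.
      # 3. Return the tokens that followed it as the candidate completion.
      for n in range(max_ngram, 0, -1):  # prefer longer matches
          pattern = tokens[-n:]
          for i in range(len(tokens) - n - 1, -1, -1):  # most recent first
              if tokens[i:i + n] == pattern:
                  return tokens[i + n:i + n + k]
      return []  # no match: nothing to speculate
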
bilsbie•8mo ago
Interesting! Makes me wonder if you could replace transformers with some sort of fancy Markov chain. Maybe with a meta chain that acts as attention.

cschmidt•8mo ago
This paper was accepted as a poster at NeurIPS 2024, so it isn't just a preprint. There is a presentation video and slides here:

https://neurips.cc/virtual/2024/poster/94849

The underlying data has been open-sourced, as discussed on the author's blog: https://timothynguyen.org/2024/11/07/open-sourced-my-work-on...