frontpage.

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
50•thelok•3h ago•6 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
114•AlexeyBrin•6h ago•20 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
49•vinhnx•4h ago•7 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
809•klaussilveira•21h ago•246 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
72•onurkanbkrc•6h ago•5 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
88•1vuio0pswjnm7•7h ago•99 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1053•xnx•1d ago•599 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
470•theblazehen•2d ago•173 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
196•jesperordrup•11h ago•67 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
8•surprisetalk•59m ago•1 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
534•nar001•5h ago•248 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
42•alephnerd•1h ago•14 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
204•alainrk•6h ago•309 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
33•rbanffy•4d ago•5 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
25•marklit•5d ago•1 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
63•mellosouls•4h ago•67 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
110•videotopia•4d ago•30 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
67•speckx•4d ago•70 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
21•sandGorgon•2d ago•10 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
271•isitcontent•21h ago•36 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
199•limoce•4d ago•109 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
284•dmpetrov•21h ago•151 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
553•todsacerdoti•1d ago•267 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
424•ostacke•1d ago•110 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
41•matt_d•4d ago•16 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
348•eljojo•1d ago•214 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
367•vecti•23h ago•167 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
466•lstoll•1d ago•308 comments

Hacking Diffusion into Qwen3 for the Arc Challenge

https://www.matthewnewton.com/blog/arc-challenge-diffusion
126•mattnewton•6mo ago

Comments

gen3•6mo ago
Incredibly cool work, and a great primer on diffusion
ProofHouse•6mo ago
Are you aware of more in-depth material outside of research papers, which I've mostly read already?
mNovak•6mo ago
Really interesting to see the diffusion model solve the puzzles in an iterative way, which feels more similar to how I (and probably most humans) solve them.

Outwardly, it seems to be limited by unmasking too few tokens per round, even when the heatmap shows many more high-confidence guesses available. On some of the larger puzzles it looks like it's wasting many rounds filling in the 'obvious' shapes, and then gets the interesting bit in the last round. It also doesn't seem to have learned the idea of "the background is blue with shapes drawn on top," where background is often 50% of the solution in these puzzles.
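
A rough sketch of what a confidence-based unmasking schedule could look like, rather than a fixed number of tokens per round (thresholds and names here are illustrative, not from the post):

    import numpy as np

    def unmask_step(confidences, masked, threshold=0.9, min_k=2):
        """One diffusion round: unmask every masked position whose max
        token probability exceeds `threshold`, but always unmask at
        least `min_k` positions so the process keeps moving.

        confidences: (seq_len,) max softmax probability per position
        masked:      (seq_len,) bool, True where still masked
        Returns the indices to unmask this round.
        """
        candidates = np.where(masked & (confidences >= threshold))[0]
        if len(candidates) < min_k:
            # fall back to the top-k most confident masked positions
            masked_idx = np.where(masked)[0]
            order = np.argsort(-confidences[masked_idx])
            candidates = masked_idx[order[:min_k]]
        return candidates

    # toy example: 6 masked cells with varying confidence
    conf = np.array([0.99, 0.97, 0.95, 0.6, 0.5, 0.4])
    mask = np.ones(6, dtype=bool)
    print(unmask_step(conf, mask))  # -> [0 1 2], the 'obvious' cells first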

namibj•6mo ago
Sadly, to really reduce the iteration count you need a retraction/correction mechanism so the diffusion isn't locked in on a bad choice.
twotwotwo•6mo ago
It is kind of wild that most coding tasks are editing tasks, and we humans care a lot about code editing tools, but automated tools use code generation for editing where a valid block must be generated top-to-bottom in one go.

Fixing a mistake requires re-generating the file or block of code. Or, if something generated later has implications for earlier code--a new import or function parameter is required, something like that--the only option is to go back and re-generate a big chunk. That'd be inefficient for humans; it's not implausible that it's wrong for other code generators too.

I don't know if diffusion specifically will be the approach. (Maybe there's something to generating edit sequences?) This post's note that diffusion kills KV caching is something I hadn't even considered. It does seem right to experiment with things other than strict start-to-end generation.
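
As a toy illustration of the "edit sequences" idea, here is one hypothetical format: the model emits small find/replace operations against the existing file instead of regenerating the whole block (names and format are made up, not from any particular tool):

    from typing import NamedTuple

    class Edit(NamedTuple):
        old: str   # exact text to find in the current file
        new: str   # replacement text

    def apply_edits(source: str, edits: list[Edit]) -> str:
        for e in edits:
            if e.old not in source:
                raise ValueError(f"anchor not found: {e.old!r}")
            source = source.replace(e.old, e.new, 1)
        return source

    code = "def area(r):\n    return 3.14 * r * r\n"
    patched = apply_edits(code, [
        Edit("3.14", "math.pi"),
        Edit("def area(r):", "import math\n\ndef area(r):"),
    ])
    print(patched)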

imtringued•6mo ago
A massive problem with current-generation LLMs is that they have a single, globally ordered context and that the model is only allowed to append to it.

This is like having a single-tape Turing machine. They can simulate a multi-tape machine, but at O(n^2) complexity.

The computation budget of an LLM is finite, so this has a massive practical impact.

cubefox•6mo ago
The article explains that this is not a problem but an advantage.
Sabinus•6mo ago
Apart from execution speedup due to caching, how does the article explain this is an advantage?
mattnewton•5mo ago
I wouldn't agree it's an advantage per se, just that the caching workaround works better than you would expect! It's still quite slow to generate one token at a time versus diffusing batches; it's just hard to cache all the work you do in each encoder step.
namibj•6mo ago
You can still cache prompts; this just affects the cache for tokens produced during generation. And that's fairly harmless, relatively speaking.
yorwba•6mo ago
If you completely do away with autoregression, prompt tokens can pay attention to generated tokens, so even the prompt tokens' KV vectors change at every step and you cannot cache anything.

For this reason, models that generate text using diffusion typically generate blocks of tokens at a time, where tokens within a block freely attend to each other, but across blocks there's causal masking so that each block only depends on the preceding ones and we're back to autoregression again. That makes caching possible, but also means you still can't have diffusion change the beginning of a long text to match the end.
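
For illustration, a minimal sketch of the block-causal attention mask described above, assuming a fixed block size (not code from the article):

    import torch

    def block_causal_mask(seq_len: int, block_size: int) -> torch.Tensor:
        """Attention mask for block diffusion: positions attend to every
        position in their own block (bidirectional) and to all earlier
        blocks, but never to later blocks. True = may attend."""
        blocks = torch.arange(seq_len) // block_size        # block id per position
        return blocks.unsqueeze(1) >= blocks.unsqueeze(0)   # query block >= key block

    print(block_causal_mask(6, 2).int())
    # tensor([[1, 1, 0, 0, 0, 0],
    #         [1, 1, 0, 0, 0, 0],
    #         [1, 1, 1, 1, 0, 0],
    #         [1, 1, 1, 1, 0, 0],
    #         [1, 1, 1, 1, 1, 1],
    #         [1, 1, 1, 1, 1, 1]])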

namibj•6mo ago
I specifically mean prompts here, and I don't mean they'd have causal attention. Just run an encoder to get your KV-cache pre-filling of the prompt, then do non-causal diffusion generation of the response, referencing the cached prompt without re-encoding it.

You don't need to revert to chunks to enjoy prompt caching, especially if you use it in a RAG-type way, with minor provisions to allow KV caching of the RAG fragments (a bunch of work has been done on that; IIRC even DeepSeekV3 would allow it).
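
A toy, single-attention-layer sketch of that setup, assuming the prompt's keys/values are computed once and then reused at every diffusion step (this is not any particular model's API):

    import torch
    import torch.nn.functional as F

    d = 16
    W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))

    # Encode the prompt once and cache its keys/values.
    prompt = torch.randn(10, d)                    # 10 prompt tokens
    prompt_k, prompt_v = prompt @ W_k, prompt @ W_v

    def diffusion_step(response_states):
        """One toy refinement step over the response. Response tokens
        attend bidirectionally to each other and to the cached prompt;
        the prompt K/V are never recomputed."""
        q = response_states @ W_q
        k = torch.cat([prompt_k, response_states @ W_k], dim=0)
        v = torch.cat([prompt_v, response_states @ W_v], dim=0)
        attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)
        return attn @ v

    resp = torch.randn(5, d)                       # 5 (noisy) response tokens
    for _ in range(3):                             # a few refinement steps
        resp = diffusion_step(resp)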

radarsat1•6mo ago
Regarding the typewriter approach, I've wondered for a while if anyone has explored simple backtracking with LLMs? Like, have the LLM be able to generate a backspace/delete token that lets it "undo" previously generated tokens in an append-only fashion. Not sure how this would work with teacher forcing but seems feasible with RL.
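
A tiny sketch of how an append-only history with a backspace token could still yield an edited effective output (token names are hypothetical):

    BKSP = "<bksp>"

    def effective_output(history: list[str]) -> list[str]:
        """The history is append-only, but a <bksp> token removes the
        last visible token from the effective output while itself
        remaining in the context."""
        out = []
        for tok in history:
            if tok == BKSP:
                if out:
                    out.pop()           # undo the last visible token
            else:
                out.append(tok)
        return out

    history = ["The", "answer", "is", "42", BKSP, BKSP, "is", "unknown"]
    print(effective_output(history))    # ['The', 'answer', 'is', 'unknown']
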
_diyar•6mo ago
With current LLMs, this is meaningless because the current state is stored in the "context" (system prompt, user prompt, chat output so far). So if you apply a backspace token, you just end up where you started a second ago.

I.e. At state A, you have decided to append token i to move to state B. Removing token i just sets you back to state A, where you would again just pick token i. (Note that this is ignoring the fact that there's a small probabilistic component to next token selection).

In the RL/reasoning world of LLMs, you can instead just reward correct final output without policing the reasoning steps, and a strong model should learn to backtrack on its "thoughts" as appropriate (without removing them from the context).

Edit: wording.

saurik•6mo ago
I think the idea is that the backspace would be a token, indelibly in the history, as it is something that happened: if you train on editor traces, the fact that I previously typed something and chose to delete it matters for my current state.
radarsat1•6mo ago
Exactly what I had in mind. After generating a few wrong tokens, perhaps the model could realize it leads to a low-probability path and have a way to "go back" while staying in context. Parent is right, though, that thinking models can kind of do that without a special token; I hadn't thought about that, nice observation.
namibj•6mo ago
Just a reminder that transformers are devoid of any "ordering/sequence" concept until you feed one in via positional encoding. It'd be easy to flag retracted tokens as such (e.g. pointing an input token one direction or the opposite, similar to how RoPE encodes into directional modulation/wobble), but otherwise represent the malleable edit state with the positional encoding and accept overlap (just probably make sure autoregressive decoding/causal self-attention lets tokens interact preferentially with their immediate neighbors _of the same attempt/edit-revision_).
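
A toy illustration of the "flag retracted tokens" idea, using a learned flag embedding as a simple stand-in for the RoPE-style directional trick described above (everything here is hypothetical):

    import torch
    import torch.nn as nn

    d = 32
    tok_emb = nn.Embedding(1000, d)
    retract_flag = nn.Parameter(torch.zeros(d))   # learned "this token was retracted" offset

    def embed(token_ids, retracted):
        # Retracted tokens keep their original position (so neighbours from
        # the same edit revision stay adjacent) but receive a learned offset
        # marking them as withdrawn; positions would still go through the
        # model's usual positional encoding (e.g. RoPE).
        x = tok_emb(token_ids)
        return x + retract_flag * retracted.unsqueeze(-1).float()

    ids = torch.tensor([5, 17, 17, 9])            # token 17 typed, retracted, retyped
    flags = torch.tensor([False, True, False, False])
    print(embed(ids, flags).shape)                # torch.Size([4, 32])
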
cchance•6mo ago
I do think that might be useful, as it might help the LLM realize that it already made a mistake, and that the memory of that mistake still exists rather than just being erased from its context.
dev_hugepages•6mo ago
Research on the backspace token: https://arxiv.org/abs/2306.05426

> [...] The IL framework also allows us to incorporate backtracking by introducing a backspace action into the generation process [...]
radarsat1•6mo ago
Very interesting paper, even not considering the backspace stuff, thanks for the link. Pretty cool how that seems to tie in with more recent work on applying pure RL to LLM training.
mattnewton•6mo ago
So, not exactly the same thing at all, but the ARChitects do a really cool thing I didn't have time to talk about in this post, which is a kind of depth-first search with a cumulative "minimum probability" threshold for backing out of a path. This does let the model kind of reason ahead a few tokens, and then back out if it doesn't look like it's going well and try the next most likely token. https://github.com/da-fr/arc-prize-2024/blob/main/training_c...

You can imagine something like that for any autoregressive LLM, but it probably needs some heavy heuristics. Here there are only around 11 valid tokens (end of line, 1-9, or end of sequence), and other use cases are going to have way more options, making this less tractable.
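
For illustration, a toy version of that kind of depth-first decoding with a cumulative log-probability floor (this is not the ARChitects' code; the model is stubbed out):

    import math

    def dfs_decode(logprob_fn, prefix, min_logprob, cum=0.0):
        """Depth-first decoding with a cumulative log-probability floor.
        `logprob_fn(prefix)` returns {token: logprob} for the next position;
        an empty dict means the sequence is complete."""
        options = logprob_fn(prefix)
        if not options:
            yield prefix, cum                       # finished sequence
            return
        # try the most likely continuation first, back out of the path if
        # the running log-probability drops below the threshold
        for tok, lp in sorted(options.items(), key=lambda kv: -kv[1]):
            if cum + lp < min_logprob:
                continue                            # prune this branch
            yield from dfs_decode(logprob_fn, prefix + [tok], min_logprob, cum + lp)

    # toy usage: a two-step "model" over tokens {a, b}
    def toy(prefix):
        if len(prefix) >= 2:
            return {}
        return {"a": math.log(0.7), "b": math.log(0.3)}

    print(list(dfs_decode(toy, [], min_logprob=math.log(0.2))))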

klintcho•6mo ago
How would it be different from regular beam search?
radarsat1•6mo ago
Yes, I thought of the analogy with beam search but here I am proposing to add the backspace tokens to the context, not to actually rewind the context.