frontpage.

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•1m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•1m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
1•phi-system•1m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
1•vkelk•2m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
1•mmoogle•3m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
2•saikatsg•4m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•5m ago•1 comment

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•8m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•9m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•11m ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•11m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•14m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•17m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•18m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•19m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•19m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
2•lysace•20m ago•0 comments

Zen Tools

http://postmake.io/zen-list
2•Malfunction92•23m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
2•carnevalem•23m ago•1 comment

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•25m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
2•rcarmo•26m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•27m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•27m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
3•Brajeshwar•27m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•27m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•28m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•28m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•30m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•35m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•36m ago•2 comments

Hacking Diffusion into Qwen3 for the Arc Challenge

https://www.matthewnewton.com/blog/arc-challenge-diffusion
126•mattnewton•6mo ago

Comments

gen3•6mo ago
Incredibly cool work, and a great primer on diffusion
ProofHouse•6mo ago
Are you aware of more in-depth material outside of research papers, which I've mostly read already?
mNovak•6mo ago
Really interesting to see the diffusion model solve the puzzles in an iterative way, which feels more similar to how I (and probably most humans) solve them.

Outwardly, it seems to be limited by unmasking too few tokens per round, even when the heatmap shows many more high-confidence guesses available. On some of the larger puzzles it looks like it's wasting many rounds filling in the 'obvious' shapes, and then gets the interesting bit in the last round. It also doesn't seem to have learned the idea of "the background is blue with shapes drawn on top," where background is often 50% of the solution in these puzzles.
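
For concreteness, here's a minimal sketch of the kind of less conservative unmasking schedule that observation suggests: commit every masked position whose top prediction clears a confidence threshold, rather than a fixed few per round. All names and the threshold are hypothetical, not taken from the post:

    import torch

    def unmask_step(logits, is_masked, threshold=0.9, min_k=1):
        # One diffusion round: unmask every masked position whose top
        # prediction clears `threshold`; fall back to the `min_k` most
        # confident positions so the loop always makes progress.
        probs = torch.softmax(logits, dim=-1)            # (seq, vocab)
        conf, tokens = probs.max(dim=-1)                 # per-position confidence
        conf = torch.where(is_masked, conf, torch.full_like(conf, -1.0))
        chosen = is_masked & (conf >= threshold)
        if chosen.sum() < min_k:                         # too cautious: take top-k anyway
            chosen[conf.topk(min_k).indices] = True
        return tokens, chosen                            # commit tokens[chosen] this round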

namibj•6mo ago
Sadly, to really reduce the iteration count you need a retraction/correction mechanism, so the diffusion isn't locked in on a bad choice.
twotwotwo•6mo ago
It is kind of wild that most coding tasks are editing tasks, and we humans care a lot about code editing tools, but automated tools use code generation for editing where a valid block must be generated top-to-bottom in one go.

Fixing a mistake requires re-generating the file or block of code. Or, if something generated later has implications for earlier code--a new import or function parameter is required, something like that--the only option is to go back and re-generate a big chunk. That'd be inefficient for humans, and it's not implausible that it's wrong for other code generators too.

I don't know if diffusion specifically will be the approach. (Maybe there's something to generating edit sequences?) This post's note that diffusion kills KV caching is something I hadn't even considered. It does seem right to experiment with things other than strict start-to-end generation.

imtringued•6mo ago
A massive problem with current-generation LLMs is that they have a single, globally ordered context, and that the model is only allowed to append to it.

This is like having a single-tape Turing machine. They can simulate a multi-tape machine, but only at O(n^2) complexity.

The computation budget of an LLM is finite, so this has a massive practical impact.

cubefox•6mo ago
The article explains that this is not a problem but an advantage.
Sabinus•6mo ago
Apart from execution speedup due to caching, how does the article explain this is an advantage?
mattnewton•5mo ago
I wouldn't agree it's an advantage per se, just that the caching workaround works better than you would expect! It's still quite slow to generate one token at a time versus diffusing batches; it's just hard to cache all the work you do in each encoder step.
namibj•6mo ago
You can still cache prompts; this just affects the cache for tokens produced during generation. And that's fairly harmless, relatively speaking.
yorwba•6mo ago
If you completely do away with autoregression, prompt tokens can pay attention to generated tokens, so even the prompt tokens' KV vectors change at every step and you cannot cache anything.

For this reason, models that generate text using diffusion typically generate blocks of tokens at a time, where tokens within a block freely attend to each other, but across blocks there's causal masking so that each block only depends on the preceding ones and we're back to autoregression again. That makes caching possible, but also means you still can't have diffusion change the beginning of a long text to match the end.
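
A quick sketch of the mask being described here (my paraphrase in code, not from the article): full attention within a block, causal attention across blocks.

    import torch

    def block_causal_mask(seq_len: int, block: int) -> torch.Tensor:
        # True = may attend. Tokens see their whole own block and all
        # earlier blocks, but never later blocks.
        idx = torch.arange(seq_len)
        q_block = idx[:, None] // block   # block id of each query position
        k_block = idx[None, :] // block   # block id of each key position
        return k_block <= q_block

    # block_causal_mask(4, 2):
    # [[True, True, False, False],
    #  [True, True, False, False],
    #  [True, True, True,  True ],
    #  [True, True, True,  True ]]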

namibj•6mo ago
I specifically mean prompts here, and I don't mean they'd have causal attention. Just run an encoder to get your KV-cache pre-fill of the prompt, then do non-causal diffusion generation of the response referencing the cached prompt, without re-encoding the prompt.

You don't need to fall back to chunks to enjoy prompt caching, especially if you use it in a RAG-type way with minor provisions to allow KV-caching the RAG fragments (a bunch of work has been done on that; IIRC even DeepSeek-V3 would allow it).
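
A toy single-head attention sketch of that scheme (made-up dimensions, no real model): the prompt's K/V projections are computed once and reused, while each diffusion step re-encodes only the response tokens.

    import torch
    import torch.nn.functional as F

    d = 64
    Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
    prompt = torch.randn(10, d)                    # prompt embeddings
    prompt_k, prompt_v = prompt @ Wk, prompt @ Wv  # cached once, never recomputed

    def diffusion_step(resp):
        # resp: current (noisy) response embeddings; only these get
        # fresh K/V projections each step.
        q = resp @ Wq
        k = torch.cat([prompt_k, resp @ Wk])
        v = torch.cat([prompt_v, resp @ Wv])
        attn = F.softmax(q @ k.T / d**0.5, dim=-1)  # non-causal over prompt+response
        return attn @ v

    resp = torch.randn(5, d)
    for _ in range(4):                             # prompt is never re-encoded
        resp = diffusion_step(resp)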

radarsat1•6mo ago
Regarding the typewriter approach, I've wondered for a while if anyone has explored simple backtracking with LLMs? Like, have the LLM be able to generate a backspace/delete token that lets it "undo" previously generated tokens in an append-only fashion. Not sure how this would work with teacher forcing but seems feasible with RL.
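
Mechanically, the trace would stay append-only and only its interpretation shrinks. A minimal sketch of that idea (hypothetical <bs> token, nothing model-specific):

    BACKSPACE = "<bs>"

    def effective_text(trace):
        # Collapse a trace containing backspace tokens into the text it
        # denotes; the trace itself is never rewritten.
        out = []
        for tok in trace:
            if tok == BACKSPACE:
                if out:
                    out.pop()          # undo the last effective token
            else:
                out.append(tok)
        return out

    trace = ["def", "ad", BACKSPACE, "add", "(", "a", ")"]
    assert effective_text(trace) == ["def", "add", "(", "a", ")"]
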
_diyar•6mo ago
With current LLMs, this is meaningless, because the current state is stored in the "context" (system prompt, user prompt, chat output so far). So if you apply a backspace token, you just end up where you were a second ago.

I.e., at state A, you have decided to append token i to move to state B. Removing token i just sets you back to state A, where you would again just pick token i. (Note that this ignores the small probabilistic component of next-token selection.)

In the RL/reasoning world of LLMs, you can instead just reward correct final output without policing the reasoning steps, and a strong model should learn to backtrack on its "thoughts" as appropriate (without removing them from the context).

Edit: wording.

saurik•6mo ago
I think the idea is that the backspace would be a token, indelibly in the history, as it is something that happened: if you record editor traces, the fact that I previously typed something and chose to delete it matters for my current state.
radarsat1•6mo ago
Exactly what I had in mind. After generating a few wrong tokens, perhaps the model could realize it's heading down a low-probability path and have a way to "go back" while staying in context. Parent is right, though, that thinking models can kind of do that without a special token; I hadn't thought about that, nice observation.
namibj•6mo ago
Just a reminder that transformers are devoid of any ordering/sequence concept until you feed one in via positional encoding. It'd be easy to flag retracted tokens as such (e.g. pointing an input token one direction or the opposite, similar to how RoPE encodes into directional modulation/wobble), but otherwise represent the malleable edit state with the positional encoding and accept overlap (just probably make sure autoregressive decoding/causal self-attention lets tokens interact preferentially with their immediate neighbors _of the same attempt/edit-revision_).
cchance•6mo ago
I do think that might be useful, as it might help the LLM realize that it already made a mistake, and that the mistake and the memory of that mistake still exist and aren't just erased from its context.
dev_hugepages•6mo ago
Research on the backspace token: https://arxiv.org/abs/2306.05426

> [...] The IL framework also allows us to incorporate backtracking by introducing a backspace action into the generation process [...]
radarsat1•6mo ago
Very interesting paper, even not considering the backspace stuff, thanks for the link. Pretty cool how that seems to tie in with more recent work on applying pure RL to LLM training.
mattnewton•6mo ago
So, not exactly the same thing at all, but the ARChitects do a really cool thing I didn't have time to talk about in this post, which is a kind of depth-first search with a cumulative "minimum probability" threshold for backing out of a path. This lets the model kind of reason ahead a few tokens, then back out if it doesn't look like it's going well and try the next most likely token. https://github.com/da-fr/arc-prize-2024/blob/main/training_c...

You can imagine something like that for any autoregressive LLM, but it probably needs some heavy heuristics. Here there are only about 11 valid tokens (end of line, 1-9, or end of sequence); other use cases are going to have way more options, making this far less tractable.
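
A toy version of that search, with a hypothetical step_logprobs(prefix) callable standing in for the model and a made-up pruning threshold; the real implementation is in the repo linked above:

    def dfs_decode(step_logprobs, eos, min_logp=-6.0):
        # Depth-first decode: extend the most likely token first; prune
        # any branch whose cumulative log-prob drops below min_logp.
        stack = [((), 0.0)]
        while stack:
            prefix, logp = stack.pop()
            if prefix and prefix[-1] == eos:
                return prefix                  # first completion to survive pruning
            ranked = sorted(step_logprobs(prefix).items(), key=lambda kv: kv[1])
            for tok, lp in ranked:             # best option pushed last, popped first
                if logp + lp >= min_logp:
                    stack.append((prefix + (tok,), logp + lp))
        return None                            # every path fell below the threshold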

klintcho•6mo ago
How would it be different from regular beam search?
radarsat1•6mo ago
Yes, I thought of the analogy with beam search, but here I'm proposing to add the backspace tokens to the context, not to actually rewind the context.