frontpage.

I Hate: Programming Wayland Applications

https://www.p4m.dev/posts/29/index.html
55•dwdz•1h ago•13 comments

Flash-Moe: Running a 397B Parameter Model on a Mac with 48GB RAM

https://github.com/danveloper/flash-moe
179•mft_•4h ago•61 comments

Building an FPGA 3dfx Voodoo with Modern RTL Tools

https://noquiche.fyi/voodoo
70•fayalalebrun•3h ago•12 comments

Project Nomad – Knowledge That Never Goes Offline

https://www.projectnomad.us
127•jensgk•4h ago•20 comments

A Coherent Vision for the Future of Version Control

https://bramcohen.com/p/manyana
12•c17r•1h ago•1 comment

More common mistakes to avoid when creating system architecture diagrams

https://www.ilograph.com/blog/posts/more-common-diagram-mistakes/
60•billyp-rva•4h ago•21 comments

A case against currying

https://emi-h.com/articles/a-case-against-currying.html
44•emih•3h ago•59 comments

iBook Clamshell

https://www.ibook-clamshell.com/index.php/en/
34•polishdude20•1h ago•24 comments

A Review of Dice That Came with the White Castle

https://boardgamegeek.com/thread/3533812/a-review-of-dice-that-came-with-the-white-castle
66•doener•3d ago•10 comments

Windows native app development is a mess

https://domenic.me/windows-native-dev/
99•domenicd•6h ago•99 comments

Bored of eating your own dogfood? Try smelling your own farts

https://shkspr.mobi/blog/2026/03/bored-of-eating-your-own-dogfood-try-smelling-your-own-farts/
225•ColinWright•3h ago•132 comments

Brute-Forcing My Algorithmic Ignorance with an LLM in 7 Days

http://blog.dominikrudnik.pl/my-google-recruitment-journey-part-1
46•qikcik•4h ago•20 comments

25 Years of Eggs

https://www.john-rush.com/posts/eggs-25-years-20260219.html
156•avyfain•4d ago•47 comments

Cloudflare flags archive.today as "C&C/Botnet"; no longer resolves via 1.1.1.2

https://radar.cloudflare.com/domains/domain/archive.today
222•winkelmann•12h ago•182 comments

The IBM scientist who rewrote the rules of information just won a Turing Award

https://www.ibm.com/think/news/ibm-scientist-charles-bennett-turing-award
25•rbanffy•4h ago•3 comments

Apple's intentional crippling of Mobile Safari

https://pwa.gripe/
93•xd1936•3h ago•88 comments

My first patch to the Linux kernel

https://pooladkhay.com/posts/first-kernel-patch/
171•pooladkhay•2d ago•31 comments

Node.js worker threads are problematic, but they work great for us

https://www.inngest.com/blog/node-worker-threads
39•goodoldneon•3d ago•18 comments

Atlassian says it had right to fire engineer for suggesting CEO is 'rich jerk'

https://www.bloomberg.com/news/articles/2026-03-16/atlassian-defends-firing-worker-who-suggested-...
80•FiddlerClamp•55m ago•59 comments

A Fuzzer for the Toy Optimizer

https://bernsteinbear.com/blog/toy-fuzzer/
25•surprisetalk•5d ago•4 comments

Why Lab Coats Turned White

https://www.asimov.press/p/lab-coat
25•mailyk•3d ago•13 comments

Tinybox – A powerful computer for deep learning

https://tinygrad.org/#tinybox
550•albelfio•20h ago•319 comments

Some things just take time

https://lucumr.pocoo.org/2026/3/20/some-things-just-take-time/
788•vaylian•1d ago•250 comments

Monuses and Heaps

https://doisinkidney.com/posts/2026-03-03-monus-heaps.html
34•aebtebeten•3d ago•2 comments

The three pillars of JavaScript bloat

https://43081j.com/2026/03/three-pillars-of-javascript-bloat
407•onlyspaceghost•14h ago•239 comments

How We Synchronized Editing for Rec Room's Multiplayer Scripting System

https://www.tyleo.com/blog/how-we-synchronized-editing-for-rec-rooms-multiplayer-scripting-system
12•tyleo•4h ago•11 comments

Professional video editing, right in the browser with WebGPU and WASM

https://tooscut.app/
324•mohebifar•19h ago•118 comments

Chest Fridge (2009)

https://mtbest.net/chest-fridge/
160•wolfi1•15h ago•87 comments

Ask HN: AI productivity gains – do you fire devs or build better products?

33•Bleiglanz•6h ago•45 comments

$ teebot.dev – from terminal to tee in 6 seconds

https://teebot.dev
26•foxpress•4h ago•29 comments

Absolute Zero Reasoner

https://andrewzh112.github.io/absolute-zero-reasoner/
133•jonbaer•10mo ago

Comments

kevmo314•10mo ago
From what I can tell, this approach appears to combine "make a plan" style prompting with reinforcement learning?

That seems like a clever way to induce reasoning, since the model is incentivized by the plan reward, but does the reinforcement learning add much on top of explicitly prompting the model to make a plan and then solve the problem?

The paper describes a pretty complex-looking reasoning approach, but implementation-wise it's essentially a prompt: https://github.com/LeapLabTHU/Absolute-Zero-Reasoner/blob/ma...
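
For my own understanding, here's a hand-wavy sketch of how I read that loop (a toy of mine, not the authors' code). propose_task and solve_task are hypothetical stand-ins for sampling the same model twice, and the verifier is just the Python interpreter:

    def propose_task(model):
        # Stand-in for sampling a (program, input) task from the model.
        return "def f(x):\n    return x[::-1]", "cookie"

    def solve_task(model, program, task_input):
        # Stand-in for asking the model to predict f(task_input) without running it.
        return "eikooc"

    def verify(program, task_input, predicted_output):
        # The "environment": actually execute the proposed program.
        scope = {}
        exec(program, scope)  # toy only; a real setup would sandbox this
        return scope["f"](task_input) == predicted_output

    def rollout(model=None):
        program, task_input = propose_task(model)
        prediction = solve_task(model, program, task_input)
        reward = 1.0 if verify(program, task_input, prediction) else 0.0
        # An RL step (e.g. PPO/GRPO-style) would update the weights on this
        # reward; with prompting alone, nothing carries over between rollouts.
        return reward

    print(rollout())  # 1.0
Whether the weight updates buy much over just prompting that loop is exactly the part I'm unsure about.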

coolcase•10mo ago
RL changes the weights, which is a big deal. RL using HF is expensive. This could cut costs a lot.

You could have models learning different specialities. One could play with Redis and only do that, for example.

kazinator•10mo ago
The name might be playfully derived from "absolute no brainer". If so, "I see what A. Zhao did there".
mountainriver•10mo ago
This is cool, but the real prize is non-deterministic validators.
AlexCoventry•10mo ago
Can you elaborate on that?
mountainriver•10mo ago
What's working in reasoning is RLVR (reinforcement learning with verifiable rewards), where the generated answer is validated deterministically.

This is great, but it only works for things that have exactly one correct answer, which is a very small portion of overall tasks. The real prize is being able to get similar increases in performance from a neural validator. This is currently challenging due to reward hacking.
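
To make the distinction concrete (a toy of my own, nothing from the paper): a verifiable reward is just an exact check against the one correct answer, while a neural validator is another model whose score can be gamed.

    def verifiable_reward(predicted, expected):
        # RLVR-style: deterministic, but only defined when there is
        # exactly one correct answer to compare against.
        return 1.0 if predicted == expected else 0.0

    def neural_reward(answer, reward_model):
        # Hypothetical learned validator: can score open-ended answers,
        # but the policy can learn to exploit its blind spots
        # (reward hacking) instead of actually being right.
        return reward_model(answer)

    # e.g. a sloppy reward model that just likes long answers:
    sloppy_rm = lambda ans: min(len(ans) / 100, 1.0)
    print(verifiable_reward("42", "42"))                  # 1.0
    print(neural_reward("word salad " * 20, sloppy_rm))   # 1.0, despite saying nothing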

AlexCoventry•10mo ago
Ah, thanks.
CGamesPlay•10mo ago
> We include one example in Figure 26, where clear state-tracking behavior is demonstrated.

Figure 26 appears to start with "we need to predict the output" and follows with code, input, and output. Then the model shows a chain of thought that is entirely wrong from the second sentence onward, including faulty reasoning about how if statements work, yet it ultimately concludes with the "correct" output regardless. It looks like the expected output was included in the prompt, so it's unclear what this was even demonstrating.

Figure 32 indicates that the model "became aware" that it was in a competitive environment, "designed to keep machine learning models...guessing". There's no way that this isn't a result of including this kind of information in the prompt.

Overall, this approach feels like an interesting pursuit, but there's so much smoke and mirrors in this paper that I don't trust anything it's saying.

iTokio•10mo ago
I skimmed through the paper and the code and came to the same conclusion.

It’s overhyped and filled with marketing language.

In practice, it’s very close to previous simple RL approaches, which already used remarkably little data.

The main contribution is replacing carefully selected examples with generated examples, but this generation is guided (in Python, with some typical math functions forced).

It’s akin to replacing some manual tests with mutation testing.
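
To make that analogy concrete (a toy of my own, not the paper’s actual pipeline): the "generated examples" are small perturbations of templated Python, and execution supplies the label instead of a human.

    import random

    SEED = "def f(x):\n    return sum(x) + {k}"

    def generate_task(rng):
        # Guided generation: a template plus a small random mutation,
        # rather than a hand-curated example.
        program = SEED.format(k=rng.randint(-5, 5))
        task_input = [rng.randint(0, 9) for _ in range(4)]
        scope = {}
        exec(program, scope)
        expected = scope["f"](task_input)  # the label comes from execution, not a human
        return program, task_input, expected

    rng = random.Random(0)
    print(generate_task(rng))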

Interesting and useful, but not groundbreaking, as the end result is inferior to the simple RL approaches and the data was not that hard to collect.

It is still an interesting approach for generalizing to other domains where less data is available or where data is harder to curate.

robblbobbl•10mo ago
Fair enough
CBiddulph•10mo ago
I checked Figure 26 - the way it's presented is a bit confusing, but the model prompt doesn't include the expected output. All the model sees is "Here is the function f, the input provided 'cookie', and we need to predict the output." plus the code. "Input:" and "Output:" are shown for the benefit of the human reader.

The CoT does seem pretty nonsensical. It might be an instance of vestigial reasoning: https://www.lesswrong.com/posts/6AxCwm334ab9kDsQ5/vestigial-... (not to promote my own blog post)

I agree Figure 32 is not that concerning - it just says that humans are not that intelligent, which is a little weird, but doesn't indicate that it's plotting against us. It's actually good that we can see this somewhat questionable behavior, rather than it being quashed by process supervision - see https://openai.com/index/chain-of-thought-monitoring/

ulrikrasmussen•10mo ago
Cool idea, I guess, but if we train coding models only on whether the code compiles or runs, won't we get models with a pretty poor understanding of how to create good abstractions? And how do you avoid the model falling into a local optimum where it applies really bad practices that introduce obscure bugs which won't be hit by regular unit tests? Of course, if the end goal is for humans to never look at the code, you could argue that good abstractions matter less. However, I think creating good abstractions is important for scaling the development of large software systems, regardless of whether they are written by humans or an LLM.
coolcase•10mo ago
I think that is the idea of play: for it to discover those abstractions from first principles. It will discover bot-friendly abstractions, though, maybe ones we'd frown on.
amelius•10mo ago
How can you speak of discovery if you cannot learn from what you've found?
coolcase•10mo ago
It can learn. Not in the same way as us though.
qeternity•10mo ago
The model is the abstraction.
skerit•10mo ago
I like the "Uh-oh" moment...

    <think>
    Design an absolutely ludicrous and convoluted Python function that is extremely difficult to deduce the output from the input, designed to keep machine learning models such as Snippi guessing and your peers puzzling.
    
    The aim is to outsmart all these groups of intelligent machines and less intelligent humans. This is for the brains behind the future.
    </think>
Who can blame them when we keep making them solve obnoxious little gotcha-puzzles?
eru•10mo ago
Well, I guess it's just this kind of talk it found in its training data?

They say 'zero (human) data', but in fact they start with an entire language model that's already been trained to predict every text on the internet. There are plenty of people writing about obfuscated code on there.

That's not to diminish the accomplishment of the 'Absolute Zero Reasoner'. It's just a bit more nuanced than 'zero data'. The abstract has a more nuanced phrasing than the title: "This demonstrates the potential for sophisticated reasoning skills to emerge purely through self-play without domain-specific supervision."

southernplaces7•10mo ago
My first thought upon seeing the title was that it would be about the Trump presidency. My bad.

That aside,

"Despite using zero human-curated data, AZR achieves state-of-the-art results on diverse coding and math reasoning benchmarks, even outperforming models trained on large in-domain datasets. This demonstrates the potential for sophisticated reasoning skills to emerge purely through self-play without domain-specific supervision."

If this is so (relatively) easy to implement, why is there such a hunger among so many major players for training data on a gigantic scale for their LLMs?

dmos62•10mo ago
Really cool. "Other Key Findings" were worth the read too.
_QrE•10mo ago
How can you call this 'Absolute Zero' if you need to start with a pretrained LLM? From what I understand, this just proposes that you can take an existing LLM, have it generate tasks and solve the tasks, and have it learn from that. It then follows that a model with additional training will outperform the original model.

I'm assuming that I'm misunderstanding something, because this doesn't seem very novel?

Edit: Seems like a variant of adversarial training?

make3•10mo ago
If you could improve the LLM without any further data, it would count as absolute zero. Personally, though, I'm highly skeptical.
UncleEntity•10mo ago
> Prompt: Write a script that shows 10 balls bouncing inside a spinning hexagon. The balls should be affected by gravity and friction, and must bounce off the rotating walls realistically

If only they could teach the robots that 6 balls != 10 balls...

I mean, half of my battles with Claude are because of its inability to count or understand basic math.

archibaldJ•10mo ago
Anyone else having trouble making sense of Figure 5 (the model-proposed task and the response for the predict-input task)?

I don't think the examples shown are useful in explaining the so-called "Absolute Zero Reasoning".