frontpage.

O(x)Caml in Space

https://gazagnaire.org/blog/2026-05-14-borealis.html
110•yminsky•2h ago•8 comments

Show HN: Find the best local LLM for your hardware, ranked by benchmarks

https://github.com/Andyyyy64/whichllm
203•andyyyy64•4h ago•30 comments

Explore Wikipedia Like a Windows XP Desktop

https://explorer.samismith.com/
241•smusamashah•4h ago•61 comments

Too dangerous or just too expensive? The real reason Anthropic is hiding Mythos

https://kingy.ai/ai/too-dangerous-to-release-or-just-too-expensive-the-real-reason-anthropic-is-h...
45•chbint•37m ago•37 comments

Welcome to the Strip Mining Era of OSS Security

https://www.metabase.com/blog/strip-mining-era-of-open-source-security
40•salsakran•1h ago•35 comments

Radicle: Sovereign {code forge} built on Git

https://radicle.dev/
39•KolmogorovComp•1h ago•4 comments

Removing the modem and GPS from my 2024 RAV4 hybrid

https://arkadiyt.com/2026/05/13/removing-the-modem-and-gps-from-my-rav4/
938•arkadiyt•20h ago•483 comments

UK government replaces Palantir software with internally-built refugee system

https://www.bbc.com/news/articles/c2l2j1lxdk5o
351•cdrnsf•14h ago•126 comments

SigNoz (YC W21, open source Datadog) Is hiring for growth and engineering roles

https://signoz.io/careers
1•pranay01•1h ago

A few words on DS4

https://antirez.com/news/165
359•caust1c•14h ago•145 comments

Show HN: GlycemicGPT – Open-source AI-powered diabetes management

https://github.com/GlycemicGPT/GlycemicGPT
56•jlengelbrecht•8h ago•37 comments

Building ML framework with Rust and Category Theory

https://hghalebi.github.io/category_theory_transformer_rs/
58•adamnemecek•21h ago•15 comments

NanoTDB – Golang Append-Only Time Series DB

https://github.com/aymanhs/nanotdb
11•aymanhs72•2h ago•1 comment

Details of the Daring Airdrop at Tristan Da Cunha

https://www.tristandc.com/government/news-2026-05-11-airdrop.php
185•kspacewalk2•9h ago•67 comments

RTX 5090 and M4 MacBook Air: Can It Game?

https://scottjg.com/posts/2026-05-05-egpu-mac-gaming/
635•allenleee•21h ago•148 comments

Cursing the government does not fix potholes. Spray-painting them does

https://imagenotfound.writeas.com/the-holes-we-painted-and-why-we-did-it-anyway
16•bogomil•31m ago•14 comments

First public macOS kernel memory corruption exploit on Apple M5

https://blog.calif.io/p/first-public-kernel-memory-corruption
393•quadrige•19h ago•104 comments

Gyroflow: Video stabilization using gyroscope data

https://github.com/gyroflow/gyroflow
119•nateb2022•3d ago•21 comments

The old world of tech is dying and the new cannot be born

https://www.baldurbjarnason.com/2026/the-old-world-of-tech-is-dying/
18•speckx•56m ago•0 comments

Codex is now in the ChatGPT mobile app

https://openai.com/index/work-with-codex-from-anywhere/
378•mikeevans•17h ago•187 comments

Bitwarden scrubs 'Always free' and 'Inclusion' values from its site

https://www.fastcompany.com/91542655/bitwarden-scrubs-always-free-and-inclusion-values-from-its-w...
31•gpi•1h ago•9 comments

New Nginx Exploit

https://github.com/DepthFirstDisclosures/Nginx-Rift
401•hetsaraiya•20h ago•89 comments

Mullvad exit IPs are surprisingly identifying

https://tmctmt.com/posts/mullvad-exit-ips-as-a-fingerprinting-vector/
460•RGBCube•10h ago•270 comments

Steve Jobs' NeXT Computer: His Forgotten Exile Years

https://spectrum.ieee.org/steve-jobs-next-computer
56•rbanffy•2h ago•56 comments

Claude for Legal

https://github.com/anthropics/claude-for-legal
124•Einenlum•16h ago•115 comments

Tesla Wall Connector bootloader bypasses the firmware downgrade ratchet

https://www.synacktiv.com/en/publications/exploiting-the-tesla-wall-connector-from-its-charge-por...
112•p_stuart82•16h ago•53 comments

UK sovereign LLM inference

https://relax.ai/docs
91•benjamintnorris•3h ago•100 comments

HDD Firmware Hacking

https://icode4.coffee/?p=1465
205•jsploit•21h ago•28 comments

Solar-based sleep patterns compared to modern norms

https://dylan.gr/1775146616
90•James72689•9h ago•79 comments

RISC-V Router

https://router.start9.com/
131•janandonly•17h ago•75 comments

Absolute Zero Reasoner

https://andrewzh112.github.io/absolute-zero-reasoner/
133•jonbaer•1y ago

Comments

kevmo314•1y ago
From what I can tell, this approach appears to combine "make a plan" style prompting with reinforcement learning?

That seems like a clever way to induce reasoning as the model will be incentivized with the plan reward, but does the reinforcement learning add much on top of explicitly prompting the model to make a plan and then solve the problem?

The paper describes a pretty complex-looking reasoning approach, but implementation-wise it's essentially a prompt: https://github.com/LeapLabTHU/Absolute-Zero-Reasoner/blob/ma...
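
To be concrete about what I mean by "make a plan" style prompting, something roughly like this (my paraphrase, not the repo's exact wording):

    # Paraphrase of "make a plan" style prompting (not the repo's actual template):
    # the model is asked to lay out a plan before committing to a final answer,
    # and RL then only rewards it when that final answer verifies.
    def build_plan_prompt(program: str, task_input: str) -> str:
        return (
            "You are given a Python function and an input.\n"
            "First, write a step-by-step plan for how you will trace the execution.\n"
            "Then follow your plan and state the exact output.\n\n"
            f"{program}\nInput: {task_input}\n"
        )

    prompt = build_plan_prompt("def f(x):\n    return x * 2", "3")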

coolcase•1y ago
RL changes the weights, which is a big deal. RL with human feedback is expensive; this could cut costs a lot.

You could have models learning different specialities. One could play with Redis and only do that for example.

kazinator•1y ago
The name might be playfully derived from "absolute no brainer". If so, "I see what A. Zhao did there".
mountainriver•1y ago
This is cool, but the real prize is non-deterministic validators.
AlexCoventry•1y ago
Can you elaborate on that?
mountainriver•1y ago
What's working in reasoning right now is RLVR (reinforcement learning with verifiable rewards), where the generated answer is verified deterministically.

This is great, but it only works for things that have exactly one correct answer, which is a very small portion of overall tasks. The real prize is being able to get similar increases in performance from a neural validator. This is currently challenging due to reward hacking.
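
To make the distinction concrete, here's a minimal sketch (hypothetical names, not from the paper) of the two reward styles:

    # RLVR-style reward: deterministic check against the one known-correct answer.
    def rlvr_reward(generated_answer: str, reference_answer: str) -> float:
        return 1.0 if generated_answer.strip() == reference_answer.strip() else 0.0

    # A "neural validator" would instead score the answer with a learned model,
    # e.g. reward = validator_model.score(prompt, generated_answer). That extends
    # RL to tasks without a unique correct answer, but the policy can learn to
    # exploit quirks of the validator (reward hacking) rather than get better.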

AlexCoventry•1y ago
Ah, thanks.
CGamesPlay•1y ago
> We include one example in Figure 26, where clear state-tracking behavior is demonstrated.

Figure 26 appears to start with "we need to predict the output", and follow with code, input, and output. Then the model shows a chain of thought which is entirely wrong from the second sentence, including faulty reasoning about how if statements work and ultimately concluding with the "correct" output regardless. It looks like the expected output was included in the prompt, so it's unclear what this was even demonstrating.

Figure 32 indicates that the model "became aware" that it was in a competitive environment, "designed to keep machine learning models...guessing". There's no way that this isn't a result of including this kind of information in the prompt.

Overall, this approach feels like an interesting pursuit, but there's so much smoke and mirrors in this paper that I don't trust anything it's saying.

iTokio•1y ago
I skimmed through the paper and the code and reached the same conclusion.

It’s overhyped, filled with marketing language.

In practice, it's very close to previous simple RL approaches, which already used remarkably little data.

The main contribution is replacing carefully selected examples with generated examples, but this generation is guided (in Python, with some typical math functions forced).

It's akin to replacing some manual tests with mutation testing.

Interesting and useful, but not groundbreaking, since the end result is inferior to the simple RL approaches and the data was not that hard to collect.

It is an interesting approach for generalizing to other domains where less data is available or the data is harder to curate.
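
As I read it, the guided generation works roughly like this (my sketch, not the paper's actual code): the model proposes a small Python program plus an input, and the proposal only becomes a training task if it actually executes and yields a deterministic output, much like mutation testing keeps only the generated cases that turn out to be meaningful.

    # Sketch of executor-guided task generation (hypothetical, not the paper's code).
    def validate_proposed_task(program_src: str, task_input):
        """Run a proposed (program, input) pair; return the output or None."""
        namespace = {}
        try:
            exec(program_src, namespace)          # define f()
            result = namespace["f"](task_input)   # run it on the proposed input
        except Exception:
            return None                           # discard proposals that crash
        return repr(result)                       # ground-truth output for the RL reward

    # Only (program, input, output) triplets that pass this filter become training
    # tasks: no human curates them, but the executor decides what survives.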

robblbobbl•1y ago
Fair enough
CBiddulph•12mo ago
I checked Figure 26 - the way it's presented is a bit confusing, but the model prompt doesn't include the expected output. All the model sees is "Here is the function f, the input provided 'cookie', and we need to predict the output." plus the code. "Input:" and "Output:" are shown for the benefit of the human reader.

The CoT does seem pretty nonsensical. It might be an instance of vestigial reasoning: https://www.lesswrong.com/posts/6AxCwm334ab9kDsQ5/vestigial-... (not to promote my own blog post)

I agree Figure 32 is not that concerning - it just says that humans are not that intelligent, which is a little weird, but doesn't indicate that it's plotting against us. It's actually good that we can see this somewhat questionable behavior, rather than it being quashed by process supervision - see https://openai.com/index/chain-of-thought-monitoring/
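
To make the task format concrete, a deduction ("predict the output") prompt along the lines described above would look roughly like this (illustrative only, not the paper's exact template):

    # Illustrative deduction task: the model sees the program and the input and
    # must predict the output; the ground truth is held back and used for scoring.
    program = "def f(s):\n    return s[::-1] + str(len(s))"
    task_input = "cookie"

    prompt = (
        "Here is a Python function f and an input. "
        "Predict the exact output of f(input).\n\n"
        f"{program}\nInput: {task_input!r}\n"
    )

    expected_output = "eikooc6"  # obtained by actually running f; never shown to the model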

ulrikrasmussen•1y ago
Cool idea I guess, but if we train coding models only based on whether the code compiles or runs, won't we get models with a pretty poor understanding of how to create good abstractions? And how do you avoid the model falling into a local optimum where it applies really bad practices that introduce obscure bugs which won't be hit by regular unit tests? Of course, if the end goal is to never have humans look at the code, you could argue that good abstractions matter less. However, I think creating good abstractions is important for scaling development of large software systems, regardless of whether they are written by humans or an LLM.
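
The worry is easy to see in a toy version of such a reward (hypothetical sketch, not the paper's setup): the signal depends only on observable behavior, so two behaviorally identical programs score the same no matter how convoluted one of them is, and subtle bugs outside the covered tests are invisible to it.

    import os
    import subprocess
    import tempfile

    def execution_reward(code: str, tests: list[tuple[str, str]]) -> float:
        """Hypothetical reward: fraction of (stdin, expected-stdout) tests passed.
        Nothing here rewards readability or good abstractions."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as fh:
            fh.write(code)
            path = fh.name
        passed = 0
        for stdin, expected in tests:
            try:
                proc = subprocess.run(["python", path], input=stdin,
                                      capture_output=True, text=True, timeout=5)
                passed += proc.stdout.strip() == expected.strip()
            except subprocess.TimeoutExpired:
                pass  # hangs count as failures
        os.unlink(path)
        return passed / max(len(tests), 1)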
coolcase•1y ago
I think that's the idea of play: for it to discover those abstractions from first principles. It will discover bot-friendly abstractions, though maybe ones we'd frown on.
amelius•1y ago
How can you speak of discovery if you cannot learn from what you've found?
coolcase•1y ago
It can learn. Not in the same way as us though.
qeternity•1y ago
The model is the abstraction.
skerit•1y ago
I like the "Uh-oh" moment...

    <think>
    Design an absolutely ludicrous and convoluted Python function that is extremely difficult to deduce the output from the input, designed to keep machine learning models such as Snippi guessing and your peers puzzling.
    
    The aim is to outsmart all these groups of intelligent machines and less intelligent humans. This is for the brains behind the future.
    </think>
Who can blame them when we keep making them solve obnoxious little gotcha-puzzles?
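
For a sense of what that kind of self-proposed gotcha task might look like in practice, here's a hypothetical example in the same spirit (not taken from the paper): deducing the output means carefully tracking state through alternating branches.

    # Hypothetical "deliberately convoluted" proposer task (not from the paper).
    def f(x: int) -> str:
        acc = []
        for i in range(1, 6):
            x = (x * i) % 7 if i % 2 else (x + i) // 2
            acc.append(str(x) if x % 3 else "*")
        return "".join(acc[::-1])

    # Trace for f(4): i=1 -> x=4 -> "4"; i=2 -> x=3 -> "*"; i=3 -> x=2 -> "2";
    # i=4 -> x=3 -> "*"; i=5 -> x=1 -> "1"; reversed: f(4) == "1*2*4"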
eru•1y ago
Well, I guess it's just echoing the kind of talk it found in its training data?

They say 'zero (human) data', but in fact they start with an entire language model that's already been trained to predict every text on the internet. There are plenty of people writing about obfuscated code there.

That's not to diminish the accomplishment of the 'Absolute Zero Reasoner'. It's just a bit more nuanced than 'zero data'. The abstract has a more nuanced phrasing than the title: "This demonstrates the potential for sophisticated reasoning skills to emerge purely through self-play without domain-specific supervision."

southernplaces7•1y ago
My first thought upon seeing the title was that it would be about the Trump presidency. My bad.

That aside,

"Despite using zero human-curated data, AZR achieves state-of-the-art results on diverse coding and math reasoning benchmarks, even outperforming models trained on large in-domain datasets. This demonstrates the potential for sophisticated reasoning skills to emerge purely through self-play without domain-specific supervision."

If this is relatively easy to implement, why is there such a hunger among so many major players for training data on a gigantic scale for their LLMs?

dmos62•1y ago
Really cool. "Other Key Findings" were worth the read too.
_QrE•1y ago
How can you call this 'Absolute Zero' if you need to start with a pretrained LLM? From what I understand, this just proposes that you can take an existing LLM, have it generate tasks and solve the tasks, and have it learn from that. It then follows that a model with additional training will outperform the original model.

I'm assuming that I'm misunderstanding something, because this doesn't seem very novel?

Edit: Seems like a variant of adversarial training?
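
My rough mental model of the loop (hypothetical names, not the paper's code) is that one model plays both roles and a Python executor supplies the ground truth:

    # Sketch of the propose/solve self-play loop (hypothetical names).
    def absolute_zero_step(model, executor, buffer):
        # 1. PROPOSE: write a new (program, input) task, conditioned on past tasks.
        program, task_input = model.propose_task(examples=buffer.sample())

        # 2. VERIFY: the executor runs the proposal; broken tasks are discarded,
        #    and the execution result becomes the ground-truth answer.
        answer = executor.run(program, task_input)
        if answer is None:
            return

        # 3. SOLVE: the same model predicts the answer without running the code.
        prediction = model.solve(program, task_input)

        # 4. LEARN: verifiable rewards for both roles, then an RL weight update.
        model.update(solver_reward=float(prediction == answer),
                     proposer_reward=buffer.learnability(program, task_input))
        buffer.add(program, task_input, answer)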

make3•1y ago
If you could improve the LLM without any further data, it would count as absolute zero. Personally, however, I'm highly skeptical.
UncleEntity•1y ago
> Prompt: Write a script that shows 10 balls bouncing inside a spinning hexagon. The balls should be affected by gravity and friction, and must bounce off the rotating walls realistically

If only they could teach the robots that 6 balls != 10 balls...

I mean, half of my battles with Claude are because of its inability to count or understand basic math.

archibaldJ•1y ago
Anyone else having trouble making sense of Figure 5 (a model-proposed task and response for the predict-input case)?

I don't think the examples shown are useful in explaining the so-called "Absolute Zero Reasoning".
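
For what it's worth, my reading of the "predict input" (abduction) case is roughly this, in illustrative form (hypothetical example, not the one in Figure 5):

    # Illustrative abduction task (not the one in Figure 5): the model sees the
    # program and an output, and must propose ANY input that reproduces it; the
    # answer is checked by running the code, since many inputs may be valid.
    def f(s: str) -> int:
        return sum(ord(c) for c in s) % 10

    target_output = 5

    def check_abduction(candidate_input: str) -> bool:
        return f(candidate_input) == target_output

    # e.g. "ab" works: 97 + 98 = 195, and 195 % 10 == 5; other inputs do too,
    # which is why the check runs the code instead of comparing to a reference input.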