
.de TLD offline due to DNSSEC?

https://dnssec-analyzer.verisignlabs.com/nic.de
522•warpspin•5h ago•241 comments

Accelerating Gemma 4: faster inference with multi-token prediction drafters

https://blog.google/innovation-and-ai/technology/developers-tools/multi-token-prediction-gemma-4/
443•amrrs•9h ago•198 comments

Computer Use is 45x more expensive than structured APIs

https://reflex.dev/blog/computer-use-is-45x-more-expensive-than-structured-apis/
307•palashawas•8h ago•174 comments

Three Inverse Laws of AI

https://susam.net/inverse-laws-of-robotics.html
350•blenderob•9h ago•243 comments

Write some software, give it away for free

https://nonogra.ph/write-some-software-give-it-away-for-free-05-05-2026
125•nohell•3h ago•94 comments

EEVblog: The 555 Timer is 55 years old [video]

https://www.youtube.com/watch?v=6JhK8iCQuqI
220•brudgers•9h ago•54 comments

Why most product tours get skipped

https://productonboarding.com/articles/why-product-tours-get-skipped
64•pancomplex•4h ago•54 comments

NPR finds "no sign" of Polymarket at its Panama HQ address

https://www.npr.org/2026/05/05/nx-s1-5807918/polymarket-panama-prediction-market
195•ilamont•3h ago•88 comments

Show HN: Explore color palettes inspired by 3000 master painter artworks

https://paletteinspiration.com/
102•ouli•7h ago•38 comments

GLM-5V-Turbo: Toward a Native Foundation Model for Multimodal Agents

https://arxiv.org/abs/2604.26752
112•gmays•7h ago•23 comments

Zuckerberg 'personally authorized' Meta's copyright infringement, publishers say

https://apnews.com/article/meta-mark-zuckerberg-ai-publishers-lawsuit-llama-5609846d4d840014974a8...
122•jethronethro•3h ago•33 comments

Past Ferrari Models, 1947–2023

https://www.ferrari.com/en-US/auto/past-model
18•NaOH•2d ago•3 comments

Agents for financial services and insurance

https://www.anthropic.com/news/finance-agents
196•louiereederson•10h ago•149 comments

Urban Birds Are Rising Earlier Because of Traffic Noise (2013)

https://www.audubon.org/news/urban-birds-are-rising-earlier-because-traffic-noise
28•thunderbong•2d ago•11 comments

Show HN: Airbyte Agents – context for agents across multiple data sources

92•mtricot•10h ago•23 comments

I completed 100 Days of Java over 5 years and mapped the journey as a graph

https://mohibulsblog.netlify.app/java/100daysofjava/graph/
26•celurian92•2d ago•5 comments

Google Chrome silently installs a 4 GB AI model on your device without consent

https://www.thatprivacyguy.com/blog/chrome-silent-nano-install/
1241•john-doe•17h ago•840 comments

I'm scared about biological computing

https://kuber.studio/blog/Reflections/I%27m-Scared-About-Biological-Computing
140•kuberwastaken•9h ago•121 comments

Today I've made the difficult decision to reduce the size of Coinbase by ~14%

https://twitter.com/brian_armstrong/status/2051616759145185723
256•adrianmsmith•13h ago•360 comments

California farmers to destroy 420k peach trees following Del Monte bankruptcy

https://www.sfgate.com/centralcoast/article/usda-aid-california-farmers-22240694.php
264•littlexsparkee•7h ago•320 comments

When everyone has AI and the company still learns nothing

https://www.robert-glaser.de/when-everyone-has-ai-and-the-company-still-learns-nothing/
309•youngbrioche•15h ago•218 comments

Proliferate (YC S25) Is Hiring: 200k for junior engineers

https://www.ycombinator.com/companies/proliferate/jobs/L3copvK-founding-engineer
1•pablo24602•8h ago

Should I run plain Docker Compose in production in 2026?

https://distr.sh/blog/running-docker-in-production/
356•pmig•5d ago•255 comments

IBM didn't want Microsoft to use the Tab key to move between dialog fields

https://devblogs.microsoft.com/oldnewthing/20260505-00/?p=112298
298•SeenNotHeard•7h ago•173 comments

Researchers print structural colour with an inkjet printer

https://physicsworld.com/a/researchers-print-structural-colour-with-an-inkjet-printer/
46•zeristor•2d ago•7 comments

iOS 27 is adding a 'Create a Pass' button to Apple Wallet

https://walletwallet.alen.ro/blog/ios-27-wallet-create-pass/
380•alentodorov•12h ago•288 comments

Underwater robot tracks sperm whale conversations in real time

https://www.reuters.com/business/environment/underwater-robot-tracks-sperm-whale-conversations-re...
53•thedebuglife•2d ago•13 comments

The extended predicative Mahlo universe in Martin-Löf type theory

https://academic.oup.com/logcom/article/34/6/1032/7158523
24•danny00•2d ago•0 comments

Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement

https://variety.com/2026/digital/news/meta-ai-mark-zuckerberg-copyright-infringement-lawsuit-publ...
245•spankibalt•7h ago•183 comments

Xbox CEO ends Copilot AI development and overhauls leadership

https://www.dexerto.com/gaming/xbox-ceo-ends-copilot-ai-development-overhauls-leadership-3361353/
47•gmays•2h ago•7 comments

Absolute Zero Reasoner

https://andrewzh112.github.io/absolute-zero-reasoner/
133•jonbaer•12mo ago

Comments

kevmo314•11mo ago
From what I can tell, this approach appears to combine "make a plan" style prompting with reinforcement learning?

That seems like a clever way to induce reasoning, as the model will be incentivized by the plan reward. But does the reinforcement learning add much on top of explicitly prompting the model to make a plan and then solve the problem?

The paper describes a pretty complex-looking reasoning approach, but implementation-wise it's essentially a prompt: https://github.com/LeapLabTHU/Absolute-Zero-Reasoner/blob/ma...

coolcase•11mo ago
RL changes the weights, which is a big deal. RL is expensive using HF. This could cut costs a lot.

You could have models learning different specialities. One could play with Redis and only do that, for example.

kazinator•11mo ago
The name might be playfully derived from "absolute no-brainer". If so, "I see what A. Zhao did there".
mountainriver•11mo ago
This is cool, but the real prize is non-deterministic validators.
AlexCoventry•11mo ago
Can you elaborate on that?
mountainriver•11mo ago
What's working in reasoning right now is RLVR, where the generated answer is verified deterministically.

This is great, but it only works for tasks that have exactly one correct answer, which is a very small portion of overall tasks. The real prize is getting similar increases in performance from a neural validator. This is currently challenging due to reward hacking.
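
To make the contrast concrete, here is a minimal sketch of the two reward styles; the function names and the `validator` object are illustrative, not from the paper:

    def rlvr_reward(answer: str, ground_truth: str) -> float:
        """RLVR-style reward: deterministic and binary. Only well-defined
        when the task has exactly one verifiable correct answer."""
        return 1.0 if answer.strip() == ground_truth.strip() else 0.0

    def neural_reward(answer: str, validator) -> float:
        """Hypothetical neural validator: a learned model scores the answer.
        It extends to open-ended tasks, but the policy can learn to exploit
        the validator's blind spots (reward hacking) instead of improving."""
        return validator.score(answer)  # assumed to return a float in [0, 1]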

AlexCoventry•11mo ago
Ah, thanks.
CGamesPlay•11mo ago
> We include one example in Figure 26, where clear state-tracking behavior is demonstrated.

Figure 26 appears to start with "we need to predict the output", followed by code, input, and output. Then the model shows a chain of thought that is entirely wrong from the second sentence onward, including faulty reasoning about how if statements work, yet it ultimately concludes with the "correct" output regardless. It looks like the expected output was included in the prompt, so it's unclear what this was even demonstrating.

Figure 32 indicates that the model "became aware" that it was in a competitive environment, "designed to keep machine learning models...guessing". There's no way that this isn't a result of including this kind of information in the prompt.

Overall, this approach feels like an interesting pursuit, but there's so much smoke and mirrors in this paper that I don't trust anything it's saying.

iTokio•11mo ago
I skimmed through the paper and the code and came to the same conclusion.

It's overhyped and filled with marketing language.

In practice, it's very close to previous simple RL approaches, which already used remarkably little data.

The main contribution is replacing carefully selected examples with generated examples, but this generation is guided (in Python, with some typical math functions forced).

It's akin to replacing some manual tests with mutation testing.

Interesting and useful, but not groundbreaking, as the end result is inferior to the simple RL approaches and the data was not that hard to collect.

It is an interesting approach for generalizing to other domains where data is scarcer or harder to curate.

robblbobbl•11mo ago
Fair enough
CBiddulph•11mo ago
I checked Figure 26 - the way it's presented is a bit confusing, but the model prompt doesn't include the expected output. All the model sees is "Here is the function f, the input provided 'cookie', and we need to predict the output." plus the code. "Input:" and "Output:" are shown for the benefit of the human reader.

The CoT does seem pretty nonsensical. It might be an instance of vestigial reasoning: https://www.lesswrong.com/posts/6AxCwm334ab9kDsQ5/vestigial-... (not to promote my own blog post)

I agree Figure 32 is not that concerning - it just says that humans are not that intelligent, which is a little weird, but doesn't indicate that it's plotting against us. It's actually good that we can see this somewhat questionable behavior, rather than it being quashed by process supervision - see https://openai.com/index/chain-of-thought-monitoring/

ulrikrasmussen•11mo ago
Cool idea I guess, but if we train coding models only on whether the code compiles or runs, won't we get models with a pretty poor understanding of how to create good abstractions? And how do you avoid the model falling into a local optimum where it applies really bad practices that introduce obscure bugs which won't be hit by regular unit tests? Of course, if the end goal is for humans to never look at the code, you could argue that good abstractions matter less. However, I think creating good abstractions is important for scaling development of large software systems, regardless of whether they are written by humans or by an LLM.
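
A minimal sketch of why that worry is plausible; `pytest` here stands in for whatever test harness a training loop might use, and the function name is hypothetical:

    import subprocess

    def compile_and_test_reward(source_file: str) -> float:
        """Hypothetical execution-based reward: 1.0 if the tests pass,
        0.0 otherwise. Readability, abstraction quality, and bugs not
        covered by the test suite are invisible to this signal."""
        result = subprocess.run(["pytest", source_file], capture_output=True)
        return 1.0 if result.returncode == 0 else 0.0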
coolcase•11mo ago
I think that is the idea of play: for it to discover those abstractions from first principles. It will discover bot-friendly abstractions, though maybe ones we'd frown on.
amelius•11mo ago
How can you speak of discovery if you cannot learn from what you've found?
coolcase•11mo ago
It can learn. Not in the same way as us though.
qeternity•11mo ago
The model is the abstraction.
skerit•11mo ago
I like the "Uh-oh" moment...

    <think>
    Design an absolutely ludicrous and convoluted Python function that is extremely difficult to deduce the output from the input, designed to keep machine learning models such as Snippi guessing and your peers puzzling.
    
    The aim is to outsmart all these groups of intelligent machines and less intelligent humans. This is for the brains behind the future.
    </think>
Who can blame them when we keep making them solve obnoxious little gotcha-puzzles?
eru•11mo ago
Well, I guess it's just this kind of talk it found in its training data?

They say 'zero (human) data', but in fact they start with an entire language model that's already been trained to predict every text on the internet. There are plenty of people writing about obfuscated code on there.

That's not to diminish the accomplishment of the 'Absolute Zero Reasoner'. It's just a bit more nuanced than 'zero data'. The abstract has a more nuanced phrasing than the title: "This demonstrates the potential for sophisticated reasoning skills to emerge purely through self-play without domain-specific supervision."

southernplaces7•11mo ago
My first thought upon seeing the title was that it would be about the Trump presidency. My bad.

That aside,

"Despite using zero human-curated data, AZR achieves state-of-the-art results on diverse coding and math reasoning benchmarks, even outperforming models trained on large in-domain datasets. This demonstrates the potential for sophisticated reasoning skills to emerge purely through self-play without domain-specific supervision."

If this is relatively easy to implement, why is there such hunger among so many major players for training data on a gigantic scale for their LLMs?

dmos62•11mo ago
Really cool. "Other Key Findings" were worth the read too.
_QrE•11mo ago
How can you call this 'Absolute Zero' if you need to start with a pretrained LLM? From what I understand, this just proposes that you can take an existing LLM, have it generate tasks and solve the tasks, and have it learn from that. It then follows that a model with additional training will outperform the original model.

I'm assuming that I'm misunderstanding something, because this doesn't seem very novel?

Edit: Seems like a variant of adversarial training?
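
For what it's worth, the loop described above can be sketched in a few lines; every name here is illustrative rather than taken from the paper's repo:

    def self_play_step(model, executor, update):
        """One hypothetical self-play step: the same model proposes a task,
        then solves it, and a code executor supplies the only ground truth."""
        # Proposer role: emit a Python program and an input for it.
        program, task_input = model.propose_task()
        # Ground truth comes from running the code, not from human labels.
        expected = executor.run(program, task_input)
        # Solver role: predict the output from the program and input alone.
        predicted = model.solve(program, task_input)
        # Deterministic, execution-verified reward drives the weight update.
        reward = 1.0 if predicted == expected else 0.0
        update(model, reward)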

make3•11mo ago
If you could improve the LLM without any further data, it would count as absolute zero. Personally, however, I'm highly skeptical.
UncleEntity•11mo ago
> Prompt: Write a script that shows 10 balls bouncing inside a spinning hexagon. The balls should be affected by gravity and friction, and must bounce off the rotating walls realistically

If only they could teach the robots that 6 balls != 10 balls...

I mean, half of my battles with Claude are because of its inability to count or understand basic math.

archibaldJ•11mo ago
Anyone else having trouble making sense of Figure 5 (model-proposed task and response for "predict input")?

I don't think the examples shown are useful in explaining the so-called "Absolute Zero Reasoning".