frontpage.

AI can't solve novel problems yet

https://jamesoclaire.com/2025/06/04/ai-obviously-cant-yet-solve-novel-problems/
1•ddxv•1m ago•1 comments

Designing Algorithmic Delegates

https://arxiv.org/abs/2506.03102
1•MarcoDewey•6m ago•0 comments

Our Startup Was Hacked, Need GitHub's Assistance to Trace Attacker

https://techcrunch.com/2025/06/03/indian-grocery-startup-kiranapro-was-hacked-and-its-servers-deleted-ceo-confirms/
2•deepakravindran•12m ago•2 comments

Merlin Bird ID

https://merlin.allaboutbirds.org/
3•twitchard•13m ago•1 comments

The symbolism of the magnifying glass is not universal

https://devblogs.microsoft.com/oldnewthing/20250603-00/?p=111240
1•paulmooreparks•15m ago•0 comments

Google Scholar is Manipulatable (2024)

https://arxiv.org/abs/2402.04607
2•downboots•23m ago•0 comments

'Spiderweb' drone attack marks a new threat for top militaries

https://www.businessinsider.com/operation-spiderweb-5-ways-ukraine-drone-attack-new-era-warfare-2025-6
2•petethomas•24m ago•0 comments

Open Sesame! on the Security and Memorability of Verbal Passwords [pdf]

https://seclab.skku.edu/wp-content/uploads/2025/05/223600a683.pdf
1•grac3•24m ago•0 comments

Asian American genius got rejected from 16 colleges because of racism [video]

https://www.youtube.com/watch?v=wl7t3QiXYOI
3•robomartin•26m ago•1 comments

Chinese couple charged with smuggling a biological pathogen into the U.S.

https://www.nbcnews.com/politics/justice-department/chinese-couple-charged-smuggling-biological-pathogen-us-rcna208658
4•shinryudbz•29m ago•1 comments

How NATO is turning to startups to outpace its rivals

https://thenextweb.com/news/how-nato-startups-fight-future-wars
1•mikece•31m ago•0 comments

DiffX – Next-Generation Extensible Diff Format

https://diffx.org/
2•todsacerdoti•33m ago•0 comments

Flesh-eating New World Screwworm could pose health risks to cattle, humans

https://www.foxnews.com/health/flesh-eating-new-world-screwworm-could-pose-health-risks-cattle-humans
1•keepamovin•33m ago•1 comments

Why is PS3 emulation so fast: RPCS3 optimizations explained [video]

https://www.youtube.com/watch?v=19ae5Mq2lJE
2•alexjplant•35m ago•0 comments

Musk calls Trump's tax bill a 'disgusting abomination'

https://www.bbc.com/news/articles/c0j76djzgpvo
6•andsoitis•37m ago•2 comments

Ask HN: Stripe and Chargebacks

1•gtech1•41m ago•2 comments

Meta and Yandex exfiltrating tracking data on Android via WebRTC

https://arstechnica.com/security/2025/06/meta-and-yandex-are-de-anonymizing-android-users-web-browsing-identifiers/
1•liuandrewk•41m ago•1 comments

Why Is the US Dropping Billions of Mutant Flies from the Sky? [video]

https://www.youtube.com/watch?v=zxq60I5RSW8
3•keepamovin•46m ago•0 comments

Out of His League and Clueless: NIH Staffers Speak Out on Director Bhattacharya

https://www.importantcontext.news/p/out-of-his-depth-sold-his-soul-clueless
1•SubiculumCode•52m ago•0 comments

Show HN: Built an AI Agent that finds hidden bugs and maps web apps

https://testchimp.io/blog/agentic-exploratory-testing/
2•TestChimp•52m ago•0 comments

Show HN: All-in-one platform for AI image generation

https://www.imageninja.ai/
1•ashr_•52m ago•0 comments

EVI 3: any voice and personality

https://demo.hume.ai/
2•twitchard•53m ago•0 comments

Shein's emissions now rival entire countries

https://stand.earth/fashion/resources/2025-scorecard/all-scores/
1•fermier•53m ago•0 comments

Show HN: Enky – creators get paid to use music

https://www.enkymarketing.com
1•aibu•54m ago•0 comments

Barrelfish OS Architecture Overview (2013) [pdf]

https://barrelfish.org/publications/TN-000-Overview.pdf
3•peter_d_sherman•55m ago•2 comments

Harlem neighborhood becomes first in US to have trash containerized

https://abc7ny.com/post/mayor-eric-adams-unveils-completion-empire-bin-installation-west-harlem-new-york-reduce-rats-garbage/16633387/
3•geox•1h ago•1 comments

Don't Let Apache Iceberg Sink Your Analytics: Practical Limitations in 2025

https://quesma.com/blog-detail/apache-iceberg-practical-limitations-2025
1•killme2008•1h ago•0 comments

The Tech Recruitment Ruse That Has Avoided Trump's Crackdown on Immigration

https://www.propublica.org/article/trump-immigration-h1b-visas-perm-tech-jobs-recruitment
7•ultra_nick•1h ago•0 comments

MiSTer FPGA

https://github.com/MiSTer-devel/Wiki_MiSTer/wiki
2•rahimnathwani•1h ago•0 comments

Is Japan ready to say goodbye to tax-free shopping?

https://www.japantimes.co.jp/news/2025/06/04/japan/politics/tax-free-system/
3•mikhael•1h ago•1 comments

ReasoningGym: Reasoning Environments for RL with Verifiable Rewards

https://arxiv.org/abs/2505.24760
99•t55•1d ago

Comments

starzmustdie•1d ago
GitHub: https://github.com/open-thought/reasoning-gym
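For a quick sense of what the library provides, here is a minimal usage sketch modeled on the repo's README; the task name "leg_counting" and the create_dataset/score_answer interface are my recollection of that README and may not match the current API, so treat this as illustrative:

```python
import reasoning_gym  # the package from the linked repo

# Procedurally generate 10 verifiable task instances from one of the task generators.
data = reasoning_gym.create_dataset("leg_counting", size=10, seed=42)

for i, entry in enumerate(data):
    print(f"{i}: q={entry['question']!r} a={entry['answer']!r}")
    # Each dataset ships an algorithmic verifier, which is what makes the
    # rewards "verifiable" for RL training.
    assert data.score_answer(answer=entry["answer"], entry=entry) == 1.0
```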
phh•1d ago
Cool cool. I'm a bit put off by calling it "reasoning"/"thought". These RL targets can be achieved without a "thinking" model, but still cool. Gotta love the brainfuck task.

I personally think that Gemini 2.5 Pro's superiority comes from having hundreds or thousands of RL tasks (without any proof whatsoever, so rather a feeling). So I've been wanting an "RL Zoo" for quite a while. I hope this project won't be a one-off and will be maintained long term, with many external contributions to add new targets!

t55•1d ago
> I personally think that Gemini 2.5 Pro's superiority comes from having hundreds or thousands of RL tasks (without any proof whatsoever, so rather a feeling).

Given that GDM pioneered RL, that's a reasonable assumption

flowerthoughts•1d ago
Assuming that by GDM you mean Google DeepMind: they pioneered RL with deep nets as the policy function estimator. The deep nets were themselves a result of CNNs and massive improvements in hardware parallelization at the time.

RL was established, at the latest, with Q-learning in 1989: https://en.wikipedia.org/wiki/Q-learning

t55•1d ago
i didn't say they invented everything; in science you always stand on the shoulders of giants

i still think my original statement is fair

lechatonnoir•1d ago
"gdm pioneered rl" is definitely not actually right, but it's correct to assert that they were huge players.

people who knew from context that your statement wasn't strictly accurate would know what you mean and agree on vibes. people who didn't could reasonably be misled, i think.

olliestanley•1d ago
We definitely plan to maintain the project for as long as there is interest in it. If you have ideas for new tasks, we'd always welcome contributions!
phh•1d ago
Thanks for the answer! As a toy project I implemented wikiracing with trl. I'll probably try to PR that to your gym. (can't say that I managed to improve the score with it though)
CuriouslyC•1d ago
Gemini 2.5 Pro's superiority is IMO largely driven by their long context support and training methodology. Compare Gemini as a beta reader for a 100k token book with GPT4.1 or Claude 4, and it becomes quite clear how much more effectively it can reason across its context than other comparable models. This also makes it much better for architecting new features into a system, since you can load a lot of the current system into the context and it'll conform to existing styles and architecture patterns more closely.
jacob019•1d ago
Agreed, 2.5 Flash too. I analyze a large JSON document of metrics for pricing decisions, typically around 200k tokens and occasionally up to 1M, and Gemini 2.5 significantly outperforms for my task. It isn't 100%, but role playing gets close. I suppose that's a form of inference-time compute.
t55•1d ago
For a 100k token context window, all those models are comparable though

gemini 2.5 pro shines for 200k+ tokens

CuriouslyC•1d ago
I can confirm from first hand experience that even at 100k they are most definitely not comparable for the task of beta reading.
throwaway314155•1d ago
splitting hairs much?
ninakostoska•1d ago
Cool to see that NVIDIA's most recent reasoning model [1] already uses Reasoning Gym as a large part of its data mixture

[1] https://arxiv.org/abs/2505.24864

t55•1d ago
> prolonged RL training can uncover novel reasoning strategies that are inaccessible to base models, even under extensive sampling

does this mean that previous RL papers claiming the opposite were possibly bottlenecked by small datasets?

yorwba•1d ago
No, they do not point to any specific examples of novel reasoning strategies that were uncovered, nor is their sampling that extensive (at most 256 samples vs the 2048 used in https://limit-of-rlvr.github.io/ ).
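For context on why the sampling budget matters: these base-model-capability comparisons are typically reported as pass@k, estimated from n samples per problem with the standard unbiased estimator. A quick sketch, with the numbers below used purely for illustration:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased estimate of P(at least one correct in k draws), given that
    # c of the n sampled answers for a problem were correct.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# A problem the base model solves only 3 times in 2048 samples still registers
# at large k, but would very often look unsolved with a 256-sample budget.
print(pass_at_k(2048, 3, 1024))  # ~0.87
print(pass_at_k(256, 0, 256))    # 0.0
```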
t55•1d ago
so you think it's fake news? another example of a paper with strong claims without much evidence?
yorwba•1d ago
I think it's a case of not coming up with alternative explanations for the observed evidence and hence not designing experiments to distinguish between those explanations.

Their results are consistent with novel reasoning strategies, but they're also consistent with more reliable execution of reasoning strategies that the base model can generate in principle, but rarely succeeds at due to a large number of steps. (If you have a model that can do each step independently with 99% success rate and getting the correct result requires 1000 steps, the chance of making it all the way to the end without a single error is only about 0.004%.)
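The arithmetic in that parenthetical is easy to reproduce; a one-liner for the compounding effect:

```python
# Chance of completing a 1000-step chain when each step independently succeeds 99% of the time.
per_step, steps = 0.99, 1000
p_success = per_step ** steps
print(f"{p_success:.6f} (~{p_success * 100:.4f}%)")  # 0.000043 (~0.0043%)
```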

psb217•1d ago
One challenge with this line of argument is that the base model assigns non-zero probability to all possible sequences if we ignore truncation due to numerical precision. So, in a sense you could say any performance improvement is due to shifting probability mass towards good reasoning behaviors and away from bad ones that were already present in the base model.

I agree with your general point though. Ie, we need more thorough empirical investigation of how reasoning behavior evolves during RL training starting from the base model. And, current RL training results seem more like "amplifying existing good behavior" than "inducing emergent good behavior".

yorwba•1d ago
While it's true that the model assigns non-zero probabilities to all sequences by design, those probabilities can get a lot smaller. E.g. replace that 99% per-step success probability with 10% and suddenly the overall chance of a correct result is truly astronomically small.

For a novel reasoning strategy, I would expect at least a few individual tokens where the base model assigns much smaller probabilities than the reinforcement-learning trained one, as opposed to just being a little smaller but spread out over many tokens. (Which would better fit a "death by a thousand cuts" scenario.)
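One way to make that distinction concrete is to line up per-token probabilities from the base and RL-trained models on the same trace and see where the gap concentrates. The arrays and the "top 2 tokens" check below are a hypothetical illustration, not data from the paper:

```python
import numpy as np

# Hypothetical per-token probabilities each model assigns to the same reasoning trace.
base_p = np.array([0.90, 0.85, 0.001, 0.88, 0.92, 0.0005, 0.87])
rl_p   = np.array([0.95, 0.90, 0.60,  0.93, 0.95, 0.55,   0.91])

gap = np.log(rl_p) - np.log(base_p)  # per-token log-probability gain of the RL model

# "Novel strategy" signature: most of the total gain sits on a handful of tokens.
# "Death by a thousand cuts": the gain is small but spread over nearly every token.
top2_share = np.sort(gap)[-2:].sum() / gap.sum()
print(f"total log-prob gain: {gap.sum():.2f}, share from top 2 tokens: {top2_share:.0%}")
```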

jimmySixDOF•1d ago
RL is proving to be a weird science lately:

> Spurious Rewards: Rethinking Training Signals in RLVR
>
> TL;DR: We show that you can do RLVR on Qwen2.5-Math models with completely random or incorrect rewards, and still get massive math benchmark gains.

All of the following spurious rewards give 15-20+ points on MATH-500 when RLVR training Qwen2.5-Math-7B:

- RLVR + format reward (reward responses with `\boxed{}`): +16.4%
- RLVR + incorrect reward (only incorrect answers rewarded): +24.6%
- RLVR + random reward: +21.4%
- (as a reference) RLVR + ground-truth reward: +28.8%

How can these spurious rewards possibly work? Can we get similar gains on other models with broken rewards?

> Learning to Reason without External Rewards
>
> Training large language models (LLMs) for complex reasoning via Reinforcement Learning with Verifiable Rewards (RLVR) is effective but limited by reliance on costly, domain-specific supervision. We explore Reinforcement Learning from Internal Feedback (RLIF), a framework that enables LLMs to learn from intrinsic signals without external rewards or labeled data. We propose Intuitor, an RLIF method that uses a model's own confidence, termed self-certainty, as its sole reward signal. Intuitor replaces external rewards in Group Relative Policy Optimization (GRPO) with self-certainty scores, enabling fully unsupervised learning. Experiments demonstrate that Intuitor matches GRPO's performance on mathematical benchmarks while achieving superior generalization to out-of-domain tasks like code generation, without requiring gold solutions or test cases. Our findings show that intrinsic model signals can drive effective learning across domains, offering a scalable alternative to RLVR for autonomous AI systems where verifiable rewards are unavailable. [2]

[1] https://rethink-rlvr.notion.site/Spurious-Rewards-Rethinking...
[2] https://arxiv.org/abs/2505.19590
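For readers wondering what a "format reward" or "ground-truth reward" means concretely in these RLVR setups, here is a minimal sketch; the function names and regex are illustrative, not taken from either paper:

```python
import re
import random

def format_reward(completion: str) -> float:
    # Spurious reward: 1.0 whenever the response contains a \boxed{...} answer,
    # regardless of whether that answer is correct.
    return 1.0 if re.search(r"\\boxed\{[^}]*\}", completion) else 0.0

def random_reward(_completion: str) -> float:
    # Spurious reward: ignore the completion entirely.
    return float(random.random() < 0.5)

def ground_truth_reward(completion: str, reference: str) -> float:
    # Ordinary verifiable reward: 1.0 only if the boxed answer matches the reference.
    m = re.search(r"\\boxed\{([^}]*)\}", completion)
    return 1.0 if m and m.group(1).strip() == reference.strip() else 0.0

print(format_reward(r"The answer is \boxed{42}"))             # 1.0, even if 42 is wrong
print(ground_truth_reward(r"The answer is \boxed{42}", "41"))  # 0.0
```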

t55•1d ago
yeah, RLVR is still nascent and hence there's lots of noise.

> How can these spurious rewards possibly work? Can we get similar gains on other models with broken rewards?

it's because in those cases, RLVR merely elicits the reasoning strategies already contained in the model through pre-training

this paper, which uses Reasoning Gym, shows that you need to train for way longer than those papers you mentioned to actually uncover novel reasoning strategies: https://arxiv.org/abs/2505.24864

spmurrayzzz•1d ago
I think the fact that spurious rewards were effective predominantly for Qwen may suggest that they were triggering some shift in its language distribution. If you use those models long enough you'll see a ton of Mandarin make its way into your outputs, and their logits tend to look more "confident" than the ones for English tokens.

So the reward value shifting may act as a sort of unintentional regularization technique (similar to adding noise to the discriminator input in GAN archs).

sadboots•1d ago
for the love of god, please stop overfitting on gsm8k
i5heu•1d ago
It looks like your neural network is overfitted to seeing overfitting where there is none.

Prejudice is a form of overfitting, IMHO

t55•1d ago
agree, the RG evals feel like a breath of fresh air
olliestanley•1d ago
Difficult one. GSM8K and MATH evals (both reported in the Reasoning Gym paper) are common in smaller-model RL papers for a reason, which is that smaller models can get decent scores on them, unlike fresher & harder benchmarks.

Part of the aim of RG is to be used as a difficulty-adjustable & non-repeating eval, though, so if people think it's a good benchmark, perhaps it will allow this status quo to shift!