frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
162•theblazehen•2d ago•47 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
674•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
950•xnx•20h ago•552 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
123•matheusalmeida•2d ago•33 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
22•kaonwarb•3d ago•19 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
58•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
232•isitcontent•14h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
225•dmpetrov•15h ago•118 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
332•vecti•16h ago•144 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
495•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
383•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
360•aktau•21h ago•182 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
289•eljojo•17h ago•175 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
413•lstoll•21h ago•279 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
32•jesperordrup•4h ago•16 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
20•bikenaga•3d ago•8 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
17•speckx•3d ago•6 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•7 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
91•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
258•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
44•helloplanets•4d ago•42 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
60•gfortaine•12h ago•26 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1070•cdrnsf•1d ago•446 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
36•gmays•9h ago•12 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•70 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
288•surprisetalk•3d ago•43 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
150•SerCe•10h ago•142 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
186•limoce•3d ago•100 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•14h ago•14 comments

R-Zero: Self-Evolving Reasoning LLM from Zero Data

https://arxiv.org/abs/2508.05004
121•lawrenceyan•5mo ago

Comments

cyberge99•5mo ago
What could go wrong?
magicalhippo•5mo ago
Just don't hook it into the nuclear missile controls. We've seen[1] how that goes[2].

[1]: https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project

[2]: https://en.wikipedia.org/wiki/The_Terminator

koakuma-chan•5mo ago
[3] https://en.wikipedia.org/wiki/Re:Zero
jasonjmcghee•5mo ago
Conceptually, it's effectively a GAN
magicalhippo•5mo ago
For those not in the know, that's a Generative Adversarial Network[1], a setup in which two neural networks are trained in a competitive way.

One network typically generates tasks for the other, and is rewarded if it manages to make the other network fail the task. The other network is rewarded if it successfully completes the task.

Thus the adversarial network tries to find weaknesses to exploit, and the combined training makes the solving network much stronger. Or at least that's the idea.

[1]: https://en.wikipedia.org/wiki/Generative_adversarial_network
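
In code, that competitive loop looks roughly like this (a minimal sketch assuming PyTorch and toy 1-D data, using the classic generator/discriminator roles rather than R-Zero's Challenger/Solver; all names here are illustrative):

    import torch
    import torch.nn as nn

    # Toy setup: G tries to mimic samples from N(4, 1); D tries to tell real from fake.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) + 4.0   # samples from the "real" distribution
        fake = G(torch.randn(64, 8))      # generator's attempt to fool D

        # D is rewarded for labelling real data 1 and fakes 0...
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # ...while G is rewarded for making D call its fakes real.
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()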

torginus•5mo ago
GANs are a supervised training method, not really self-improving (after converging to being able to reproduce the training set).
frumiousirc•5mo ago
My initial thought as well. But what is the "Discriminator" here? What grounds the training toward reality? The "Challenger" and "Solver" adversarial loop alone can only serve to amplify hallucination.

Ahh, GPT-4o is the arbiter.

So, basically, this is a way to perform LLM model compression (GPT-4o to Qwen3) while maximizing the in-distribution domain size. As such, it seems reasonable and useful.

However, the reliance on an arbiter LLM makes the claim that it will overcome the problem of a lack of training data unreasonable. Once the target LLM is scaled up to reach the in-distribution domain size of the arbiter, it seems to me it will turn back into a hallucination amplifier.

djoldman•5mo ago
See Figure 2.

The solver/challenger is the GAN discriminator/generator.

The challenger is trained to create difficult questions. The solver is trained to strengthen pathways that correctly solve the questions like so:

> To guide the Challenger toward producing challenging yet solvable questions, we first define an uncertainty score. For a generated question x, we query the current Solver... The most frequent response is treated as the pseudo-label ỹ(x), and we compute the Solver’s empirical accuracy... The uncertainty reward is then defined... This function incentivizes questions where the Solver is maximally uncertain (accuracy approaches 50%)

Identifying the best pseudo-label seems like it would be the limitation of the approach.
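
To make the quoted mechanism concrete, here's a toy sketch in Python (assuming the reward takes the form 1 − 2|accuracy − 0.5|, which peaks at 50% accuracy as the quote describes; the function and variable names are illustrative, not the paper's code):

    from collections import Counter

    def uncertainty_reward(solver_answers):
        # Majority vote over sampled Solver answers gives the pseudo-label;
        # the reward is maximal where the Solver is right ~50% of the time.
        pseudo_label, votes = Counter(solver_answers).most_common(1)[0]
        accuracy = votes / len(solver_answers)
        return pseudo_label, 1.0 - 2.0 * abs(accuracy - 0.5)

    # Ten sampled Solver answers to one Challenger question:
    print(uncertainty_reward(["42"] * 5 + ["41"] * 3 + ["7"] * 2))  # ('42', 1.0): maximally uncertain
    print(uncertainty_reward(["42"] * 10))                          # ('42', 0.0): too easy, no reward

If the majority vote settles on a wrong pseudo-label, the Solver gets trained toward it anyway, which is why pseudo-label quality looks like the limiting factor.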

frumiousirc•4mo ago
> Identifying the best pseudo-label seems like it would be the limitation of the approach.

Yes, I think this says in a different way what I'm trying to express.

In a GAN, the Discriminator pegs the training to some chosen reality (assuming the "real" data set is truly real). In Challenger/Solver alone, there is no peg. The Solver could hallucinate consistently and "win" the race. It's the consistency that is the goal.

With GPT-4o as arbiter of the Challenger/Solver training, it provides the reality peg (or rather, a peg that biases toward GPT-4o's training set).

thom•5mo ago
For values of zero quite far above zero.
falcor84•5mo ago
What am I missing? From my skimming, there's zero external data beyond what is needed for the Challenger to generate questions.
thom•5mo ago
An existing trained LLM is an enormous amount of 'data', however it might be encoded. AlphaZero didn't start with Stockfish or a database of games.
magicalhippo•5mo ago
As I understand it, the point of the article isn't to train an LLM from scratch; it's to teach a non-reasoning model to reason without additional explicit training data.
YeGoblynQueenne•5mo ago
The abstract does use the term "from scratch":

>> To overcome this limitation, we introduce R-Zero, a fully autonomous framework that generates its own training data from scratch.

Giving them the benefit of the doubt, they're just using the term wrong, but the way they use it sure reads like a claim that they found a way to initialise LLMs with zero data. Only the absurdity of the claim protects the reader from such a misunderstanding, and that's never a good thing in a research paper.

magicalhippo•5mo ago
If you include the previous and following sentences, it's clear, at least to me, what they mean:

> However, existing methods for training such models still rely heavily on vast human-curated tasks and labels, typically via fine-tuning or reinforcement learning, which poses a fundamental bottleneck to advancing AI systems toward capabilities beyond human intelligence.

> To overcome this limitation, we introduce R-Zero, a fully autonomous framework that generates its own training data from scratch.

> Starting from a single base LLM, R-Zero initializes two independent models with distinct roles, a Challenger and a Solver.

Training an LLM is a multi-stage process[1], and they're tackling the stage at the end; that's where you do fine-tuning or reinforcement learning. They're not training an LLM from scratch: they're explicitly stating they start from a base LLM, i.e. a pretrained, non-tuned model.

As I understand it, and as they mention, training data for the latter stages has typically required high-quality human-curated samples in large numbers, even if they're augmented using LLMs, say by generating multiple variations of each human-curated training sample.

Their proposal is to have a generative adversarial network generate that data without any initial human input, i.e. from scratch.

[1]: https://snorkel.ai/blog/large-language-model-training-three-...

YeGoblynQueenne•4mo ago
That's a fair reading, but when you write a technical paper you must try to minimise the number of possible readings of each sentence; otherwise different people will understand different things, and that's exactly what you need to avoid.
magicalhippo•4mo ago
> but when you write a technical paper you must try to minimise the number of different possible readings of each sentence

Fair point. It would indeed have been much clearer had they written something like this instead:

a fully autonomous framework that generates its own fine-tuning/RL training data from scratch.

tucnak•5mo ago
AlphaZero is oftentimes dragged out to ridicule the so-called "self-play LLM training" techniques, although I don't think these arguments are terribly convincing. You can think of AlphaZero games as effectively synthetic data in an adversarial setting; yes, it's easy to produce and verify, as the rules of chess are verifiable, so on paper it doesn't require much data. This is not the case for most text, with some notable exceptions in verifiable domains, where self-play is, not coincidentally, applied most successfully. Thus, you could make an argument that the pre-existing "trained LLM" is merely functioning as a verifier proxy, analogous to the well-defined chess verifier in AlphaZero.
nerpderp82•4mo ago
Thank you for your mature intelligent answer.
nakamoto_damacy•5mo ago
Perpetual Motion Machines were a thing at some point, too.
YeGoblynQueenne•5mo ago
Don't laugh. PMMs work! I built mine ten years ago when I realised I could improve the SOTA by a huge 20%. I've been improving it for the last 10 years and I get an average performance boost of ~0.25% every year. We will have Free Energy in the next 10 years.
ojo-rojo•4mo ago
I find your comment interesting, even though I'm not sure if I really get what you're saying. You built a perpetual motion machine? You then made improvements? Can you share details?
suprfsat•4mo ago
Good news everyone, you've passed the Turing test.
amelius•4mo ago
Hmm, I guess I didn't pass it then.
pas•4mo ago
They are claiming that they built a PMM prototype which does not fully satisfy the business requirements yet, but they are on track to do so, based on all the amazing documented, validated, peer-reviewed, published progress they have already made over the years!
YeGoblynQueenne•4mo ago
That!
YeGoblynQueenne•4mo ago
This is HN so I think it's fine to break standard protocol and clarify: I was joking. Specifically I was riffing off nakamoto_damacy's comment and carrying the comparison (of LLMs) with Perpetual Motion Machines (PMMs) to its logical conclusion.
taneq•4mo ago
20%? 0.25%? Those are rookie numbers! /s

(I feel like this post is underappreciated by at least 20%. :D )

nickpsecurity•4mo ago
The trick is you use magnets, momentum, and WD-40. That can get you most of the way.

It probably will eventually stop, though. Something about the Sun becoming a red giant...

YeGoblynQueenne•4mo ago
Pf, magnets. That's so 1920's! Room-temperature superconductors are the thing nowadays. I'm sure we'll have those in just a few years.
Nevermark•4mo ago
Cool! Mine passively extracts unlimited energy from the expansion of space.

Not scientifically perpetual. But definitely, relative to the finite future lifetime of the human race, operable for perpetuity.

And by extracting dark energy, we can not only turn the Big Rip around; by pulling dark energy out of space in a linear direction ahead of a ship, we can also power the ship to high speed as we contract space in front of it. Like fusion, we can use extracted dark energy to extract more dark energy. Essentially smoothly teleporting forward. No more fundamental speed limits relative to observers at a distance. Looking forward to exploring beyond the observable universe.

It isn't "free" though. There are unique risks.

api•5mo ago
I refer to the endlessly self-improving runaway AI as an "information-theoretic perpetual motion machine."

This will work in a sense. It will do… something… and learn… something. It will be unrelated to the physical universe in any way. See also: procedural landscape generators, etc.

K0balt•5mo ago
Might kinda work if you gave it tools to do its research on the open internet, Fiverr, Mechanical Turk, etc.
agentultra•5mo ago
On its own without any alignment or labelling. Super-intelligence or super-Grok?
K0balt•4mo ago
The idea would be to use Mechanical Turk and Fiverr as touch points with reality. I'm not saying it's a good idea, just that in theory it might work.
api•5mo ago
That’s at least some contact with reality, at least by proxy. I’m referring to a brain in a vat somehow learning.
K0balt•4mo ago
Yeah, information about external reality cannot be generated from entropy alone.
nakamoto_damacy•4mo ago
Sure, it could, up until the point where, in order to figure out that it has to use a tool or access the Internet, it will need more intelligence (to know that its answer or understanding is insufficient or incorrect). How do we as humans know that? Someone tells us. Who's going to tell it? Then you end up at Minsky's Society of Mind, but also a distributed perpetual motion machine. Evolution seems to have figured out intuition as some sort of probabilistic mechanism that's been honed for potentially millions of years, if not billions (white blood cells track pathogens without having any neural network, so it's possible). I think I opened a can of worms with these thoughts.
hodgehog11•4mo ago
This makes sense on its face, but the flaw in the logic here is the implicit assumption that current procedures extract all information available in the datasets. We know this is not even remotely close to being true.

Many decades ago, statisticians made a similar erroneous assumption that maximum likelihood estimators, which also minimize entropy, are "optimal" in terms of saturating error. The fact that you can do better by smarter regularisation is the key to why DL works in the first place.

I'm no shill for AI, but you're going to need a better argument for why runaway AI up to obscene levels of performance is not theoretically possible. There are quite a few people, including some of my colleagues, that are looking in earnest but so far no one has found one.

clbrmbr•5mo ago
Terrible choice of name. DeepSeek developed a historically important model called “R1-Zero” (this was the predecessor to R1 that was trained without any cold-start SFT; it was very strong, but its chain of thought was difficult to read because it code-switches into Chinese and uses no line breaks).
neuroelectron•4mo ago
Now gamify it.
Iv•4mo ago
"Starting from a single base LLM"

Ok, zero data, except the data used in the teacher model.

nickpsecurity•4mo ago
Only 1-15TB of data processed at $10k-$100m depending on model size. Then, this shaves off a few hundred to a few grand on fine-tuning. I mean, we're still saving money at least.
Davidzheng•4mo ago
I think in a formal domain like Lean it should actually be possible to do it from zero, but it seems there have been no major successes so far.
freejazz•4mo ago
I still don't understand what a "reasoning" LLM is
cluckindan•4mo ago
It’s an LLM that has been trained and prompted to make users believe that the model is using logical reasoning to arrive at its output, when it is in fact still predicting the possible next output tokens, just like any other LLM.

There may be additional feedback loops, but fundamentally, that is what it is doing. Sure, it will show you what steps it takes to arrive at a conclusion, but it is just predicting the steps, the conclusion and the potential validity of the aforementioned based on its training data, not actually evaluating the logic or the truthiness of the output.

If you don’t believe me, ask your ”reasoning” LLM this question: What’s the name of the paternal great-great-grandfather of the son of Jacob’s son’s son’s son?

sindriava•4mo ago
I won't read this because you're not really thinking, just pressing keyboard keys.
cluckindan•4mo ago
Joke’s on you, I dictated it.
sindriava•4mo ago
Rich coming from the guy who moved his muscles until sounds came out.

Also, next time you should bother to at least copy-paste your question into any recent LLM, since they can all solve it without issue. But hallucinations like this are common with non-reasoning HN users.

cluckindan•4mo ago
But can they solve it without referring to the Bible, or without mentioning anyone in the biblical Jacob’s family tree?

Don’t think so. Humans solve that puzzle in a very different way than LLMs ”reason” about it.

nerpderp82•4mo ago
There can be more than one intelligence. Nature has shown us that there are many. And many which can "outsmart" a human.
astrange•4mo ago
GPT5 and DeepThink both solved it without doing that for me, yes.

(DeepThink did wonder if it was supposed to be him afterwards or if it was a trick.)

cluckindan•4mo ago
Yesterday, GPT5 was producing Bible answers. I guess the developers are lurking here. :-)

Adding a second question like ”Is Abraham included in the family tree?” still makes it regress into mentioning Isaac, Judah, Joseph, 12 sons and whatnot.

mvdwoord•4mo ago
Progress is hard to keep track of in this fast-paced environment, but aren't there already models that can add external tools and simply offload parts of the reasoning there? Maybe over MCP or some other mechanism, so it can offload e.g. calculations, or test code in a sandbox, or even write code to answer part of a question, execute the code somewhere, and take the results into the rest of the inference process as context?

Or is there a more subtle issue which prevents or makes this hard?

Is there something fundamentally impossible about having a model detect that counting the Rs in 'strawberry' is a string-search operation, and in some sandbox execute something like:

% echo "strawberry" | tr -dc "r" | wc -c

       3
It seems agents do this already, but regular GPT-style environments seem to lack it?
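
To sketch the dispatch idea in Python (purely illustrative; count_chars and TOOLS are hypothetical names, not any particular vendor's API):

    def count_chars(text: str, ch: str) -> int:
        return text.count(ch)

    # The model emits a structured tool call instead of guessing at the answer.
    TOOLS = {"count_chars": count_chars}

    # Pretend the model produced this for "how many r's are in 'strawberry'?":
    tool_call = {"name": "count_chars", "args": {"text": "strawberry", "ch": "r"}}
    result = TOOLS[tool_call["name"]](**tool_call["args"])
    print(result)  # 3 -- fed back into the context for the rest of the inference
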
yunohn•4mo ago
My observation of AI progress over the past two years is that LLM companies are focusing purely on raw model knowledge instead of optimised, usable tooling. I'm unsure when this will ever change, but that's why your example is not the industry standard yet.
mvdwoord•4mo ago
My intuition, which is of course woefully inadequate in this area, says there is a ton of accuracy to be gained, and that a lot could be offloaded, allowing pruning or better use of the remaining parameters...

Anyway, let me refresh my page, as I am sure some new model architecture is dropping while I type this. ;)

Varelion•4mo ago
Let's break this down carefully, step by step.

Start with Jacob.

Jacob’s son → call him A.

A’s son → call him B.

B’s son → call him C.

C’s son → call him D (this is “the son of Jacob’s son’s son’s son”).

Now the question asks for the paternal great-great-grandfather of D:

D’s father → C

D’s grandfather → B

D’s great-grandfather → A

D’s great-great-grandfather → Jacob

Answer: Jacob

freejazz•4mo ago
Thank you. I do not have a "reasoning" LLM and I have not found LLMs very useful in my life, so I do not really engage with them outside of reading about them here and in other places.
BrawnyBadger53•4mo ago
Or, to put it less pessimistically: the models are trained to prime their own context window such that by the end of the chain they arrive at more valuable responses. By creating intermediary steps in the chain, each next step is easier to generate than the desired response would be directly. We call it reasoning because it is intuitively analogous to human reasoning methods, though it is understood that LLMs don't succeed as generally as humans are able to.
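
A toy illustration of that priming loop in Python (canned_steps stands in for a real model's next-chunk predictions; everything here is illustrative):

    # Each intermediate step is appended to the context, making the
    # next step easier to predict than jumping straight to the answer.
    canned_steps = iter([
        "Jacob's son -> A; A's son -> B; B's son -> C; C's son -> D.",
        "D's great-great-grandfather: D -> C -> B -> A -> Jacob.",
        "Answer: Jacob.",
    ])

    def generate(context: str) -> str:
        return next(canned_steps)  # a real LLM would predict this from context

    context = "Q: Who is the paternal great-great-grandfather of the son of Jacob's son's son's son?"
    for _ in range(3):
        step = generate(context)
        context += "\n" + step  # prime the window with the intermediate step
    print(context)
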
frozenseven•4mo ago
This isn't an explanation. Just another "AI bad!" comment.
lawlessone•4mo ago
OK but how do you ensure it's improving in a direction that aligns with reality?