
Intellect-2 Release: The First 32B Model Trained Through Globally Distributed RL

https://www.primeintellect.ai/blog/intellect-2-release
131•Philpax•9h ago

Comments

esafak•8h ago
How are they ensuring robustness against adversarial responses?
nsingh2•8h ago
From the article, seems like TOPLOC:

> based on top of novel components such as TOPLOC, which verifies rollouts from untrusted inference workers

https://github.com/PrimeIntellect-ai/toploc

xmasotto•4h ago
Can an expert explain how this protects against adversarial actors?

At a glance it looks like something akin to computing a checksum that's locality-sensitive, so it's robust to floating point errors, etc.

What's to stop someone from sending bad data + a matching bad checksum?

yorwba•2h ago
The validation procedure is described on page 8 of the TOPLOC paper: https://arxiv.org/abs/2501.16007

The checksum is validated by redoing the computation, but making use of the fact that you already have the entire response to enable greater parallelism than when generating it one token at a time.
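A minimal sketch of that idea in hypothetical Python (a toy deterministic "model", not the actual TOPLOC scheme): generation is forced to run token by token, but a verifier that already holds the full claimed response can check every position independently, which is what makes the re-check parallelizable.

```python
# Toy illustration of why verification can be more parallel than
# generation (hypothetical stand-in model, not the TOPLOC algorithm).

def next_token(prefix):
    # Stand-in for a deterministic model forward pass.
    return sum(prefix) % 7

def generate(prompt, n):
    # Generation is inherently sequential: each new token depends
    # on all previously generated tokens.
    seq = list(prompt)
    for _ in range(n):
        seq.append(next_token(seq))
    return seq

def verify(prompt, claimed):
    # Verification already has the full sequence, so every position
    # can be recomputed and checked independently (parallelizable).
    return all(
        claimed[i] == next_token(claimed[:i])
        for i in range(len(prompt), len(claimed))
    )

seq = generate([1, 2, 3], 5)
assert verify([1, 2, 3], seq)            # honest rollout passes
tampered = seq[:-1] + [(seq[-1] + 1) % 7]
assert not verify([1, 2, 3], tampered)   # tampered rollout fails
```

A bad checksum over bad data fails here because the verifier recomputes the model's outputs itself rather than trusting the worker's summary.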

ndgold•8h ago
Pretty badass
quantumwoke•8h ago
Wonder what the privacy story is like. Enterprises don't usually like broadcasting their private data across a freely accessible network.
bjt12345•7h ago
A strong use case here for quantum-safe encryption.
mountainriver•8h ago
Awesome work this team is doing. Globally distributed MoE could have real legs
refulgentis•8h ago
I guess I'm bearish?

It's not that they trained a new model, but they took an existing model and RL'd it a bit?

The scores are very close to QwQ-32B, and at the end:

"Overall, as QwQ-32B was already extensively trained with RL, it was difficult to obtain huge amounts of generalized improvement on benchmarks beyond our improvements on the training dataset. To see stronger improvements, it is likely that better base models such as the now available Qwen3, or higher quality datasets and RL environments are needed."

fabmilo•7h ago
The interesting delta here is that this proves that we can distribute the training and get a functioning model. The scaling factor is way bigger than datacenters
refulgentis•6h ago
The RL, not the training. No?
itchyjunk•4m ago
RL is still training. Just like pretraining is still training. SFT is also training. This is how I look at it: model weights are being updated in all cases.
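That point can be made concrete with a toy sketch (hypothetical scalar "model", nothing like a real LLM): supervised training and RL differ only in where the gradient signal comes from, and both end in the same weight-update step.

```python
# Toy illustration: SFT and RL both reduce to the same weight update;
# only the training signal differs. (Hypothetical scalar model.)

def update(w, grad, lr=0.1):
    # Shared gradient-descent step.
    return w - lr * grad

def sft_grad(w, x, target):
    # Supervised signal: gradient of squared error to a fixed target.
    pred = w * x
    return 2 * (pred - target) * x

def rl_grad(w, x, reward):
    # RL-style signal: gradient scaled by a reward instead of a target.
    return -reward * x

w = 0.0
w = update(w, sft_grad(w, x=1.0, target=1.0))  # supervised step
w = update(w, rl_grad(w, x=1.0, reward=1.0))   # RL step
# Both paths modified the same weights.
```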
comex•6h ago
But does that mean much when the training that produced the original model was not distributed?
christianqchung•7h ago
Third-party fine-tuned open-weight LLMs tend to be good at a handful of benchmarks, but only reach parity or worse on the others compared to the original model. There are some exceptions, like Nvidia's Nemotron series, but the differences are generally so small as to be imperceptible. DeepSeek released fine-tunes of several Qwen and Llama models alongside R1, and while they were better in some select domains (mostly math and coding), fine-tuning introduced problems that kept them from overtaking the original models in actual usage.
cess11•54m ago
Seems that's mostly a byproduct of working on the core business idea: GPU arbitrage.
jumploops•8h ago
Congrats to the team on the launch!

Personal story time: I met a couple of their engineers at an event a few months back. They mentioned they were building a distributed training system for LLMs.

I asked them how they were building it and they mentioned Python. I said something along the lines of “not to be the typical internet commenter guy, but why aren’t you using something like Rust for the distributed system parts?”

They mumbled something about Python as the base for all current LLMs, and then kinda just walked away…

From their article: > “Rust-based orchestrator and discovery service coordinate permissionless workers”

Glad to see that I wasn’t entirely off-base :)

Havoc•3h ago
Given the latencies at play, Python probably did make more sense, though.
throwanem•8h ago
There's a name and a logo. "Hubris" feels slightly beggared. https://en.m.wikipedia.org/wiki/The_Metamorphosis_of_Prime_I...
Extropy_•29m ago
This looks like a startup company. Why shouldn't it have a name and logo?
schneehertz•7h ago
I used to have a science-fiction idea: artificial intelligence aggregating computing power across the network to perform ultra-large-scale calculations, thereby achieving strong artificial intelligence. It's very interesting that reality is developing in this direction too.
abtinf•7h ago
Does this have anything to do with The Metamorphosis Of Prime Intellect, or did they just abuse the name and the cover art?
arthurcolle•7h ago
Prime Intellect is a grabby AI :)
danielhanchen•6h ago
I made some GGUFs at https://huggingface.co/unsloth/INTELLECT-2-GGUF

./llama.cpp/llama-cli -hf unsloth/INTELLECT-2-GGUF:Q4_K_XL -ngl 99

Also it's best to read https://docs.unsloth.ai/basics/tutorial-how-to-run-qwq-32b-e... on sampling issues for QwQ based models.

Or TLDR, use the below settings:

./llama.cpp/llama-cli -hf unsloth/INTELLECT-2-GGUF:Q4_K_XL -ngl 99 --temp 0.6 --repeat-penalty 1.1 --dry-multiplier 0.5 --min-p 0.00 --top-k 40 --top-p 0.95 --samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"

3abiton•6h ago
This is rather exciting! I see a future of co-op models made by communities of experts in a specific field, which would still allow them to be competitive with "AI monopolies". Maybe not all hope is lost!
iTokio•6h ago
It’s interesting that it does something useful (training an LLM) without trust and in a decentralized way.

Maybe this could be used as proof of work? To stop wasting computing resources in crypto currencies and get something useful as a byproduct.

proof_by_vibes•5h ago
There could be merit to this. Proofs are generally computationally hard, so it's possible that a currency could be created by quantifying verification.
littlestymaar•5h ago
> To stop wasting computing resources in crypto currencies and get something useful as a byproduct.

Bitcoin is the only major cryptocurrency that still uses proof of work today (the others either use “proof of stake” or are “Layer 2” chains), and due to its (relative lack of) governance structure, it's very unlikely to ever change.

fastball•4h ago
The emphasis is indeed on "without trust" – as far as I can tell this project is unable to verify whether the decentralized training nodes are contributing productively.

Without the ability to validate that training compute is heading in the globally desired direction, it is unlikely you could use it as the foundation of a (sound) cryptocurrency.

mentalgear•4h ago
The reward model could be used as a validation/reward for the client. Give the same nodes the same inferences to make, and the one with the highest reward (those could be short, or even partially calculated long-term) will also get the "currency" reward.
mentalgear•3h ago
That would indeed be a very promising way of FINALLY making cryptocurrency useful!
_ink_•3h ago
I read an argument that proof of work needs to be useless and wasteful: if it produced value in itself, it would make 51% attacks more economical and thus the currency less secure.
throwanem•1h ago
Sure. The whole point of "proof of work" is to show (prove) you've lost energy to heat (work). That's what makes it costly and thus an honest signal.

The model breaks where work can be counterfeited (usually impossible) or where energy prices go to zero, which is why "bitcoin colonialism" was briefly a thing last decade. Much of bitcoin's design, this aspect also, is intended to protect against the bare-fanged, red-eyed money weasels it was also designed to attract.
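The cost/verify asymmetry described above can be sketched minimally (hypothetical code, Bitcoin-like only in spirit): finding a nonce is a brute-force search, while checking one claimed nonce is a single hash.

```python
# Minimal proof-of-work sketch: find a nonce whose SHA-256 hash falls
# below a difficulty target. Searching is expensive; verifying is cheap.
import hashlib

def pow_search(data: bytes, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce  # proof that ~2**difficulty_bits hashes were tried
        nonce += 1

def pow_verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    # One hash suffices to check a claimed proof.
    h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < 2 ** (256 - difficulty_bits)

nonce = pow_search(b"block", 12)        # expensive to find
assert pow_verify(b"block", nonce, 12)  # cheap to check
```

The work is "useless" by construction: the nonce certifies expended energy and nothing else, which is exactly the property the comments above are debating.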

ucha•1h ago
It needs to not have economic value but it doesn't necessarily need to be useless and wasteful.
Geee•2h ago
No, this process doesn't produce "proof of work", i.e. verifiable proofs that energy has been used.
k__•52m ago
Arweave and Filecoin use PoW algorithms that prove something useful.
Thomashuet•3h ago
Summary: We've used the most complexest, buzzwordiest training infrastructure to increase the performance of our base model by a whopping 0.5% (±1%).
Weryj•52m ago
But this isn’t about the performance, the infrastructure is the product here.

I ruined my vacation by reverse engineering WSC

https://blog.es3n1n.eu/posts/how-i-ruined-my-vacation/
197•todsacerdoti•7h ago•81 comments

Plain Vanilla Web

https://plainvanillaweb.com/index.html
1093•andrewrn•18h ago•514 comments

Continuous Thought Machines

https://pub.sakana.ai/ctm/
183•hardmaru•8h ago•15 comments

Armbian Updates: OMV support, boot improvements, Rockchip optimizations

https://www.armbian.com/newsflash/armbian-updates-nas-support-lands-boot-systems-improve-and-rockchip-optimizations-arrive/
19•transpute•3h ago•0 comments

Intellect-2 Release: The First 32B Model Trained Through Globally Distributed RL

https://www.primeintellect.ai/blog/intellect-2-release
131•Philpax•9h ago•37 comments

Making PyPI's test suite 81% faster – The Trail of Bits Blog

https://blog.trailofbits.com/2025/05/01/making-pypis-test-suite-81-faster/
67•rbanffy•3d ago•18 comments

Dart added support for cross-compilation

https://dart.dev/tools/dart-compile#cross-compilation-exe
26•Alifatisk•3d ago•21 comments

Why Bell Labs Worked

https://1517.substack.com/p/why-bell-labs-worked
225•areoform•14h ago•165 comments

Car companies are in a billion-dollar software war

https://insideevs.com/features/759153/car-companies-software-companies/
354•rntn•17h ago•604 comments

Absolute Zero Reasoner

https://andrewzh112.github.io/absolute-zero-reasoner/
82•jonbaer•4d ago•16 comments

High-school shop students attract skilled-trades job offers

https://www.wsj.com/lifestyle/careers/skilled-trades-high-school-recruitment-fd9f8257
194•lxm•19h ago•310 comments

Show HN: Vom Decision Platform (Cursor for Decision Analyst)

https://www.vomdecision.com
5•davidreisbr•3d ago•3 comments

Ask HN: Cursor or Windsurf?

151•skarat•6h ago•200 comments

The Academic Pipeline Stall: Why Industry Must Stand for Academia

https://www.sigarch.org/the-academic-pipeline-stall-why-industry-must-stand-for-academia/
103•MaysonL•8h ago•79 comments

Scraperr – A Self Hosted Webscraper

https://github.com/jaypyles/Scraperr
193•jpyles•16h ago•68 comments

Writing an LLM from scratch, part 13 – attention heads are dumb

https://www.gilesthomas.com/2025/05/llm-from-scratch-13-taking-stock-part-1-attention-heads-are-dumb
284•gpjt•3d ago•57 comments

Title of work deciphered in sealed Herculaneum scroll via digital unwrapping

https://www.finebooksmagazine.com/fine-books-news/title-work-deciphered-sealed-herculaneum-scroll-digital-unwrapping
214•namanyayg•21h ago•96 comments

A formatter for your kdl files

https://github.com/hougesen/kdlfmt
3•riegerj•3d ago•1 comment

Why alien languages could be far stranger than we imagine

https://aeon.co/essays/why-alien-languages-could-be-far-stranger-than-we-imagine
8•rbanffy•1h ago•6 comments

One-Click RCE in Asus's Preinstalled Driver Software

https://mrbruh.com/asusdriverhub/
469•MrBruh•1d ago•224 comments

LSP client in Clojure in 200 lines of code

https://vlaaad.github.io/lsp-client-in-200-lines-of-code
147•vlaaad•17h ago•18 comments

ToyDB rewritten: a distributed SQL database in Rust, for education

https://github.com/erikgrinaker/toydb
97•erikgrinaker•15h ago•12 comments

How friction is being redistributed in today's economy

https://kyla.substack.com/p/the-most-valuable-commodity-in-the
213•walterbell•3d ago•97 comments

White House fires head of Copyright Office amid Library of Congress shakeup

https://www.washingtonpost.com/politics/2025/05/11/white-house-copyright-office-director-fired/
51•handfuloflight•3h ago•36 comments

Burrito Now, Pay Later

https://enterprisevalue.substack.com/p/burrito-now-pay-later
137•gwintrob•15h ago•231 comments

Show HN: Codigo – The Programming Language Repository

https://codigolangs.com
42•adamjhf•2d ago•13 comments

A simple 16x16 dot animation from simple math rules

https://tixy.land
457•andrewrn•2d ago•91 comments

Lazarus Release 4.0

https://forum.lazarus.freepascal.org/index.php?topic=71050.0
240•proxysna•5d ago•136 comments

Avoiding AI is hard – but our freedom to opt out must be protected

https://theconversation.com/avoiding-ai-is-hard-but-our-freedom-to-opt-out-must-be-protected-255873
175•gnabgib•11h ago•103 comments

3D printing in vivo for non-surgical implants and drug delivery

https://www.science.org/doi/10.1126/science.adt0293
22•Phreaker00•1d ago•5 comments