frontpage.

WireTap: Breaking Server SGX via DRAM Bus Interposition

https://wiretap.fail
1•CharlesW•1m ago•0 comments

A tiny recursive reasoning model achieves 45% on ARC-AGI-1 and 8% on ARC-AGI-2

http://alexiajm.github.io/2025/09/29/tiny_recursive_models.html
1•stared•1m ago•0 comments

Cows Wear High-Tech Collars Now

https://www.nytimes.com/2025/10/05/technology/cows-ai-collars.html
1•reaperducer•1m ago•0 comments

LocalPDF – Privacy-first PDF tools that work in browser

https://localpdf.online
1•ulinycoin•4m ago•1 comments

YC Founders and Ruby Friends at SF Ruby Conf

https://sfruby.substack.com/p/meet-yc-founders-and-ruby-friends
3•nonconstant•6m ago•1 comments

NextGen Acela rides on old-gen infrastructure

https://www.fastcompany.com/91413859/acela-amtrak-train-new-york
2•ohjeez•11m ago•0 comments

Daily routines of well-known people

https://routines.club
1•andrewstetsenko•14m ago•0 comments

Ask HN: How many of you now use AI over doctors

1•wonderwonder•15m ago•1 comments

[Open Source] Echo Mode – a middleware to stabilize LLM tone and persona drift

https://github.com/Seanhong0818/Echo-Mode
1•teamechomode•18m ago•1 comments

Tesla's 'affordable' EVs are just stripped down versions of the Model 3 and Y

https://www.theverge.com/transportation/793302/tesla-model-y-moidel-3-standard-affordable-price-s...
2•ceejayoz•19m ago•0 comments

Show HN: DictaFlow – Privacy-first voice dictation for Windows (hold-to-talk)

https://dictaflow.vercel.app/
1•ryanshrott•20m ago•0 comments

Hacktoberfest shouldn't be AI-generated PRs

https://iparaskev.com/blog/hacktoberfest_shouldnt_be_ai
2•iparaskev•21m ago•0 comments

Qualcomm's buying Arduino – what it means for makers

https://www.jeffgeerling.com/blog/2025/qualcomms-buying-arduino-%E2%80%93-what-it-means-makers
1•todsacerdoti•23m ago•1 comments

Gemini 2.5 Computer Use model

https://blog.google/technology/google-deepmind/gemini-computer-use-model/
29•mfiguiere•24m ago•2 comments

Electrodeposition of Metallic Magnesium in Ionic Liquids: A Systematic Review

https://www.mdpi.com/2075-163X/15/10/1021
1•PaulHoule•26m ago•0 comments

Cadence Workflow Joins the Cloud Native Computing Foundation

https://www.uber.com/blog/cadence-workflow-joins-the-cloud-native-computing-foundation/
1•enz•27m ago•0 comments

Banning controversy reveals Bluesky's decentralized aspiration isn't reality

https://plus.flux.community/p/banning-controversy-reveals-blueskys
5•gregsadetsky•29m ago•0 comments

Today is the Feast of Our Lady of the Rosary – an app if you want to pray it

https://www.prayholyrosary.com
2•javierbuilds•29m ago•0 comments

Evolving AltStore PAL – alternative iOS app store connects with the Fediverse

https://rileytestut.com/blog/2025/10/07/evolving-altstore-pal/
1•gloxkiqcza•31m ago•1 comments

Denmark leads EU push to copyright faces in fight against deepfakes

https://www.techpolicy.press/denmark-leads-eu-push-to-copyright-faces-in-fight-against-deepfakes/
1•anigbrowl•31m ago•0 comments

Words of Type Encyclopedia

https://wiki.wordsoftype.com/
1•esadek•34m ago•0 comments

Sora 2 Stole the Show at OpenAI DevDay

https://www.aiengineering.report/p/sora-2-stole-the-show-at-openai-devday
1•waprin•34m ago•0 comments

Drinking Through the Generations

https://news.flinders.edu.au/blog/2025/10/07/drinking-through-the-generations/
1•01-_-•35m ago•0 comments

Designing a SIMD Algorithm from Scratch

https://mcyoung.xyz/2023/11/27/simd-base64/
1•solfleur•37m ago•0 comments

Python Violates PEP 8 – Invent with Python

https://inventwithpython.com/blog/sweigarts-law-of-pep-8-complaints.html
1•rbanffy•37m ago•1 comments

From Caller to Suspect: Behaviors That Trigger Suspicion in 911 Calls

https://osf.io/preprints/psyarxiv/9kts5_v1
5•gnabgib•37m ago•0 comments

Tesla unveils cheaper versions of its Model 3 and Model Y

https://www.cnn.com/2025/10/07/cars/tesla-model-y-3-cheaper-evs
3•supportengineer•38m ago•2 comments

ThalamusDB: Query text, tables, images, and audio

https://github.com/itrummer/thalamusdb
1•itrummer•38m ago•0 comments

Federal shutdown deals blow to hobbled cybersecurity agency

https://theconversation.com/federal-shutdown-deals-blow-to-already-hobbled-cybersecurity-agency-2...
2•rntn•39m ago•0 comments

Fear, not hope, permeates today's technology hype

https://paulkrugman.substack.com/p/why-arent-we-partying-like-its-1999
1•treadump•44m ago•0 comments

Less Is More: Recursive Reasoning with Tiny Networks

https://arxiv.org/abs/2510.04871
69•guybedo•2h ago

Comments

guybedo•2h ago
Abstract:

Hierarchical Reasoning Model (HRM) is a novel approach using two small neural networks recursing at different frequencies.

This biologically inspired method beats Large Language Models (LLMs) on hard puzzle tasks such as Sudoku, Maze, and ARC-AGI while trained with small models (27M parameters) on small data (around 1000 examples). HRM holds great promise for solving hard problems with small networks, but it is not yet well understood and may be suboptimal.

We propose Tiny Recursive Model (TRM), a much simpler recursive reasoning approach that achieves significantly higher generalization than HRM, while using a single tiny network with only 2 layers.

With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters.
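
As a rough sketch of the idea the abstract describes (a single tiny network applied recursively to refine a latent state and an answer), the following PyTorch snippet shows the general pattern. The names, shapes, shared-core design, and step counts here are illustrative assumptions, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class TinyRecursiveSketch(nn.Module):
        """One small 2-layer core reused at every recursion step (illustrative only)."""
        def __init__(self, d_model=128, n_heads=4):
            super().__init__()
            layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=4 * d_model,
                                               batch_first=True)
            self.core = nn.TransformerEncoder(layer, num_layers=2)

        def forward(self, x, y, z, outer_steps=3, latent_steps=6):
            # x: embedded question tokens, y: current answer tokens, z: latent "scratchpad"
            for _ in range(outer_steps):
                for _ in range(latent_steps):
                    # refine the latent state from question + current answer + latent
                    z = self.core(torch.cat([x, y, z], dim=1))[:, -z.size(1):]
                # refine the answer from the updated latent
                y = self.core(torch.cat([y, z], dim=1))[:, :y.size(1)]
            return y

    model = TinyRecursiveSketch()
    x, y, z = torch.randn(1, 16, 128), torch.randn(1, 16, 128), torch.randn(1, 4, 128)
    refined_answer = model(x, y, z)  # same shape as y

The point of the construction is that depth comes from recursion rather than parameter count, which is how a 7M-parameter model can spend a lot of compute per puzzle.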

SeanAnderson•55m ago
"With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters."

Well, that's pretty compelling when taken in isolation. I wonder what the catch is?

esafak•53m ago
It won't be any good at factual questions, for a start; it will be reliant on an external memory. Everything would have to be reasoned from first principles, without knowledge.

My gut feeling is that this will limit its capability, because creativity and intelligence involve connecting disparate things, and to do that you need to know them first. Though philosophers have tried, you can't unravel the mysteries of the universe through reasoning alone. You need observations, facts.

What I could see it being good for is a dedicated reasoning module.

Grosvenor•49m ago
That's been my expectation from the start.

We'll need a memory system and an executive function/reasoning system, as well as some sort of sense integration - auditory, visual, text in the case of LLMs, and probably symbolic.

A good avenue of research would be to see if you could glue OpenCyc to this for external "knowledge".

LLMs are fundamentally a dead end.

GitHub link: https://github.com/SamsungSAILMontreal/TinyRecursiveModels

briandw•56m ago
" With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI- 1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters"

That is very impressive.

Side note: Superficially reminds me of Hierarchical Temporal Memory from Jeff Hawkins' "On Intelligence". Although this doesn't have the sparsity aspect, its hierarchical and temporal aspects are related.

https://en.wikipedia.org/wiki/Hierarchical_temporal_memory
https://www.numenta.com

java-man•5m ago
I suspect the lack of sparsity is an Achilles' heel of the current LLM approach.

infogulch•38m ago
So what happens when we figure out how to 10x both scale and throughput on existing hardware by using it more efficiently? Will gigantic models still be useful?

peterlk•25m ago
Of course! We still have computers the size of the old vacuum-tube mainframes; they are just built with vastly more powerful hardware and used for the specialized tasks that supercomputing facilities care about.

But it has the potential to alter the economics of AI quite dramatically.

Balinares•36m ago
Wow, so not only are the findings from https://arxiv.org/abs/2506.21734 (posted on HN a while back) confirmed, they're generalizable? Intriguing. I wonder if this will pan out in practical use cases; it'd be transformative.

Also would possibly instantly void the value of trillions of pending AI datacenter capex, which would be funny. (Though possibly not for very long.)

matthewfcarlson•21m ago
It would be fitting if the AI bubble were popped by AI getting too good and too efficient.

ACCount37•16m ago
Any mention of "HRM" is incomplete without this analysis:

https://arcprize.org/blog/hrm-analysis

This looks like a stripped-down version of HRM - possibly drawing on the ablation studies from that very analysis.

Worth noting that HRMs aren't generally applicable in the same way normal transformer LLMs are. Or, at least, no one has found a way to apply them to the typical generative AI tasks yet.

I'm still reading the paper, but I expect this version to be similar - it uses the same tasks as HRM for its examples. Possibly quite good at spatial reasoning tasks (ARC-AGI and ARC-AGI-2 are both spatial reasoning benchmarks), but it would have to be integrated into a larger, more generally capable architecture to go past that.

shawntan•13m ago
That analysis worded its evaluation of HRM and its contributions very non-abrasively. The comparison with a 'vanilla' Transformer in the same settings is telling.

shawntan•21m ago
I think everyone should read the post from ARC-AGI organisers about HRM carefully: https://arcprize.org/blog/hrm-analysis

With the same data augmentation / 'test-time training' setting, a vanilla Transformer does pretty well, close to the 'breakthrough' results HRM reported. From a brief skim, this paper is using similar settings for its ARC-AGI comparison.

I, too, want to believe in smaller models with excellent reasoning performance. But first understand what ARC-AGI tests for, what the general setting is -- the one commercial LLMs use to compare against each other -- and what the specialised setting is that HRM and this paper use for evaluation.

The naming of that benchmark lends itself to hype, as we've seen in both HRM and this paper.
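
To make the specialised setting concrete: these pipelines lean heavily on per-task grid augmentation. A minimal sketch of what that typically looks like for ARC-style grids is below (rotations, reflections, colour permutations); whether these exact transforms match the HRM/TRM pipelines is an assumption on my part:

    import numpy as np

    def augment_grid(grid: np.ndarray, rng: np.random.Generator) -> np.ndarray:
        """Return one augmented copy of an ARC grid (values 0-9 are colours)."""
        g = np.rot90(grid, k=int(rng.integers(4)))  # random rotation
        if rng.random() < 0.5:
            g = np.fliplr(g)                        # random horizontal reflection
        perm = rng.permutation(10)                  # random remapping of the 10 colours
        return perm[g]

    rng = np.random.default_rng(0)
    example = np.array([[0, 1], [2, 3]])
    print(augment_grid(example, rng))

Generating many such variants per task (and, in some setups, voting over them at test time) is a very different regime from prompting a general-purpose LLM on the raw task.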

ACCount37•8m ago
Not exactly "vanilla Transformer", but rather "a Transformer-like architecture with recurrence".

Which is still a fun idea to play around with - this approach clearly has its strengths. But it doesn't appear to be an actual "better Transformer". I don't think it deserves nearly as much hype as it gets.

shawntan•4m ago
Right. There should really be a vanilla Transformer baseline.

As for recurrence: the idea has been around for a while (Universal Transformers): https://arxiv.org/abs/1807.03819

There are reasons it hasn't really been picked up at scale; the method tends to do well mainly on synthetic tasks.
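
For concreteness, the "Transformer-like architecture with recurrence" idea discussed above (and in the Universal Transformer link) mostly comes down to reusing one block for several steps instead of stacking distinct layers. A minimal sketch, with made-up sizes and step count:

    import torch
    import torch.nn as nn

    class WeightTiedEncoder(nn.Module):
        """Apply the same Transformer block `steps` times (weight-tied depth)."""
        def __init__(self, d_model=128, n_heads=4, steps=8):
            super().__init__()
            self.block = nn.TransformerEncoderLayer(d_model, n_heads,
                                                    dim_feedforward=4 * d_model,
                                                    batch_first=True)
            self.steps = steps

        def forward(self, h):
            # parameter count is that of a single layer; compute scales with `steps`
            for _ in range(self.steps):
                h = self.block(h)
            return h

    h = torch.randn(2, 32, 128)   # (batch, tokens, d_model)
    out = WeightTiedEncoder()(h)  # same shape, after 8 tied passes

The recursion here is over the token states themselves, Universal Transformer style, whereas HRM/TRM recurse over a separate latent state; the shared idea is getting depth from iteration rather than from more parameters.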