frontpage.

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•1m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•2m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•2m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•3m ago•0 comments

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•4m ago•0 comments

OneCourt helps blind and low-vision fans track the Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•6m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•6m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•7m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•8m ago•1 comment

NASA delays moon rocket launch by a month after fuel leaks during test

https://www.theguardian.com/science/2026/feb/03/nasa-delays-moon-rocket-launch-month-fuel-leaks-a...
1•mooreds•8m ago•0 comments

Sebastian Galiani on the Marginal Revolution

https://marginalrevolution.com/marginalrevolution/2026/02/sebastian-galiani-on-the-marginal-revol...
1•paulpauper•12m ago•0 comments

Ask HN: Are we at the point where software can improve itself?

1•ManuelKiessling•12m ago•0 comments

Binance Gives Trump Family's Crypto Firm a Leg Up

https://www.nytimes.com/2026/02/07/business/binance-trump-crypto.html
1•paulpauper•12m ago•0 comments

Reverse engineering Chinese 'shit-program' for absolute glory: R/ClaudeCode

https://old.reddit.com/r/ClaudeCode/comments/1qy5l0n/reverse_engineering_chinese_shitprogram_for/
1•edward•12m ago•0 comments

Indian Culture

https://indianculture.gov.in/
1•saikatsg•15m ago•0 comments

Show HN: Maravel-Framework 10.61 prevents circular dependency

https://marius-ciclistu.medium.com/maravel-framework-10-61-0-prevents-circular-dependency-cdb5d25...
1•marius-ciclistu•15m ago•0 comments

The age of a treacherous, falling dollar

https://www.economist.com/leaders/2026/02/05/the-age-of-a-treacherous-falling-dollar
2•stopbulying•15m ago•0 comments

Ask HN: AI Generated Diagrams

1•voidhorse•18m ago•0 comments

Microsoft Account bugs locked me out of Notepad – are Thin Clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
4•josephcsible•18m ago•0 comments

Show HN: A delightful Mac app to vibe code beautiful iOS apps

https://milq.ai/hacker-news
5•jdjuwadi•21m ago•1 comment

Show HN: Gemini Station – A local Chrome extension to organize AI chats

https://github.com/rajeshkumarblr/gemini_station
1•rajeshkumar_dev•21m ago•0 comments

Welfare states build financial markets through social policy design

https://theloop.ecpr.eu/its-not-finance-its-your-pensions/
2•kome•25m ago•0 comments

Market orientation and national homicide rates

https://onlinelibrary.wiley.com/doi/10.1111/1745-9125.70023
4•PaulHoule•25m ago•0 comments

California urges people to avoid wild mushrooms after 4 deaths, 3 liver transplants

https://www.cbsnews.com/news/california-death-cap-mushrooms-poisonings-liver-transplants/
1•rolph•26m ago•0 comments

Matthew Shulman, co-creator of IntelliSense, died March 22, 2019

https://www.capenews.net/falmouth/obituaries/matthew-a-shulman/article_33af6330-4f52-5f69-a9ff-58...
3•canucker2016•27m ago•1 comment

Show HN: SuperLocalMemory – AI memory that stays on your machine, forever free

https://github.com/varun369/SuperLocalMemoryV2
1•varunpratap369•28m ago•0 comments

Show HN: Pyrig – One command to set up a production-ready Python project

https://github.com/Winipedia/pyrig
1•Winipedia•30m ago•0 comments

Fast Response or Silence: Conversation Persistence in an AI-Agent Social Network [pdf]

https://github.com/AysajanE/moltbook-persistence/blob/main/paper/main.pdf
1•EagleEdge•30m ago•0 comments

C and C++ dependencies: don't dream it, be it

https://nibblestew.blogspot.com/2026/02/c-and-c-dependencies-dont-dream-it-be-it.html
1•ingve•31m ago•0 comments

Show HN: Vbuckets – Infinite virtual S3 buckets

https://github.com/danthegoodman1/vbuckets
1•dangoodmanUT•31m ago•0 comments

A 27M-param model that solves hard Sudoku/mazes where LLMs fail, without CoT

https://github.com/sapientinc/HRM
10•mingli_yuan•6mo ago

Comments

mingli_yuan•6mo ago
Hi HN,

We've seen LLMs struggle with complex, multi-step reasoning tasks. The common approach, Chain-of-Thought (CoT), often requires massive datasets, is brittle, and suffers from high latency.

To tackle this, we developed the Hierarchical Reasoning Model (HRM), a novel recurrent architecture inspired by how the human brain processes information across different timescales (as seen in the diagram on the left).

It's a small model that packs a huge punch. Here are the key highlights:

Extremely Lightweight: Only 27 million parameters.

Data Efficient: Trained with just 1000 samples for the complex tasks shown.

No Pre-training Needed: It works from scratch without needing massive pre-training or any CoT supervision data.

Single Forward Pass: It solves the entire reasoning task in one go, making it incredibly fast and efficient.

How It Works

HRM consists of two interconnected recurrent modules that mimic brain-wave coupling:

High-level Module: Operates slowly, like the brain's Theta waves (θ, 4-8Hz), to handle abstract planning and goal setting.

Low-level Module: Operates quickly, like Gamma waves (γ, ~40Hz), to execute the fine-grained computational steps.

These two modules work together, allowing the model to achieve significant computational depth while remaining stable and efficient to train.
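
To make the two-timescale loop concrete, here is a minimal PyTorch sketch of the idea. This is not the actual HRM code: the class name, module sizes, step counts, pooled encoding, and single-vector readout are all illustrative assumptions; see the linked repo and paper for the real architecture.

    # Minimal sketch of the two-timescale recurrent idea (illustrative only,
    # not the actual HRM implementation; see the linked repo for the real code).
    import torch
    import torch.nn as nn

    class TwoTimescaleReasoner(nn.Module):
        def __init__(self, vocab_size, hidden=256, low_steps=8, high_cycles=4):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.low = nn.GRUCell(hidden * 2, hidden)   # fast, "gamma-like" module
            self.high = nn.GRUCell(hidden, hidden)      # slow, "theta-like" module
            self.readout = nn.Linear(hidden, vocab_size)
            self.low_steps, self.high_cycles = low_steps, high_cycles

        def forward(self, tokens):                      # tokens: (batch, seq)
            x = self.embed(tokens).mean(dim=1)          # crude pooled input encoding
            z_low = torch.zeros_like(x)
            z_high = torch.zeros_like(x)
            # Single forward pass: the low-level state takes several fast steps
            # inside each cycle, then the high-level state updates once.
            for _ in range(self.high_cycles):
                for _ in range(self.low_steps):
                    z_low = self.low(torch.cat([x, z_high], dim=-1), z_low)
                z_high = self.high(z_low, z_high)
            return self.readout(z_high)

    model = TwoTimescaleReasoner(vocab_size=10)
    logits = model(torch.randint(0, 10, (2, 81)))       # e.g. two flattened 9x9 Sudoku grids
    print(logits.shape)                                 # torch.Size([2, 10])

The point of the sketch is just the nested loop: many cheap low-level updates per slow high-level update, all inside a single forward pass.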

Astonishing Performance

The results speak for themselves (see charts on the right). On tasks requiring complex, precise reasoning, HRM dramatically outperforms much larger models:

Extreme Sudoku (9x9): HRM achieves 55.0% accuracy. Other approaches, including direct prediction and larger LLMs like Claude 3.7 8K, score 0.0%.

Hard Maze (30x30): HRM finds the optimal path 74.5% of the time. Again, others score 0.0%.

ARC-AGI Benchmark: On the Abstraction and Reasoning Corpus (ARC), a key test for AGI capabilities, HRM significantly outperforms larger models with much longer context windows.

We believe HRM represents a transformative step towards more general and efficient reasoning systems. It shows that a carefully designed architecture can sometimes beat brute-force scale.

We'd love to hear your thoughts on this approach! What other applications could you see for a model like this?

Paper: https://arxiv.org/abs/2506.21734
Code: https://github.com/sapientinc/HRM