frontpage.

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
1•petethomas•1m ago•0 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•6m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•21m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•28m ago•1 comment

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•28m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•31m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•33m ago•1 comment

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•44m ago•1 comment

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•44m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•49m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•53m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•54m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•56m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•57m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•1h ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•1h ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comment

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
3•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h: 100% POSIX- and glibc-compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comment

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comment

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comment

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
2•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•2h ago•1 comment

Hierarchical Modeling (H-Nets)

https://cartesia.ai/blog/hierarchical-modeling
93•lukebechtel•6mo ago

Comments

lukebechtel•6mo ago
> H-Net demonstrates three important results on language modeling:

> 1. H-Nets scale better with data than state-of-the-art Transformers with BPE tokenization, while learning directly from raw bytes. This improved scaling is even more pronounced on domains without natural tokenization boundaries, like Chinese, code, and DNA.

> 2. H-Nets can be stacked together to learn from deeper hierarchies, which further improves performance.

> 3. H-Nets are significantly more robust to small perturbations in input data like casing, showing an avenue for creating models that are more robust and aligned with human reasoning.

lukebechtel•6mo ago
Paper: https://arxiv.org/pdf/2507.07955

modeless•6mo ago
I don't know if this is the one, but something like this is clearly the future, IMO. We need more levels of hierarchy to generalize efficiently to longer sequences with high-level structure. Back when Byte Latent Transformers came out, I thought extending the idea to more levels of hierarchy was the way to go, and this seems to be basically that?

Another article about H-Nets: https://main-horse.github.io/posts/hnet-inf/

blurbleblurble•6mo ago
Yes... This seems like a generalization of "large concept models" in a certain way.

cs702•6mo ago
I've only skimmed the paper, but it looks interesting and credible, so I've added it to my reading list.

Thank you for sharing on HN!

---

EDIT: The hierarchical composition and routing aspects of this work vaguely remind me of https://github.com/glassroom/heinsen_routing/ but it has been a while since I played with that. UPDATE: After spending a bit more time on the OP, it's different, but the ideas are related, like routing based on similarity.

lukebechtel•6mo ago
No problem! I'm still parsing it myself, but it seems promising in theory, and the result curves are impressive.

gdiamos•6mo ago
How does it handle images?

marviel•6mo ago
It mentions native multimodality somewhere in either the arXiv paper or the blog post -- seems like it might handle it well?

miven•6mo ago
As far as I understand, the "chunking" of input bytes is learned completely end to end, so it's basically up to the model to figure out how to most efficiently delineate and aggregate the information from the inputs, according to the patterns provided to it during training.

Since it's end to end, this allows them to apply the process not only to raw byte encodings but to representations at basically any level, such as stacking two stages of aggregation one after another.

So in principle they could either let the model do its thing on the raw bytes of an image, or alternatively cut it up into tiny patches ViT-style and feed that to their H-Net.

I wonder how hard it would be to adapt chunking to work in 2D, and what that would even look like.

Some other notes on how multimodal inputs could be handled using this architecture are mentioned in Albert Gu's (one of the authors) blog, although only briefly; there's still much to figure out, it would seem: https://goombalab.github.io/blog/2025/hnet-future/#alternati...
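
To make that concrete, here's a toy PyTorch sketch of what one learnable chunking stage could look like. To be clear, this is my own reconstruction, not the paper's code: the similarity-based boundary scorer and the mean-pooling are simplified stand-ins for the paper's routing and smoothing modules.

    import torch
    import torch.nn as nn

    class ChunkingStage(nn.Module):
        # One hierarchy level: score a boundary at each position from the
        # (dis)similarity of adjacent states, then mean-pool each
        # variable-size chunk down to a single fixed-size vector.
        def __init__(self, dim):
            super().__init__()
            self.q = nn.Linear(dim, dim, bias=False)
            self.k = nn.Linear(dim, dim, bias=False)

        def forward(self, x):  # x: (seq, dim), one unbatched sequence
            sim = torch.cosine_similarity(self.q(x[1:]), self.k(x[:-1]), dim=-1)
            p = torch.cat([torch.ones(1), 0.5 * (1.0 - sim)])  # boundary scores
            ids = torch.cumsum((p > 0.5).long(), dim=0) - 1    # chunk id per position
            n = int(ids[-1]) + 1
            sums = torch.zeros(n, x.shape[1]).index_add_(0, ids, x)
            counts = torch.zeros(n).index_add_(0, ids, torch.ones(len(x)))
            return sums / counts.unsqueeze(1), p  # (n_chunks, dim), (seq,)

Stacking is then just feeding one stage's chunk vectors into the next stage. The catch is the hard p > 0.5 decision: making that trainable rather than gradient-blocking is exactly what the paper's smoothing machinery is for, as I understand it.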

marviel•6mo ago
Thanks for sharing! This blog post is a great speculative deep-dive.

andyferris•6mo ago
You can make image networks (UNet-like things) by chunking rectangles in 2D (with some convolution steps)... I wonder if there is an image-specific architecture a bit like this that could possibly work well?

cubefox•6mo ago
Perhaps something like this: https://neurips.cc/virtual/2024/poster/94115. Though I haven't looked up what their actual tokenization strategy is, or whether switching to hierarchical (H-Net) chunks would be possible.

aeon_ai•6mo ago
Seems likely to be relevant for memory formation/consolidation/management.

Big, if so.

cubefox•6mo ago
As Mamba didn't make it, will H-Nets replace Transformers?

lukebechtel•6mo ago
It's meant to replace the BPE tokenizer piece, so it isn't a full language model by itself.

In fact, in Gu's blog post (linked in another comment), it's mentioned that they created a Mamba model that used this in place of the tokenizer.

yorwba•6mo ago
Their architecture uses a mix of Transformer and Mamba layers. The question isn't whether it will replace Transformers, but whether it'll become part of the toolkit or whether it'll get abandoned like many other promising approaches.
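
For intuition, the overall shape is roughly the following (my sketch, not theirs: I'm using GRUs as a cheap stand-in for the Mamba layers, the layer counts are invented, and ChunkingStage is the toy stage sketched in a comment above):

    import torch
    import torch.nn as nn

    class HNetShapeSketch(nn.Module):
        # Cheap sequence layers at byte rate wrapped around a
        # Transformer that only sees the much shorter chunk sequence.
        def __init__(self, dim=256):
            super().__init__()
            self.encoder = nn.GRU(dim, dim, batch_first=True)  # stand-in for Mamba
            self.chunker = ChunkingStage(dim)
            layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            self.main = nn.TransformerEncoder(layer, num_layers=2)
            self.decoder = nn.GRU(dim, dim, batch_first=True)  # stand-in for Mamba

        def forward(self, x):  # x: (seq, dim) byte embeddings
            h = self.encoder(x.unsqueeze(0))[0][0]           # byte rate
            chunks, p = self.chunker(h)                      # n_chunks << seq
            z = self.main(chunks.unsqueeze(0))[0]            # chunk rate
            ids = torch.cumsum((p > 0.5).long(), dim=0) - 1
            y = self.decoder((h + z[ids]).unsqueeze(0))[0]   # dechunk, byte rate
            return y[0]

The quadratic attention only ever runs on the short chunk sequence, which is where the efficiency argument comes from; the byte-rate layers stay linear-time.
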
vannevar•6mo ago
> The best AI architectures in use today treat all inputs equally.

Doesn't this architecture also treat all inputs equally? It seems like an encoder that preprocesses the input by inferring hierarchy. But don't all models essentially do that while training?

modeless•6mo ago
If I understand correctly, each level of the hierarchy divides its input into chunks of variable size but outputs a fixed-size vector for each chunk. The chunking is learned. The model can choose to compress data by making its input chunks bigger, depending on their content.
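
You can see that variable-in, fixed-out behavior directly with the toy ChunkingStage from the comment above (the chunk count here depends on the random init, since nothing is trained):

    torch.manual_seed(0)
    stage = ChunkingStage(dim=16)
    x = torch.randn(40, 16)  # 40 input positions
    chunks, p = stage(x)
    # chunks.shape is (n_chunks, 16) for whatever n_chunks the scorer
    # picked: variable-size chunks in, one fixed-size vector per chunk out.
    print(chunks.shape)

A trained model shifts those boundary scores based on content, so predictable stretches get folded into bigger chunks; as I understand it, the paper adds a ratio loss so the compression neither collapses to a single chunk nor degenerates to one chunk per byte.
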
blurbleblurble•6mo ago
Hand-wavy idea: I wonder if we couldn't take this to another level and have some kind of general graph representation along with hierarchical reductions of it.

I sort of disagree with the assertion that "language is fundamentally hierarchical," in that it supposes there is a single abstraction hierarchy that's universally preferable or correct. That's just not true. Choosing just one useful hierarchy doesn't hurt anybody and is definitely simpler, but why learn only one? Why not learn multiple, and also learn how to modulate between them?

notreallymetho•6mo ago
I haven’t read it fully yet, but it reminds me of some work I’ve done: https://github.com/jamestexas/papers/blob/main/bread/paper.m...

astrange•6mo ago
> 3. H-Nets are significantly more robust to small perturbations in input data like casing, showing an avenue for creating models that are more robust and aligned with human reasoning.

If it forms a hierarchy (a tree), it seems like it wouldn't be robust to rearranging the information in a prompt.

E.g., if your request has a long list or a table of data, all the different permutations of it will create different trees, even though they're actually the same thing.