OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
632•klaussilveira•13h ago•187 comments

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
19•theblazehen•2d ago•2 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
930•xnx•18h ago•548 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
34•helloplanets•4d ago•26 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
110•matheusalmeida•1d ago•28 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
43•videotopia•4d ago•1 comment

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
10•kaonwarb•3d ago•10 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
213•dmpetrov•13h ago•103 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
323•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
372•ostacke•19h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
478•todsacerdoti•21h ago•234 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
275•eljojo•15h ago•164 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
404•lstoll•19h ago•273 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
16•jesperordrup•3h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•189 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
13•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
53•gfortaine•10h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
141•vmatsiiako•18h ago•64 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
281•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1060•cdrnsf•22h ago•435 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
133•SerCe•9h ago•118 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
177•limoce•3d ago•96 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•20h ago•23 comments

When models manipulate manifolds: The geometry of a counting task

https://transformer-circuits.pub/2025/linebreaks/index.html
98•vinhnx•3mo ago

Comments

Rygian•3mo ago
> The task we study is linebreaking in fixed-width text.

I wonder why they focused specifically on a task that is already solved algorithmically. The paper does not seem to address this, and the references do not include any mentions of non-LLM approaches to the line-breaking problem.
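
For context, "solved algorithmically" here can be as simple as greedy first-fit wrapping. A minimal sketch (an illustration of the classic baseline, not anything from the paper):

```python
def greedy_linebreak(words, width):
    """Break words into lines of at most `width` characters (greedy first-fit).

    A minimal sketch of the textbook solution; production typesetters
    (e.g. Knuth-Plass) optimize over whole paragraphs instead.
    """
    lines, current = [], ""
    for word in words:
        if not current:
            current = word                            # first word on the line
        elif len(current) + 1 + len(word) <= width:
            current += " " + word                     # word still fits
        else:
            lines.append(current)                     # start a new line
            current = word
    if current:
        lines.append(current)
    return lines

print(greedy_linebreak("the quick brown fox jumps over the lazy dog".split(), 15))
# -> ['the quick brown', 'fox jumps over', 'the lazy dog']
```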

omnicognate•3mo ago
There's also a lot of analogising of this to visual/spatial reasoning, even to the point of talking about "visual illusions", when it's clearly a counting task, as the title says.

It makes it tedious to figure out what they actually did (which sounds interesting) when it's couched in such terms and presented in such an LLMified style.

dist-epoch•3mo ago
It's not strictly a counting task: the LLM sees uniformly sized tokens, but each token corresponds to a variable number of characters (and that character count is not directly fed into the model).

It's like the difference between Unicode code points and UTF-8 bytes: you can't just count UTF-8 bytes to know how many code points you have.
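
That byte/code-point analogy is easy to demonstrate concretely (a toy sketch; the token strings below are hypothetical splits, not the model's actual tokenizer):

```python
# Counting code points vs. counting UTF-8 bytes: the raw units you
# see are not the units you need to count.
s = "naïve café"
print(len(s))                        # 10 code points
print(len(s.encode("utf-8")))        # 12 bytes: "ï" and "é" take 2 each

# Same problem for an LLM tracking line width: a toy token sequence
# covers a variable number of characters per token.
tokens = ["The", " qu", "ick", " bro", "wn"]
print(len(tokens))                   # 5 tokens
print(sum(len(t) for t in tokens))   # 15 characters -> the column position
```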

omnicognate•3mo ago
There's an aspect of figuring out what to count, but that doesn't make this task visual/spatial in any sense I can make out.

Legend2440•3mo ago
They study it because it already has a known solution.

The point is to see how LLMs implement algorithms internally, starting with this simple easily understood algorithm.

Rygian•3mo ago
That makes sense; however, it does not seem that they check the LLM outputs against the known solution. Maybe I missed that in the article.

catgary•3mo ago
I think this is an interesting direction, but I think that step 2 of this would be to formulate some conjectures about the geometry of other LLMs, or testable hypotheses about how information flows wrt character counting. Even checking some intermediate training weights of Haiku would be interesting, so they’d still be working off of the same architecture.

The biology metaphor they make is interesting, because I think a biologist would be the first to tell you that you need more than one datapoint.

lccerina•3mo ago
Utter disrespect for using the term "biology" in relation to LLMs. No one would call the analysis of a mechanical engine "car biology". It's an artificial system, so call it systems analysis.

lewtun•3mo ago
The analogy stems from the notion that neural nets are "grown" rather than "engineered". Chris Olah has an old but good post with some specific examples: https://colah.github.io/notes/bio-analogies/

UltraSane•3mo ago
It makes sense if you define "biology" as "incredibly complicated system not designed by humans that we kind of poke at to try to understand it."

addaon•3mo ago
Sure, but it makes no sense at all if you define biology as “the smell of a freshly opened can of tennis balls.” The original comment is probably better understood using a standard definition of the words it used, rather than either of our definitions.

lccerina•3mo ago
"Not designed by humans"? Since when? Unless you count cortical organoids/wetware (grown in some instrumented petri dish), every artificial neural network, no matter how complicated, is designed by humans, with equations and rules designed by humans. Backpropagation, optimization algorithms, genetic selection, etc.: all designed by humans.

There is no biology here, and there are so many other words that describe perfectly what they are doing here, without twisting the meaning of another word.

UltraSane•3mo ago
By "not designed" I'm talking about the synaptic weights.

lccerina•3mo ago
Still designed by humans. The loss function, backpropagation, and all the other mechanisms didn't just appear magically in the neural network. Someone decided which loss function to use, which architecture, and which optimization techniques. Just because it takes a big GPU a lot of number crunching to assign those weights doesn't mean it's biological.

In the same way, a weather forecast model using a lot of complicated differential equations is not biological. A finite element model analyzing some complicated electromagnetic field, or the aerodynamics of a car is not biological. Just because someone around 70-75 years ago called them 'perceptrons' or 'neurons' instead of thingamajigs does not make them biology.

UltraSane•3mo ago
"Still designed by humans." No, they are not. They are learned via backpropagation. This is the entire reason why neural networks work so well, and why we have no idea how they work when they get big.

lccerina•3mo ago
And who designed backpropagation? It is not a magical property of artificial neurons, some law of nature, or god's miracle. A bunch of mathematicians banged their heads on the problem of backpropagation, tossed it to a computer, and voilà, neural networks made sense. Neural networks work so well because someone chooses the right loss function for the right problem. Wrong loss function -> wrong results. It's not magic. Nor is it biology.
djoldman•3mo ago
A superior LLM for line length optimization:

https://www.youtube.com/watch?v=Y65FRxE7uMc