frontpage.

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•1m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•2m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•4m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•5m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•8m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•19m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•25m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
1•cwwc•29m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•38m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
2•eeko_systems•45m ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•48m ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•48m ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•49m ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•50m ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•50m ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
2•vunderba•50m ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•56m ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
4•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
2•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•1h ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•1h ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•1h ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•1h ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•1h ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
2•mkyang•1h ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•1h ago•1 comments

When models manipulate manifolds: The geometry of a counting task

https://transformer-circuits.pub/2025/linebreaks/index.html
98•vinhnx•3mo ago

Comments

Rygian•3mo ago
> The task we study is linebreaking in fixed-width text.

I wonder why they focused specifically on a task that is already solved algorithmically. The paper does not seem to address this, and the references do not include any mentions of non-LLM approaches to the line-breaking problem.
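As the comment notes, fixed-width line breaking has long-standing algorithmic solutions (the classic greedy first-fit, and Knuth-Plass for optimal breaks). A minimal greedy sketch for reference (the function name and structure here are my own, not anything from the paper):

```python
def break_lines(words, width):
    """Greedy first-fit line breaking: append each word to the current
    line if it fits (counting a separating space), else start a new line."""
    lines, current = [], ""
    for word in words:
        if not current:
            current = word
        elif len(current) + 1 + len(word) <= width:
            current += " " + word
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print(break_lines("the quick brown fox jumps over the lazy dog".split(), 15))
# → ['the quick brown', 'fox jumps over', 'the lazy dog']
```

The interesting question the paper studies is how a model reproduces behavior like this internally, without an explicit character counter.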

omnicognate•3mo ago
There's also a lot of analogising of this to visual/spatial reasoning, even to the point of talking about "visual illusions", when it's clearly a counting task as the title says.

It makes it tedious to figure out what they actually did (which sounds interesting) when it's couched in such terms and presented in such an LLMified style.

dist-epoch•3mo ago
it's not strictly a counting task: the LLM sees same-sized tokens, but each token corresponds to a variable number of characters (which is not directly fed into the model)

It's like the difference between Unicode code points and UTF-8 bytes: you can't just count UTF-8 bytes to know how many code points you have.
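The analogy in the comment is easy to make concrete: counting UTF-8 bytes does not give the code-point count, just as counting tokens does not give the character count. A quick Python illustration (the example string is my own):

```python
# Code points vs. UTF-8 bytes: multi-byte characters break naive counting.
s = "naïve café 🚀"
print(len(s))                   # 12 code points
print(len(s.encode("utf-8")))   # 17 bytes: ï and é take 2 each, 🚀 takes 4
```

A model that only sees token IDs faces the same mismatch: the character width of each token is information it has to learn, not something it is given.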

omnicognate•3mo ago
There's an aspect of figuring out what to count, but that doesn't make this task visual/spatial in any sense I can make out.

Legend2440•3mo ago
They study it because it already has a known solution.

The point is to see how LLMs implement algorithms internally, starting with this simple easily understood algorithm.

Rygian•3mo ago
That makes sense; however it does not seem like they check the LLM outputs against the known solution. Maybe I missed that in the article.

catgary•3mo ago
I think this is an interesting direction, but I think that step 2 of this would be to formulate some conjectures about the geometry of other LLMs, or testable hypotheses about how information flows wrt character counting. Even checking some intermediate training weights of Haiku would be interesting, so they’d still be working off of the same architecture.

The biology metaphor they make is interesting, because I think a biologist would be the first to tell you that you need more than one datapoint.

lccerina•3mo ago
Utter disrespect for using the term "biology" relating to LLMs. No one would call the analysis of a mechanical engine "car biology". It's an artificial system, call it system analysis.

lewtun•3mo ago
The analogy stems from the notion that neural nets are "grown" rather than "engineered". Chris Olah has an old, but good post with some specific examples: https://colah.github.io/notes/bio-analogies/
UltraSane•3mo ago
It makes sense if you define "biology" as "incredibly complicated system not designed by humans that we kind of poke at to try to understand it."

addaon•3mo ago
Sure, but it makes no sense at all if you define biology as “the smell of a freshly opened can of tennis balls.” The original comment is probably better understood using a standard definition of the words it used, rather than either of our definitions.

lccerina•3mo ago
"not designed by humans"? Since when? Unless you count cortical organoids /wetware (grown in some instrumented petri dish) every artificial neural network, doesn't matter how complicated, it is designed by humans. With equations and rules designed by humans. Backpropagation, optimization algorithms, genetic selections etc... all designed by humans.

There is no biology here, and there are so many other words that describe perfectly what they are doing here, without twisting the meaning of another word.

UltraSane•3mo ago
By not designed I'm talking about the synaptic weights.

lccerina•3mo ago
Still designed by humans. The loss function, backpropagation and all other mechanisms didn't just appear magically in the neural network. Someone decided which loss function to use, which architecture, and which optimization techniques. Just because it takes a big GPU a lot of number crunching to assign those weights doesn't mean it's biological.

In the same way, a weather forecast model using a lot of complicated differential equations is not biological. A finite element model analyzing some complicated electromagnetic field, or the aerodynamics of a car is not biological. Just because someone around 70-75 years ago called them 'perceptrons' or 'neurons' instead of thingamajigs does not make them biology.

UltraSane•3mo ago
"Still designed by humans." No they are not. They are learned via backpropagation. This is the entire reason why neural networks work so well and why we have no idea how they work when they get big.
lccerina•3mo ago
And who designed backpropagation? It is not a magical property of artificial neurons or some law of nature or god's miracle. A bunch of mathematicians banged their heads on the problem of backpropagation, tossed it to a computer, and voilà, neural networks made sense. Neural networks work so well because someone chooses the right loss function for the right problem. Wrong loss function -> wrong results. It's not magic. Nor is it biology.

djoldman•3mo ago
A superior LLM for line length optimization:

https://www.youtube.com/watch?v=Y65FRxE7uMc