Emacs-tramp-RPC: high-performance TRAMP back end using MsgPack-RPC

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•fanf2•50s ago•0 comments

Nintendo Wii Themed Portfolio

https://akiraux.vercel.app/
1•s4074433•4m ago•1 comment

"There must be something like the opposite of suicide "

https://post.substack.com/p/there-must-be-something-like-the
1•rbanffy•7m ago•0 comments

Ask HN: Why doesn't Netflix add a “Theater Mode” that recreates the worst parts?

2•amichail•8m ago•0 comments

Show HN: Engineering Perception with Combinatorial Memetics

1•alan_sass•14m ago•1 comment

Show HN: Steam Daily – A Wordle-like daily puzzle game for Steam fans

https://steamdaily.xyz
1•itshellboy•16m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•spenvo•16m ago•0 comments

Just Started Using AmpCode

https://intelligenttools.co/blog/ampcode-multi-agent-production
1•BojanTomic•17m ago•0 comments

LLM as an Engineer vs. a Founder?

1•dm03514•18m ago•0 comments

Crosstalk inside cells helps pathogens evade drugs, study finds

https://phys.org/news/2026-01-crosstalk-cells-pathogens-evade-drugs.html
2•PaulHoule•19m ago•0 comments

Show HN: Design system generator (mood to CSS in <1 second)

https://huesly.app
1•egeuysall•19m ago•1 comment

Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•20m ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•22m ago•1 comment

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
5•codexon•22m ago•2 comments

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•23m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•27m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•28m ago•0 comments

The Optima-l Situation: A deep dive into the classic humanist sans-serif

https://micahblachman.beehiiv.com/p/the-optima-l-situation
2•subdomain•28m ago•1 comment

Barn Owls Know When to Wait

https://blog.typeobject.com/posts/2026-barn-owls-know-when-to-wait/
1•fintler•28m ago•0 comments

Implementing TCP Echo Server in Rust [video]

https://www.youtube.com/watch?v=qjOBZ_Xzuio
1•sheerluck•29m ago•0 comments

LicGen – Offline License Generator (CLI and Web UI)

1•tejavvo•32m ago•0 comments

Service Degradation in West US Region

https://azure.status.microsoft/en-gb/status?gsid=5616bb85-f380-4a04-85ed-95674eec3d87&utm_source=...
2•_____k•32m ago•0 comments

The Janitor on Mars

https://www.newyorker.com/magazine/1998/10/26/the-janitor-on-mars
1•evo_9•34m ago•0 comments

Bringing Polars to .NET

https://github.com/ErrorLSC/Polars.NET
3•CurtHagenlocher•36m ago•0 comments

Adventures in Guix Packaging

https://nemin.hu/guix-packaging.html
1•todsacerdoti•37m ago•0 comments

Show HN: We had 20 Claude terminals open, so we built Orcha

1•buildingwdavid•37m ago•0 comments

Your Best Thinking Is Wasted on the Wrong Decisions

https://www.iankduncan.com/engineering/2026-02-07-your-best-thinking-is-wasted-on-the-wrong-decis...
1•iand675•37m ago•0 comments

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
2•vyrotek•38m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•42m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•46m ago•1 comment

When models manipulate manifolds: The geometry of a counting task

https://transformer-circuits.pub/2025/linebreaks/index.html
98•vinhnx•3mo ago

Comments

Rygian•3mo ago
> The task we study is linebreaking in fixed-width text.

I wonder why they focused specifically on a task that is already solved algorithmically. The paper does not seem to address this, and the references do not include any mentions of non-LLM approaches to the line-breaking problem.
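
For reference, the classical solution is a standard-library call in most languages; a minimal Python sketch of the greedy fixed-width algorithm:

    import textwrap

    # Greedy fixed-width line breaking: the fully algorithmic solution
    # to the task the paper probes inside an LLM.
    text = "The task we study is linebreaking in fixed-width text."
    print(textwrap.fill(text, width=20))
    # The task we study is
    # linebreaking in
    # fixed-width text.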

omnicognate•3mo ago
There's also a lot of analogising of this to visual/spatial reasoning, even to the point of talking about "visual illusions", when it's clearly a counting task, as the title says.

It makes it tedious to figure out what they actually did (which sounds interesting) when it's couched in such terms and presented in such an LLMified style.

dist-epoch•3mo ago
It's not strictly a counting task: the LLM sees same-sized tokens, but a token corresponds to a variable number of characters (a count that is not directly fed into the model).

It's like the difference between Unicode code points and UTF-8 bytes: you can't just count UTF-8 bytes to know how many code points you have.
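
A minimal Python sketch of that difference:

    # Counting UTF-8 bytes doesn't tell you how many code points there are,
    # just as counting tokens doesn't tell a model how many characters it saw.
    s = "héllo wörld"
    print(len(s))                  # 11 code points
    print(len(s.encode("utf-8")))  # 13 bytes: "é" and "ö" take 2 bytes each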

omnicognate•3mo ago
There's an aspect of figuring out what to count, but that doesn't make this task visual/spatial in any sense I can make out.

Legend2440•3mo ago
They study it because it already has a known solution.

The point is to see how LLMs implement algorithms internally, starting with this simple, easily understood algorithm.

Rygian•3mo ago
That makes sense; however, it does not seem like they check the LLM outputs against the known solution. Maybe I missed that in the article.

catgary•3mo ago
I think this is an interesting direction, but step 2 would be to formulate some conjectures about the geometry of other LLMs, or testable hypotheses about how information flows wrt character counting. Even checking some intermediate training weights of Haiku would be interesting, so they'd still be working off of the same architecture.

The biology metaphor they make is interesting, because I think a biologist would be the first to tell you that you need more than one datapoint.

lccerina•3mo ago
Utter disrespect for using the term "biology" in relation to LLMs. No one would call the analysis of a mechanical engine "car biology". It's an artificial system; call it systems analysis.

lewtun•3mo ago
The analogy stems from the notion that neural nets are "grown" rather than "engineered". Chris Olah has an old but good post with some specific examples: https://colah.github.io/notes/bio-analogies/

UltraSane•3mo ago
It makes sense if you define "biology" as "incredibly complicated system not designed by humans that we kind of poke at to try to understand it."

addaon•3mo ago
Sure, but it makes no sense at all if you define biology as “the smell of a freshly opened can of tennis balls.” The original comment is probably better understood using a standard definition of the words it used, rather than either of our definitions.

lccerina•3mo ago
"not designed by humans"? Since when? Unless you count cortical organoids /wetware (grown in some instrumented petri dish) every artificial neural network, doesn't matter how complicated, it is designed by humans. With equations and rules designed by humans. Backpropagation, optimization algorithms, genetic selections etc... all designed by humans.

There is no biology here, and there are so many other words that describe perfectly what they are doing here, without twisting the meaning of another word.

UltraSane•3mo ago
By "not designed" I'm talking about the synaptic weights.

lccerina•3mo ago
Still designed by humans. The loss function, backpropagation, and all the other mechanisms didn't just appear magically in the neural network. Someone decided which loss function to use, which architecture, and which optimization techniques. Just because it takes a big GPU a lot of number crunching to assign those weights doesn't mean it's biological.

In the same way, a weather forecast model using a lot of complicated differential equations is not biological. A finite element model analyzing some complicated electromagnetic field, or the aerodynamics of a car, is not biological. Just because someone around 70-75 years ago called them "perceptrons" or "neurons" instead of thingamajigs does not make them biology.

UltraSane•3mo ago
"Still designed by humans." No they are not. They are learned via backpropagation. This is the entire reason why neural networks work so well and why we have no idea how they work when they get big.
lccerina•3mo ago
And who designed backpropagation? It is not a magical property of artificial neurons, or some law of nature, or god's miracle. A bunch of mathematicians banged their heads on the problem of backpropagation, tossed it to a computer, and voilà, neural networks made sense. Neural networks work so well because someone chooses the right loss function for the right problem. Wrong loss function -> wrong results. It's not magic. Nor is it biology.

djoldman•3mo ago
A superior LLM for line length optimization:

https://www.youtube.com/watch?v=Y65FRxE7uMc