frontpage.

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
289•theblazehen•2d ago•95 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
20•alainrk•1h ago•11 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
34•AlexeyBrin•1h ago•5 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
15•onurkanbkrc•1h ago•1 comment

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
717•klaussilveira•16h ago•218 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
978•xnx•21h ago•562 comments

Vocal Guide – belt-sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
94•jesperordrup•6h ago•35 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
4•nar001•35m ago•2 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
138•matheusalmeida•2d ago•36 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
74•videotopia•4d ago•11 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
16•matt_d•3d ago•4 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
46•helloplanets•4d ago•46 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
242•isitcontent•16h ago•27 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
242•dmpetrov•16h ago•128 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
4•andmarios•4d ago•1 comment

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
344•vecti•18h ago•153 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
510•todsacerdoti•1d ago•248 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
393•ostacke•22h ago•101 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
309•eljojo•19h ago•192 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•187 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
437•lstoll•22h ago•286 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
32•1vuio0pswjnm7•2h ago•31 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
73•kmm•5d ago•11 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
26•bikenaga•3d ago•13 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
98•quibono•4d ago•22 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
278•i5heu•19h ago•227 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
43•gmays•11h ago•14 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1088•cdrnsf•1d ago•469 comments

Understanding Neural Networks, Visually

https://visualrambling.space/neural-network/
312•surprisetalk•3d ago•45 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
36•romes•4d ago•3 comments

Questioning Representational Optimism in Deep Learning

https://github.com/akarshkumar0101/fer
46•mattdesl•8mo ago

Comments

meindnoch•8mo ago
Don't editorialize. Title is: "The Fractured Entangled Representation Hypothesis"

@dang

mattdesl•8mo ago
The full title of the paper is “Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis.”

https://arxiv.org/abs/2505.11581

acc_297•8mo ago
This is an interesting paper. It's nice to see AI research addressing some of the implicit assumptions that compute-scale-focused initiatives rely on.

A lot of the headline advancements in AI place heavy emphasis on model size and training-dataset size. These numbers always make it into abstracts and press releases, and, especially for LLMs, even cursory investigation into how outputs are derived from inputs through different parts of the model is waved off with vague language along the lines of the manifold hypothesis or semantic vectors.

This section stands out: "However, order cannot be everything—humans seem to be capable of intentionally reorganizing information through reanalysis or recompression, without the need for additional input data, all in an attempt to smooth out [Fractured Entangled Representation]. It is like having two different maps of the same place that overlap and suddenly realizing they are actually the same place. While clearly it is possible to change the internal representation of LLMs through further training, this kind of active and intentional representational revision has no clear analog in LLMs today."

rubitxxx8•8mo ago
> While clearly it is possible to change the internal representation of LLMs through further training, this kind of active and intentional representational revision has no clear analog in LLMs today.

So, what are some examples of how an LLM can fail outside of this study?

I’m having trouble seeing how this will affect my everyday uses of LLMs for coding, best-effort summarization, planning, problem solving, automation, and data analysis.

acc_297•8mo ago
> how this will affect my everyday uses of LLMs for coding

It won't - that's not what this paper is about.

dinfinity•8mo ago
That section is not really what the paper is about at all, though.

The examples they give of (what they think is) FER in LLMs (GPT-3 and GPT-4o) are the most informative to a layman and the most representative of what is said to be the core issue, I'd say. For instance:

User: I have 3 pencils, 2 pens, and 4 erasers. How many things do I have?

GPT-3: You have 9 things. [correct in 3 out of 3 trials]

User: I have 3 chickens, 2 ducks, and 4 geese. How many things do I have?

GPT-3: You have 10 animals total. [incorrect in 3 out of 3 trials]
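
The probe itself is simple to reproduce. Below is a minimal sketch, assuming the openai Python client (v1-style API) with an API key in the environment; the model name and the 3-trial loop are illustrative stand-ins, not the paper's exact setup.

    # Re-run the two counting probes above and print each trial's answer.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PROMPTS = [
        "I have 3 pencils, 2 pens, and 4 erasers. How many things do I have?",
        "I have 3 chickens, 2 ducks, and 4 geese. How many things do I have?",
    ]

    for prompt in PROMPTS:
        for trial in range(3):  # 3 trials per probe, mirroring the counts above
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # assumption: any chat-model slot works here
                messages=[{"role": "user", "content": prompt}],
            )
            print(f"{prompt!r} trial {trial + 1}:",
                  resp.choices[0].message.content)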

acc_297•8mo ago
I don't completely agree. I think it's not about GPT-3 failing to generalize word-puzzle solutions; it's about the type of minimized solution that gradient-descent algorithms find, which produces overwhelmingly correct outputs but may lack a useful internal organization of the semantics of the training set, and that may or may not translate into poor model performance on out-of-sample inputs.

It's hard to say that there is no internal organization, since trillion-parameter models are hard for us to summarize and we do see some semantic-vector alignment in the GPT models. But the toy example of the two-skull image generator presents a powerful anecdote of how current ML models find correct solutions yet miss a potentially valuable property: having what the paper calls a factored representation, which seems to be the far more "human" way to reason about data.
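
(For context, the skull example comes from networks that map (x, y) pixel coordinates to a pixel value, so every hidden unit can itself be rendered as an image; the paper contrasts Picbreeder-evolved networks with SGD-trained ones this way. Here's a minimal sketch of that visualization idea, using a tiny random MLP as a stand-in for the paper's networks, with numpy and matplotlib assumed.)

    # Render each hidden unit of a coordinate->pixel network as an image.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 8))  # (x, y) input -> 8 hidden units
    b1 = rng.normal(size=8)

    # Hidden activations over a 64x64 coordinate grid covering the image plane.
    xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (4096, 2)
    hidden = np.tanh(coords @ W1 + b1)                   # (4096, 8)

    # One panel per hidden unit: a factored representation shows clean,
    # reusable structure (symmetries, parts); a fractured one looks disordered.
    fig, axes = plt.subplots(2, 4, figsize=(8, 4))
    for i, ax in enumerate(axes.ravel()):
        ax.imshow(hidden[:, i].reshape(64, 64), cmap="gray")
        ax.set_axis_off()
        ax.set_title(f"unit {i}")
    plt.tight_layout()
    plt.show()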