frontpage.

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
1•surprisetalk•1m ago•0 comments

MS-DOS game copy protection and cracks

https://www.dosdays.co.uk/topics/game_cracks.php
2•TheCraiggers•2m ago•0 comments

Updates on GNU/Hurd progress [video]

https://fosdem.org/2026/schedule/event/7FZXHF-updates_on_gnuhurd_progress_rump_drivers_64bit_smp_...
1•birdculture•3m ago•0 comments

Epstein took a photo of his 2015 dinner with Zuckerberg and Musk

https://xcancel.com/search?f=tweets&q=davenewworld_2%2Fstatus%2F2020128223850316274
3•doener•3m ago•1 comment

MyFlames: Visualize MySQL query execution plans as interactive FlameGraphs

https://github.com/vgrippa/myflames
1•tanelpoder•4m ago•0 comments

Show HN: LLM of Babel

https://clairefro.github.io/llm-of-babel/
1•marjipan200•4m ago•0 comments

A modern iperf3 alternative with a live TUI, multi-client server, and QUIC support

https://github.com/lance0/xfr
1•tanelpoder•5m ago•0 comments

Famfamfam Silk icons – also with CSS spritesheet

https://github.com/legacy-icons/famfamfam-silk
1•thunderbong•6m ago•0 comments

Apple is the only Big Tech company whose capex declined last quarter

https://sherwood.news/tech/apple-is-the-only-big-tech-company-whose-capex-declined-last-quarter/
1•elsewhen•9m ago•0 comments

Reverse-Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
2•todsacerdoti•11m ago•0 comments

Show HN: Deterministic NDJSON audit logs – v1.2 update (structural gaps)

https://github.com/yupme-bot/kernel-ndjson-proofs
1•Slaine•14m ago•0 comments

The Greater Copenhagen Region could be your friend's next career move

https://www.greatercphregion.com/friend-recruiter-program
1•mooreds•15m ago•0 comments

Do Not Confirm – Fiction by OpenClaw

https://thedailymolt.substack.com/p/do-not-confirm
1•jamesjyu•15m ago•0 comments

The Analytical Profile of Peas

https://www.fossanalytics.com/en/news-articles/more-industries/the-analytical-profile-of-peas
1•mooreds•15m ago•0 comments

Hallucinations in GPT-5 – Can models say "I don't know"? (June 2025)

https://jobswithgpt.com/blog/llm-eval-hallucinations-t20-cricket/
1•sp1982•15m ago•0 comments

What AI is good for, according to developers

https://github.blog/ai-and-ml/generative-ai/what-ai-is-actually-good-for-according-to-developers/
1•mooreds•15m ago•0 comments

OpenAI might pivot to the "most addictive digital friend" or face extinction

https://twitter.com/lebed2045/status/2020184853271167186
1•lebed2045•17m ago•2 comments

Show HN: Know how your SaaS is doing in 30 seconds

https://anypanel.io
1•dasfelix•17m ago•0 comments

ClawdBot Ordered Me Lunch

https://nickalexander.org/drafts/auto-sandwich.html
3•nick007•18m ago•0 comments

What the news media thinks about your Indian stock investments

https://stocktrends.numerical.works/
1•mindaslab•19m ago•0 comments

Running Lua on a tiny console from 2001

https://ivie.codes/page/pokemon-mini-lua
1•Charmunk•20m ago•0 comments

Google and Microsoft Paying Creators $500K+ to Promote AI Tools

https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html
2•belter•22m ago•0 comments

New filtration technology could be game-changer in removal of PFAS

https://www.theguardian.com/environment/2026/jan/23/pfas-forever-chemicals-filtration
1•PaulHoule•23m ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
2•momciloo•24m ago•0 comments

Kinda Surprised by Seedance 2's Moderation

https://seedanceai.me/
1•ri-vai•24m ago•2 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
2•valyala•24m ago•1 comment

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
2•sgt•24m ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•24m ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•25m ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
3•Keyframe•28m ago•0 comments

Questioning Representational Optimism in Deep Learning

https://github.com/akarshkumar0101/fer
46•mattdesl•8mo ago

Comments

meindnoch•8mo ago
Don't editorialize. Title is: "The Fractured Entangled Representation Hypothesis"

@dang

mattdesl•8mo ago
The full title of the paper is “Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis.”

https://arxiv.org/abs/2505.11581

acc_297•8mo ago
This is an interesting paper. It's nice to see AI research addressing some of the implicit assumptions that compute-scale-focused initiatives rely on.

A lot of the headline advancements in AI place heavy emphasis on model size and training-dataset size. These numbers always make it into abstracts and press releases, and, especially for LLMs, even cursory investigation into how outputs are derived from inputs through different parts of the model gets waved off with vague language along the lines of the manifold hypothesis or semantic vectors.

This section stands out: "However, order cannot be everything—humans seem to be capable of intentionally reorganizing information through reanalysis or recompression, without the need for additional input data, all in an attempt to smooth out [Fractured Entangled Representation]. It is like having two different maps of the same place that overlap and suddenly realizing they are actually the same place. While clearly it is possible to change the internal representation of LLMs through further training, this kind of active and intentional representational revision has no clear analog in LLMs today."

rubitxxx8•8mo ago
> While clearly it is possible to change the internal representation of LLMs through further training, this kind of active and intentional representational revision has no clear analog in LLMs today.

So, what are some examples of how an LLM can fail outside of this study?

I’m having trouble seeing how this will affect my everyday uses of LLMs for coding, best-effort summarization, planning, problem solving, automation, and data analysis.

acc_297•8mo ago
> how this will affect my everyday uses of LLMs for coding

It won't - that's not what this paper is about.

dinfinity•8mo ago
That section is not really what the paper is about at all, though.

The examples they give of (what they think is) FER in LLMs (GPT-3 and GPT-4o) are the most informative to a layman and the most representative of what is said to be the core issue, I'd say. For instance:

User: I have 3 pencils, 2 pens, and 4 erasers. How many things do I have?

GPT-3: You have 9 things. [correct in 3 out of 3 trials]

User: I have 3 chickens, 2 ducks, and 4 geese. How many things do I have?

GPT-3: You have 10 animals total. [incorrect in 3 out of 3 trials]
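
A minimal sketch of that repeated-trials probe, assuming the OpenAI Python client and a stand-in model name (GPT-3 itself is no longer served); the prompts are the ones quoted above:

    # Ask each counting prompt several times and compare the answers.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPTS = [
        "I have 3 pencils, 2 pens, and 4 erasers. How many things do I have?",
        "I have 3 chickens, 2 ducks, and 4 geese. How many things do I have?",
    ]

    for prompt in PROMPTS:
        print(prompt)
        for trial in range(3):
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # stand-in; the paper tested GPT-3 and GPT-4o
                messages=[{"role": "user", "content": prompt}],
            )
            print(f"  trial {trial + 1}: {reply.choices[0].message.content}")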

acc_297•8mo ago
I don't completely agree - I think it's not about GPT-3 failing to generalize word-puzzle solutions; it's about the type of minimized solution that gradient descent algorithms find, which will produce overwhelmingly correct outputs but may lack a useful internal organization of the semantics of the training set, which may or may not translate into poor model performance on out-of-sample inputs.

It's hard to say that there is no internal organization, since trillion-parameter models are hard for us to summarize and we do see some semantic-vector alignment in the GPT models. But the toy example of the two-skull image generator presents a powerful anecdote of how current ML models find correct solutions yet miss a potentially valuable property: having what the paper calls a factored representation, which seems to be the far more "human" way to reason about data.
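
One deliberately trivial illustration of why identical outputs underdetermine internal organization is the permutation symmetry of MLP hidden units: shuffle the hidden units (and the weights that read from them) and the function is unchanged while the internal representation is rearranged. A small PyTorch sketch, not from the paper, just to make that point concrete (FER is about a much deeper difference in organization than relabeling, but the same output-vs-representation gap is at work):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 4))

    # Build a functionally identical twin by permuting the hidden units.
    perm = torch.randperm(16)
    twin = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 4))
    with torch.no_grad():
        twin[0].weight.copy_(net[0].weight[perm])     # permute hidden rows
        twin[0].bias.copy_(net[0].bias[perm])
        twin[2].weight.copy_(net[2].weight[:, perm])  # permute matching columns
        twin[2].bias.copy_(net[2].bias)

    x = torch.randn(5, 8)
    print(torch.allclose(net(x), twin(x), atol=1e-6))  # True: same function
    print(torch.allclose(net[0](x), twin[0](x)))       # False: different hidden code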