frontpage.

Made for People, Not Cars: Reclaiming European Cities

https://www.greeneuropeanjournal.eu/made-for-people-not-cars-reclaiming-european-cities/
141•robtherobber•2h ago•58 comments

Supabase OrioleDB Patent: now freely available to the Postgres community

https://supabase.com/blog/orioledb-patent-free
64•tosh•1h ago•27 comments

I replaced Animal Crossing's dialogue with a live LLM by hacking GameCube memory

https://joshfonseca.com/blogs/animal-crossing-llm
537•vuciv•9h ago•116 comments

PKM apps need to get better at resurfacing information

https://ankursethi.com/blog/pkm-apps-need-to-get-better-at-resurfacing-information/
13•GeneralMaximus•3d ago•6 comments

iPhone Air

https://www.apple.com/newsroom/2025/09/introducing-iphone-air-a-powerful-new-iphone-with-a-breakt...
780•excerionsforte•18h ago•1597 comments

Knowledge and Memory

https://www.robinsloan.com/lab/knowledge-and-memory/
34•zdw•3d ago•13 comments

Infracost (YC W21) Is Hiring First Product Manager to Shift FinOps Left

https://www.ycombinator.com/companies/infracost/jobs/ukwJ299-senior-product-manager
1•akh•43m ago

E-paper display reaches the realm of LCD screens

https://spectrum.ieee.org/e-paper-display-modos
474•rbanffy•18h ago•148 comments

NASA finds Titan's lakes may be creating vesicles with primitive cell walls

https://www.sciencedaily.com/releases/2025/08/250831112449.htm
176•Gaishan•12h ago•38 comments

Claude now has access to a server-side container environment

https://www.anthropic.com/news/create-files
577•meetpateltech•22h ago•307 comments

Children and young people's reading in 2025

https://literacytrust.org.uk/research-services/research-reports/children-and-young-peoples-readin...
37•GeoAtreides•5h ago•21 comments

US High school students' scores fall in reading and math

https://apnews.com/article/naep-reading-math-scores-12th-grade-c18d6e3fbc125f12948cc70cb85a520a
395•bikenaga•21h ago•660 comments

We all dodged a bullet

https://xeiaso.net/notes/2025/we-dodged-a-bullet/
741•WhyNotHugo•21h ago•419 comments

All clickwheel iPod games have now been preserved for posterity

https://arstechnica.com/gaming/2025/09/all-54-lost-clickwheel-ipod-games-have-now-been-preserved-...
137•CharlesW•1d ago•35 comments

Axial twist theory

https://en.wikipedia.org/wiki/Axial_twist_theory
156•lordnacho•3d ago•39 comments

R-Zero: Self-Evolving Reasoning LLM from Zero Data

https://arxiv.org/abs/2508.05004
61•lawrenceyan•10h ago•26 comments

YouTube is a mysterious monopoly

https://anderegg.ca/2025/09/08/youtube-is-a-mysterious-monopoly
276•geerlingguy•1d ago•363 comments

Hypervisor in 1k Lines

https://1000hv.seiya.me/en
97•lioeters•13h ago•7 comments

Semantic Line Breaks

https://sembr.org
30•Bogdanp•3d ago•23 comments

Memory Integrity Enforcement

https://security.apple.com/blog/memory-integrity-enforcement/
422•circuit•18h ago•199 comments

Rendering flame fractals with a compute shader

https://wrighter.xyz/blog/2023_08_17_flame_fractals_in_comp_shader
4•ibobev•2d ago•0 comments

Show HN: Bottlefire – Build single-executable microVMs from Docker images

https://bottlefire.dev/
130•losfair•2d ago•18 comments

Tomorrow's emoji today: Unicode 17.0

https://jenniferdaniel.substack.com/p/tomorrows-emoji-today-unicode-170
169•ChrisArchitect•18h ago•283 comments

Building a DOOM-like multiplayer shooter in pure SQL

https://cedardb.com/blog/doomql/
203•lvogel•21h ago•35 comments

A new experimental Go API for JSON

https://go.dev/blog/jsonv2-exp
234•darccio•21h ago•81 comments

Immunotherapy drug clinical trial results: half of tumors shrink or disappear

https://www.rockefeller.edu/news/38120-immunotherapy-drug-eliminates-aggressive-cancers-in-clinic...
415•marc__1•15h ago•83 comments

An attacker’s blunder gave us a look into their operations

https://www.huntress.com/blog/rare-look-inside-attacker-operation
167•mellosouls•21h ago•93 comments

Microsoft is officially sending employees back to the office

https://www.businessinsider.com/microsoft-send-employees-back-to-office-rto-remote-work-2025-9
375•alloyed•20h ago•761 comments

Interesting PEZY-SC4s

https://chipsandcheese.com/p/pezy-sc4s-at-hot-chips-2025
15•christkv•3d ago•1 comment

Show HN: Downloading a folder from a repo using rust

https://github.com/zikani03/git-down
8•sonderotis•3d ago•14 comments

Knowledge and Memory

https://www.robinsloan.com/lab/knowledge-and-memory/
34•zdw•3d ago

Comments

Muromec•2h ago
Yesterday I asked one LLM about certain provisions of Ukrainian law, where the severity threshold for a financial crime is specified indirectly through a certain well-known constant. The machine got it wrong, and when asked to give sources, it cited the respective law but referenced a similarly sounding unit. Amazingly, it gave the correct English translation but the wrong original in Ukrainian.

I guess it merged two tokens while learning the text.

Amazingly, it also knows about the difference between the two constants, but refers to the wrong one both in its calculations and in the hallucinated quote.

It's tedious to always check for stuff like this.

Then I asked a different LLM, and it turned out that the constant is actually monkey-patched for specific contexts, and both I and the first lying machine were wrong.

ClaraForm•2h ago
I'm not convinced the brain stores memories, or that memory storage is required for human intelligence. And we "hallucinate" all the time. See: eye witness testimony being wrong regularly, "paranormal" experiences etc.

It's a statement that /feels/ true, because we can all look "inside" our heads and "see" memories and facts. But we may as well be re-constructing facts on the fly, just as we re-construct reality itself in order to sense it.

n4r9•2h ago
What do you mean, you're not convinced that the brain stores memories? What is happening in the brain when you have an experience and later recall that same experience? It might not be perfect recall but something is being stored.
ClaraForm•1h ago
I mean an LLM (bad example, but good enough for what I'm trying to convey) doesn't need any sort of "memory" to be able to reconstruct something that looks like intelligence. It stores weights, and can re-assemble "facts" from those weights, independent of the meaning or significance of those facts. It's possible the brain is similar, on a much more refined scale. My brain certainly doesn't store 35,000 instances of my mum's image to help me identify her, just an averaged image to help me know when I'm looking at my mum.

The brain definitely stores things, and retrieval and processing are key to the behaviour that comes out the other end, but whether it's "memory" like what this article tries to define, I'm not sure. The article makes it a point to talk about instances where /lack/ of a memory is a sign of the brain doing something different from an LLM, but the brain is pretty happy to "make up" a "memory", from all of my reading and understanding.

HarHarVeryFunny•1h ago
The article isn't about LLMs storing things - it's about why they hallucinate, which is in large part due to the fact that they just deal in word statistics, not facts, but also (the point of the article) that they have no episodic memories, nor personal experience of any sort for that matter.

Humans can generally differentiate between when they know something or not, and I'd agree with the article that this is because we tend to remember how we know things, and also have different levels of confidence according to source. Personal experience trumps watching someone else, which trumps hearing or being taught it from a reliable source, which trumps having read something on Twitter or some graffiti on a bathroom stall. To the LLM all text is just statistics, and it has no personal experience to lean on to self-check and say "hmm, I can't recall ever learning that - I'm drawing a blank".

Frankly it's silly to compare LLMs (Transformers) and brains. An LLM was only ever meant to be a linguistics model, not a brain or cognitive architecture. I think people get confused because it spits out human text, so they anthropomorphize it and start thinking it's got some human-like capabilities under the hood when it is in fact - surprise surprise - just a pass-through stack of Transformer layers. A language model.

DavidSJ•23m ago
> An LLM was only every meant to be a linguistics model, not a brain or cognitive architecture.

See https://gwern.net/doc/cs/algorithm/information/compression/1... from 1999.

> Answering questions in the Turing test (What are roses?) seems to require the same type of real-world knowledge that people use in predicting characters in a stream of natural language text (Roses are ___?), or equivalently, estimating L(x) [the probability of x when written by a human] for compression.
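
The equivalence the quote leans on - that good next-character prediction and good compression are the same problem - is easy to sketch. An ideal arithmetic coder spends -log2(p) bits on a character the model assigned probability p, so the total code length of a text is just its negative log-likelihood under the model. A toy illustration of that relationship (my own, not from the linked 1999 document; the text and both models are invented):

```python
import math
from collections import Counter

def code_length_bits(text, prob):
    # An ideal arithmetic coder spends -log2 p(c) bits per character,
    # so the total code length equals the text's negative log-likelihood.
    return sum(-math.log2(prob(text[:i], c)) for i, c in enumerate(text))

text = "roses are red violets are blue"

# Model 1: uniform over the characters that occur in the text.
alphabet = sorted(set(text))
def uniform(prefix, c):
    return 1.0 / len(alphabet)

# Model 2: unigram frequencies "learned" from the text itself.
counts = Counter(text)
def unigram(prefix, c):
    return counts[c] / len(text)

# The sharper predictor compresses better: ~111 bits under the uniform
# model, fewer under the unigram model.
print(code_length_bits(text, uniform))
print(code_length_bits(text, unigram))
```

A real language model conditions on the prefix too, which is exactly what the parenthetical "Roses are ___?" example is getting at: better prediction of what comes next, shorter code, and (per the quote's argument) more real-world knowledge.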

graemep•2h ago
It's not reliable, but we can also recall things accurately.

It does not store things the way records of any sort do, but it does have some store-and-recall mechanism that works.

To be fair, LLMs do this too - I just got ChatGPT to recite Ode to Autumn.

ClaraForm•1h ago
Yes, I agree. I'm not against the idea that the brain can "store" things. Just whether our concept of how a "memory" "feels" is useful to us further understanding the brain's function.
MadcapJake•1h ago
Reconstructing from what?
roxolotl•13m ago
I don’t know if this aligns with your thinking but there is a theory that memory is largely reconstructed every time it is remembered: https://en.wikipedia.org/wiki/Reconstructive_memory
a3w•2h ago
> I’ll remind you that biologists do not, in the year 2025, know memory’s physical substrate in the brain! Plenty of hypotheses — no agreement. Is there any more central mystery in human biology, maybe even human existence?

Did they not recently transfer the memory of how to solve a maze from one mouse to another, lending credibility to one hypothesis about what can store the information?

Searching, I only find the RNA transfers done in the 60s, which ran into some problems. I thought a recent study transferred proteins.

GaggiX•2h ago
A model is capable of learning this "calibration" during the reinforcement-learning phase; in this "old" post from OpenAI: https://openai.com/index/introducing-simpleqa/ you can see the positive correlation between stated confidence and accuracy.
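
For anyone curious what that calibration check looks like operationally: bucket answers by the model's stated confidence and compare each bucket's average confidence with its empirical accuracy; for a well-calibrated model the two track each other. A minimal sketch with invented toy data (the numbers below are not from the SimpleQA post):

```python
from statistics import mean

# (stated_confidence, answered_correctly) pairs -- invented for illustration.
answers = [
    (0.95, True), (0.9, True),  (0.9, False), (0.8, True),
    (0.7, True),  (0.6, False), (0.5, False), (0.5, True),
    (0.3, False), (0.2, False), (0.2, False), (0.1, False),
]

def calibration_table(answers, n_bins=5):
    """Group answers into confidence bins; return (mean confidence,
    empirical accuracy) per bin. Calibrated => the two line up."""
    bins = {}
    for conf, correct in answers:
        b = min(int(conf * n_bins), n_bins - 1)
        bins.setdefault(b, []).append((conf, correct))
    return {
        b: (mean(c for c, _ in items), mean(ok for _, ok in items))
        for b, items in sorted(bins.items())
    }

for b, (avg_conf, acc) in calibration_table(answers).items():
    print(f"bin {b}: stated confidence {avg_conf:.2f}, accuracy {acc:.2f}")
```

The SimpleQA plot is essentially this table drawn as a scatter: stated confidence on one axis, measured accuracy on the other, with a calibrated model hugging the diagonal.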
tolerance•5m ago
You know, whatever memory is or wherever it's at and however the mind works, I'm grateful I've got mine intact right now, and I appreciate science's inability to zero in on these things.

It’s nice to know that this sort of appreciation is becoming more common. Somewhere between tech accelerationism and Protestant resistance are those willing to re-interrogate human nature in anticipation of what lies ahead.

A different blog post from this month detailing an experience with ChatGPT that netted a similar reflection: https://zettelkasten.de/posts/the-scam-called-you-dont-have-...