
Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•2m ago•1 comment

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
2•alephnerd•4m ago•1 comment

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•5m ago•1 comment

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
1•pbradv•8m ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
2•hasheddan•8m ago•0 comments

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
2•ArtemZ•19m ago•3 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•20m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•22m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
2•duxup•25m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•26m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•38m ago•1 comment

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•40m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•41m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•43m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•47m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•51m ago•1 comment

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
2•g1raffe•54m ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•59m ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
3•rolph•1h ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•1h ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•1h ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•1h ago•1 comment

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
2•cedel2k1•1h ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
37•chwtutha•1h ago•6 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•1h ago•1 comment

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•1h ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comment

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•1h ago•0 comments

Vectors and HNSW for Dummies

https://anvitra.ai/blog/vectors-and-hnsw/
1•melvinodsa•1h ago•0 comments

Sanskrit AI beats CleanRL SOTA by 125%

https://huggingface.co/ParamTatva/sanskrit-ppo-hopper-v5/blob/main/docs/blog.md
1•prabhatkr•1h ago•1 comment

Ask HN: Why do LLMs call Trump "former president" despite current knowledge?

4•ahmedfromtunis•7mo ago
THIS POST IS NOT POLITICAL.

I noticed that when I ask Gemini, ChatGPT, Perplexity and other LLMs about current events, they would sometimes refer to Trump as "former president".

The reason I find this question-worthy is that this happens despite the fact that the LLMs have otherwise "realtime" knowledge about the events in question.

As an example, they'd say something like this: "Trump ordered the attack on Iran. The former president said...".

Does this observation offer an insight into how LLMs process information?

Comments

bigyabai•7mo ago
There is more training material from 4 years of Trump's post-presidency than there is of the >200 days of the current admin.
PaulHoule•7mo ago
(1) An LLM is trained on text up to a certain date in time. So out of its own memory it can only accurately answer questions about events that happened before then. When it comes to sports, for instance, I'd expect to get good answers about Super Bowl XX but not about a game that happened last weekend. An LLM could go and do a search about the game last weekend, read about it, and tell me about what it read, but its basic knowledge is always going to be a little out of date.
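A minimal sketch of the distinction in (1), with an entirely hypothetical cutoff date and placeholder answer strings: anything before the training cutoff can be answered "from memory," while anything after it requires an external retrieval step or comes with a staleness caveat.

```python
from datetime import date

CUTOFF = date(2024, 6, 1)  # hypothetical training-data cutoff

def answer(event_date, search=None):
    """Sketch: parametric memory covers only pre-cutoff events;
    later events need an external search step."""
    if event_date <= CUTOFF:
        return "answer from parametric memory"
    if search is not None:
        return search()  # retrieval-augmented answer
    return "knowledge may be out of date"

# Super Bowl XX era event: answerable from memory.
print(answer(date(1986, 1, 26)))
# Last weekend's game, no search tool: stale.
print(answer(date(2025, 2, 2)))
# Same question with a (stubbed) search tool attached.
print(answer(date(2025, 2, 2), lambda: "answer from web search"))
```

This is not how any real model is implemented; it only makes the memory-versus-retrieval boundary concrete.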

(2) Reasoning about events across time is difficult. It's a problem for the old symbolic AI systems in the sense that plain ordinary first order logic doesn't respect time. "A" might be true today but it was false three weeks ago but maybe two years from now "A" might be false again. You can certainly design a logic for temporal inference, but it's not something standardized with a standard way to do inference. [1] Common sense reasoning not only requires this, but also "Mary believes A is true but John believes A is false" and "It is possible that A is true" or "It is necessary that A is true" or "If B were true, then A would have to be true" and even combinations such as "Three weeks ago John believed A was true" or "John believes A was true three weeks ago".

LLMs don't work on logic but rather by matching up bits of text in the context with other bits of text, so if it is "thinking it through step by step" you are going to see text that is relevant to different steps of the thinking, which could involve various times, contingencies, people's belief systems and such, and it would have to always know which bits are relevant to what it is thinking about right now and which bits aren't, which is... hard.

[1] Note that inference over totally ordinary logic with arithmetic is undecidable https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
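The "A was true, then false, then true again" problem from (2) can be illustrated with a toy time-indexed knowledge base. This is a sketch of the classical temporal-reasoning idea only, not of anything an LLM does internally; the proposition name and validity intervals are illustrative (using the actual presidential term dates).

```python
from datetime import date

# Each fact carries a validity interval instead of being timelessly true.
facts = [
    # (proposition, true_from, true_until)
    ("trump_is_president", date(2017, 1, 20), date(2021, 1, 20)),
    ("trump_is_president", date(2025, 1, 20), date.max),
]

def holds(prop, on, kb):
    """True if some validity interval for `prop` covers the date `on`."""
    return any(p == prop and start <= on < end for p, start, end in kb)

print(holds("trump_is_president", date(2020, 6, 1), facts))  # True
print(holds("trump_is_president", date(2022, 6, 1), facts))  # False
print(holds("trump_is_president", date(2025, 6, 1), facts))  # True
```

A plain first-order fact base could only record one of those three answers; indexing by interval is the minimal fix, and the comment's point is that belief and modality ("John believed A three weeks ago") need yet more machinery on top.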

gregjor•7mo ago
Accurate to call Trump a former president because he served 2017-2021. Also the current president, but one needs actual intelligence to understand that.

Or maybe the AIs have achieved sentience and we can interpret such statements as hope.