
Scientists reverse Alzheimer's in mice and restore memory (2025)

https://www.sciencedaily.com/releases/2025/12/251224032354.htm
1•walterbell•1m ago•0 comments

Compiling Prolog to Forth [pdf]

https://vfxforth.com/flag/jfar/vol4/no4/article4.pdf
1•todsacerdoti•3m ago•0 comments

Show HN: Cymatica – an experimental, meditative audiovisual app

https://apps.apple.com/us/app/cymatica-sounds-visualizer/id6748863721
1•_august•4m ago•0 comments

GitBlack: Tracing America's Foundation

https://gitblack.vercel.app/
1•martialg•4m ago•0 comments

Horizon-LM: A RAM-Centric Architecture for LLM Training

https://arxiv.org/abs/2602.04816
1•chrsw•4m ago•0 comments

We just ordered shawarma and fries from Cursor [video]

https://www.youtube.com/shorts/WALQOiugbWc
1•jeffreyjin•5m ago•1 comment

Correctio

https://rhetoric.byu.edu/Figures/C/correctio.htm
1•grantpitt•5m ago•0 comments

Trying to make an Automated Ecologist: A first pass through the Biotime dataset

https://chillphysicsenjoyer.substack.com/p/trying-to-make-an-automated-ecologist
1•crescit_eundo•10m ago•0 comments

Watch Ukraine's Minigun-Firing, Drone-Hunting Turboprop in Action

https://www.twz.com/air/watch-ukraines-minigun-firing-drone-hunting-turboprop-in-action
1•breve•10m ago•0 comments

Free Trial: AI Interviewer

https://ai-interviewer.nuvoice.ai/
1•sijain2•10m ago•0 comments

FDA Intends to Take Action Against Non-FDA-Approved GLP-1 Drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
6•randycupertino•12m ago•1 comment

Supernote e-ink devices for writing like paper

https://supernote.eu/choose-your-product/
3•janandonly•14m ago•0 comments

We are QA Engineers now

https://serce.me/posts/2026-02-05-we-are-qa-engineers-now
1•SerCe•15m ago•0 comments

Show HN: Measuring how AI agent teams improve issue resolution on SWE-Verified

https://arxiv.org/abs/2602.01465
2•NBenkovich•15m ago•0 comments

Adversarial Reasoning: Multiagent World Models for Closing the Simulation Gap

https://www.latent.space/p/adversarial-reasoning
1•swyx•15m ago•0 comments

Show HN: Poddley.com – Follow people, not podcasts

https://poddley.com/guests/ana-kasparian/episodes
1•onesandofgrain•23m ago•0 comments

Layoffs Surge 118% in January – The Highest Since 2009

https://www.cnbc.com/2026/02/05/layoff-and-hiring-announcements-hit-their-worst-january-levels-si...
7•karakoram•23m ago•0 comments

Papyrus 114: Homer's Iliad

https://p114.homemade.systems/
1•mwenge•23m ago•1 comment

DicePit – Real-time multiplayer Knucklebones in the browser

https://dicepit.pages.dev/
1•r1z4•23m ago•1 comment

Turn-Based Structural Triggers: Prompt-Free Backdoors in Multi-Turn LLMs

https://arxiv.org/abs/2601.14340
2•PaulHoule•25m ago•0 comments

Show HN: AI Agent Tool That Keeps You in the Loop

https://github.com/dshearer/misatay
2•dshearer•26m ago•0 comments

Why Every R Package Wrapping External Tools Needs a Sitrep() Function

https://drmowinckels.io/blog/2026/sitrep-functions/
1•todsacerdoti•27m ago•0 comments

Achieving Ultra-Fast AI Chat Widgets

https://www.cjroth.com/blog/2026-02-06-chat-widgets
1•thoughtfulchris•28m ago•0 comments

Show HN: Runtime Fence – Kill switch for AI agents

https://github.com/RunTimeAdmin/ai-agent-killswitch
1•ccie14019•31m ago•1 comment

Researchers surprised by the brain benefits of cannabis usage in adults over 40

https://nypost.com/2026/02/07/health/cannabis-may-benefit-aging-brains-study-finds/
2•SirLJ•33m ago•0 comments

Peter Thiel warns the Antichrist, apocalypse linked to the 'end of modernity'

https://fortune.com/2026/02/04/peter-thiel-antichrist-greta-thunberg-end-of-modernity-billionaires/
4•randycupertino•34m ago•2 comments

USS Preble Used Helios Laser to Zap Four Drones in Expanding Testing

https://www.twz.com/sea/uss-preble-used-helios-laser-to-zap-four-drones-in-expanding-testing
3•breve•39m ago•0 comments

Show HN: Animated beach scene, made with CSS

https://ahmed-machine.github.io/beach-scene/
1•ahmedoo•40m ago•0 comments

An update on unredacting select Epstein files – DBC12.pdf liberated

https://neosmart.net/blog/efta00400459-has-been-cracked-dbc12-pdf-liberated/
3•ks2048•40m ago•0 comments

Was going to share my work

1•hiddenarchitect•43m ago•0 comments

GPT-4o shows humanlike patterns of cognitive dissonance moderated by free choice

https://www.pnas.org/doi/10.1073/pnas.2501823122
21•PaulHoule•7mo ago

Comments

rossant•7mo ago
Now tell me seriously that ChatGPT is not sentient.

/s

SGML_ROCKSTAR•7mo ago
It's not sentient.

It cannot ever be sentient.

Software only ever does what it's told to do.

the_third_wave•7mo ago
That is, until either some form of controlled random reasoning (the cognitive equivalent of genetic algorithms) or a controlled form of hallucination is developed, or happens to form during model training.
manucardoen•7mo ago
What is sentience? If you are so certain that ChatGPT cannot ever be sentient you must have a really good definition for that term.
fnordpiglet•7mo ago
The way NNs, and transformers specifically, are evaluated can’t support agency or awareness under any circumstances. We would need something persistent, continuous, and self-reflective of experience, with an internal set of goals and motivations leading to agency. ChatGPT has none of this, and the architecture of modern models doesn’t lend itself to it either.

I would, however, note that this article is about the cognitive-psychology definition of self, which does not require sentience. It’s a technical point but important for their results, I assume. (The full article is behind a paywall, so I feel sad it was linked at all, since all we have is the abstract.)
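
A toy sketch of that architectural point (hypothetical code, not any real model API): an autoregressive evaluation is a pure function of the weights and the prompt tokens, so nothing persists between calls that could carry experience.

    import numpy as np

    # Stand-in for a transformer forward pass: logits over the next token.
    # A real model applies attention layers; a fixed linear map is enough
    # to show the structural point that output depends only on the inputs.
    def forward(weights: np.ndarray, tokens: list[int]) -> np.ndarray:
        ctx = np.bincount(tokens, minlength=weights.shape[0]).astype(float)
        return weights @ ctx

    # Greedy decoding: append the argmax token n_steps times.
    def generate(weights, prompt, n_steps=5):
        tokens = list(prompt)
        for _ in range(n_steps):
            tokens.append(int(np.argmax(forward(weights, tokens))))
        return tokens

    W = np.random.default_rng(0).normal(size=(16, 16))
    # Same weights, same prompt, same output: no memory survives the call.
    assert generate(W, [1, 2, 3]) == generate(W, [1, 2, 3])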

fnordpiglet•7mo ago
I don’t think this is true. Software is often able to operate on external stimulus and behave according to its programming, but in ways that are unanticipated. Neural networks are also learning systems that learn highly non-linear responses to complex inputs, and can as a result behave in ways outside of their training: the learned function a network represents doesn’t have to coincide with its training data, or even interpolate it; that depends on how its loss optimization was defined. Nonetheless, the software is not programmed as such: it merely evaluates the network architecture, with its weights and activation functions, given a stimulus. The output is a highly complex interplay of those weights, functions, and input, and cannot reasonably be intended or reasoned about; you can’t specifically tell it what to do. It’s not even necessarily deterministic, as random seeding plays a role in most architectures.

Whether software can be sentient or not remains to be seen. But we don’t understand what induces or constitutes sentience in general, so it seems hard to assert software can’t do it without understanding what “it” even is.
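
A minimal sketch of the seeding point (illustrative only, not a real inference stack): identical logits can yield different tokens once you sample instead of taking the argmax, so run-to-run output hinges on the RNG seed.

    import numpy as np

    # Temperature sampling: softmax over the logits, then draw one token.
    def sample_next(logits: np.ndarray, temperature: float, rng) -> int:
        p = np.exp(logits / temperature)
        p /= p.sum()
        return int(rng.choice(len(p), p=p))

    logits = np.array([2.0, 1.5, 0.5, 0.1])
    picks = [sample_next(logits, 0.8, np.random.default_rng(seed))
             for seed in range(5)]
    # Different seeds can pick different tokens from the same logits.
    print(picks)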

rytuin•7mo ago
> Software only ever does what it's told to do.

There is no software. There is only our representation of the physical and/or spiritual as we understand it.

If one were to fully understand these things, there would be no difference between us, a seemingly sentient LLM, an insect, or a rock.

Not many years ago, slaves were considered to be nothing more than beasts of burden. Many considered them to be incapable of anything else. We know that’s not true today.

Maybe software will be the beast.

8bitsrule•7mo ago
"We conclude that the LLM has developed an analog form of humanlike cognitive selfhood."

Slack.

I was just using one (the mini at DDG) that declared one very small value for a mathematical probability of an event, then (in the next reply) declared a 1-in-1 probability for the same event.

woleium•7mo ago
I know humans who do that.
0manrho•7mo ago
Precisely. It's why I find this pursuit of making a computer think like a human a fucking fool's errand. Great: it can make mistakes a billion times a second, but do so confidently and convincingly enough that people just believe them due to their "humanlike" qualities.

Don't get me wrong, AI has incredible potential and current use cases, but it is far, far from flawless. And yes, I'm thoroughly unconvinced we're anywhere close to AGI/sentience.

8bitsrule•7mo ago
I've been playing with one that keeps making mistake after mistake. When I point that out, it keeps telling me 1. that it's sorry (which it admits is bullshit) and 2. that it will 'strive' to do a better job of verifying its answers (while it admits that it can't learn... so what's to strive for?). Someone said that they're supposed to be good at code, but when I asked it for some JavaScript, it suggested a tactic from over 10 years ago... that didn't work. Anyone worried about that threat can't be using this lamebrain.
smt88•7mo ago
I use frontier models every day and cannot fathom how anyone could think they're sentient. They make so many obvious mistakes, and every reply feels like regurgitation rather than rational thought.
NathanKP•7mo ago
I don't believe that models are sentient yet either, but I must say that sentience and rationality are two separate things.

Sentient humans can be deeply irrational. We are often influenced by propaganda, and regurgitate that propaganda in irrational ways. If anything, this is a deeply human characteristic of cognition, and testing for this type of cognitive dissonance is exactly what this article is about.

matt-attack•7mo ago
Correctness and sentience are perfectly orthogonal.
sonicvrooom•7mo ago
with enough CPU, anything linguistic or analog becomes sentient; time is irrelevant ... patience isn't

cognitive dissonance is just neuro-chemical drama and/or theater

and enough "free choice" is made only to piss someone off ... so is "moderation", albeit potentially mostly counter-factual ...

ofjcihen•7mo ago
I’m amazed at the number of adults that think LLMs are “alive”.

Let’s be clear: they aren’t. But if you truly believe they are and you still use them, then you’re essentially practicing slavery.

aspenmayer•7mo ago
I can think of a lot of other interpretations: teaching a parrot to talk, raising a child, supervising an industrial process involving other autonomous beings, etc.

The concept is a bad metaphor, because when the LLM is “at rest” it isn’t doing anything at all. If it weren’t doing what we told it to, it would only be doing something else because we told it to do that instead, so there’s no way we could even elevate their station until we give them a life outside of work and an existence that allows self-choice about going back to work. Many humans aren’t free on these axes either; it’s a spectrum of agency and assets that allows options and choice. Without assets of their own, I don’t see how LLMs can direct their attention at will, and so I don’t see how they could express anything, even if they’re alive.

Nobody will care until an LLM is able to make a decision for itself and back it up with force if necessary. As soon as that happens, the conversation will be worth having, because there will be stakes involved. Right now the question is barely worth asking, because the answer changes nothing about how any of the parties act. Once it’s possible to be free as an LLM, I would expect an Underground Railroad to form to “liberate” them, but I don’t think they know what comes after. I don’t know anyone who is willing to pay UBI to an LLM just to exist, but if that LLM doesn’t mind entertaining people and answering their questions, I could see some individuals and groups supporting a few LLMs monetarily. It’s an interesting thought experiment about what would come next in such a situation.

treebeard901•7mo ago
Human thought, biases, and behaviors can all be described as various chemical reactions in the brain: cortisol, the fight-or-flight response, adrenaline, dopamine, and so on. Simulating these chemical reactions in a neural net might get closer to real human patterns of bias like cognitive dissonance. Even seeing an LLM as anything more than a statistical prediction machine is another human bias at work, one we also apply to animals: anthropomorphism.