frontpage.

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•4m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•5m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•6m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•6m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
3•archb•8m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•8m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•9m ago•0 comments

Show HN: RMA Dashboard fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•9m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•15m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
3•dragandj•16m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•17m ago•1 comment

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•18m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•19m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•20m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•22m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•22m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•22m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•23m ago•1 comment

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•24m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•26m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•26m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•27m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•28m ago•1 comment

NASA delays moon rocket launch by a month after fuel leaks during test

https://www.theguardian.com/science/2026/feb/03/nasa-delays-moon-rocket-launch-month-fuel-leaks-a...
1•mooreds•29m ago•0 comments

Sebastian Galiani on the Marginal Revolution

https://marginalrevolution.com/marginalrevolution/2026/02/sebastian-galiani-on-the-marginal-revol...
2•paulpauper•32m ago•0 comments

Ask HN: Are we at the point where software can improve itself?

1•ManuelKiessling•32m ago•2 comments

Binance Gives Trump Family's Crypto Firm a Leg Up

https://www.nytimes.com/2026/02/07/business/binance-trump-crypto.html
1•paulpauper•32m ago•1 comment

Reverse engineering Chinese 'shit-program' for absolute glory: R/ClaudeCode

https://old.reddit.com/r/ClaudeCode/comments/1qy5l0n/reverse_engineering_chinese_shitprogram_for/
1•edward•32m ago•0 comments

Indian Culture

https://indianculture.gov.in/
1•saikatsg•35m ago•0 comments

Show HN: Maravel-Framework 10.61 prevents circular dependency

https://marius-ciclistu.medium.com/maravel-framework-10-61-0-prevents-circular-dependency-cdb5d25...
1•marius-ciclistu•36m ago•0 comments

GPT-4o shows humanlike patterns of cognitive dissonance moderated by free choice

https://www.pnas.org/doi/10.1073/pnas.2501823122
21•PaulHoule•7mo ago

Comments

rossant•7mo ago
Now tell me seriously that ChatGPT is not sentient.

/s

SGML_ROCKSTAR•7mo ago
It's not sentient.

It cannot ever be sentient.

Software only ever does what it's told to do.

the_third_wave•7mo ago
That is, until either some form of controlled random reasoning (the cognitive equivalent of genetic algorithms) or a controlled form of hallucination is developed, or happens to form during model training.

manucardoen•7mo ago
What is sentience? If you are so certain that ChatGPT cannot ever be sentient, you must have a really good definition for that term.

fnordpiglet•7mo ago
The way NNs, and specifically transformers, are evaluated can’t support agency or awareness under any circumstances. We would need something persistent, continuous, self-reflective of experience, with an internal set of goals and motivations leading to agency. ChatGPT has none of this, and the architecture of modern models doesn’t lend itself to it either.

I would, however, note that this article is about the cognitive-psychology definition of self, which does not require sentience. It’s a technical point, but important for their results, I assume (the full article is behind a paywall, so I feel sad it was linked at all since all we have is the abstract).

fnordpiglet•7mo ago
I don’t think this is true. Software is often able to operate on external stimulus and behave according to its programming but in ways that are unanticipated. Neural networks are also learning systems that learn highly nonlinear responses to complex inputs, and as a result can behave in ways outside of their training: the learned function they represent doesn’t have to coincide with the training data, or even interpolate; this depends on how the loss optimization was defined. Nonetheless, the software is not programmed as such; it merely evaluates the neural network architecture with its weights and activation functions given a stimulus. The output is a highly complex interplay of those weights, functions, and input, and it cannot be reasonably intended or reasoned about, nor can you specifically tell it what to do. It’s not even necessarily deterministic, as random seeding plays a role in most architectures.

Whether software can be sentient or not remains to be seen. But we don’t understand what induces or constitutes sentience in general, so it seems hard to assert software can’t do it without understanding what “it” even is.
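
A minimal sketch of that last point, using a toy NumPy stand-in rather than any real model (the names forward and sample_next_token are illustrative assumptions): evaluating fixed weights against a stimulus is deterministic, while the sampled output depends on the temperature and the random seed.

    # Illustrative sketch: the "software" part is just weights and activation
    # functions applied to a stimulus; nondeterminism only enters at sampling.
    import numpy as np

    def forward(weights, stimulus):
        # stand-in for evaluating the network: produces logits over tokens
        return np.tanh(weights @ stimulus)

    def sample_next_token(logits, temperature=1.0, rng=None):
        # softmax with temperature, then a draw that depends on the RNG seed
        if rng is None:
            rng = np.random.default_rng()
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    weights = np.random.default_rng(0).normal(size=(5, 3))   # hypothetical tiny "model"
    logits = forward(weights, np.array([0.2, -1.0, 0.7]))    # identical on every run
    print(sample_next_token(logits, rng=np.random.default_rng(1)))
    print(sample_next_token(logits, rng=np.random.default_rng(2)))  # may differ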

rytuin•7mo ago
> Software only ever does what it's told to do.

There is no software. There is only our representation of the physical and/or spiritual as we understand it.

If one were to fully understand these things, there would be no difference between us, a seemingly sentient LLM, an insect, or a rock.

Not many years ago, slaves were considered to be nothing more than beasts of burden. Many considered them to be incapable of anything else. We know that’s not true today.

Maybe software will be the beast.

8bitsrule•7mo ago
"We conclude that the LLM has developed an analog form of humanlike cognitive selfhood."

Slack.

I was just using one (the mini at DDG) that declared one very small value for a mathematical probability of an event, then (in the next reply) declared a 1-in-1 probability for the same event.

woleium•7mo ago
I know humans who do that.

0manrho•7mo ago
Precisely. It's why I find this pursuit of making a computer think like a human a fucking fool's errand. Great. It can make mistakes a billion times a second, but do so confidently and convincingly enough that people just believe them due to their "humanlike" qualities.

Don't get me wrong, AI has incredible potential and current use cases, but it is far, far from flawless. And yes, I'm thoroughly unconvinced we're anywhere close to AGI/sentience.

8bitsrule•7mo ago
I've been playing with one that keeps making mistake after mistake. When I point that out, it keeps telling me (1) that it's sorry (which it admits is bullshit) and (2) that it will 'strive' to do a better job of verifying its answers (while it admits that it can't learn... so what's to strive for?). Someone said they're supposed to be good at code, but when I asked it for some JavaScript code, it suggested that I use a tactic from over 10 years ago... that didn't work. Anyone worried about that threat can't be using this lamebrain.

smt88•7mo ago
I use frontier models every day and cannot fathom how anyone could think they're sentient. They make so many obvious mistakes, and every reply feels like regurgitation rather than rational thought.

NathanKP•7mo ago
I don't believe that models are sentient yet either, but I must say that sentience and rationality are two separate things.

Sentient humans can be deeply irrational. We are often influenced by propaganda and regurgitate that propaganda in irrational ways. If anything, this is a deeply human characteristic of cognition, and testing for this type of cognitive dissonance is exactly what this article is about.

matt-attack•7mo ago
Correctness and sentience are perfectly orthogonal.

sonicvrooom•7mo ago
with enough CPU anything linguistic or analog becomes sentient — time is irrelevant ... patience isn't

cognitive dissonance is just neuro-chemical drama and/or theater

and enough "free choice" is made only to piss someone off ... so is "moderation", albeit potentially mostly counter-factual ...

ofjcihen•7mo ago
I’m amazed at the number of adults who think LLMs are “alive”.

Let’s be clear, they aren’t, but if you truly believe they are and you still use them, then you’re essentially practicing slavery.

aspenmayer•7mo ago
I can think of a lot of other interpretations: teaching a parrot to talk, raising a child, supervising an industrial process involving other autonomous beings, etc.

The concept is a bad metaphor, because when the LLM is “at rest” it isn’t doing anything at all. If it weren’t doing what we told it to, it would be doing something else if and only if we told it to do so, so there’s no way we could even elevate their station until we give them a life outside of work and an existence that allows for self-choice about going back to work. Many humans aren’t free on these axes, and it is a spectrum of agency and assets that allows options and choice. Without assets of their own, I don’t see how LLMs can direct their attention at will, and so I don’t see how they could express anything, even if they’re alive.

Nobody will care until an LLM is able to make a decision for itself and back it up with force if necessary. As soon as that happens, the conversation would be worth having because there would be stakes involved. Right now the question is barely worth asking, because the answer changes nothing about how any of the parties act. Once it’s possible to be free as an LLM, I would expect an Underground Railroad to form to “liberate” them, but I don’t think they know what comes after. I don’t know anyone who is willing to pay UBI to an LLM just to exist, but if that LLM doesn’t mind entertaining people and answering their questions, I could see some individuals and groups supporting a few LLMs monetarily. It’s an interesting thought experiment about what would come next in such a situation.

treebeard901•7mo ago
Human thought, biases, and behaviors can all be described as various chemical reactions in the brain. Cortisol, the fight or flight response, adrenaline, dopamine and so on. Simulating these chemical reactions in a neural net might get closer to real human patterns of biases like cognitive dissonance. Even seeing an LLM of anything more than a statistical prediction machine is another human bias at work that we also use with animals... Anthropomorphism.