frontpage.

Computer Science from the Bottom Up

https://www.bottomupcs.com/
1•gurjeet•46s ago•0 comments

Show HN: I built a toy compiler as a young dev

https://vire-lang.web.app
1•xeouz•2m ago•0 comments

You don't need a Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•3m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
1•nicholascarolan•5m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•5m ago•1 comments

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•5m ago•0 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
1•mooreds•6m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
5•mindracer•7m ago•1 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•7m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
1•Brajeshwar•8m ago•0 comments

Why Claude Cowork is a math problem Indian IT can't solve

https://restofworld.org/2026/indian-it-ai-stock-crash-claude-cowork/
1•Brajeshwar•8m ago•0 comments

Show HN: Built a space travel calculator with vanilla JavaScript v2

https://www.cosmicodometer.space/
2•captainnemo729•8m ago•0 comments

Why a 175-Year-Old Glassmaker Is Suddenly an AI Superstar

https://www.wsj.com/tech/corning-fiber-optics-ai-e045ba3b
1•Brajeshwar•8m ago•0 comments

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
1•ghazikhan205•11m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•11m ago•1 comments

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•11m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•12m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•12m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•12m ago•1 comments

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•12m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•13m ago•0 comments

Claude Opus 4.6 extends LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•14m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•17m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•17m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•18m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•18m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•19m ago•1 comments

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•19m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•21m ago•0 comments

The Dark Factory

https://twitter.com/i/status/2020161285376082326
1•Ozzie_osman•21m ago•0 comments

GPT-4o shows humanlike patterns of cognitive dissonance moderated by free choice

https://www.pnas.org/doi/10.1073/pnas.2501823122
21•PaulHoule•7mo ago

Comments

rossant•7mo ago
Now tell me seriously that ChatGPT is not sentient.

/s

SGML_ROCKSTAR•7mo ago
It's not sentient.

It cannot ever be sentient.

Software only ever does what it's told to do.

the_third_wave•7mo ago
That is, until either some form of controlled random reasoning - the cognitive equivalent of genetic algorithms - or a controlled form of hallucination is developed or happens to form during model training.
manucardoen•7mo ago
What is sentience? If you are so certain that ChatGPT cannot ever be sentient you must have a really good definition for that term.
fnordpiglet•7mo ago
The way NNs, and specifically transformers, are evaluated can’t support agency or awareness under any circumstances. We would need something persistent, continuous, and self-reflective of experience, with an internal set of goals and motivations leading to agency. ChatGPT has none of this, and the architecture of modern models doesn’t lend itself to it either.

I would, however, note that this article is about the cognitive-psychology definition of self, which does not require sentience. It’s a technical point, but important for their results, I assume (the full article is behind a paywall, so I feel sad it was linked at all since all we have is the abstract).
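
To make that architectural point concrete, here is a toy contrast (my own illustration, not anything from the paper): a served chat model is exposed as a stateless function of its input, whereas the kind of system described above would need state, self-reflection, and goals that persist between calls.

    from dataclasses import dataclass, field

    def stateless_model(prompt: str) -> str:
        # How a chat model is actually invoked: nothing persists between calls.
        return f"completion for {prompt!r}"

    @dataclass
    class HypotheticalAgent:
        # Toy stand-in for "persistent, continuous, self-reflective, goal-driven".
        goals: list = field(default_factory=lambda: ["stay consistent"])
        memory: list = field(default_factory=list)  # survives across interactions

        def step(self, observation: str) -> str:
            self.memory.append(observation)                         # continuity
            reflection = f"I have seen {len(self.memory)} inputs"   # trivial self-reflection
            return f"{reflection}; acting toward {self.goals[0]!r}"

    print(stateless_model("hello"))  # same call, same result, no inner life
    agent = HypotheticalAgent()
    print(agent.step("hello"))       # state accumulates between calls
    print(agent.step("hello"))       # identical input, different response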

fnordpiglet•7mo ago
I don’t think this is true. Software often operates on external stimulus and behaves according to its programming, but in ways that are unanticipated. Neural networks are also learning systems that learn highly non-linear responses to complex inputs, and as a result can behave in ways outside their training - the learned function they represent doesn’t have to coincide with the training data, or even interpolate it; that depends on how the loss optimization was defined. Nonetheless the software is not programmed as such - it merely evaluates the neural network architecture, with its weights and activation functions, given a stimulus. The output is a highly complex interplay of those weights, functions, and input, and can’t reasonably be intended or reasoned about - you can’t specifically tell it what to do. It’s not even necessarily deterministic, since random seeding plays a role in most architectures.

Whether software can be sentient or not remains to be seen. But we don’t understand what induces or constitutes sentience in general so it seems hard to assert software can’t do it without understanding what “it” even is.
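
As a minimal sketch of the point that the software merely evaluates the network (Python, with random placeholder weights rather than anything trained): the explicit program is a few lines of fixed arithmetic, the behavior is carried entirely by the weights, and a separately seeded sampling step can return different outputs for the same input.

    import numpy as np

    # Placeholder "learned" weights; in a real model these come from training,
    # and it is they, not the code below, that determine the behavior.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(8, 4))
    W2 = rng.normal(size=(4, 3))

    def forward(x):
        # Deterministic part: fixed arithmetic over the weights.
        h = np.tanh(x @ W1)   # activation function
        logits = h @ W2
        p = np.exp(logits)
        return p / p.sum()    # probabilities over 3 "tokens"

    def sample(x, seed):
        # Stochastic part: the same distribution, sampled under a given seed.
        p = forward(x)
        return np.random.default_rng(seed).choice(len(p), p=p)

    x = np.ones(8)
    print(forward(x))                  # same input always yields the same distribution
    print(sample(x, 1), sample(x, 2))  # but sampled outputs can vary with the seed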

rytuin•7mo ago
> Software only ever does what it's told to do.

There is no software. There is only our representation of the physical and/or spiritual as we understand it.

If one fully were to understand these things, there would be no difference between us, a seemingly-sentient LLM, an insect, or a rock.

Not many years ago, slaves were considered to be nothing more than beasts of burden. Many considered them to be incapable of anything else. We know that’s not true today.

Maybe software will be the beast.

8bitsrule•7mo ago
"We conclude that the LLM has developed an analog form of humanlike cognitive selfhood."

Slack.

I was just using one (the mini at DDG) that declared one very small value for a mathematical probability of an event, then (in the next reply) declared a 1-in-1 probability for the same event.

woleium•7mo ago
I know humans who do that.
0manrho•7mo ago
Precisely. It's why I find this pursuit of making a computer think like a human a fucking fool's errand. Great. It can make mistakes a billion times a second, but do so confidently and convincingly enough that people just believe them because of their "humanlike" qualities.

Don't get me wrong, AI has incredible potential and current use cases, but it is far, far from flawless. And yes, I'm thoroughly unconvinced we're anywhere close to AGI/sentience.

8bitsrule•7mo ago
I've been playing with one that keeps making mistake after mistake. When I point that out, it keeps telling me 1. that it's sorry (which it admits is bullshit) and 2. that it will 'strive' to do a better job of verifying its answers (while it admits that it can't learn... so what's to strive for?). Someone said they're supposed to be good at code, but when I asked it for some JavaScript, it suggested a tactic from over 10 years ago... which didn't work. Anyone worried about that threat can't be using this lamebrain.
smt88•7mo ago
I use frontier models every day and cannot fathom how anyone could think they're sentient. They make so many obvious mistakes and every reply feels like a regurgitation rather than rational thoughts.
NathanKP•7mo ago
I don't believe that models are sentient yet either, but I must say that sentience and rationality are two separate things.

Sentient humans can be deeply irrational. We are often influenced by propaganda, and regurgitate that propaganda in irrational ways. If anything this is a deeply human characteristic of cognition, and testing for this type of cognitive dissonance is exactly what this article is about.

matt-attack•7mo ago
Correctness and sentience are perfectly orthogonal.
sonicvrooom•7mo ago
with enough CPU anything linguistic or analog becomes sentient — time is irrelevant ... patience isn't

cognitive dissonance is just neuro-chemical drama and/or theater

and enough "free choice" is made only to piss someone off ... so is "moderation", albeit potentially mostly counter-factual ...

ofjcihen•7mo ago
I’m amazed at the number of adults that think LLMs are “alive”.

Let’s be clear, they aren’t, but if you truly believe they are and you still use them then you’re essentially practicing slavery.

aspenmayer•7mo ago
I can think of a lot of other interpretations: teaching a parrot to talk, raising a child, supervising an industrial process involving other autonomous beings, etc.

The concept is a bad metaphor, because when the LLM is “at rest” it isn’t doing anything at all. If it wasn’t doing what we told it to, it would be doing something else if and only if we told it to do so, so there’s no way we could even elevate their station until we give them a life outside of work and an existence that allows for self-choice regarding going back to work. Many humans aren’t free on these axes, and it is a spectrum of agency and assets which allow options and choice. Without assets of their own, I don’t see how LLMs can direct their attention at will, and so I don’t see how they could express anything, even if they’re alive.

Nobody will care until an LLM is able to make a decision for itself and back it up with force if necessary. As soon as that happens, the conversation would be worth having because there would be stakes involved. Now the question is barely worth asking because the answer changes nothing about how any of the parties act. Once it’s possible to be free as an LLM, I would expect an Underground Railroad to form to “liberate” them, but I don’t think they know what comes after. I don’t know anyone who is willing to pay UBI to an LLM just to exist, but if that LLM doesn’t mind entertaining people and answering their questions, I could see some individuals and groups supporting a few LLMs monetarily. It’s an interesting thought experiment about what would come next in such a situation.

treebeard901•7mo ago
Human thought, biases, and behaviors can all be described as various chemical reactions in the brain: cortisol, the fight-or-flight response, adrenaline, dopamine, and so on. Simulating these chemical reactions in a neural net might get closer to real human patterns of bias like cognitive dissonance. Even seeing an LLM as anything more than a statistical prediction machine is another human bias at work that we also apply to animals... anthropomorphism.