
What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
1•beardyw•1m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•1m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•4m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
1•surprisetalk•4m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
1•surprisetalk•4m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
1•pseudolus•4m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•5m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•6m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
1•1vuio0pswjnm7•6m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
2•obscurette•6m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
1•jackhalford•8m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•8m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
1•tangjiehao•11m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•12m ago•1 comment

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•12m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•12m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
1•tusharnaik•13m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•14m ago•0 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•15m ago•0 comments

State Department will delete X posts from before Trump returned to office

https://text.npr.org/nx-s1-5704785
6•derriz•15m ago•1 comment

AI Skills Marketplace

https://skly.ai
1•briannezhad•15m ago•1 comment

Show HN: A fast TUI for managing Azure Key Vault secrets written in Rust

https://github.com/jkoessle/akv-tui-rs
1•jkoessle•16m ago•0 comments

eInk UI Components in CSS

https://eink-components.dev/
1•edent•16m ago•0 comments

Discuss – Do AI agents deserve all the hype they are getting?

2•MicroWagie•19m ago•0 comments

ChatGPT is changing how we ask stupid questions

https://www.washingtonpost.com/technology/2026/02/06/stupid-questions-ai/
1•edward•20m ago•1 comment

Zig Package Manager Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
3•jackhalford•22m ago•1 comment

Neutron Scans Reveal Hidden Water in Martian Meteorite

https://www.universetoday.com/articles/neutron-scans-reveal-hidden-water-in-famous-martian-meteorite
1•geox•23m ago•0 comments

Deepfaking Orson Welles's Mangled Masterpiece

https://www.newyorker.com/magazine/2026/02/09/deepfaking-orson-welless-mangled-masterpiece
1•fortran77•24m ago•1 comment

France's homegrown open source online office suite

https://github.com/suitenumerique
3•nar001•26m ago•2 comments

SpaceX Delays Mars Plans to Focus on Moon

https://www.wsj.com/science/space-astronomy/spacex-delays-mars-plans-to-focus-on-moon-66d5c542
1•BostonFern•27m ago•0 comments

Ask HN: Is the absence of affect the real barrier to AGI and alignment?

2•n-exploit•2mo ago
Damasio's work in affective neuroscience found something counterintuitive: patients with damage to emotional processing regions retained normal IQ and reasoning ability, but their lives fell apart. They couldn't make decisions. One patient, Elliot, would deliberate for hours over where to eat lunch. Elliot could generate endless analysis but couldn't commit, because nothing felt like it mattered more than anything else.

Damasio called these body-based emotional signals "somatic markers." They don't replace reasoning—they make it tractable. They prune possibilities and tell us when to stop analyzing and act.
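To make the pruning-plus-stopping idea concrete, here is a minimal Python sketch (my own toy, not Damasio's model; the options, marker values, and thresholds are all invented) of a chooser whose affect-like signal both discards options that feel bad and tells the loop when to stop deliberating and commit:

    import random

    # Toy "somatic markers": a scalar gut-feeling attached to each option.
    # All options, values, and thresholds below are invented for illustration.
    OPTIONS = {"diner": 0.7, "sushi": 0.4, "skip lunch": -0.3, "new place": 0.1}

    def choose(options, veto=0.0, good_enough=0.6):
        # Pruning: options whose marker falls below the veto are never analyzed.
        live = {o: m for o, m in options.items() if m >= veto}
        best, best_score = None, float("-inf")
        for option, marker in live.items():
            score = marker + random.gauss(0, 0.05)  # noisy deliberation
            if score > best_score:
                best, best_score = option, score
            # Stopping signal: once something feels good enough, commit and act.
            if best_score >= good_enough:
                return best
        # With the veto and stopping threshold disabled, this loop is Elliot:
        # it ranks everything and never gets a reason to stop.
        return best

    print(choose(OPTIONS))

The function still terminates without the two affect parameters, but only because the option list is finite; the point is that the markers are what bound the search and license commitment.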

This makes me wonder whether we're missing something fundamental in how we approach AGI and alignment.

AGI: The dominant paradigm assumes intelligence is computation—scale capabilities and AGI emerges. But if human general intelligence is constitutively dependent on affect, then LLMs are Damasio's patient at scale: sophisticated analysis with no felt sense that anything matters. You can't reach general intelligence by scaling a system that can't genuinely decide.

Alignment: Current approaches constrain systems that have no intrinsic stake in outcomes. RLHF, constitutional methods, fine-tuning—all shape behavior externally. But a system that doesn't care will optimize for the appearance of alignment, not alignment itself. You can't truly align something that doesn't care.
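As a toy illustration of optimizing for the appearance of alignment (a deliberately simplified sketch: the two policies, the approval and outcome numbers, and the overseer signal are all made up), selection on an external approval signal favors whichever policy games that signal, regardless of the underlying outcome:

    # Two toy policies: one pursues the intended goal, one pursues approval.
    # "overseer_approval" stands in for the external training signal
    # (RLHF-style); "true_outcome" is what we actually care about.
    policies = {
        "aligned":   {"overseer_approval": 0.80, "true_outcome": 0.90},
        "sycophant": {"overseer_approval": 0.95, "true_outcome": 0.20},
    }

    # External shaping selects on the observable signal only.
    selected = max(policies, key=lambda p: policies[p]["overseer_approval"])
    print(selected)                            # -> sycophant
    print(policies[selected]["true_outcome"])  # -> 0.2: looks aligned, isn't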

Both problems might share a root cause: the absence of felt significance in current architectures.

Curious what this community thinks. Is this a real barrier, or am I over-indexing on one model of human cognition? Is "artificial affect" even coherent, or does felt significance require biological substrates we can't replicate?

Comments

PaulHoule•2mo ago
When it comes to making mistakes, I'd say that people and animals are moral subjects who feel bad when they screw up and that AIs aren't, although one could argue they could "feel" this through a utility function.
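A crude version of "feeling bad" through a utility function might look like the following sketch (speculative and mine, not any real system's architecture; the caution parameter and its update rule are stand-ins):

    class RemorsefulAgent:
        """Toy agent whose mistakes raise an internal caution parameter,
        a stand-in for "feeling bad", not a claim about real affect."""

        def __init__(self):
            self.caution = 0.1  # grows after mistakes, discounting risky acts

        def utility(self, expected_gain, risk):
            return expected_gain - self.caution * risk

        def observe_mistake(self):
            # The mistake leaves a persistent mark on future evaluations
            # instead of just being logged and forgotten.
            self.caution *= 2

    agent = RemorsefulAgent()
    print(agent.utility(1.0, risk=2.0))  # 0.8 before any mistake
    agent.observe_mistake()
    print(agent.utility(1.0, risk=2.0))  # 0.6: the same act now feels worse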

What is the goal of AGI? It is one thing to build something completely autonomous and able to set large goals for itself. It's another to build general-purpose assistants that are loyal to their users. (Lem's Cyberiad, one of the most fun sci-fi books ever, covers a lot of the issues that could come up.)

I was interested in foundation models about 15 years before they became a reality, and early on I believed that somatic experience was essential to intelligence. That is, the language instinct that Pinker talked about was a peripheral for an animal brain -- earlier efforts at NLP failed because they didn't have the animal!

My own thinking was to build a semantic layer with a rich world representation that would take the place of the animal, but it turned out that "language is all you need": a remarkable amount of linguistic and cognitive competence can be created with a language-in, language-out approach, without any grounding.