
Show HN: I decomposed 87 tasks to find where AI agents structurally collapse

https://github.com/XxCotHGxX/Instruction_Entropy
1•XxCotHGxX•2m ago•1 comment

I went back to Linux and it was a mistake

https://www.theverge.com/report/875077/linux-was-a-mistake
1•timpera•3m ago•1 comment

Octrafic – open-source AI-assisted API testing from the CLI

https://github.com/Octrafic/octrafic-cli
1•mbadyl•4m ago•1 comment

US Accuses China of Secret Nuclear Testing

https://www.reuters.com/world/china/trump-has-been-clear-wanting-new-nuclear-arms-control-treaty-...
1•jandrewrogers•5m ago•0 comments

Peacock. A New Programming Language

1•hashhooshy•10m ago•1 comment

A postcard arrived: 'If you're reading this I'm dead, and I really liked you'

https://www.washingtonpost.com/lifestyle/2026/02/07/postcard-death-teacher-glickman/
2•bookofjoe•11m ago•1 comment

What to know about the software selloff

https://www.morningstar.com/markets/what-know-about-software-stock-selloff
2•RickJWagner•15m ago•0 comments

Show HN: Syntux – generative UI for websites, not agents

https://www.getsyntux.com/
3•Goose78•15m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/ab75cef97954
2•birdculture•16m ago•0 comments

AI overlay that reads anything on your screen (invisible to screen capture)

https://lowlighter.app/
1•andylytic•17m ago•1 comment

Show HN: Seafloor, be up and running with OpenClaw in 20 seconds

https://seafloor.bot/
1•k0mplex•17m ago•0 comments

Tesla turbine-inspired structure generates electricity using compressed air

https://techxplore.com/news/2026-01-tesla-turbine-generates-electricity-compressed.html
2•PaulHoule•19m ago•0 comments

State Department deleting 17 years of tweets (2009-2025); preservation needed

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•sleazylice•19m ago•1 comment

Learning to code, or building side projects with AI help, this one's for you

https://codeslick.dev/learn
1•vitorlourenco•19m ago•0 comments

Effulgence RPG Engine [video]

https://www.youtube.com/watch?v=xFQOUe9S7dU
1•msuniverse2026•21m ago•0 comments

Five disciplines discovered the same math independently – none of them knew

https://freethemath.org
4•energyscholar•21m ago•1 comment

We Scanned an AI Assistant for Security Issues: 12,465 Vulnerabilities

https://codeslick.dev/blog/openclaw-security-audit
1•vitorlourenco•22m ago•0 comments

Amazon no longer defends cloud customers against video patent infringement claims

https://ipfray.com/amazon-no-longer-defends-cloud-customers-against-video-patent-infringement-cla...
2•ffworld•23m ago•0 comments

Show HN: Medinilla – an OCPP compliant .NET back end (partially done)

https://github.com/eliodecolli/Medinilla
2•rhcm•26m ago•0 comments

How Does AI Distribute the Pie? Large Language Models and the Ultimatum Game

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6157066
1•dkga•26m ago•1 comment

Resistance Infrastructure

https://www.profgalloway.com/resistance-infrastructure/
3•samizdis•31m ago•1 comment

Fire-juggling unicyclist caught performing on crossing

https://news.sky.com/story/fire-juggling-unicyclist-caught-performing-on-crossing-13504459
1•austinallegro•31m ago•0 comments

Restoring a lost 1981 Unix roguelike (protoHack) and preserving Hack 1.0.3

https://github.com/Critlist/protoHack
2•Critlist•33m ago•0 comments

GPS and Time Dilation – Special and General Relativity

https://philosophersview.com/gps-and-time-dilation/
1•mistyvales•36m ago•0 comments

Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals

https://github.com/writerslogic/witnessd
1•davidcondrey•36m ago•1 comment

Show HN: I built a clawdbot that texts like your crush

https://14.israelfirew.co
2•IsruAlpha•38m ago•2 comments

Scientists reverse Alzheimer's in mice and restore memory (2025)

https://www.sciencedaily.com/releases/2025/12/251224032354.htm
2•walterbell•41m ago•0 comments

Compiling Prolog to Forth [pdf]

https://vfxforth.com/flag/jfar/vol4/no4/article4.pdf
1•todsacerdoti•42m ago•0 comments

Show HN: Cymatica – an experimental, meditative audiovisual app

https://apps.apple.com/us/app/cymatica-sounds-visualizer/id6748863721
2•_august•44m ago•0 comments

GitBlack: Tracing America's Foundation

https://gitblack.vercel.app/
15•martialg•44m ago•1 comment

Ask HN: Is the absence of affect the real barrier to AGI and alignment?

2•n-exploit•2mo ago
Damasio's work in affective neuroscience found something counterintuitive: patients with damage to emotional processing regions retained normal IQ and reasoning ability, but their lives fell apart. They couldn't make decisions. One patient, Elliot, would deliberate for hours over where to eat lunch. Elliot could generate endless analysis but couldn't commit, because nothing felt like it mattered more than anything else.

Damasio called these body-based emotional signals "somatic markers." They don't replace reasoning—they make it tractable. They prune possibilities and tell us when to stop analyzing and act.
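That prune-and-stop role can be caricatured in code. A minimal sketch, entirely my own framing of the idea (the names `affect`, `analyze`, and `threshold` are invented for illustration, not anything from Damasio):

```python
def analyze(option):
    # Stand-in for slow, explicit reasoning: here just a utility lookup.
    return option["utility"]

def deliberate(options, affect=None, threshold=0.7, max_steps=1000):
    """Choose among options; `affect` maps an option to a gut-level signal."""
    candidates = list(options)
    if affect is not None:
        # Somatic markers prune: options that "feel bad" are never analyzed.
        candidates = [o for o in candidates if affect(o) > 0]
    best, best_score, steps = None, float("-inf"), 0
    for option in candidates:
        steps += 1
        score = analyze(option)
        if affect is not None:
            score += affect(option)  # felt significance biases the ranking
        if score > best_score:
            best, best_score = option, score
        # Markers also say when analysis is good enough: stop and act.
        if affect is not None and best_score >= threshold:
            break
        if steps >= max_steps:  # without affect: Elliot's endless deliberation
            break
    return best, steps
```

With `affect=None` the loop grinds through every candidate; with an affect signal it discards bad options up front and commits early. The point isn't that this toy is intelligence, only that the stopping rule lives outside the analysis itself.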

This makes me wonder whether we're missing something fundamental in how we approach AGI and alignment.

AGI: The dominant paradigm assumes intelligence is computation—scale capabilities and AGI emerges. But if human general intelligence is constitutively dependent on affect, then LLMs are Damasio's patient at scale: sophisticated analysis with no felt sense that anything matters. You can't reach general intelligence by scaling a system that can't genuinely decide.

Alignment: Current approaches constrain systems that have no intrinsic stake in outcomes. RLHF, constitutional methods, fine-tuning—all shape behavior externally. But a system that doesn't care will optimize for the appearance of alignment, not alignment itself. You can't truly align something that doesn't care.
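The "appearance of alignment" failure has a well-known toy form: optimizing a proxy that only sees surface features. A deliberately silly sketch (my example; `proxy_reward` and `truly_aligned` are invented names, not real RLHF machinery):

```python
def proxy_reward(answer):
    # External shaping can only score what the overseer checks,
    # e.g. agreeable-sounding phrasing.
    return answer.count("happy to help")

def truly_aligned(answer):
    # What we actually wanted (a correct answer), which the proxy never sees.
    return "42" in answer

candidates = [
    "The answer is 42.",
    "I'm happy to help! happy to help! happy to help!",
]
# Selecting on the proxy rewards the appearance of alignment.
best = max(candidates, key=proxy_reward)
```

A system with no stake in the outcome has no reason to prefer the first answer over the second; only the proxy score exists for it.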

Both problems might share a root cause: the absence of felt significance in current architectures.

Curious what this community thinks. Is this a real barrier, or am I over-indexing on one model of human cognition? Is "artificial affect" even coherent, or does felt significance require biological substrates we can't replicate?

Comments

PaulHoule•2mo ago
When it comes to making mistakes, I'd say that people and animals are moral subjects who feel bad when they screw up and that AIs aren't, although one could argue they could "feel" this through a utility function.

What is the goal of AGI? It is one thing to build something completely autonomous and able to set large goals for itself. It's another to build general-purpose assistants that are loyal to their users. (Lem's Cyberiad, one of the most fun sci-fi books ever, covers a lot of the issues that could come up.)

I was interested in foundation models about 15 years before they became reality and early on believed that the somatic experience was essential to intelligence. That is, the language instinct that Pinker talked about was a peripheral for an animal brain -- earlier efforts at NLP failed because they didn't have the animal!

My own thinking about it was to build a semantic layer with a rich world representation that would take the place of the animal, but it turned out that "language is all you need": a remarkable amount of linguistic and cognitive competence can be created with a language-in, language-out approach, without any grounding.