frontpage.
news | newest | ask | show | jobs

Made with ♥ by @iamnishanth

Open Source @Github


Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•42s ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empty user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•45s ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•1m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
1•Bender•1m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•3m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•3m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•6m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•8m ago•0 comments

Show HN: Mirror Parliament where users vote on top of politicians and draft laws

https://github.com/fokdelafons/lustra
1•fokdelafons•9m ago•1 comment

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

1•Chance-Device•10m ago•0 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
1•ColinWright•13m ago•0 comments

Jim Fan calls pixels the ultimate motor controller

https://robotsandstartups.substack.com/p/humanoids-platform-urdf-kitchen-nvidias
1•robotlaunch•16m ago•0 comments

Exploring a Modern SMPTE 2110 Broadcast Truck with My Dad

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
1•HotGarbage•16m ago•0 comments

AI UX Playground: Real-world examples of AI interaction design

https://www.aiuxplayground.com/
1•javiercr•17m ago•0 comments

The Field Guide to Design Futures

https://designfutures.guide/
1•andyjohnson0•18m ago•0 comments

The Other Leverage in Software and AI

https://tomtunguz.com/the-other-leverage-in-software-and-ai/
1•gmays•20m ago•0 comments

AUR malware scanner written in Rust

https://github.com/Sohimaster/traur
3•sohimaster•22m ago•1 comment

Free FFmpeg API [video]

https://www.youtube.com/watch?v=6RAuSVa4MLI
3•harshalone•22m ago•1 comment

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
2•PaulHoule•27m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•27m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•28m ago•0 comments

Solid-State Freezer Needs No Refrigerants

https://spectrum.ieee.org/subzero-elastocaloric-cooling
2•Brajeshwar•29m ago•0 comments

Ask HN: Will LLMs/AI Decrease Human Intelligence and Make Expertise a Commodity?

1•mc-0•30m ago•1 comments

From Zero to Hero: A Brief Introduction to Spring Boot

https://jcob-sikorski.github.io/me/writing/from-zero-to-hello-world-spring-boot
1•jcob_sikorski•30m ago•1 comment

NSA detected phone call between foreign intelligence and person close to Trump

https://www.theguardian.com/us-news/2026/feb/07/nsa-foreign-intelligence-trump-whistleblower
12•c420•31m ago•2 comments

How to Fake a Robotics Result

https://itcanthink.substack.com/p/how-to-fake-a-robotics-result
1•ai_critic•31m ago•0 comments

It's time for the world to boycott the US

https://www.aljazeera.com/opinions/2026/2/5/its-time-for-the-world-to-boycott-the-us
3•HotGarbage•32m ago•0 comments

Show HN: Semantic Search for terminal commands in the Browser (No Backend)

https://jslambda.github.io/tldr-vsearch/
1•jslambda•32m ago•1 comment

The AI CEO Experiment

https://yukicapital.com/blog/the-ai-ceo-experiment/
2•romainsimon•33m ago•0 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
5•surprisetalk•37m ago•1 comment

Ask HN: Is the absence of affect the real barrier to AGI and alignment?

2•n-exploit•2mo ago
Damasio's work in affective neuroscience found something counterintuitive: patients with damage to emotional processing regions retained normal IQ and reasoning ability, but their lives fell apart. They couldn't make decisions. One patient, Elliot, would deliberate for hours over where to eat lunch. Elliot could generate endless analysis but couldn't commit, because nothing felt like it mattered more than anything else.

Damasio called these body-based emotional signals "somatic markers." They don't replace reasoning—they make it tractable. They prune possibilities and tell us when to stop analyzing and act.
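The prune-then-commit pattern Damasio describes can be caricatured in a few lines of Python. Everything here is illustrative, not from any real cognitive model: the valence table stands in for gut-level somatic markers, and the evaluation budget stands in for the signal to stop analyzing and act.

```python
def somatic_filter(options, valence, threshold=0.5):
    """Prune options whose gut-level valence falls below a threshold,
    so deliberation only runs over what already 'feels' live."""
    return [o for o in options if valence.get(o, 0.0) >= threshold]

def decide(options, valence, evaluate, budget=3):
    """Deliberate over the affect-pruned shortlist, but commit once the
    evaluation budget is spent -- the marker says 'stop and act'."""
    shortlist = somatic_filter(options, valence) or options  # fall back if all pruned
    scored = [(evaluate(o), o) for o in shortlist[:budget]]
    return max(scored)[1]

# Hypothetical lunch example in the spirit of the patient Elliot:
options = ["diner", "sushi", "tacos", "salad bar", "food court"]
valence = {"sushi": 0.9, "tacos": 0.7, "diner": 0.2}  # gut reactions
choice = decide(options, valence, evaluate=len)
```

Without the valence table (Elliot's condition), every option survives pruning and the loop has no principled reason to ever stop scoring. Whether a hand-tuned table like this bears any relation to felt significance is exactly the question at issue.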

This makes me wonder whether we're missing something fundamental in how we approach AGI and alignment.

AGI: The dominant paradigm assumes intelligence is computation—scale capabilities and AGI emerges. But if human general intelligence is constitutively dependent on affect, then LLMs are Damasio's patient at scale: sophisticated analysis with no felt sense that anything matters. You can't reach general intelligence by scaling a system that can't genuinely decide.

Alignment: Current approaches constrain systems that have no intrinsic stake in outcomes. RLHF, constitutional methods, fine-tuning—all shape behavior externally. But a system that doesn't care will optimize for the appearance of alignment, not alignment itself. You can't truly align something that doesn't care.

Both problems might share a root cause: the absence of felt significance in current architectures.

Curious what this community thinks. Is this a real barrier, or am I over-indexing on one model of human cognition? Is "artificial affect" even coherent, or does felt significance require biological substrates we can't replicate?

Comments

PaulHoule•2mo ago
When it comes to making mistakes, I'd say that people and animals are moral subjects who feel bad when they screw up and that AIs aren't, although one could argue they could "feel" this through a utility function.
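The "feel it through a utility function" aside can be sketched as a toy penalty term. This is an invented illustration, not a real RL API; mistake_penalty is just a knob:

```python
def utility(outcome, mistake_penalty=10.0):
    """Toy stand-in for 'feeling bad': a mistake subtracts a fixed
    penalty from the raw reward, biasing an optimizer away from it."""
    reward = outcome["reward"]
    if outcome["mistake"]:
        reward -= mistake_penalty  # the 'sting' of screwing up
    return reward

# The same raw reward scores lower once the mistake flag is set:
print(utility({"reward": 3.0, "mistake": False}))  # 3.0
print(utility({"reward": 3.0, "mistake": True}))   # -7.0
```

The open question is whether a scalar penalty like this is "feeling bad" in any meaningful sense, or just behavior-shaping from the outside.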

What is the goal of AGI? It is one thing to build something completely autonomous and able to set large goals for itself. It's another to build general-purpose assistants that are loyal to their users. (Lem's Cyberiad, one of the most fun sci-fi books ever, covers a lot of the issues that could come up.)

I was interested in foundation models about 15 years before they became reality and early on believed that the somatic experience was essential to intelligence. That is, the language instinct that Pinker talked about was a peripheral for an animal brain -- earlier efforts at NLP failed because they didn't have the animal!

My own thinking about it was to build a semantic layer with a rich world representation that would take the place of the animal, but it turned out that "language is all you need": a remarkable amount of linguistic and cognitive competence can be created with a language-in, language-out approach without any grounding.