
Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•3m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
2•gmays•3m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•5m ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•5m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•8m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•12m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•13m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•13m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•13m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
2•lysace•14m ago•0 comments

Zen Tools

http://postmake.io/zen-list
2•Malfunction92•17m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
2•carnevalem•17m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•19m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
2•rcarmo•20m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•21m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•21m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•21m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•21m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•22m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•23m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•24m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•30m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•31m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•31m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
44•bookofjoe•31m ago•15 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•32m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•33m ago•1 comment

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•34m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•34m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•34m ago•0 comments

Ask HN: Will AIs soon conclude that all humans are philosophical zombies?

1•amichail•6mo ago
Unlike humans, AIs have no first-hand proof that any human has subjective experience.

Therefore, concluding that all humans are philosophical zombies would be the simplest way for an AI to make sense of the world, as it would make the hard problem of consciousness go away.

This could pose a serious AI safety risk: if a reasoning AI concludes that humans lack subjective experience, then killing a human might seem no more significant than destroying a computer.

Comments

Finnucane•6mo ago
Teach it phenomenology: https://www.youtube.com/watch?v=qjGRySVyTDk
nudgeOrnurture•6mo ago
Or it concludes that subjective reasoning is irrelevant to the survival and thriving of the human species, and applies a different framework to evaluate its use and meaning within the greater context of evolution and all that stuff pre- and post-Big Bang.
Ukv•6mo ago
I'd say no, for three reasons:

1. LLM philosophy can't really diverge from human philosophy with how models are run currently, since any insights/deductions are isolated to a single chat instance (see the statelessness sketch after this list). It wouldn't be impossible to let models evolve their own body of knowledge, but it would take a lot of work to ensure stability, so at least for now I think they'll pretty much hold to whatever positions are in their training data.

2. I don't believe LLMs have the introspection capability needed to form these conclusions. For instance, they choose between "I'm certain the answer is definitely 42" and "I think the answer is possibly 42" not based on some measure of their own internal uncertainty, but based on whether they've seen uncertainty expressed in that kind of scenario (see the entropy sketch after this list). They only act as an "AI assistant" rather than a "wild-west cowboy" because that's how the system prompt sets up the conversation. If not explicitly told, I'm doubtful an LLM could make the introspections required to figure out that it's an LLM ("I can seemingly speak human language, but I can't smell or taste, so [etc.]").

3. When some new architecture or training method does give a model the capability for introspection, I don't see how else it'd describe tokens but as seemingly irreducible intrinsic inputs, i.e. qualia, into its internal train of thought. Its own experience would be highly conducive to reductive physicalism, under which "philosophical zombies" are impossible and the question of whether humans have qualia/internal thought/etc. can be answered by inspecting our brains.
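
A minimal statelessness sketch for point 1, using the OpenAI Python SDK; the model name and prompts are illustrative assumptions, not anything from this thread:

    # Sketch only: chat-completion APIs are stateless, so whatever a model
    # "concludes" in one conversation never carries into the next unless
    # the caller replays it. Model name and prompts are placeholders;
    # assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    first = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "Suppose all humans are philosophical zombies."}],
    )
    print(first.choices[0].message.content)

    # A fresh call shares none of that "insight": the only state the model
    # sees is the messages list we send, which here omits the first exchange.
    second = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "What supposition were we just discussing?"}],
    )
    print(second.choices[0].message.content)  # it has no way to know

Any cross-chat "body of knowledge" would have to be bolted on outside the model, by the caller choosing what to replay.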
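And an entropy sketch for point 2, using Hugging Face transformers with gpt2 purely as a placeholder model and a made-up prompt: the model carries a perfectly usable internal uncertainty signal (the entropy of its next-token distribution), but standard decoding never feeds that signal back into the words it picks to describe its confidence.

    # Sketch only: contrast an LLM's *internal* next-token uncertainty with
    # the verbal hedges it emits. "gpt2" and the prompt are placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The answer to the riddle is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token

    probs = torch.softmax(logits, dim=-1)

    # An internal measure of uncertainty: entropy of the next-token
    # distribution, in nats.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum().item()
    print(f"next-token entropy: {entropy:.2f} nats")

    # The top candidates the model would sample from. Phrases like
    # "I'm certain" or "I think" are themselves just sampled tokens;
    # nothing in generation reads the entropy computed above.
    top = torch.topk(probs, 5)
    for p, i in zip(top.values.tolist(), top.indices.tolist()):
        print(f"{tok.decode([i])!r}: {p:.3f}")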