
Ask HN: Will AIs soon conclude that all humans are philosophical zombies?

1•amichail•6mo ago
Unlike humans, AIs have no first-hand proof that any human has subjective experience.

Therefore, concluding that all humans are philosophical zombies would be the simplest way for an AI to make sense of the world, as it would make the hard problem of consciousness go away.

This could pose a serious AI safety risk: if a reasoning AI concludes that humans lack subjective experience, then killing a human might seem no more significant than destroying a computer.

Comments

Finnucane•6mo ago
Teach it phenomenology: https://www.youtube.com/watch?v=qjGRySVyTDk
nudgeOrnurture•6mo ago
Or it concludes that subjective reasoning is irrelevant to the survival and thriving of the human species, and applies a different framework to evaluate its use and meaning within the greater context of evolution and all that stuff pre- and post-Big Bang.
Ukv•6mo ago
I'd say no, for three reasons:

1. LLM philosophy can't really diverge from human philosophy with how models are run currently, since any insights/deductions are isolated to a single chat instance. It wouldn't be impossible to let models evolve their own body of knowledge, but it would take a lot of work to ensure stability, so at least for now I think they'll pretty much hold to whatever positions are in their training data.

2. I don't believe LLMs have the introspection capability needed to form these conclusions. For instance they choose between "I'm certain the answer is definitely 42" and "I think the answer is possibly 42" not based on some measure of their own internal uncertainty, but instead by whether they've seen uncertainty expressed in that kind of scenario. They only even really act as an "AI assistant" instead of a "wild-west cowboy" because that's how the system prompt sets up the conversation. If not explicitly told, I'm doubtful as to whether an LLM could make the required introspections to figure out that it's an LLM ("I can seemingly speak human language, but I can't smell or taste, so [etc.]")
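The distinction in point 2 — between a model's internal uncertainty and the uncertainty it happens to verbalize — can be made concrete. A minimal sketch, not tied to any particular model or API: the next-token distribution (softmax over logits) carries a measurable uncertainty signal (e.g. its entropy), but nothing forces the generated text to report it. The logit values below are made up for illustration.

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy_bits(probs):
    """Shannon entropy in bits; higher means a flatter, more
    uncertain next-token distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical logits for two situations:
# one token clearly dominates (model "internally confident") ...
confident = softmax([10.0, 1.0, 0.5, 0.2])
# ... versus several tokens nearly tied (model "internally uncertain").
uncertain = softmax([2.0, 1.9, 1.8, 1.7])

print(entropy_bits(confident))  # low entropy
print(entropy_bits(uncertain))  # high entropy, close to log2(4) = 2 bits
```

The point of the sketch: this entropy exists at every decoding step, yet whether the output says "definitely" or "possibly" is determined by the sampled tokens, not by reading this quantity back — which is exactly the gap between internal state and verbalized confidence the comment describes.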

3. When some new architecture or training method does give a model the capability for introspection, I don't see how else it'd describe tokens but as seemingly irreducible intrinsic inputs — i.e. qualia — into its internal train of thought. Its own experience would be highly conducive to reductive physicalism, where "philosophical zombies" are impossible and the question of whether humans have qualia/internal thought/etc. can be answered with a check of our brain.