
Newcomb's Paradox Needs a Demon

https://samestep.com/blog/newcombs-paradox/
7•sestep•2d ago

Comments

danbruc•2h ago
The existence of a flawless predictor means that you do not have a choice after the predictor made its prediction, the decision must already be baked into the state of the universe accessible to the predictor. It also precludes that any true randomness is affecting the choice as that could not be predicted ahead of time.

I do not think that allowing some prediction error fundamentally changes this; it only means that sometimes the choice may depend on unpredictable true randomness, or sometimes the predictor did not measure the relevant state of the universe exactly enough, or the prediction algorithm is not flawless. But if the predictor still arrives at the correct prediction most of the time, then most of the time you do not have a choice, and most of the time the choice does not depend on true randomness.

Which also renders the entire paradox somewhat moot, because there is no choice for you to make. The existence of a good predictor and the ability to make a choice after the prediction are incompatible. Up to wild time travel scenarios and things like that.

halfcat•1h ago
A flawless predictor would indicate you're in a simulation, although we cannot even simulate multiple cells at the most fine-grained level of physics.

That said, you're right that even a pretty good (but not perfect) predictor doesn't change the scenario.

What I find interesting is to change the amounts. If the open box has $0.01 instead of $1000, you're not thinking "at least I got something", and you just one-box.

But if both boxes contain equal amounts, or you swap the amounts in each box, two-boxing is always better.

All that to say, the idea that the right strategy here is to "be the kind of person who one-boxes" isn't a universal virtue. If the amounts change, the virtues change.
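The sensitivity to the amounts can be checked with a quick expected-value calculation. A minimal sketch (the function names, the predictor accuracy p = 0.9, and the generalized open/opaque amounts a and b are my own illustrative assumptions, not from the thread):

```python
def ev_one_box(p, b=1_000_000):
    # One-boxer gets the opaque amount b only when the predictor
    # correctly foresaw one-boxing (probability p).
    return p * b

def ev_two_box(p, a=1000, b=1_000_000):
    # Two-boxer always gets the open amount a, plus b when the
    # predictor wrongly filled the opaque box (probability 1 - p).
    return a + (1 - p) * b

# Classic amounts: one-boxing wins for any reasonably accurate predictor.
print(ev_one_box(0.9), ev_two_box(0.9))

# Penny in the open box: the comparison barely moves.
print(ev_one_box(0.9), ev_two_box(0.9, a=0.01))

# Equal amounts: one-boxing needs (2p - 1) * b > a, which cannot hold
# when a == b and p <= 1, so two-boxing is always better.
print(ev_one_box(0.9, b=1000), ev_two_box(0.9, a=1000, b=1000))
```

Rearranging the comparison gives the condition (2p - 1) * b > a for one-boxing, which is why a tiny open-box amount makes one-boxing trivial and equal amounts make two-boxing dominant.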

danbruc•1h ago
A flawless predictor would indicate you’re in a simulation [...]

No, it does not. Replace the human with a computer entering the room, the predictor analyzes the computer and the software running on the computer when it enters. If the decision program does not query a hardware random source or some stray cosmic particle changes the choice, the predictor could perfectly predict the choice just by accurately enough emulating the computer. If the program makes any use of external inputs, say the image from an attached webcam, the predictor also needs to know those inputs well enough. The same could, at least in principle, work for humans.

vidarh•1h ago
I agree with you that it doesn't require that you are in a simulation, but a flawless predictor would be a strong indication that a simulation is possible, and that should raise our assumed probability that we're in a simulation.
arethuza•51m ago
I would think that the existence of a flawless predictor is probably more likely to indicate that memories of predictions, and any associated records, have been modified to make the predictor appear flawless.
ordu•1h ago
> Which also renders the entire paradox somewhat moot because there is no choice for you to be made.

Not quite. You did choose your decision-making methods at some point in your life, and you could have changed them multiple times before arriving at the setup of Newcomb's paradox. If we look at your past life as a variable in the problem, then changing this variable changes the outcome: it changes the prediction made by the predictor.

> The existence of a flawless predictor means that you do not have a choice after the predictor made its prediction

I believe that if your definition of choice stops working once we assume a deterministic Universe, then you need a better definition of choice. In a deterministic Universe it becomes glaringly obvious that the whole framework of free will and choice is just an abstraction, one that abstracts away things that are not really needed to make a decision.

Moreover, I think I can hint at how to deal with it: relativity. Different observers cannot agree on whether an observed agent has free will or not. Accept that fundamentally, the way relativity accepts that universal time does not exist, and all the logical paradoxes will go away.

chriswarbo•1h ago
> I believe that if your definition of choice stops working once we assume a deterministic Universe, then you need a better definition of choice. In a deterministic Universe it becomes glaringly obvious that the whole framework of free will and choice is just an abstraction, one that abstracts away things that are not really needed to make a decision.

Indeed, I think of concepts like "agency", "choice", "free will", etc. as aspects of a particular sort of scientific model. That sort of model can make good predictions about people, organisations, etc. which would be intractable to many other approaches. It can also be useful in situations that we have more sophisticated models for, e.g. treating a physical system as "wanting" to minimise its energy can give a reasonable prediction of its behaviour very quickly.

That sort of model has also been applied to systems where its predictive powers aren't very good; e.g. modelling weather, agriculture, etc. as being determined by some "will of the gods", and attempting to infer the desires of those gods based on their observed "choices".

It baffles me that some people might think a model of this sort might have any relevance at a fundamental level.

jack_pp•2h ago
There's rational, and then there's common sense. If put in that situation, who in their right mind would take even a 50% chance that the entity is wrong and get greedy for $1000? All I'd need to know is that I'm far more likely to get the million if I go into the game intending to one-box.
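The break-even accuracy behind this intuition is easy to work out. A sketch (the function name is mine; the standard $1,000 / $1,000,000 payoffs are assumed): one-boxing has expected value p * B against the two-boxer's A + (1 - p) * B, so one-boxing wins whenever p exceeds (A + B) / (2 * B).

```python
def break_even_accuracy(open_amount=1000, opaque_amount=1_000_000):
    # One-boxing beats two-boxing when p * B > A + (1 - p) * B,
    # i.e. when p > (A + B) / (2 * B).
    return (open_amount + opaque_amount) / (2 * opaque_amount)

print(break_even_accuracy())  # 0.5005: barely better than a coin flip
```

So by expected value, a predictor need only be right slightly more than half the time for one-boxing to come out ahead.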
gegtik•1h ago
In some ways it is an interesting problem about whether someone can engage with a hypothetical question -- and, for the purpose of the exercise, accept the proposed parameters of the scenario.
vidarh•1h ago
Assuming I have no way of testing the predictor, my decision would be to pick just the one box, on the basis that $1000 is not a lot of money to me but $1000000 is, and I wouldn't worry about the odds, because without knowing the nature of the specific predictor we're down to Pascal's Wager married to the Halting Problem:

We don't know whether or how our actions and thought processes might affect the outcome, so any speculation about the odds is meaningless and devolves into making assumptions we can't test, without even knowing whether that speculation itself might alter the outcome, or how.

But I don't need to speculate about the relative value of $1000 and $1000000 to me. Others might opt for the safe $1000 for the same reason.

malfist•46m ago
Two boxes is the only choice that makes sense. It is always better than one box.

No matter what you do after you enter the room, the predictor has already made their move; nothing you do now will change it. The only logical thing to do is to take both boxes, because whatever is in the second box will simply be added to what's in the first. If you take only the second box, you are objectively giving up $1,000 and getting no value in exchange (since not taking the first box doesn't change what's in the second).

Smaug123•41m ago
And for you, of course, that's true! Because you are the sort of being who two-boxes, and this fact is visible to the predictor. Other types of being can do better.
OskarS•37m ago
> Two boxes is the only choice that makes sense. It is always better than one box.

Congratulations on your $1,000. I'll use some of my $1,000,000 I got by nonsensically picking one box to toast in your honor and dedication to logic.

GaelFG•38m ago
I don't get the 'choice': the content of the box is already defined when you make your decision, so taking it won't change the content of the black box, and the open/transparent box has no drawback. What am I missing?
gradschool•10m ago
I don't know about y'all, but this paradox was resolved to my complete satisfaction in a blog post some years ago, I believe by Scott Aaronson, though I can't find the link. If the predictor has such a good success rate, then it must be simulating people's brains, but since it's not always right, the simulation isn't perfect. The best strategy for playing this game therefore is to look for indications as to whether I'm the real me or the simulation when the question is posed to me, and choose accordingly. Am I floating in a sensory deprivation tank being asked my choice by a disembodied voice with no recollection of how I got there and no memory of my childhood? In that case maybe I'm the simulation, so my answer is that I'll choose just one box. Is it an ordinary day of my life and a plausible setting with all of my faculties and recollections intact? Then I'll assume simulated me had my back and take both boxes.
djoldman•5m ago
For folks reasoning through the "paradox," this may be helpful:

https://arxiv.org/pdf/0904.2540

Abstract:

> ...We show that the conflicting recommendations in Newcomb’s scenario use different Bayes nets to relate your choice and the algorithm’s prediction. These two Bayes nets are incompatible. This resolves the paradox: the reason there appears to be two conflicting recommendations is that the specification of the underlying Bayes net is open to two, conflicting interpretations...

genshii•3m ago
I've been presented with this thought experiment before, and I always feel like I'm missing something when other people talk about it. Why would you ever take both boxes?

The premise is that the predictor is always right. So whether you take one or both boxes, the predictor would have predicted that choice. We know from the setup that if the predictor said you would take the one box, it will have a million dollars. Therefore, if you take the one box it will have a million dollars in it (because whatever you choose is what the predictor predicted).

As an aside, I think whatever this says about free will, or whether you're actually making a "choice", is irrelevant to whether the million dollars is in the box. The way I see both choices is this:

You "decide" to take both boxes -> the perfect predictor predicted this -> the opaque box has zero dollars -> you get a thousand dollars

You "decide" to take the opaque (one) box -> the perfect predictor predicted this -> the opaque box has a million dollars -> you get a million dollars
