We don't know whether or how our actions and thought processes might affect the outcome, so any speculation about the odds is meaningless: it devolves into making assumptions we can't test, without even knowing whether that speculation itself might alter the outcome, or how.
But I don't need to speculate about the relative value of $1000 and $1000000 to me. Others might opt for the safe $1000 for the same reason.
No matter what you do after you enter the room, the predictor has already made their move; nothing you do now will change it. The only logical thing to do is to take both boxes, because whatever is in the second box will be added to the first. If you take only the second box, you are objectively giving up $1,000 and getting nothing in exchange (since not taking the first box doesn't change what's in the second).
Congratulations on your $1,000. I'll use some of my $1,000,000 I got by nonsensically picking one box to toast in your honor and dedication to logic.
https://arxiv.org/pdf/0904.2540
Abstract:
> ...We show that the conflicting recommendations in Newcomb’s scenario use different Bayes nets to relate your choice and the algorithm’s prediction. These two Bayes nets are incompatible. This resolves the paradox: the reason there appears to be two conflicting recommendations is that the specification of the underlying Bayes net is open to two, conflicting interpretations...
The premise is that the predictor is always right. So whether you take one or both boxes, the predictor would have predicted that choice. We know from the setup that if the predictor said you would take the one box, it will have a million dollars. Therefore, if you take the one box it will have a million dollars in it (because whatever you choose is what the predictor predicted).
As an aside, I think whatever this says about free will, or whether you're actually making a "choice", is irrelevant to whether the million dollars is in the box. The way I see it, the two choices play out like this:
You "decide" to take both boxes -> the perfect predictor predicted this -> the opaque box has zero dollars -> you get a thousand dollars
You "decide" to take the opaque (one) box -> the perfect predictor predicted this -> the opaque box has a million dollars -> you get a million dollars
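The two chains above can be sketched as a toy simulation. Assuming a perfect predictor, the opaque box's contents are just a function of the choice itself (the function name and amounts follow the thread's setup):

```python
# Toy model of Newcomb's game with a *perfect* predictor: the
# prediction always matches the actual choice, and the opaque box
# is filled (or not) based on that prediction.
def play(choice):
    """choice: 'one' (opaque box only) or 'both' (both boxes)."""
    prediction = choice  # a perfect predictor's prediction equals the choice
    opaque = 1_000_000 if prediction == "one" else 0
    clear = 1_000
    return opaque if choice == "one" else opaque + clear

print(play("one"))   # 1000000
print(play("both"))  # 1000
```

Under this model the "dominance" argument for two-boxing never gets off the ground, because there is no world where the opaque box is full and you also take both.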
danbruc•2h ago
I do not think that allowing some prediction error fundamentally changes this; it only means that sometimes the choice depends on unpredictable true randomness, or the predictor did not measure the relevant state of the universe exactly enough, or the prediction algorithm is not flawless. But if the predictor still arrives at the correct prediction most of the time, then most of the time you do not have a choice, and most of the time the choice does not depend on true randomness.
Which also renders the entire paradox somewhat moot, because there is no choice for you to make. The existence of a good predictor and the ability to make a choice after the prediction are incompatible, up to wild time-travel scenarios and things like that.
halfcat•1h ago
But also you’re right that even a pretty good (but not perfect) predictor doesn’t change the scenario.
What I find interesting is to change the amounts. If the open box has $0.01 instead of $1000, you’re not thinking ”at least I got something”, and you just one-box.
But if both boxes contain equal amounts, or you swap the amounts in each box, two-boxing is always better.
All that to say, the idea that the right strategy here is to "be the kind of person who one-boxes" isn't a universal virtue. If the amounts change, the virtues change.
danbruc•1h ago
No, it does not. Replace the human with a computer entering the room; the predictor analyzes the computer and the software running on it as it enters. If the decision program does not query a hardware random source and no stray cosmic particle flips the choice, the predictor can perfectly predict the choice just by emulating the computer accurately enough. If the program makes any use of external inputs, say the image from an attached webcam, the predictor also needs to know those inputs well enough. The same could, at least in principle, work for humans.
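The deterministic-computer version is easy to sketch: prediction by exact emulation is just running a copy of the same decision function on the same inputs. The function names here are hypothetical:

```python
# A deterministic decision program: given its known inputs, it always
# produces the same choice. Any deterministic rule would do.
def decision_program(inputs):
    return "one" if sum(inputs) % 2 == 0 else "both"

# The "predictor" simply emulates the program ahead of time on the
# same inputs, so its prediction cannot fail to match the choice.
def predictor(program, inputs):
    return program(inputs)

inputs = [3, 5, 8]
assert predictor(decision_program, inputs) == decision_program(inputs)
```

The only ways the emulation can go wrong are the ones named above: a hardware random source, a bit flip, or external inputs the predictor didn't capture.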
ordu•1h ago
Not quite. You did choose your decision-making methods at some point in your life, and you could have changed them multiple times before arriving at the setup of Newcomb's paradox. If we look at your past life as a variable in the problem, then changing that variable changes the outcome: it changes the prediction made by the predictor.
> The existence of a flawless predictor means that you do not have a choice after the predictor made its prediction
I believe that if your definition of choice stops working once we assume a deterministic universe, then you need a better definition of choice. In a deterministic universe it becomes glaringly obvious that the whole framework of free will and choice is just an abstraction, one that abstracts away things that are not really needed to make a decision.
Moreover, I think I can hint at how to deal with it: relativity. Different observers cannot agree on whether an observed agent has free will. Accept that as fundamental, the way relativity accepts that universal time doesn't exist, and all the logical paradoxes go away.
chriswarbo•1h ago
Indeed, I think of concepts like "agency", "choice", "free will", etc. as aspects of a particular sort of scientific model. That sort of model can make good predictions about people, organisations, etc. which would be intractable to many other approaches. It can also be useful in situations that we have more sophisticated models for, e.g. treating a physical system as "wanting" to minimise its energy can give a reasonable prediction of its behaviour very quickly.
That sort of model has also been applied to systems where its predictive powers aren't very good; e.g. modelling weather, agriculture, etc. as being determined by some "will of the gods", and attempting to infer the desires of those gods based on their observed "choices".
It baffles me that some people might think a model of this sort might have any relevance at a fundamental level.