One box or two? The predictor already knows your choice...
Newcomb's Paradox, created by physicist William Newcomb in 1960 and popularized by philosopher Robert Nozick in 1969, is one of the most divisive problems in decision theory.
The setup: Box A is transparent and always contains $1,000. Box B is opaque. You may take both boxes, or Box B alone. A near-perfect predictor has already decided what Box B holds, based on its prediction of your choice.
The predictor's rules:
• If it predicted you'll take only Box B → Box B contains $1,000,000
• If it predicted you'll take both boxes → Box B contains $0
The prediction was made before you arrived. The contents of Box B are fixed. What do you choose?
Surveys of philosophers show a persistent near-even split, with neither answer commanding a majority!
"Evidential Decision Theory"
Look at the evidence: one-boxers walk away with $1M. Two-boxers walk away with $1K. The predictor is nearly perfect. Who cares about causation? Be the type of person who one-boxes!
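The evidential observation above can be checked with a quick simulation. This is a minimal sketch (the function name, 99% accuracy figure, and trial count are illustrative assumptions, not from the original): play the game many times against an imperfect predictor and compare average payouts.

```python
import random

def simulate(choice: str, accuracy: float, trials: int, seed: int = 0) -> float:
    """Average payoff over many plays against a predictor of given accuracy."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # Did the predictor guess this player's choice correctly?
        correct = rng.random() < accuracy
        predicted = choice if correct else (
            "two-box" if choice == "one-box" else "one-box")
        # Box B is filled only if the predictor foresaw one-boxing.
        box_b = 1_000_000 if predicted == "one-box" else 0
        total += box_b if choice == "one-box" else 1_000 + box_b
    return total / trials

print("one-box average:", simulate("one-box", 0.99, 100_000))
print("two-box average:", simulate("two-box", 0.99, 100_000))
```

With a 99%-accurate predictor, one-boxers average close to $990,000 while two-boxers average close to $11,000, which is exactly the evidence the one-boxer points to.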
"I want to be rich, not causally consistent."
"Causal Decision Theory"
The prediction is done. Box B is either full or empty—and nothing I do NOW can change that. Taking both boxes strictly dominates: I get $1K more no matter what's in Box B!
"I refuse to leave free money on the table."
The paradox exposes a tension between two principles that usually agree:
• Dominance: If action A is better than B in every possible state, choose A. (Two-boxing dominates: +$1K in all scenarios)
• Expected Value: Choose the action with the highest expected payoff. (With a 99.9%-accurate predictor, one-boxing yields ~$999,000 vs ~$2,000 for two-boxing)
In Newcomb's Problem, these principles give opposite answers! The predictor creates a correlation between your choice and Box B's contents that makes the "dominated" strategy actually better in expectation.
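The two expected values are easy to compute directly. A minimal sketch, assuming the standard payoffs ($1,000 in Box A, $1,000,000 in Box B) and a predictor that is correct with probability p:

```python
BOX_A = 1_000
BOX_B = 1_000_000

def expected_value(choice: str, p: float) -> float:
    """Expected payoff for 'one-box' or 'two-box', predictor accuracy p."""
    if choice == "one-box":
        # With probability p the predictor foresaw one-boxing and filled Box B.
        return p * BOX_B
    if choice == "two-box":
        # You always get Box A; Box B is full only if the predictor erred.
        return BOX_A + (1 - p) * BOX_B
    raise ValueError(f"unknown choice: {choice}")

for p in (0.5, 0.9, 0.999):
    one = expected_value("one-box", p)
    two = expected_value("two-box", p)
    print(f"p={p}: one-box ${one:,.0f} vs two-box ${two:,.0f}")
```

Setting the two expressions equal gives the crossover accuracy p = (1,000 + 1,000,000) / 2,000,000 ≈ 0.5005: one-boxing wins in expectation as soon as the predictor is even slightly better than a coin flip, even though two-boxing dominates state by state.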
Here's a mind-bender: What if you firmly decide to one-box, then at the last moment grab both boxes? The predictor already saw you as a one-boxer, so Box B has $1M. You walk away with $1,001,000!
But wait—if you're the type of person who would pull this trick, the predictor would have seen THAT too. The paradox runs deep into questions of free will, determinism, and the nature of prediction.