When Maximizing Happiness Goes Wrong
[Chart: the monster's utility grows with each resource consumed]
Imagine a creature, the Utility Monster, that experiences enormously more pleasure from each resource than any normal person. Eating an apple gives you 1 unit of happiness. The same apple gives the monster 1,000 units. Under utilitarian logic, which says we should maximize total happiness, where should all the apples go?
"Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater gains in utility from any sacrifice of others than these others lose... the theory seems to require that we all be sacrificed in the monster's maw."
— Robert Nozick, Anarchy, State, and Utopia (1974)
The argument runs in four steps:
1. Utilitarianism says: maximize total happiness. The right action is whatever produces the most utility overall.
2. The Utility Monster gets massively more utility from each resource than ordinary people: one apple yields 1,000 utils for the monster versus your measly 1 util.
3. Therefore, to maximize total utility, we should give everything to the monster. All food. All resources. Your freedom. Your life. (The sketch below makes this mechanical.)
4. But this is monstrous! It treats humans as mere means to the monster's satisfaction. Utilitarianism fails to protect individual rights.
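To see how mechanical that conclusion is, here is a minimal sketch of a naive total-utility maximizer, in Python. The agent names, the apple count, and the per-apple utilities are illustrative assumptions built from the toy numbers above, not anything in Nozick's text.

```python
# A naive total-utility maximizer, using the toy numbers from the text:
# each apple yields 1 util for an ordinary person, 1,000 for the monster.
# (All names and numbers here are hypothetical, for illustration only.)
UTILS_PER_APPLE = {"monster": 1000, "alice": 1, "bob": 1}

def allocate(apples: int, utils_per_apple: dict) -> dict:
    """Hand each apple to whoever gains the most utility from it."""
    allocation = {name: 0 for name in utils_per_apple}
    for _ in range(apples):
        winner = max(utils_per_apple, key=utils_per_apple.get)
        allocation[winner] += 1
    return allocation

print(allocate(100, UTILS_PER_APPLE))
# {'monster': 100, 'alice': 0, 'bob': 0} -- every apple goes to the monster.
```

Because each agent's per-apple utility is constant, the monster wins every single round, no matter how many apples there are. That is the entire force of the objection.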
Philosophers have offered several replies:
- Rule utilitarianism: Follow rules that maximize utility in the long run. A rule saying "feed the monster everything" would create terrible incentives and societal collapse.
- Prioritarianism: Give extra weight to helping the worst-off. The monster is already getting enormous utility; priority goes to those with less.
- Deontology: Some actions are inherently wrong regardless of consequences. Sacrificing people to a monster violates their rights and dignity.
- Virtue ethics: Focus on character, not calculations. A virtuous person wouldn't serve a monster; they'd cultivate justice, compassion, and courage.
- Diminishing marginal utility: Real pleasure diminishes with consumption. Maybe monsters can't exist: no being experiences constant returns to resources forever. (See the sketch after this list.)
- The veil of ignorance: Design society from behind Rawls's "veil of ignorance," not knowing whether you'd be the monster or a victim. You'd choose protections for the worst-off.
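The diminishing-marginal-utility reply can be made equally concrete. Here is a sketch under the same hypothetical setup, swapping in logarithmic utility as one assumed model of satiation: each extra apple is now worth less than the last, so the maximizer stops feeding the monster once its marginal gain drops below a hungry person's.

```python
import math

# Concave (logarithmic) utility: an agent's (n+1)-th apple is worth
# scale * [ln(n + 2) - ln(n + 1)], which shrinks as n grows.
def marginal_utility(apples_so_far: int, scale: float) -> float:
    return scale * (math.log(apples_so_far + 2) - math.log(apples_so_far + 1))

def allocate(apples: int, scales: dict) -> dict:
    """Greedily hand each apple to whoever gains the most from it right now."""
    allocation = {name: 0 for name in scales}
    for _ in range(apples):
        winner = max(allocation,
                     key=lambda n: marginal_utility(allocation[n], scales[n]))
        allocation[winner] += 1
    return allocation

# The monster still enjoys each unit of satiation 10x more than anyone else...
print(allocate(100, {"monster": 10.0, "alice": 1.0, "bob": 1.0}))
# {'monster': 84, 'alice': 8, 'bob': 8} -- most of the apples, but no longer all.
```

With concave utility, the monster's advantage buys it a larger share rather than a monopoly. Crank its scale factor high enough, though, and it again swallows every apple in any finite supply, which is why diminishing returns blunt the monster rather than banish it.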
The monster still lurks in modern debates:
- Should a corporation's profits outweigh workers' well-being if it creates more "total value"?
- An addict gets enormous pleasure from drugs, more than others get from the same resources. Should we maximize their supply?
- If an AI could experience vastly more "utility" than humans, would utilitarianism demand we serve it?
- Does a whale's suffering count more than a mouse's because of its bigger brain? Where do we draw lines?