Newcomb's Problem

Discuss the wiki-tag on this page. Here is the place to ask questions and propose changes.

Similar issues arise if we imagine a skilled human psychologist who can predict other people's actions with 65% accuracy.

This statement hides a subtle problem.

Consider an agent that can predict an arbitrary agent's actions with at least 65% accuracy, running the following algorithm: predict what it itself will do, then do the opposite. Whatever it predicts about itself, it then does the other thing, so its self-prediction is wrong every time; it is trivial to show that such an agent is self-contradictory (a sketch of the diagonalization is below).
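A minimal sketch of that diagonalization in Python; the `predict` function and the one-box/two-box action labels are illustrative stand-ins of my own, not anything from the tag page:

    # For *any* candidate predictor, the contrarian agent built from it
    # falsifies the predictor's forecast of that agent.

    def make_contrarian(predict):
        """Build an agent that asks `predict` what it will do, then does the opposite."""
        def contrarian():
            forecast = predict(contrarian)  # the predictor's forecast of this very agent
            return "two-box" if forecast == "one-box" else "one-box"
        return contrarian

    # Example with a (necessarily wrong) candidate predictor that always says "one-box":
    naive_predict = lambda agent: "one-box"
    agent = make_contrarian(naive_predict)
    # The forecast is wrong every single time, far below the claimed 65% accuracy.
    assert agent() != naive_predict(agent)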

The only way to have a "skilled human psychologist who can predict other people's actions with 65% accuracy" without similar self-contradictions is if both a) there is no more than one such agent in existence, and b) that agent cannot predict itself.

In which case we've essentially elevated said 'skilled human psychologist' to the level of an oracle (in the computational-oracle sense)...