Perfect Predictor


A perfect predictor is an agent which can predict the behaviour of another agent or the outcome of an event with perfect accuracy. It is often given the name Omega, though Omega sometimes refers to an almost perfect predictor.

Possibility and relevance:

Perfect predictors are generally understood to be impossible due to the Uncertainty Principle, or simply from our general experience that perfect observation and perfect accuracy are not features of our universe. Some people have claimed that this makes them irrelevant for real decision theory problems; see the page on hypotheticals for further discussion of whether or not this is valid. Others have objected on the basis of free will.

Some people have attempted to make these problems more realistic and concrete by reframing them in terms of computational agents with access to other agents' source code or to the program representing the environment. Such a prediction won't be perfect, in the sense that nothing stops a machine error or a hacker from ruining it, but it is close enough to serve as a good approximation of a perfect predictor.

Inconsistent Counterfactuals:

One challenge with perfect predictors is that it might be unclear what Omega is predicting, particularly in situations that are only conditionally consistent. Take for example Parfit's Hitchhiker. In this problem, you are trapped dying in a desert and a passing driver will only pick you up if you promise to pay them $100 once you are in town. If the driver is a perfect predictor, then someone who always defects will never end up in town, so it is unclear what exactly the driver is predicting, since the situation is contradictory and the Principle of Explosion means that you can prove anything.

Counterfactuals for Perfect Predictors suggests that even if we can't predict what an agent would do in an inconsistent or conditionally consistent situation, we can predict how it would respond if given input representing an inconsistent situation (we can represent this response as an output). This aligns with Updateless Decision Theory, which isn't subject to this issue because it uses input-output maps.
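The computational framing can be sketched concretely. In this toy Python sketch (the agent names, scenario strings, and functions are all illustrative assumptions, not part of any standard formalisation), Omega "predicts" an agent simply by running its decision procedure on the scenario it will face:

```python
def always_defect(scenario):
    """A toy agent that never pays, whatever it promised."""
    return "don't pay"

def always_pay(scenario):
    """A toy agent that pays whenever it reaches town."""
    return "pay"

def omega_predict(agent_source, scenario):
    """Omega 'predicts' by executing the agent's code on the scenario.
    Given the source code and the exact input, the prediction is exact
    up to hardware faults or tampering."""
    return agent_source(scenario)

def driver_decision(agent):
    """Parfit's driver: pick up the hitchhiker only if they are
    predicted to pay once in town."""
    prediction = omega_predict(agent, "in town, asked to pay $100")
    return "pick up" if prediction == "pay" else "drive on"
```

On this toy model, `driver_decision(always_pay)` returns `"pick up"` while `driver_decision(always_defect)` returns `"drive on"`: the always-defector is never picked up, which is exactly why the "in town" situation never actually arises for it.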

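The input-output map idea can be illustrated with a similarly hypothetical sketch: represent an agent's policy as an explicit mapping from situations to responses, so that Omega can inspect the response assigned to a situation even when that situation can never actually occur. The dictionary-based policy here is an illustrative toy, not Updateless Decision Theory itself:

```python
# An always-defector's policy as an explicit input-output map. The
# "in town" entry is defined even though, facing a perfect predictor,
# that situation is unreachable for this agent.
always_defect_policy = {
    "in desert, asked to promise": "promise to pay",
    "in town, asked to pay $100": "don't pay",  # defined but unreachable
}

def query(policy, situation):
    """Omega inspects the policy's output for a given input, regardless
    of whether that input is reachable in the actual world."""
    return policy[situation]
```

On this picture there is no contradiction to predict: Omega never needs to reason about an impossible world, only to look up what output the agent's policy assigns to the input describing it.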