Perfect predictors are generally understood to be impossible, whether because of the Uncertainty Principle or simply because perfect observation and perfect accuracy do not seem to be features of our universe. Some people have gone on to argue that this makes them irrelevant to real decision theory problems; see the page on hypotheticals for further discussion of whether this objection is valid. Others have objected on the basis of free will.
One challenge with perfect predictors is that it can be unclear what Omega is predicting. Take, for example, Parfit's Hitchhiker. In this problem, you are dying in a desert, and a passing driver will only pick you up if they predict you will pay them $100 once you are in town. If the driver is a perfect predictor, then someone who always defects will never end up in town, so it is unclear what exactly the driver is predicting: the situation "this defector is in town" is contradictory, and by the Principle of Explosion a contradiction lets you prove anything.
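The contradiction above can be made concrete with a minimal sketch. The names here (`drive`, `perfect_predict`, the two agents) are hypothetical illustrations, not standard terminology; the predictor is modelled as simply simulating a deterministic agent.

```python
def perfect_predict(agent, situation):
    # Because the agent is deterministic, a perfect predictor can
    # simply simulate it on the hypothetical situation.
    return agent(situation)

def drive(agent):
    """The driver picks you up only if they predict you will pay in town."""
    if perfect_predict(agent, "in town") == "pay":
        return "in town: " + agent("in town")
    return "left in desert"

def always_pay(situation):
    return "pay"

def always_defect(situation):
    return "defect"

print(drive(always_pay))     # in town: pay
print(drive(always_defect))  # left in desert
```

Note that the always-defect agent is never actually in town; the driver only ever simulates it there, which is exactly the puzzle about what is being "predicted".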
Counterfactuals for Perfect Predictors suggests that even if we can't predict what an agent would do in an inconsistent or conditionally consistent situation, we can predict how it would respond if given an input representing an inconsistent situation (we can represent this response as an output).
Indeed, Updateless Decision Theory works with input-output maps, so it doesn't run into this issue.
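One way to picture an input-output map, in the spirit of Updateless Decision Theory, is as an explicit table from inputs to outputs. The table below is a hypothetical sketch, not a formal UDT agent: the point is only that every input has a defined response, including inputs describing situations that never actually occur.

```python
# A hypothetical agent represented as an explicit input-output map.
policy = {
    "dying in desert, driver offers ride for a promise": "promise to pay",
    "in town after being rescued": "pay $100",
    "in town despite having defected": "defect",  # a 'contradictory' input
}

def omega_predict(policy, observation):
    # Omega predicts by looking up the agent's response to an input,
    # sidestepping the question of whether the situation described by
    # that input is itself logically consistent.
    return policy[observation]

print(omega_predict(policy, "in town despite having defected"))  # defect
```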
A perfect predictor is an agent that can predict the behaviour of another agent, or the outcome of an event, with perfect accuracy. Such a predictor is often given the name Omega, though Omega
may also denote an almost perfect predictor. It may be questioned whether perfect predictors are possible. In that case, we can imagine that these problems deal with computational agents, where Omega has access to your source code and to the scenario input you will receive. This won't be perfect in the sense that nothing stops a machine error or a hacker from interfering, but it will be close enough for the perfect predictor to be a good abstraction.
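The source-code picture can be sketched as follows. This is a toy model, assuming the agent is an ordinary Python function whose source Omega can read and execute; the names are illustrative only.

```python
agent_source = """
def agent(scenario):
    return "pay" if scenario == "rescued, now in town" else "defect"
"""

def omega_predict(source, scenario):
    # Omega reconstructs the agent from its source code and runs it on
    # the scenario input the agent will actually receive. Absent machine
    # error or tampering, this reproduces the agent's behaviour exactly.
    namespace = {}
    exec(source, namespace)
    return namespace["agent"](scenario)

print(omega_predict(agent_source, "rescued, now in town"))  # pay
```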