Discussion article for the meetup: MIRIx: Sleeping Beauty discussion
One important question that frequently comes up when considering probabilities and decision-making processes is how we should estimate probabilities that are entangled with our own existence, i.e., anthropic probabilities. An even broader question is whether probability is even the correct philosophical notion in such cases, or whether a notion of decision, algorithm, or utility is more fundamental.
I want to take a concrete stab at a currently unsolved problem, the Sleeping Beauty problem, to try to understand it better and see whether any progress can be made on it. Several issues are brought up here:
a) What is the "correct" probability? Can we construct situations in which one answer does better than another?
b) Do we abandon probability altogether? If so, how do we update on new data in this case?
c) Does it make a difference whether we consider the "self" as an algorithm versus the "self" as an instance?
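As a concrete anchor for question (a), here is a minimal Monte Carlo sketch (Python; the function name and counters are my own illustrative choices) of the standard Sleeping Beauty protocol: a fair coin is flipped once per experiment; on heads Beauty is woken once, on tails twice with her memory erased between awakenings. The two natural frequencies one can count disagree, which is exactly the halfer/thirder dispute.

```python
import random

def run_sleeping_beauty(trials=100_000, seed=0):
    """Simulate the standard protocol and return two frequencies of heads:
    per experiment (the 'halfer' count, ~1/2) and
    per awakening (the 'thirder' count, ~1/3)."""
    rng = random.Random(seed)
    heads_experiments = 0   # experiments in which the coin landed heads
    heads_awakenings = 0    # awakenings at which the coin is heads
    total_awakenings = 0    # all awakenings across all experiments
    for _ in range(trials):
        heads = rng.random() < 0.5
        if heads:
            heads_experiments += 1
            heads_awakenings += 1
            total_awakenings += 1   # heads -> one awakening
        else:
            total_awakenings += 2   # tails -> two awakenings
    per_experiment = heads_experiments / trials
    per_awakening = heads_awakenings / total_awakenings
    return per_experiment, per_awakening

per_experiment, per_awakening = run_sleeping_beauty()
print(per_experiment, per_awakening)  # roughly 0.5 and 0.333
```

The simulation does not settle which number is "the" probability; it only makes precise that the two answers correspond to two different counting procedures, so the dispute is about which procedure the question "what is your credence in heads?" should invoke.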