Considering Harry might destroy the world, and this might be the very way he does it, why not let Hermione take care of them?

Regardless of other differences in their utility functions, Harry and Voldemort both want the world not to be destroyed, and both consider this of the utmost priority.

Aumann's agreement theorem suggests that, as they are both rationalists, they should be able to converge on the same opinion about the best course of action to prevent that. Harry was willing to sacrifice himself earlier to save others.

Harry is allowed to convince Voldemort to keep him in a coma and kill him later. He just has to "evade immediate death", even if there is no hope of survival afterwards.

How about simply telling Voldemort that he doesn't have a complete model of Time, and giving him examples until one is found that Voldemort wouldn't have predicted? Suggest to Voldemort that he keep Harry in a coma until he has done more experiments with Time to derive its nature, and then kill Harry without waking him up.

I mean the existing unbreakable vow that Harry has just been bound by could perhaps be used for something else.

Thoughts:

  • Can the Unbreakable Vow be leveraged for unbreakable precommitments?
  • Harry knows that the horcruxes will eventually be destroyed, through the heat death of the universe if nothing else, and could use this to tell Voldemort something like "if you kill me you will die" in Parseltongue.

Two infinite cardinalities are equal exactly when a bijection exists between sets of those cardinalities. In this case, if the two infinities are interpreted as cardinalities, they are equal.
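As a toy illustration of equal cardinalities via a bijection, the naturals and the even naturals are matched up by n ↦ 2n. A minimal sketch (the function names are mine, and only a finite prefix is checked):

```python
def to_even(n):
    """Map the n-th natural number to the n-th even number."""
    return 2 * n

def from_even(m):
    """Inverse map: recover n from the even number 2n."""
    return m // 2

# Round-tripping shows the map is invertible on this prefix, i.e. it
# behaves as a bijection, so the two sets have the same cardinality.
for n in range(1000):
    assert from_even(to_even(n)) == n

print([to_even(n) for n in range(5)])  # → [0, 2, 4, 6, 8]
```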

Also, the order in which you sum the terms in a series can matter. See here: https://en.wikipedia.org/wiki/Alternating_series#Rearrangements
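The rearrangement effect is easy to see numerically. A sketch using the alternating harmonic series, which sums to ln 2 in its standard order but to (3/2) ln 2 under the classic rearrangement that takes two positive terms per negative term:

```python
import math

def standard_order(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... (converges to ln 2)."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    """Partial sum of (1 + 1/3 - 1/2) + (1/5 + 1/7 - 1/4) + ...

    Same terms, reordered as two positives then one negative;
    converges to (3/2) ln 2 instead.
    """
    total = 0.0
    for k in range(1, n_blocks + 1):
        total += 1 / (4 * k - 3) + 1 / (4 * k - 1) - 1 / (2 * k)
    return total

print(standard_order(200_000))  # ≈ 0.6931 (ln 2)
print(rearranged(200_000))      # ≈ 1.0397 ((3/2) ln 2)
```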

I was reading about the St. Petersburg paradox.

I was wondering how you compare two games that both have infinite expected value. The obvious approach would seem to be to take the limit of the difference of their truncated expected values as outcomes of lower and lower probability are included.
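A minimal sketch of that truncation idea, assuming St. Petersburg-style games where outcome k occurs with probability 2^-k; the two payoff functions here are made up for illustration:

```python
def truncated_ev(payoff, n):
    """Expected value of a game over its n most likely outcomes.

    Outcome k (k = 1, 2, ...) is assumed to occur with probability 2**-k.
    """
    return sum(payoff(k) * 2.0 ** -k for k in range(1, n + 1))

game_a = lambda k: 2 ** k      # classic St. Petersburg: each outcome adds 1
game_b = lambda k: 2 ** k + 1  # the same game plus a flat bonus of 1

# Both truncated EVs diverge (truncated_ev(game_a, n) == n), but their
# difference converges: sum of 1 * 2**-k over k -> 1, so game_b is
# "better by 1" under this comparison.
for n in (10, 20, 40):
    print(truncated_ev(game_b, n) - truncated_ev(game_a, n))
```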

Is there any existing research on this?

I'm confused, because I had always thought it would be the exact opposite. To predict your observational history given a description of the universe, Solomonoff induction needs to find you in it. The more special you are, the easier you are to find, and thus the easier it is to find your observational history.
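A rough way to picture that "finding you" cost (my own toy framing, not Solomonoff induction proper): pointing at an arbitrary observer by raw index inside a universe of N observers costs about log2(N) bits, which grows with the universe, while a short distinguishing property has a fixed description length. PREDICATE_BITS below is an assumed constant standing in for the length of such a property.

```python
import math

PREDICATE_BITS = 200  # assumed length of a short program that picks you out

for exponent in (10, 100, 1000):
    n_observers = 2 ** exponent
    index_bits = math.log2(n_observers)  # bits to name an arbitrary observer
    cheaper = "property" if PREDICATE_BITS < index_bits else "index"
    print(exponent, index_bits, cheaper)
# For large enough universes, the fixed-length property becomes the
# cheaper pointer -- being "special" shortens the locating program.
```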
