# Wiki Contributions

I really appreciate having the examples in parentheses and italicised. It lets me easily skip them when I know what you mean. I wish others would do this.

"Doesn't physics say this universe is going to run out of negentropy before you can do an infinite amount of computation?" Actually, there is a proposal that could create a computer that runs forever.

I see. Does the method of normalization you gave work even when there is an infinite number of hypotheses?

Decreasing existential risk isn't incredibly important to you? Could you explain why?

Right; I forgot that it used a prefix-free encoding. Apologies if the answer to this is painfully obvious, but does having a prefix-free encoding entail that there is a finite number of possible hypotheses?

I still don't see how that would make the probabilities of all the hypotheses sum to 1. Wouldn't that only make the probabilities of all the hypotheses of length n sum to 1, and thus make the probabilities of all hypotheses sum to more than 1? For example, consider all the hypotheses of length 1. Assuming Omega = 1 for simplicity, there are 2 such hypotheses, each with a probability of 2^-1/1 = 0.5, so they sum to 1. There are 4 hypotheses of length 2, each with a probability of 2^-2/1 = 0.25, so they also sum to 1. Thus, the probabilities of all hypotheses of length <= 2 sum to 2 > 1.
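To make my arithmetic concrete, here is a quick Python sketch. (The contrast with a prefix-free set, and the example codewords {"0", "10", "110", ...}, are my own illustration of where the divergence does and doesn't happen, not anything from the original exchange.)

```python
# Summing 2^-n over ALL binary strings of length <= max_len (not prefix-free):
# each length n contributes 2**n * 2**-n = 1, so the total grows without bound.
def naive_sum(max_len):
    return sum(2**n * 2**-n for n in range(1, max_len + 1))

# By contrast, a prefix-free set of codewords satisfies the Kraft inequality:
# the sum of 2**-len(w) over the codewords is at most 1.
def kraft_sum(codewords):
    return sum(2**-len(w) for w in codewords)

print(naive_sum(2))  # 2.0 — already exceeds 1, matching the arithmetic above

# An example prefix-free set: no codeword is a prefix of another.
prefix_free = ["0", "10", "110", "1110"]
print(kraft_sum(prefix_free))  # 0.9375, safely at most 1
```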

Is Omega doing something I don't understand? Would all hypotheses be required to be some set length?

That would assign a probability of zero to hypotheses that take more than n bits to specify, would it not? That sounds far from optimal.

Did you not previously state that one should learn as much as one can about a problem before coming to a conclusion, lest one fall prey to confirmation bias? Should one learn about the problem fully before making a decision only when one doesn't suspect oneself of being biased?

"Of course" implies that the answer is obvious. Why is it obvious?

Unfortunately, Chaitin's Omega is incomputable, but even if it weren't, I don't see how it would work as a normalizing constant. Chaitin's Omega is a real number, there is an infinite number of hypotheses, and (IIRC) there is no real number r such that r multiplied by infinity equals one, so I don't see how Chaitin's Omega could possibly work as a normalizing constant.
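Here is a toy illustration of the worry, in my own framing: if infinitely many hypotheses each receive some fixed probability mass w > 0, then no finite constant r can rescale the total to 1, because the rescaled partial sums grow without bound. (The stand-in value for Omega is arbitrary; the real Chaitin's Omega is incomputable.)

```python
# Partial sum of n_hypotheses equal weights w, rescaled by a constant r.
# For any fixed r and w > 0, this grows linearly in n_hypotheses, so it
# can never settle at 1 — which is the source of my confusion above.
def normalized_partial_sum(r, w, n_hypotheses):
    return r * w * n_hypotheses

omega_like = 0.7  # arbitrary stand-in; Chaitin's Omega itself is incomputable
for n in (10, 1_000, 100_000):
    print(normalized_partial_sum(1 / omega_like, 0.01, n))  # keeps growing
```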