
All right - but here the evidence predicted would simply be "the coin landed on heads", no? I don't really see the contradiction between what you're saying and conventional probability theory (more or less all of which was developed with the specific idea of making predictions, winning games, etc.). Yes, I agree that saying "the coin landed on heads with probability 1/3" is a somewhat strange way of putting things (the coin either did or did not land on heads), but it's shorthand for a conceptual framework that has fairly simple and sound foundations.
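To make the "probability 1/3" shorthand concrete, here is a minimal Monte Carlo sketch of the standard thirder counting in the Sleeping Beauty setup (heads: one awakening; tails: two awakenings). The function name and parameters are my own illustration, not anything from the thread:

```python
import random

def heads_fraction_of_awakenings(trials=100_000, seed=0):
    """Simulate the Sleeping Beauty protocol and return the fraction of
    awakenings at which the coin had landed heads.
    Heads -> one awakening (Monday); tails -> two (Monday and Tuesday)."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:      # coin landed heads
            heads_awakenings += 1
            total_awakenings += 1
        else:                        # coin landed tails
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(heads_fraction_of_awakenings())  # close to 1/3
```

Counted per awakening rather than per coin flip, the heads fraction converges to 1/3, which is exactly the sense in which the shorthand is usually meant.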

I do not agree that accuracy has no meaning outside of resolution. At least this is not the sense in which I was employing the word. By accurate I simply mean numerically correct within the context of conventional probability theory. For instance, if I ask "A die is rolled - what is the probability that the result will be either three or four?", the accurate answer is 1/3. If I ask "A fair coin is tossed three times - what is the probability that it lands heads each time?", the accurate answer is 1/8, and so on. This makes the accuracy of a proposed probability value wholly independent of pay-offs.
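The two computations above can be checked exactly with Python's `fractions` module (the variable names are just for illustration):

```python
from fractions import Fraction

# P(die shows 3 or 4) on a fair six-sided die: 2 favorable faces out of 6
p_three_or_four = Fraction(2, 6)
assert p_three_or_four == Fraction(1, 3)

# P(three heads in three independent fair coin tosses): (1/2)^3
p_three_heads = Fraction(1, 2) ** 3
assert p_three_heads == Fraction(1, 8)

print(p_three_or_four, p_three_heads)  # 1/3 1/8
```

Both answers follow from the definitions alone, with no reference to any bet or pay-off.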

I don't think so. Even in the heads case, it could still be Monday - and say the experimenter told her: "Regardless of the ultimate sequence of events, if you predict correctly when you are woken up, a million dollars will go to your children."

To me, "as a rational individual" is simply a way of saying "as an individual who is seeking to maximize the accuracy of the probability value she proposes - whenever she is in a position to make such a proposal (which implies, among other things, that she must be alive to make the proposal)."

I laughed. However you must admit that your comical exaggeration does not necessarily carry a lot of ad rem value.

But then would a less intelligent being (i.e. the collectivity of human alignment researchers and the less powerful AI systems that they use as tools in their research) be capable of validly examining a more intelligent being, without being deceived by the more intelligent being?

Exactly - and then we can have an interesting conversation etc. (e.g. are all ASIs necessarily paperclip maximizers?), which the silent downvote does not allow for.

I see. But how can the poster learn if he doesn't know where he has gone wrong? To give one concrete example: in a recent comment, I simply stated that some people hold that AI could be a solution to the Fermi paradox (past a certain level of collective smartness, an AI is created that destroys its creators). I got a few downvotes on that - and frankly I am puzzled as to why, and I would really be curious to understand the reasoning behind the downvotes. Did the downvoters hold that the Fermi paradox is not really a thing? Did they think that it is a thing, but that AI can't be a solution to it for some obvious reason? Was it something else? I simply don't know; and so I can't learn.

Hmm, I see... not sure it totally serves the purpose, though. For instance, when I see a comment with a large number of downvotes, I'm much more likely to read it than a comment with a relatively low number of upvotes. So: within certain bounds, I guess.