I have an intuition that there is a version of reflective consistency which requires R to code S so that, if R was created by another agent Q, S would make decisions using Q's beliefs even if Q's beliefs were different from R's beliefs (or at least the beliefs that a Bayesian updater would have had in R's position), and even when S or R had uncertainty about which agent Q was. But I don't know how to formulate that intuition into something that could be proven true or false. (But ultimately, S has to be a creator of its own successor states, and S should use …

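Here is a toy sketch of that intuition in Python. Everything in it (the Creator class, successor_decision, the rain/sun example) is my own illustrative construction, not a formalization: S scores actions under a mixture of the candidate creators' beliefs, weighted by S's credence over which Q actually sat at the top of the chain, instead of under R's own updated beliefs.

```python
# Hypothetical toy model of the intuition above: the successor S decides
# using its candidate creators' beliefs, marginalized over S's own
# uncertainty about which creator Q built the chain. Names are illustrative.
from dataclasses import dataclass


@dataclass
class Creator:
    """A candidate creator Q, identified only by its beliefs over worlds."""
    beliefs: dict[str, float]  # P_Q(world)


def successor_decision(
    creator_prior: dict[str, float],        # S's credence over candidate Qs
    creators: dict[str, Creator],
    utility: dict[tuple[str, str], float],  # utility[(action, world)]
    actions: list[str],
    worlds: list[str],
) -> str:
    """Choose the action with the highest expected utility under the
    creator-prior mixture of beliefs, ignoring R's (or S's) own updates."""
    def expected_utility(action: str) -> float:
        return sum(
            p_q * creators[q].beliefs.get(world, 0.0) * utility[(action, world)]
            for q, p_q in creator_prior.items()
            for world in worlds
        )
    return max(actions, key=expected_utility)


# S is unsure whether it descends from Q1 (who expects rain) or Q2 (who
# expects sun), and decides under the 50/50 mixture of their beliefs.
creators = {
    "Q1": Creator(beliefs={"rain": 0.8, "sun": 0.2}),
    "Q2": Creator(beliefs={"rain": 0.1, "sun": 0.9}),
}
choice = successor_decision(
    creator_prior={"Q1": 0.5, "Q2": 0.5},
    creators=creators,
    utility={("umbrella", "rain"): 1.0, ("umbrella", "sun"): -0.1,
             ("none", "rain"): -1.0, ("none", "sun"): 0.5},
    actions=["umbrella", "none"],
    worlds=["rain", "sun"],
)
print(choice)  # "umbrella" under this mixture
```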

Yes, any physical system could be subverted by a sufficiently unfavorable environment. You wouldn't want to prove perfection. The thing you would want to prove would be more along the lines of: "will this system become at least somewhere around as capable of recovering from any disturbance, and of going on to achieve a good result, as it would be if its designers had thought specifically about what to do in case of each possible disturbance?" (Ideally, this category of "designers" would also sort of bleed over in a principled way into …

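As a hedged sketch of how one might state that property (the disturbance set D, value function V, and slack term ε are my own gloss, not anything from the comment):

```latex
% Sketch only: V(\pi, d) = value achieved by policy \pi after disturbance d;
% \pi_{\mathrm{sys}} = the system's actual policy; \pi^{*}_{d} = the policy
% the designers would have specified had they anticipated d exactly.
\forall d \in \mathcal{D}:\quad
  V(\pi_{\mathrm{sys}}, d) \;\ge\; V(\pi^{*}_{d}, d) - \epsilon
```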

Rationality Quotes from people associated with LessWrong

by ChristianKl, 29th Jul 2013

The other rationality quotes thread operates under the rule:

Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.

Lately it seems that every MIRI or CFAR employee is effectively excluded from being quoted as well.

As there are still interesting quotes from LessWrong, Overcoming Bias, HPMoR, and MIRI/CFAR employees in general, I think it makes sense to open this thread to provide a place for those quotes.