There's something I've seen some rationalists aim for, which I think Eliezer might be getting at here: trying to be a truly robust agent.
This is very close to what I've always seen as the whole intent of the Sequences. I also feel like there's a connection here to what I see as a bidirectional symmetry of the Sequences' treatment of human rationality and Artificially Intelligent Agents. I still have trouble phrasing exactly what it is I feel like I notice here, but here's an attempt:
As an introductory manual on improving the Art of Human Rationality, the hypothetical perfectly rational, internally consistent, computationally Bayes-complete superintelligence is used as the...