There's something I've seen some rationalists strive for, which I think Eliezer might be aiming at here: trying to be a truly robust agent.
This is very close to what I've always seen as the whole intent of the Sequences. I also feel there's a connection here to what I see as a bidirectional symmetry in how the Sequences treat human rationality and artificially intelligent agents. I still have trouble phrasing exactly what it is I notice here, but here's an attempt:
As an introductory manual for improving the Art of Human Rationality, the Sequences use the hypothetical perfectly rational, internally consistent, computationally Bayes-complete superintelligence as the Platonic Ideal of a Rational Intelligence, grounding many of Rationality's tools, techniques, and heuristics as approximations of that fundamentally non-human ideal evidence processor.
Or, in the other direction:
As an introductory guide to building a Friendly Superintelligence, the Sequences use the Coherent Extrapolated Human Rationalist, a model developed from intuitively appealing rational virtues, as a guide for what we want optimal intelligent agents to look like. Taken as a whole, the Sequences take this more human grounding and justify it as the basis on which to guide the development of AI into something that works properly, and something we would see as Friendly.
Maybe that's not the best description, but I think there's something there, and that it's relevant to this idea of trying to use rationality to become a "truly robust agent". In any case, I've always felt there was an interesting parallel in how the Sequences can be read either as "A Manual For Building Friendly AI" grounded in rational Bayesian principles, or as "A Manual For Teaching Humans Rational Principles" grounded in an idealized Bayesian AI.