Who ignores, or argues against courage and honesty?
As an intrinsic value? Lots of utilitarians, myself included. I'm unsure if Rob's intent was to suggest these things are values worth respecting intrinsically or just instrumentally.
(At the risk of necroposting:) Was this paper ever written? Can't seem to find it, but I'm interested in any developments on this line of research.
It seems plausible that transformative agents will be trained exclusively on real-world data (without using simulated environments) — for example, social media feed-creation algorithms and algorithmic trading systems. In such cases, the researchers don't choose how to implement the "other agents"; the other agents are simply part of the real-world environment that the researchers don't control.
I have quite a different intuition on this, and I'm curious whether you have a particular justification for expecting non-simulated training in multi-agent settings. Some reasons I expect otherwise: