I think your choice of "contemporary example of inadequate ethical heuristics" weakens the post as a whole, by invoking the specter of the sort of third-rail issue where the discourse is relatively well-modeled by conflict theory. That is, I was loving the post until I got to that part and my brain was suddenly full of maybe-I'm-about-to-be-eaten-by-a-tiger alarms.
Medical prediction markets, like the ones in dath ilan, might prefer a 5-second reaction time to a 1-minute one.
I feel like this post contains a valuable insight (it's virtuous to distrust and verify, to protect the community against liars), sandwiched inside a terrible framing (honor is for suckers).
I gesture vaguely at Morality as Fixed Computation, moral realism, utility-function black boxes, Learning What to Value, discerning an initially-uncertain utility function that may not be straightforward to get definitive information about.
Also, the math of AIXI assumes the environment is separably divisible: no matter what you lose, you get a chance to win it back later.
Does this mean that we don't even need to get into anything as esoteric as brain surgery – that AIXI can't learn to play Sokoban (without the ability to restart the level)?
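For concreteness, here's the expectimax expression in question. This is just Hutter's standard textbook formulation of AIXI, not anything specific to this thread; m is the planning horizon, U a universal Turing machine, and \ell(q) the length of program q:

```latex
% AIXI's action selection (Hutter's standard formulation): expectimax over
% the Solomonoff mixture of all programs q consistent with the interaction
% history. U = universal Turing machine, m = horizon, \ell(q) = length of q.
\[
  a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
  \bigl(r_k + \cdots + r_m\bigr)
  \sum_{q \,:\, U(q,\,a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]
```

Nothing in the expression itself forbids irreversible moves; the recoverability assumption enters through the self-optimizing theorems, which, as I understand them, only cover environment classes like ergodic MDPs, where any mistake can eventually be undone. Pushing a box into a corner in restart-less Sokoban looks like exactly the kind of unrecoverable mistake those theorems exclude.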
Belief in disbelief, perhaps.
> The religious version
You may want something like "the Christian version". Ancient Greek paganism was a religion.
I assumed it was something like neuroplasticity.