Coherence is the property that an agent always updates its beliefs by probabilistic conditioning. Usually, one argues that coherence is desirable via Cox's theorem or the Dutch Book results. This makes coherence a very brittle property: an agent either is coherent or it is not, and being approximately Bayesian in most senses still violates the conditions these results treat as desirable.
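To make the Dutch Book idea concrete, here is a minimal sketch (with hypothetical numbers, not from the original text) of de Finetti's book against credences that violate the product rule P(A and B) = P(A) * P(B|A). The agent treats each credence as a fair price, so it will buy or sell any of these bets at that price; the bookie's stakes below are chosen so the agent loses in every possible world. The specific credences, stakes, and the helper `agent_net` are illustrative assumptions.

```python
# Sketch of a Dutch Book against an agent whose credences violate the
# product rule P(A & B) = P(A) * P(B|A). All numbers are hypothetical.

p_A, q_B_given_A, r_AB = 0.5, 0.6, 0.2   # incoherent: 0.2 != 0.5 * 0.6

# Stakes chosen by the bookie (positive = agent buys, negative = agent sells):
s_cond = 1.0    # conditional bet on B given A (called off if not-A)
s_conj = -1.0   # bet on (A and B)
s_A = 0.6       # bet on A

def agent_net(world):
    """Agent's net payoff in a given world, summed over the three bets."""
    A, B = world
    # Conditional bet pays out only if A occurs; it is refunded if not-A.
    cond = s_cond * ((1 - q_B_given_A) if (A and B) else (-q_B_given_A if A else 0.0))
    conj = s_conj * ((1 - r_AB) if (A and B) else -r_AB)
    on_A = s_A * ((1 - p_A) if A else -p_A)
    return cond + conj + on_A

for world in [(True, True), (True, False), (False, True), (False, False)]:
    print(world, f"net = {agent_net(world):+.2f}")   # -0.10 in every world
```

Running this prints a net of -0.10 for the agent in all four worlds: a sure loss. Any credences with P(A and B) different from P(A) * P(B|A) admit stakes of this form, which is exactly why coherence is all-or-nothing.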
You might not have read aisafety.dance. Although it doesn't explain in detail what AI and superintelligence are, it does a really good job of describing the specifics of AI safety, possibly on par with the book (I haven't read the book yet, so this is an educated guess).