If the alignment problem were unsolvable, would that avert doom?
Suppose there is a useful formulation of the alignment problem that is mathematically unsolvable. Suppose that, as a corollary, it is also impossible to modify your own mind while guaranteeing any non-trivial property of the resulting mind. Would that prevent a new AI from trying to modify itself? Has this direction been explored?
A bad codebase usually contains more code and more bugs of common varieties. Working in it therefore demands the ability to read a large volume of code, with a lower depth of understanding of each part.