A minor point about instrumental convergence that I would like feedback on
Preamble

My current understanding: the EY/MIRI perspective is that superintelligent AI will invariably instrumentally converge on something that involves extinguishing humanity. I believe I remember a tweet from EY saying that he would be happy with building ASI even if it only had a 10% chance of working out and...