Do you think it would be useful to contrast your views with those of Yudkowsky/Soares/MIRI? Both parties seem largely aligned on the direction of the efforts that need to be made, but the MIRI folks make a more fundamental conditional claim about alignment, while you seem less pessimistic about its tractability given the potential for bootstrapping alignment via automated researchers, which is precisely the crux of MIRI's catch-22 view on the matter.