Disagreements over the prioritization of existential risk from AI
Earlier this year, the Future of Life Institute and the Center for AI Safety published open letters framing existential risk (x-risk) from AI as a global priority. In July, Google Research fellow Blaise Agüera y Arcas and Mila-affiliated AI researchers Blake Richards, Dhanya Sridhar, and Guillaume Lajoie co-wrote an...