Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
I think that in the coming 15-30 years, the world could plausibly develop “transformative AI”: AI powerful enough to bring us into a new, qualitatively different future, via an explosion in science and technology R&D. This sort of AI could be sufficient to make this the most important century of all time for humanity.

The most straightforward vision for developing transformative AI that I can imagine working with very little innovation in techniques is what I’ll call human feedback[1] on diverse tasks (HFDT):

> Train a powerful neural network model to simultaneously master a wide variety of challenging tasks (e.g. software development, novel-writing, game play, forecasting, etc.) by using reinforcement learning on human feedback and other metrics of performance.

(A toy sketch of this training pattern appears below.)

HFDT is not the only approach to developing transformative AI,[2] and it may not work at all.[3] But I take it very seriously, and a growing number of executives and ML researchers at AI companies believe something within this space could work soon.

Unfortunately, I think that if AI companies race forward training increasingly powerful models using HFDT, this is likely to eventually lead to a full-blown AI takeover (i.e. a possibly violent uprising or coup by AI systems). I don’t think this is a certainty, but it looks like the best-guess default absent specific efforts to prevent it.

More specifically, I will argue in this post that humanity is more likely than not to be taken over by misaligned AI if the following three simplifying assumptions all hold:

1. The “racing forward” assumption: AI companies will aggressively attempt to train the most powerful and world-changing models that they can, without “pausing” progress before the point when these models could defeat all of humanity combined if they were so inclined.
2. The “HFDT scales far” assumption: If HFDT is used to train larger and larger models on more and harder tasks, this will eventually result in models that can autonomously
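To make the HFDT recipe above a bit more concrete, here is a deliberately toy sketch of the two-phase pattern it gestures at: fit a reward model to pairwise human preference judgments, then optimize a policy against that learned reward. Everything in it is an assumption for illustration only: the toy state space, the hypothetical `simulated_human_preference` labeler standing in for real human raters, and the use of plain REINFORCE rather than the more careful setups (e.g. PPO with a KL penalty to a pretrained model, diverse task suites, huge models) that real training pipelines use. It is not a description of any lab's actual system.

```python
# Toy sketch of an HFDT-style loop: (1) fit a reward model to pairwise
# preference labels, (2) optimize a policy against that learned reward.
# The environment, network sizes, and the simulated labeler are invented
# for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS = 16, 4


def simulated_human_preference(state, action_a, action_b):
    """Stand-in for a human labeler: prefers whichever action is closer to a
    hidden 'intended' action derived from the state."""
    target = int(state.abs().sum().item() * 10) % N_ACTIONS
    return 0 if abs(action_a - target) <= abs(action_b - target) else 1


class RewardModel(nn.Module):
    """Scores a (state, action) pair; trained to agree with preference labels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + N_ACTIONS, 64),
                                 nn.ReLU(), nn.Linear(64, 1))

    def forward(self, state, action):
        one_hot = F.one_hot(action, N_ACTIONS).float()
        return self.net(torch.cat([state, one_hot], dim=-1)).squeeze(-1)


class Policy(nn.Module):
    """Maps a state to a distribution over discrete actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64),
                                 nn.ReLU(), nn.Linear(64, N_ACTIONS))

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))


reward_model, policy = RewardModel(), Policy()
rm_opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)
pi_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Phase 1: learn a reward model from (simulated) pairwise human preferences.
for _ in range(500):
    state = torch.randn(1, STATE_DIM)
    a = torch.randint(N_ACTIONS, (1,))
    b = torch.randint(N_ACTIONS, (1,))
    label = simulated_human_preference(state, a.item(), b.item())
    scores = torch.cat([reward_model(state, a), reward_model(state, b)]).unsqueeze(0)
    # Bradley-Terry style loss: the preferred option should score higher.
    loss = F.cross_entropy(scores, torch.tensor([label]))
    rm_opt.zero_grad()
    loss.backward()
    rm_opt.step()

# Phase 2: optimize the policy against the *learned* reward (plain REINFORCE).
for _ in range(500):
    state = torch.randn(1, STATE_DIM)
    dist = policy(state)
    action = dist.sample()
    with torch.no_grad():
        reward = reward_model(state, action).item()
    loss = (-dist.log_prob(action) * reward).mean()
    pi_opt.zero_grad()
    loss.backward()
    pi_opt.step()
```

The key structural point the sketch is meant to convey is that the policy is never optimized against what humans actually want, only against a learned proxy for their feedback; the rest of the post's argument turns on what happens when very capable models are trained against such proxies.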