Who we are

The Michigan AI Safety Initiative (MAISI) is a student organization at the University of Michigan. Our mission is to:

  • Build the AI safety community at the University of Michigan

  • Launch students into high-impact AI safety careers

  • Conduct research that reduces the risk of AI-related catastrophes

Will AI really cause a catastrophe?

Hopefully not! AI has tremendous potential to make the world a better place, especially as the technology continues to develop. We’re already seeing beneficial applications of AI in healthcare, accessibility, language translation, automotive safety, and art creation, to name just a few.

But as an incredibly powerful technology, AI also poses serious risks. At the very least, malicious actors could use AI to cause harm, such as by building biological weapons, deploying hazardous malware, or empowering oppressive regimes. Additionally, AI systems could become widespread and irreplaceable because of their potential to generate business revenue. They would then hold significant influence over society, and they might lead the world down paths that conflict with human values.

More speculatively, future AI systems could seek power or control over humans. AI is evolving rapidly, and we might see qualitatively different systems in the years ahead. Such systems may be able to form sophisticated plans and act autonomously to achieve their own goals. If so, they might try to acquire resources or resist shutdown attempts, since these strategies are useful for achieving a wide variety of goals. Highly capable AI systems might overcome human resistance to such efforts, much as modern chess engines defeat even the best human players.

These possibilities can sound like science fiction, and some AI practitioners are skeptical. Indeed, in a 2022 survey of AI researchers, 25% of respondents gave a 0% (impossible) chance of AI causing “extremely bad” outcomes such as the death of all humans. But more alarmingly, 48% of respondents in the same survey assigned at least a 10% probability to such outcomes. And in May 2023, hundreds of AI experts signed a statement underscoring the severity of the risks: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Perhaps the biggest challenge is the breakneck pace of AI research, which might accelerate as AI systems themselves become useful contributors to AI research, such as through writing code or designing hardware. If AI starts causing significant problems, there may be only a short time for the world to address them.

Introductory resources

The brief arguments above leave out many important considerations. For more detail on how AI might cause a catastrophe, check out these readings:

Or watch this video:

Or listen to these podcasts:

Lastly, some great books include The Alignment Problem by Brian Christian, Human Compatible by Stuart Russell, and Superintelligence by Nick Bostrom.