Every quarter, we have a newsletter with updates on what’s happening at AI Impacts, with an emphasis on what we’ve been working on. You can see past newsletters here and subscribe to receive future newsletters and other blog posts here.

During the past quarter, Katja wrote an article in TIME, we created and updated several wiki pages and blog posts, and we began work on several new research projects.

We’re running a reader survey, which takes 2-5 minutes to complete. We appreciate your feedback!

If you’d like to donate to AI Impacts, you can do so here. Thank you!

News

Katja Grace’s TIME article

In May, TIME published Katja’s article “AI Is Not an Arms Race.” People sometimes say that the situation with AI is an arms race that rewards speeding forward to develop AI before anyone else. Katja argues that this is likely not the case, and that even if it is, we should try to get out of it.

References to AI Impacts Research

The 2022 Expert Survey on Progress in AI was referenced in an article in The Economist, a New York Times op-ed by Yuval Noah Harari, a Politico op-ed that argues for a “Manhattan Project” for AI Safety, and a report from Epoch AI’s Matthew Barnett and Tamay Besiroglu about a method for forecasting the performance of AI models.

The Japanese news agency Kyodo News published an article about AI risks that referenced Katja’s talk at EA Global from earlier this year.

We also maintain an ongoing list of citations of AI Impacts work that we know of.

Research and writing highlights

Views on AI risks

Figure: Average given likelihoods of how good or bad human-level AI will be, from 559 machine learning experts in the 2022 Expert Survey on Progress in AI.

The supply chain of AI development

Ideas for internal and public policy about AI

AI timeline predictions

Figure: Median predicted year for given probabilities of human-level AI, from surveys over the years.

Slowing AI

Miscellany

  • Jeffrey wrote a blog post arguing that AI systems are currently too reliant on human-supported infrastructure to easily cause human extinction without putting themselves at risk.
  • Harlan, Jeffrey, and Rick submitted responses to the National Telecommunications and Information Administration’s AI accountability policy request for comment and the Office of Science and Technology Policy’s request for information.

Ongoing projects

  • Katja and Zach are preparing to publish a report about the 2022 Expert Survey on Progress in AI, with further analysis of the results and details about the methodology.
  • Jeffrey is working on a case study of Institutional Review Boards in medical research.
  • Harlan is working on a case study of voluntary environmental standards.
  • Zach is working on a project that explores ways of evaluating AI labs for the safety of their practices in developing and deploying AI.

Funding

We are still seeking funding for 2023 and 2024. If you want to talk to us about why we should be funded or hear more details about our plans, please write to Elizabeth, Rick, or Katja at [firstname]@aiimpacts.org.

If you’d like to donate to AI Impacts, you can do so here. (And we thank you!)

Reader survey

We are running a reader survey in the hopes of getting useful feedback about our work. If you’re reading this and would like to spend 2-5 minutes filling out the reader survey, you can find it here. Thank you!
