Authors of linked report: Josh Rosenberg, Ezra Karger, Avital Morris, Molly Hickman, Rose Hadshar, Zachary Jacobs, Philip Tetlock[1]
Today, the Forecasting Research Institute (FRI) released “Roots of Disagreement on AI Risk: Exploring the Potential and Pitfalls of Adversarial Collaboration,” which discusses the results of an adversarial collaboration focused on forecasting risks from AI.
In this post, we provide a brief overview of the methods, findings, and directions for further research. For much more analysis and discussion, see the full report: https://forecastingresearch.org/s/AIcollaboration.pdf
(This post is cross-posted to the EA Forum.)
We brought together generalist forecasters and domain experts (n=22) who disagreed about the risk AI poses to humanity in the next century. The “concerned” participants (all of whom were domain experts) predicted a 20% chance of an AI-caused existential catastrophe by 2100,...