There’s a new report on the AI Impacts website that focuses on reanalyzing the data from the 2023 Expert Survey on Progress in AI (originally written up in Thousands of AI Authors on the Future of AI). The report, by Tom Adamczewski, introduces several improvements over the earlier analysis....
We’re delighted to announce the winners of the Essay competition on the Automation of Wisdom and Philosophy. Overview: The competition attracted 90 entries in total (only one of which was obviously just the work of an LLM!), taking a wide variety of angles on the topic. The judges awarded the...
With AI Impacts, we’re pleased to announce an essay competition on the automation of wisdom and philosophy. Submissions are due by July 14th, 2024. The first prize is $10,000, and there is a total of $25,000 in prizes available. Submit an entry via this form. The full announcement text is...
Harlan Stewart and Katja Grace*, 22 December 2022. News: New hires and role changes. In 2022, the AI Impacts team grew from two to seven full-time staff. Out of more than 250 applicants, we hired Elizabeth Santos as Operations Lead, Harlan Stewart as Research Assistant, and three Research...
By Katja Grace, April 1, 2021. Today we are sharing with our blog readers a collection of yet-to-be-published drafts, in the hope of receiving feedback. We are especially looking for methodological critique, but all comments are welcome! The drafts include: Human-level performance estimate (Katja Grace); How much hardware will we need to create AGI?...
Last year, several researchers at AI Impacts (primarily Robert Long and I) interviewed prominent researchers inside and outside the AI safety field who are relatively optimistic about advanced AI being developed safely. These interviews were originally intended to focus narrowly on reasons for optimism, but we ended up covering...
I’ve been thinking about a class of AI-takeoff scenarios where a very large number of people can build dangerous, unsafe AGI before anyone can build safe AGI. This seems particularly likely if:
* It is considerably more difficult to build safe AGI than it is to build unsafe AGI.
* ...