Forethought is a new AI macrostrategy research group cofounded by Max Dalton, Will MacAskill, Tom Davidson, and Amrit Sidhu-Brar.
We are trying to figure out how to navigate the (potentially rapid) transition to a world with superintelligent AI systems. We aim to tackle the most important questions we can find, unrestricted by the current Overton window.
More details on our website.
We think that AGI might come soon (say, modal timelines to mostly-automated AI R&D in the next 2-8 years), and might significantly accelerate technological progress, leading to many different challenges. We don’t yet have a good understanding of what this change might look like or how to navigate it. Society is not prepared.
Moreover, we want the world to not just avoid catastrophe: we want to reach a really great future. We think about what this might be like (incorporating moral uncertainty), and what we can do, now, to build towards a good future.
Like all projects, this started out with a plethora of Google docs. We ran a series of seminars to explore the ideas further, and that cascaded into an organization.
We are currently pursuing the following perspectives:
We tend to think that many non-alignment areas of work are particularly neglected.
However, we are not confident that these are the best frames for this work, and we are keen to work with people who are pursuing their own agendas.
Today we’re also launching “Preparing for the Intelligence Explosion”, which makes a more in-depth case for some of the perspectives above.
You can see some of our other recent work on the site. We have a backlog of research, so we’ll be publishing something new every few days for the next few weeks.
We draw inspiration from the Future of Humanity Institute and from OpenPhil’s Worldview Investigations team: like them, we aim to focus on important big-picture questions, maintain high intellectual standards, and build a strong core team.
Generally, we’re more focused than many existing organizations on:
We’d love for you to read our research, discuss the ideas, and criticize them! We’d also love to see more people working on these topics.
You can follow along by subscribing to our podcast, RSS feed, or Substack.
Please feel free to contact us if you are interested in collaborating, or would like our feedback on something (though note that we won’t be able to substantively engage with all requests).
We are not currently actively hiring (and will likely stay quite small), but we have an expression of interest form on our site, and would be particularly keen to hear from people who have related research ideas that they would like to pursue.
We have funding through to approximately March 2026 at our current size, from two high-net-worth donors.
We’re looking for $1-2M more, which would help us to diversify funding, make it easier for us to hire more researchers, and extend our runway to 2 years. If you are interested in learning more, please contact us.
We are a new team and project, started in mid-2024. However, we built on the old Forethought Foundation for Global Priorities Research to help get our operations started, and Will was involved with both projects. We considered around 500 names and couldn’t find one we liked better than “Forethought”. Sorry for the confusion!