"Brain enthusiasts" in AI Safety
TL;DR: If you're a student of cognitive science or neuroscience and are wondering whether it can make sense to work in AI Safety, this guide is for you! (Spoiler alert: the answer is "mostly yes".)

Motivation

AI Safety is a rapidly growing field of research that is singular in its goal: to avoid or mitigate negative outcomes from advanced AI. At the same time, AI Safety research touches on many aspects of human life: from the philosophy of human values, via the neuroscience of human cognition, to the intricacies of human politics and coordination. This interplay between a singular goal and its many facets makes the problem intrinsically interdisciplinary and warrants the application of diverse tools by researchers with diverse backgrounds.

Both of us (Sam & Jan) have backgrounds in cognitive science and neuroscience (we'll use the blanket term "brain enthusiasts" from here on). Important characteristics of brain enthusiasts are (see if these apply to you, dear reader):

* A propensity for empirical data and experiments.
* Regarding coding and mathematics as tools rather than as ends in themselves.
* Having accumulated semi-random facts about some biological systems.

While being brain enthusiasts undoubtedly makes us biased in favor of their importance, it also gives us a (hopefully) useful inside view of how brain enthusiasts might meaningfully contribute to AI Safety[1]. In this post, we attempt to shine a light on the current representation of brain enthusiasts in AI Safety and provide some advice on how they might enter the field.

A long tail of number-lovers

When you hear terms like "AI Alignment", "AI Safety", or "AGI", you probably think of people with a strong technical background, e.g. in computer science, mathematics, or physics. This is an instance where the stereotype is correct: using a previously published dataset, we determined the number of Alignment Forum posts per researcher and sorted the resulting table. We then added a "Bac
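For readers who want to reproduce the tally above, here is a minimal sketch of the counting-and-sorting step. The file name and column names (`alignment_forum_posts.csv`, `author`) are hypothetical, since the post doesn't specify the dataset's exact format, only that it maps posts to researchers.

```python
import pandas as pd

# Hypothetical file: one row per Alignment Forum post, with an "author" column.
posts = pd.read_csv("alignment_forum_posts.csv")

# Tally posts per researcher and sort in descending order.
counts = (
    posts.groupby("author")
    .size()
    .rename("n_posts")
    .sort_values(ascending=False)
    .reset_index()
)

# The head of the table shows the most prolific posters;
# the long tail of the distribution follows below them.
print(counts.head(10))
```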