Many thanks to Brandon Goldman, David Langer, Samuel Härgestam, Eric Ho, Diogo de Lucena, and Marc Carauleanu for their support and feedback throughout.
Most alignment researchers we sampled in our recent survey think we are currently not on track to succeed at alignment, meaning that humanity may well be on track to lose control of our future.
In order to improve our chances of surviving and thriving, we should apply our most powerful coordination methods towards solving the alignment problem. We think that startups are an underappreciated part of humanity’s toolkit, and having more AI-safety-focused startups would increase the probability of solving alignment.
That said, we also appreciate that AI safety is highly complicated by nature and therefore calls...