The Alignment Project is a global fund of over £15 million, dedicated to accelerating progress in AI control and alignment research. It is backed by an international coalition of governments, industry, venture capital and philanthropic funders. This post is part of a sequence on research areas that we are excited to fund through The Alignment Project.
Apply now to join researchers worldwide in advancing AI safety.
Computational Complexity Theory
Achieving high-assurance alignment will require formal guarantees as well as empirical observations. Just as the concept of entropy made information theory far more tractable, we suspect that there are intuitive concepts in AI security which—if formalised mathematically—could make previously intractable problems suddenly approachable. For example, many approaches...