Give me feedback! :)
I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:
It seems plausible to me that if AGI progress becomes strongly bottlenecked on architecture design or hyperparameter search, a more "genetic algorithm"-like approach will follow. Automated AI researchers could run and evaluate many small experiments in parallel, covering a vast hyperparameter space.
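The loop such an approach implies can be sketched in a few lines. This is a minimal, hypothetical illustration of a genetic-algorithm-style hyperparameter search: the search space, the toy scoring function, and all parameter names are stand-ins I've invented, and `evaluate` is a placeholder for running one small training experiment.

```python
import random

# Hypothetical discrete hyperparameter space (stand-in values).
SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "num_layers": [2, 4, 8, 16],
    "hidden_dim": [64, 128, 256, 512],
}

def random_config():
    """Sample one configuration uniformly from the space."""
    return {k: random.choice(v) for k, v in SPACE.items()}

def evaluate(cfg):
    """Stand-in for running and scoring one small experiment.

    In the scenario above this would be a real (cheap) training run;
    here it is a toy score that arbitrarily prefers one combination.
    """
    return (-abs(cfg["learning_rate"] - 1e-3) * 1000
            - abs(cfg["num_layers"] - 8)
            - abs(cfg["hidden_dim"] - 256) / 64)

def mutate(cfg):
    """Copy a parent config and re-sample one hyperparameter."""
    child = dict(cfg)
    key = random.choice(list(SPACE))
    child[key] = random.choice(SPACE[key])
    return child

def search(pop_size=20, generations=10, elite=5):
    """Keep the top `elite` configs each generation; mutate them to refill."""
    population = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        # In the scenario above, these evaluations would run in parallel.
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[:elite]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - elite)]
    return max(population, key=evaluate)

best = search()
```

The point of the sketch is that each generation is embarrassingly parallel: automated researchers could farm out every `evaluate` call as an independent small experiment.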
How fast should the field of AI safety grow? An attempt at grounding this question in some predictions.
Crucial questions for AI safety field-builders:
Additional resources, thanks to Avery:
Why does the AI safety community need help founding projects?