TL;DR: We present AI Safety Ideas, a platform for collecting and exploring AI safety research ideas, in open alpha. Add and explore research ideas on the website here: aisafetyideas.com.
AI Safety Ideas has been accessible in an alpha state for a while (four months of on-and-off development), and we are now publishing it in open alpha to receive feedback and develop it continuously with the community of researchers and students in AI safety. All of the projects either come from public sources (e.g. AlignmentForum posts) or are posted on the website itself.
The current website represents the first steps towards an accessible crowdsourced research platform for easier research collaboration and hypothesis testing.
Research prioritization is hard, and even more so in a pre-paradigmatic field like AI safety. We can read the highest-karma posts on the AlignmentForum, but is there another way?
With AI Safety Ideas, we introduce a way to prioritize and work on specific agendas together through social features. We hope this can become a scalable research platform for AI safety.
Successful examples of similar collaborative, online projects with high-quality output (though less systematized) include Discord servers such as EleutherAI, CarperAI, Stability AI, and Yannic Kilcher's, as well as hackathons and competitions such as the inverse scaling competition.
Additionally, the field is missing an empirically driven impact evaluation of AI safety projects. With the next steps of development described further down, we hope to make such evaluation easier and more accessible while facilitating faster iteration in AI safety research. Systematized hypothesis testing with bounties can help funders directly fund specific results and enables open evaluation of agendas and research projects.
Novice and entrant participation in AI safety research mostly takes two forms at the moment: 1) active or passive part-time participation in a course with a capstone project (AGISF, ML Safety) and 2) flying to London or Berkeley for three months of full-time paid study and research (MLAB, SERI MATS, PIBBSS, Refine).
Both are highly valuable, but a third option seems to be missing: 3) an accessible, scalable, low-time-commitment, open research opportunity. Very few people work in AI safety, and enabling decentralized, volunteer- or bounty-driven research would allow many more to contribute to this growing field.
By offering this flexible research opportunity, we can attract people who cannot participate in option (2) because of visas, school, life, or work commitments, location, rejection, or funding, while also attracting a more senior and active audience than option (1).
Several of these features are not yet implemented but will be added as we develop the platform further.
Give anonymous feedback on the website here or write your feedback in the comments. If you end up using the website, we also appreciate your in-depth feedback here (2-5 min). If you want any of your ideas removed or rephrased on the website, please send an email to email@example.com.
PS: The platform is still very much in alpha, and there might be mistakes in the research project descriptions. Please do point out any problems via "Report an issue".
The platform is open source, and we appreciate any pull requests on the insider branch. Report any bugs or feature requests on the issues page.
Apply to join the insider builds here to give feedback on the next versions. Join our Discord to discuss the development.
Thanks to Plex, Maris Sala, Sabrina Zaki, Nonlinear, Thomas Steinthal, Michael Chen, Aqeel Ali, JJ Hepburn, Nicole Nohemi, and Jamie Bernardi.