This seems great in principle.
The below is meant in the spirit of [please consider these things while moving forward with this], and not [please don't move forward until you have good answers on everything].
That said:
First, I think it's important to clearly distinguish:
This program would be doing (3), so it's important to be aware that (1) is not in itself much of an argument. I expect that it's very hard to do (3) well, and that even a perfect version doesn't allow us to jump to the (1) of our dreams. But I still think it's a good idea!
Some thoughts that might be worth considering (very incomplete, I'm sure):
Hi Joe, thanks a lot for your thoughtful comment! We think you're making some valid points here and will take your suggestions and questions into consideration.
All the leading AI labs so far seem to have come from attempts to found AI safety orgs. Do you have a plan to guard against that failure case?
I don't think that's actually true at all; Anthropic was explicitly a scaling lab when it was founded, for example, and DeepMind does not seem like it was "an attempt to found an AI safety org".
It is true that Anthropic/OpenAI/DeepMind all had AI safety people supporting them, and that safety was part of the motivation behind each org, but the people involved knew they were also going to build SOTA AI models.
Hi there, thanks for bringing this up. There are a few ways we're planning to reduce the risk that the orgs we incubate end up fast-tracking capabilities research over safety research.
Firstly, we want to select participants for a strong impact focus and value alignment.
Secondly, we want to assist the founders in setting up their organization in a way that limits the potential for value drift (e.g. a charter for the forming organization that makes value drift legally more difficult, choosing the right legal structure, and helping them with vetting and suggestions for which investors and board members to take on).
If you have additional ideas around this we'd be happy to hear them.
Retain an option to buy the org later for a billion dollars, reducing their incentive to become worth more than a billion dollars.
Tl;dr: If you might want to participate in our incubation program and found an AI safety research organization, express your interest here. If you want to help out in other ways, please fill out that same form.
We, Catalyze Impact, believe that the scarcity of AI safety organizations is a bottleneck in AI safety. To address this bottleneck, we plan to pilot an incubation program similar to Charity Entrepreneurship's program.
The incubation program is designed to help you:
Program overview
We aim to deliver this program starting in Q2 2024. Here's a broad outline of the three phases we are planning:
Who is this program for?
We are looking for motivated and ambitious engineers, generalists, technical researchers, and entrepreneurs who would like to contribute significantly to reducing the risks from AI.
Express your interest!
If you are interested in joining the program, funding Catalyze, or helping out in other ways, please fill in this form!
For more information, feel free to reach out at alexandra@catalyze-impact.org