Edit November 2021: there is now the Cambridge AGI Safety Fundamentals course, which promises to be successful. It is enlightening to compare that project with RAISE. Why is that one succeeding where this one did not? I'm quite surprised to find that the answer isn't so much about more funding, more senior people to execute it, more time, etc. They're simply using existing materials instead of creating their own. This makes it orders of magnitude easier to produce the thing: you can just focus on the delivery. Why didn't I, or anyone around me, think of this? I'm honestly perplexed. It's worth thinking about.
Since June, RAISE has stopped operating. I’ve taken some time to process things, and now I’m wrapping up.
What was RAISE again
AI Safety is starved for talent. I saw a lot of smart people around me who wanted to do the research. Their bottleneck seemed to be finding good education (and hero licensing). The plan was to alleviate that need by creating an online course about AI Safety (with nice diplomas).
How did it go
We spent a total of ~2 years building the platform. It started out as a project based on volunteers creating the content. Initially, many people (more than 80) signed up to volunteer, but we did not manage to get most of them to show up consistently. We gradually pivoted to paying people instead.
We received a lot of encouragement for the project. Most of the enthusiasm came from people wanting to learn AI Safety. Robert Miles joined as a lecturer. When we reached out to some AI Safety researchers for suggestions on which topics to cover, we readily received helpful advice. We also received some funding from a couple of prominent AIS organizations who thought the project could be high value, at least in expectation.
The stream of funding was large enough to sustain about 1 FTE working for a relatively low wage. Obtaining it was a struggle: our runway was never longer than 2 months. This created a large attention sink that made it a lot harder to create things. Nearly all of my time was spent on overhead, while others were creating the content. I did not have the time to review much of it.
About 1 year into the project, we escaped this poverty trap by moving to the EA Hotel and starting a content development team there. We went up to about 4 FTE, and the production rate shot up, leading to an MVP relatively quickly.
How did it end
Before launch, the best way to secure funding seemed to be to just create the damn thing, make sure it’s good, and let it advocate for itself. After launch, a negative signal could not be dismissed as easily.
We got two clear negative signals: one from a major AIS research org (that has requested not to be named), and one from the LTF fund. The former declined to continue their experimental funding of RAISE. The latter declined a grant request. These were clear signals that people in the establishment of AI Safety did not deem the project worth funding, so I reached out for a conversation.
The question was this: “what version of RAISE would you fund?” The answer was roughly that while they agreed strongly with the vision for RAISE, our core product sadly wasn’t coming together in a way that suggested it would be worth it for us to keep working on it. I was tentatively offered a personal grant if I spent it on taking a step back to think hard and figure out what AI Safety needs (I ended up declining for career-strategic reasons).
In another conversation, an insider told us that AI Safety needs to grow in quality more than quantity. There is already a lot of low-quality research. We need AI Safety to be held to high standards. Lowering the bar for a research-level understanding will not solve that.
I decided to quit. I was out of runway, updated towards RAISE not being as important as I thought, and frankly I was also quite tired.
Lessons learned
These lessons are directed towards my former self. YMMV.
- Don’t rely on volunteers. At least in my case, it didn’t work. Again, YMMV. It will depend on the task and the incentive landscape.
- Start with capital. When I announced RAISE, I knew maybe 20 rationalists in the Netherlands. I was a Bachelor’s student coming out of nowhere. I had maybe 10-15 hours per week to spend on this. I had no dedicated co-founders. I had no connections to funders. I didn’t have much of a technical understanding of AI Safety. Coming from this position, the project was downright quixotic. If you’re going to start a company, first make sure you have a network, domain expertise, experience in running things, some personal runway, and some proof that you can do things.
- Relatedly, have a theory of change for funding. I see many people starting projects with the hope of securing funding on the go. Good on you for doing some proof of work, but there is a limit. If you scramble to get by, even if you never go broke, you haven’t properly sorted out the funding situation. There should be long periods where you don’t have to worry about it.
- Relatedly, reach out to insiders. This is what a relatively successful AI Safety researcher told me, and it makes a lot of sense: “If I get to spend an hour on influencing what you will do for your next 100 hours, and if I tell you some crucial consideration that will double your impact, it is probably worth it”. Insiders will feel like an out-group. This will make it hard to respect them. Put that bias aside. You know that these people are as reasonable and awesome as your best friends. Maybe even more reasonable.
- You’re not doing this just for impact. You’re also doing this because you have a need to be personally relevant. That’s okay, everyone has this to some extent, but remember to purchase fuzzies and utilons separately. You can buy relevance much more cheaply by organising meetups.
- Apply power laws to life years. This is an untested hypothesis, and it needs to be checked with data, but here’s the idea: the most impactful years of your life will be 100x more impactful than the median. Careers tend to progress exponentially. My intuitive guess is that my most impactful years will not come around until my 40s. I can try to have impact now, but I might be better off spending my 20s finding ways to multiply the impact I will be making in my 40s.
What happens next
The RAISE Facebook group will be converted into a group for discussing the AI Safety pipeline in general. Let’s see if it takes off. If you think this discussion has merit, consider becoming a moderator.
The course material is still parked right here. Feel free to use it. If you would like to re-use some of it or maybe even pick up the production where it left off, please do get in touch.
Robert has received a grant from the LTF Fund, so he will continue to create high-quality educational content about AI Safety.
I enjoyed being a founder, and feel like I have a comparative advantage there. I’ll be spending my next 5-10 years preparing for a potential new venture. I’ll be building capital and a better model of what needs to be done. I have recently accepted an offer to work as a software developer at a Dutch governmental bank. My first workday was 2 weeks ago.
I would like to thank everyone who has invested significant time and effort and/or funding towards RAISE. I’m forever grateful for your trust. I would especially like to thank Chris van Merwijk, Remmelt Ellen, Rupert McCallum, Johannes Heidecke, Veerle de Goederen, Michal Pokorný, Robert Miles, Scott Garrabrant, Pim Bellinga, Rob Bensinger, Rohin Shah, Diana Gherman, Richard Ngo, Trent Fowler, Erik Istre, Greg Colbourn, Davide Zagami, Hoagy Cunningham, Philip Blagoveschensky, and Buck Shlegeris. Each one of you has really made an outsized contribution, in many cases literally saving the project.
If you have any project ideas and you’re looking for some feedback, I’ll be happy to be in touch. If you’re looking for a co-founder, I’m always open to a pitch.