UPDATE 07/15/2023: Applications are now closed. If you still want to apply, I will consider you for the future, which may be as soon as a few months from now, or I could initially offer you a lower salary if you'd like to join sooner.

Rational Animations is hiring. If you would like to apply, reach out to firstname.lastname@example.org.
The more material you include, the easier it will be for me to evaluate you. The hiring process will probably consist of a short interview and a period you should treat as a trial, even though it will be paid at full salary.
At the moment, Rational Animations has two consistently active writers, and the only one doing AI Safety scripts is me. I'd like to have more slack and be able to publish a lot of high-quality AI Safety explainers in the coming months and years. Our animation team is growing, and we will be able to publish videos much faster.
Pay: 50-150 USD per hour. If I offer you 150 per hour, I'll limit your hours to 10 per week, at least for the next few months. That said, I'm looking for someone who can put in consistent effort. At any given time, most scriptwriters at Rational Animations are inactive, and I'm 100% OK with that, but I'm also looking for someone I can consistently rely on.
What may help you land the job:
I may hire more than one person if many good candidates apply. I may also decide to hire no one.
Bonus: I expect people in this role to be able to write scripts for shorts about AI Safety (more below).
Shortly, we will include many more calls to action in our videos to increase our impact. But that is only one way to go about it. Another option I'm excited about is building our own spaces for the most hardcore followers of the channel to go much deeper into our topics. These spaces include, but may not be limited to:
Your job consists of developing strategies to further this objective and taking action to pursue them. The outcome you should aim for is helping people land in those spaces, and curating the spaces so that users learn a lot.
Our highest-priority topic is becoming AI Safety, so the most important thing you'll do is help people skill up in that subject. Here's an example of how I picture the funnel:
1. We make videos about AI Safety.
2. People land on our Discord server.
3. You help the most interested people skill up by talking with them and linking resources.
4. You help them stay accountable if they decide to embark on a learning journey by, e.g., organizing book clubs, weekly meetings in the style of AGISF, etc.
You will also manage a small team comprising two artists and a moderator. They will help you achieve your objectives for our social media and online spaces.
Pay: 25-50 USD per hour. More for exceptional candidates. At 50 per hour, I may have to limit your hours for a while, but probably not more than five months, and possibly not at all.
Bonus: people in this role may also be able to write scripts for shorts about AI Safety (more below).
We want to experiment with shorts. A lot is happening in AI Alignment and AI Governance lately, and Rational Animations cannot inform its audience about these events through long-form videos alone. Shorts, instead, are the perfect medium: keeping up with current events in AI Alignment and AI Governance is easier than writing long-form explainers about them. If you can do this job well and feel excited about it, apply!
Could you be of help in any other way? Let Rational Animations know!
At the moment, there is no set deadline! As long as this post hasn't been updated to say that applications are closed, I will review your submission. I may select a few candidates within the next two to four weeks, but I may also consider additional applications later. I will make sure to keep this post current with any updates.
I think it makes sense to bring in new talent. Rational Animations has a ridiculous amount of potential value (e.g., making it so that everyone in EA can understand complex concepts in less than 3 minutes per person per concept; nickkross has been doing some good research on engagement optimization for learning valuable concepts). But the "power of intelligence" video had a ton of unimpressive moments where the contribution from the animation fell flat, and it was always the kind of thing where the problem could only have been caught by more scriptwriters noticing ways something could have been done better. (I might be wrong about this, though; maybe that particular essay was just hard to work with.)
I can definitely see it warranting becoming an org on its own, but the flexible system described here seems more efficient.