Progressive exposure: Most people who eventually worked in AI Safety needed multiple exposures from different sources before taking action. Even viewers who don't click anywhere are getting those crucial early exposures that add up over time.
Related to this is "the Sleeper Effect": a person hears a piece of information and remembers both the info and its source. Over time, they forget the source but keep believing the info, which becomes just another thing they hold true. I think this adds weight to this part of your strategy.
Adapted from a Manifund proposal I announced yesterday.
In the past two weeks, I have been posting daily AI-Safety-related clips on TikTok and YouTube, reaching more than 1M people.
I'm doing this because I believe short-form AI Safety content is currently neglected: most outreach efforts target long-form YouTube viewers, missing younger generations who get information from TikTok.
With 150M active TikTok users in the UK & US, this audience represents massive untouched potential for our talent pipeline (e.g., Alice Blair, who recently dropped out of MIT to work at the Center for AI Safety as a Technical Writer, is the kind of person we'd want to reach).
On Manifund, people have been asking me what kinds of messages I wanted to broadcast and what outcomes I wanted to achieve with this. Here's my answer:
And here is the accompanying diagram I made:
Although the diagram above makes it seem like calls to action and clicking on links are the "end goals", I believe that "Progressive Exposure" is actually more important.
And I'll go so far as to say that multiple exposures are actually needed to fully grok basic AI Risk arguments.
To give a personal example: when I first wanted to learn about AI 2027, I listened to Dwarkesh's interview of Daniel Kokotajlo & Scott Alexander to get an initial intuition for it. I then read the full post while listening to the audio version, which let me grasp many more details and nuances. A bit later, I watched Drew's AI 2027 video, which made me feel the scenario through its animated timeline of events and visceral music. Then, a month ago, I watched 80k's video, which made things even more concrete through its board-game elements. And when I started cutting clips from several of Daniel Kokotajlo's interviews, I internalized the core elements of the story even further (though I still miss a lot of the background research).
Essentially, what I'm trying to say is that as we try to onboard more talent into doing useful AI Safety work, we probably don't need to make people click on a single link that leads them to take action or sign up for career coaching.
Instead, the algorithms will feed people more and more of that kind of content if they find it interesting, and those who are sufficiently curious and motivated will end up finding the relevant resources on their own.
Curated websites like aisafety.com or fellowships are there to shorten the time it takes to transition from "learning about AI risk" to "doing relevant work". And the goal of outreach is to accelerate people's progressive exposure to AI Safety ideas.
Longer description of the project here. Clips here.