Run for election in the US or UK with an AI pause as your agenda
Background assumptions
Assuming I and people like me do nothing, the scenario I forecast as most likely is that the heads of US AI labs, the US executive branch, and the US intelligence community will choose to race to build superintelligence despite being at least vaguely aware of the risks of doing so.
I support creating a mass movement to force them not to do this. I am not optimistic about the strategy of using persuasion without force, although I also think persuasion is worth trying.
I also think a weak ASI (perhaps boxed rather than aligned) could be used to build a permanent dictatorship. Persuading someone not to build a dictatorship seems hard, because from a self-interested point of view they might actually want that world. (Unlike human extinction, which nobody wants.)
Support the movement
If you have only a little time to devote
Like, share, and subscribe to my content, or to other people publishing similar content on AI extinction risk. You can share it with your friends, people in media or politics, people working at AI labs or in x-risk, anyone really.
If you are good at social media
Start a social media channel to persuade people at scale about AI extinction risk. Even one video is better than zero, as it motivates others to come forward too.
(Requires social media skills)
Advise other political YouTubers and politicians on how to speak about AI risk.
(My guess is that giving good advice is hard, and making YouTube videos yourself is a good way to figure out what good advice looks like. Hence I'm focused on making videos myself.) (Weak opinion)
Teach people social media skills so they can all start their own channels around AI risk.
(Only do this once you are already good at social media yourself. Otherwise, making YouTube videos yourself is a good way to learn.)
Find information about the values, decisions, and decision-making processes of the people racing to build superintelligence that makes them look bad in the eyes of the public. This helps grow the mass movement against AI.
(To kickstart this, at least a few people need technical skills as cyberhackers, and at least one person needs to raise $10M in funding to hire top talent.)
If building such a team from scratch is too hard, maybe go join a Russian intelligence agency instead.
If you are already powerful
Run for election in the US or UK with an AI pause as your agenda.
(Requires a large social media following or high-status credentials, and US or UK citizenship)
Use your social media channel to run informal referendums on the topic, as well as to iteratively test messaging.
This is IMO the single largest bottleneck to growing the entire movement. Most people have very little time to devote to this issue, and "Vote for Mr XYZ" is a better call to action than "Like/share/subscribe to Mr XYZ's content". You will also get feedback from reality on how to translate vague public support into concrete actions in the real world.
(Maybe) Consider supporting UBI as part of your agenda, since one of the largest groups of single-issue voters in the US is concerned only with losing their own job/income/equity. Example: Andrew Yang (who signed the FLI pause letter).
Sponsor bounties for potential whistleblowers at top AI labs and the governments supporting them.
(Requires at least $100k, likely more)
Moonshots
Moonshot = low probability of success, unless you have some special insight I don't, in which case please trust your insight.
Become a US policymaker.
I personally don't think policymakers have much power when they are going directly against the national security interests of the US government, which will accelerate by default. (I would love to be wrong.)
Organise a protest in your city around AI extinction risk.
(I'm personally not working on this because I think the movement first needs to grow its raw numbers, via a larger social media presence.) (Weak opinion, I would love to be wrong.)
Invent a new political ideology or system of governance that makes it safer to deploy superintelligent AI, human genetic engineering, and whole brain emulation in this world. Neoliberalism won't work, so something new is required. Or invent a new spiritual ideology or religion that can unite humanity around a common position on superintelligent AI, human genetic engineering, and whole brain emulation.
IMO superintelligent AI and human genetic engineering are both possibly less than 5 years away, unless people take political action to prevent it. Whole brain emulation is seeing slow and steady progress, so it is maybe 30 years away.
(I'm personally not working on this because I think it'll take more than 5-10 years to pull off.) (Weak opinion, I would love to be wrong.)
(I'm not working on this because I currently think human genetic engineering is likely to lead to value drift, and hence it is bad to work on. Any country that benefits militarily from making its citizens less capable of love, trust, etc. will tend to do so. Also, it will take more than 10 years for the superhuman babies to grow into adults.) (Weak opinion, I would love to be wrong about value drift being a possibility.)
Otherwise
If for some reason you are incapable of working on any of the above, my current recommendation is to not do anything that gets in the way of the people who are working on it.
You could work to make solar energy cheaper. You could fix politics in a country that doesn't have nukes. You could work on intra-city bullet trains to build a city with a billion people. You could work on alternative proteins or meal replacements. You could work on making games or art. You could work on some useless software project.
Once an intelligence-enhancing tech is deployed on Earth, most of this will probably turn out useless anyway. If your project significantly changes the incentive structures and ideologies that influence the creation of an intelligence-enhancing tech, then it might matter. Your project could also matter for the humans alive until such a tech is deployed. Otherwise, it won't matter. (Weak opinion)
I used to have an older list with many more projects, but I now think that listing too many projects is a sign I lack clarity on what is most important.
Probably not useful
Work on censorship-resistant social media
I think there's a lot of obvious information about AI risk that hasn't reached the public yet despite not being censored, so it is better to focus on getting that information out first.
Assassinate AI lab CEOs
It is difficult to have a public discussion of the pros and cons of assassinating people, even though I think the pros and cons are both significant. People who support assassination are unlikely to feel safe enough to share their reasons in public, so the discussion becomes biased.
I am not planning to assassinate anyone, and I am not recommending that anyone around me plan this either.