Disagreement
Some people in LessWrong and EA circles support AI safety researchers working with or inside AI companies, and policymakers working with the US government.
Example: work done by Paul Christiano.
I support social media channels hostile to AI companies and the US government, protests against both, and electing politicians who favour pausing AI research.
Example: work done by John Sherman and Holly Elmore.
Not a crux
We both agreed that these plans conflict with each other: running protests and hostile social media channels makes it harder for AI safety researchers or policymakers to collaborate with AI companies and the US government.
We had different guesses about which plan has a higher probability of working.
Crux
He said it is possible to use social media to raise public awareness of AI risk without being hostile to AI companies or naming CEOs and leaders specifically.
I said that being hostile to AI companies is almost a necessary precondition for doing social media successfully.
My argument
The most popular ideas in society all have outgroups that are the enemy.
Here is empirical evidence.
https://chatgpt.com/s/t_693fc0fbb6548191bc169ad2d0f8511d
We want AI risk to become one of the most popular ideas in society. This means AI companies and the governments supporting them must become an outgroup for a significant fraction of society.
See also: I Can Tolerate Anything Except the Outgroup by Scott Alexander
https://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/