Introduction
I am leading an early-stage effort to reduce AI x-risk. We're currently analyzing the bottlenecks in the AI x-risk prevention "supply chain" to decide where to focus our efforts, and we would love comments from the community.
The x-risk community has a strong focus on technical and policy research, but perhaps not enough on advocacy. AI 2027, Rob Miles, CAIS, CivAI, and others are doing good work, but these efforts may be small compared to the rapidly growing power and influence of AI developers, whose misaligned incentives could lead to x-risk.
What's Missing?
We are testing the hypothesis that running a viral influencer marketing operation would be an effective way to reduce x-risk. Here's the logic:
1. We build a media hub with simple, factual x-risk resources and assets.
2. We identify creators with relevant audiences and a track record of producing viral content.
3. We pay them to create their own versions of x-risk awareness content based on our media kit (user-generated content, or UGC).
4. They push the content through their channels, and we amplify it with paid ads for maximum reach (a rough reach sketch follows this section).
5. The content may be re-shared, or even picked up by traditional media once it gains enough traction.
6. This builds broad awareness of x-risk among voters, creating an opportunity for politicians to score wins with voters and gain political power by promoting x-risk solutions.
Since this resembles a political campaign, we can hire people or firms with campaign experience to manage the project.
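To make the paid-amplification step (step 4) concrete, here is a minimal back-of-the-envelope reach sketch in Python. It assumes a simple CPM-based ad buy plus an organic re-share multiplier; the function name and every number are hypothetical placeholders, not real campaign data or estimates.

```python
# Back-of-the-envelope reach model for the paid amplification step.
# All inputs are illustrative placeholders, not real campaign figures.

def estimated_impressions(ad_budget_usd: float,
                          cpm_usd: float,
                          organic_multiplier: float) -> float:
    """Paid impressions bought at a given CPM (cost per 1,000
    impressions), scaled by an assumed organic re-share multiplier."""
    paid_impressions = ad_budget_usd / cpm_usd * 1_000
    return paid_impressions * organic_multiplier

# Hypothetical inputs: $100k budget, $8 CPM, 1.5x organic lift.
print(f"{estimated_impressions(100_000, 8.0, 1.5):,.0f} impressions")
# -> 18,750,000 impressions
```

Even rough numbers like these would let us compare a cost-per-aware-voter figure against other advocacy channels.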
How can the community help?
We are looking for answers to the following questions:
According to the Theory of Constraints, a system is limited by one binding constraint at any given time (a toy illustration of this framing appears after these questions). Is advocacy the current bottleneck in x-risk prevention? If not, what is?
If advocacy isn't the bottleneck, would you still want new resources invested in it, or would you prefer them invested elsewhere?
Is a viral influencer campaign (similar to a political campaign) the right solution for the advocacy problem? If not, what is?
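To make the Theory of Constraints framing concrete, here is a toy Python sketch; the pipeline stages and capacity numbers are entirely hypothetical, chosen only to show that system throughput equals the capacity of the weakest stage.

```python
# Toy Theory-of-Constraints model of an x-risk prevention "supply chain".
# Stages and capacities are hypothetical placeholders, not estimates.
pipeline_capacity = {
    "technical research": 30,
    "policy research": 20,
    "public advocacy": 5,   # the hypothesized bottleneck
    "political action": 15,
}

# Throughput is capped by the weakest stage; adding capacity anywhere
# else leaves overall output unchanged.
bottleneck = min(pipeline_capacity, key=pipeline_capacity.get)
print(f"Bottleneck: {bottleneck} "
      f"(system throughput = {pipeline_capacity[bottleneck]})")
```

If advocacy really is the binding constraint, this is the argument for directing marginal resources there rather than to research.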
Related Posts
[..] we’ll need to shift significant resources from research (which helps us understand problems better) to advocacy (which helps us change bad incentives). [link]
“[..] I estimated that we have 3 researchers for every advocate working on US AI governance, and I argued that this ratio is backwards.”
“Without political power, we can’t change the bad incentives of AI developers that are very likely to lead to the collapse of human civilization.”
“Thus, I urge AI safety grantmakers to aggressively recruit as many political advocacy experts as possible.” [link]