Comments

Great post! On institutional design, do you have any advice for making the proposal less abstract, and thereby more valuable?

I cannot shake the feeling that just about anyone can whip up a structure/design that considers a couple of the stakeholders - what would a design that actually moves the needle need to be able to do?

I agree with the concern about accidentally making it harder for x-risk regulations to be passed - probably also something to keep in mind for the part of the community that works on mitigating the misuse of AI.
Here are some concerns I have on this specific point; I am curious what people think about them:

1. Policy Feasibility: Policymakers often operate on short-term electoral cycles, which inherently conflict with the long-term nature of x-risks. This temporal mismatch reduces the likelihood of substantial policy action. Therefore, advocacy strategies should focus on aligning x-risk mitigation with short-term political incentives. 

2. Incrementalism as Bayesian Updating: A step-by-step regulatory approach can serve as real-world Bayesian updating. Initial, simpler policies act as 'experiments' whose outcomes inform more complex policies (see the sketch after this list). This iterative process increases the likelihood of converging on effective long-term strategies.

3. Balanced Multi-Tiered Regulatory Approach: Addressing immediate societal concerns and misuse (like deepfakes) seems a necessary component of any sweeping AI x-risk regulation, since those concerns sit within the Overton window and on constituents' minds. Passing something aimed only at x-risk, and not at the other concerns, would require significant political or social capital.
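
To make point 2 concrete, here is a minimal sketch of incrementalism as Bayesian updating, assuming (purely for illustration) that each incremental policy yields a binary outcome, such as whether a disclosure rule measurably reduced misuse incidents; the prior and all outcome counts are invented:

```python
def update_beta(alpha: float, beta: float, successes: int, failures: int) -> tuple[float, float]:
    """Conjugate Beta-Binomial update of the belief that a policy lever works."""
    return alpha + successes, beta + failures

# Weak, agnostic prior over "this class of regulation is effective".
alpha, beta = 1.0, 1.0

# Each incremental policy acts as an experiment whose outcome informs the next step.
for successes, failures in [(3, 1), (2, 2), (4, 0)]:  # illustrative outcomes
    alpha, beta = update_beta(alpha, beta, successes, failures)
    print(f"posterior mean effectiveness: {alpha / (alpha + beta):.2f}")
```

Each simpler policy narrows the uncertainty before the more complex one is drafted, which is the whole point of the incremental approach.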

By establishing regulatory frameworks that address the more immediate concerns, scored against a multi-variate utility function rather than x-risk reduction alone, we can probably lay the groundwork for more complex regulations aimed at existential risks. This is also why I think x-risk policy advocates come off as radical, robotic, or "a bit out there": they are so focused on talking about x-risk that they neglect the more immediate, short-term human concerns.
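
As a toy illustration of scoring regulations against a multi-variate utility function rather than x-risk reduction alone (every attribute, weight, and score below is made up, not drawn from the post):

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    xrisk_reduction: float   # 0..1: long-term existential-risk benefit
    misuse_reduction: float  # 0..1: near-term harms such as deepfakes
    political_cost: float    # 0..1: capital needed to pass it

# Illustrative weights; the negative weight penalizes political cost.
WEIGHTS = {"xrisk": 0.5, "misuse": 0.3, "cost": -0.2}

def utility(p: Policy) -> float:
    return (WEIGHTS["xrisk"] * p.xrisk_reduction
            + WEIGHTS["misuse"] * p.misuse_reduction
            + WEIGHTS["cost"] * p.political_cost)

candidates = [
    Policy("pure x-risk bill", xrisk_reduction=0.9, misuse_reduction=0.1, political_cost=0.9),
    Policy("tiered bill (misuse + x-risk)", xrisk_reduction=0.6, misuse_reduction=0.8, political_cost=0.4),
]
print(max(candidates, key=utility).name)  # the tiered bill wins under these weights
```

Under these invented numbers the tiered bill scores higher, mirroring the claim that bundling immediate concerns with x-risk measures is the more feasible route.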

With x-risk regulation there doesn't seem to be a silver bullet; these things will require intellectual rigour, pragmatic compromise, and iteration (and say hello to policy inertia).

Very nice approach! I like the almost algorithmic flow. Other approaches I find important:
a) talking for ~20 minutes each with 2-4 people who are working on the problem but are not too far along (so the conversation can keep an informal tone);
b) talking to 1-2 people who have no background in it (this gives a bird's-eye view);
c) going to a conference to see what kind of language people use and what the presentations at the current edge of development are up to; this also helps form connections (which may be useful for the first two steps).

Excellent work! I have also been pretty concerned about gaps in the global AI governance ecosystem, but a bit sceptical of how impactful focusing on developing countries would be. This essay reminds me that LMICs are still part of the ecosystem, and one hole can make for a leaky bucket.

Particularly love the bit on incentivizing checks and balances instead of forcing them on countries!