Epistemic Status: Probably not necessary, certainly not sufficient.

Think you have a good AI alignment plan? Prove it!

Build some lower-stakes mass-influence system, like a social media platform, that implements your plan. If it is more engaging and more pro-social than existing platforms, then your plan will be worth looking at. No powerful AI should be deployed until its alignment plan can pass this basic test.

Alignment plans need to deal with the fundamental divide between machines doing what we tell them to do and machines doing what we want them to do. Any proposal that addresses this problem cannot be limited to a particular machine architecture, so it should translate into a social media setting.

The aligned social media system would connect people who might do good together, facilitate cooperation, and promote ideas and messages that encourage the public good. The meta-debate about what counts as the public good seems likely to be a particularly important conversation to promote and referee.
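
To make "promote the public good" slightly more concrete: one could imagine a ranking rule that weights engagement by a pro-social score instead of optimizing engagement alone. The sketch below is a toy illustration under my own assumptions; the `engagement` and `prosocial` signals and the `prosocial_floor` cutoff are hypothetical, not details from this proposal.

```python
from dataclasses import dataclass

@dataclass
class Post:
    engagement: float  # predicted interactions/dwell time, in [0, 1]
    prosocial: float   # rated civility/cooperativeness, in [0, 1]

def rank_score(post: Post, prosocial_floor: float = 0.5) -> float:
    """Toy ranking rule: engagement only buys reach once a post
    clears a pro-social bar, so toxicity cannot be amplified."""
    if post.prosocial < prosocial_floor:
        return 0.0  # demote, never amplify
    return post.engagement * post.prosocial

# A toxic-but-viral post loses to a moderately engaging civil one.
print(rank_score(Post(engagement=0.9, prosocial=0.2)))  # 0.0
print(rank_score(Post(engagement=0.6, prosocial=0.8)))  # 0.48
```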

Perhaps establish a moral parliament to fill the role of referee. There may also need to be a moral landscape that allows for multiple ways of flourishing. All of this should probably be transparent.
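
As a toy illustration of how a moral parliament might referee, here is a minimal sketch in the spirit of weighted voting among delegates of different moral frameworks. The delegate names, weights, approval scores, and threshold are all assumptions of mine, not details from this post.

```python
def parliament_verdict(approvals: dict[str, float],
                       weights: dict[str, float],
                       threshold: float = 0.5) -> bool:
    """Each delegate (one per moral framework) scores a proposed
    action in [0, 1]; the weighted mean must clear the threshold."""
    total = sum(weights.values())
    score = sum(weights[d] * approvals[d] for d in approvals) / total
    return score >= threshold

# Hypothetical delegates and weights.
weights = {"utilitarian": 0.4, "deontological": 0.3, "virtue": 0.3}
approvals = {"utilitarian": 0.9, "deontological": 0.4, "virtue": 0.7}
print(parliament_verdict(approvals, weights))  # True (weighted score 0.69)
```

Transparency could then mean publishing the weights, the per-delegate scores, and the verdict for every refereed decision.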

Should the system create text of its own? Probably not.

If your proposal cannot make social media less toxic and more pro-social, it is not ready for primetime.
