What if we could use the theory of mechanism design from game theory as a medium to achieve AI alignment?
I watched some nice conference talks from the MIRI team about the challenges in the area of AI alignment (specifying the right utility function, making an AI reliable for its task, ways to teach an AI, et cetera). While looking into this material, and given the reward- and utility-function paradigm they...
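As a concrete illustration of mechanism design's central idea (independent of the alignment application), here is a minimal sketch of a second-price (Vickrey) auction, the textbook incentive-compatible mechanism in which truthfully reporting one's private value is a dominant strategy. The function names and numbers are illustrative assumptions, not taken from the post.

```python
# Minimal sketch of a second-price (Vickrey) auction: the highest
# bidder wins but pays only the second-highest bid, which makes
# truthful bidding a dominant strategy.

def vickrey_auction(bids):
    """Return (winner_index, price): highest bidder wins, pays 2nd-highest bid."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = order[0]
    price = bids[order[1]]
    return winner, price

def utility(value, bids, bidder):
    """Bidder's payoff: private value minus price if they win, else 0."""
    winner, price = vickrey_auction(bids)
    return value - price if winner == bidder else 0

# Bidder 0's true value is 10; the other bids are 7 and 5.
truthful = utility(10, [10, 7, 5], bidder=0)  # bids truthfully: wins, pays 7
shaded = utility(10, [6, 7, 5], bidder=0)     # shades the bid: loses the auction
print(truthful, shaded)
```

Bidding truthfully yields a payoff of 3 (wins at the second-highest bid of 7), while shading the bid to 6 loses the auction for a payoff of 0; no misreport can do better than truth-telling. Whether this kind of incentive-compatibility result transfers to the alignment setting is exactly the open question the post raises.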
Apr 4, 2021