This Sunday at noon PT, Daniel Kokotajlo will be running a meetup focused on the commitment races problem in AI. What is it? How does it relate to broader issues like equilibrium selection, bargaining, and multi-multi alignment? How dire is it? How should we go about trying to solve it?
From Daniel's post:
Consequentialists can get caught in commitment races, in which they want to make commitments as soon as possible. When consequentialists make commitments too soon, disastrous outcomes can sometimes result. The situation we are in (building AGI and letting it self-modify) may be one of these times unless we think carefully about this problem and how to avoid it.
This will be an online meetup, held in LessWrong's Walled Garden in Bayes Hall.
February 7th, 12pm PT