I’m Jose. I recently realized I wasn’t taking existential risk seriously enough, and in April, a year after I first applied, I started running a MIRIx group at my college. I’ll write summaries of the sessions I think are worth sharing. Most of the members are very new to FAI, so this will partly serve as an incentive to push upward and partly as my own review process. Hopefully some of this will be helpful to others.
This one focuses on how aligning creator intent with the base objective of an AI might not be enough for outer alignment, starting with an overview of Coherent Extrapolated Volition and its flaws. This was created in...