Tetherware #1: The case for humanlike AI with free will
In this post, I argue that more humanlike AI with greater autonomy and freedom isn't just easier to align with our values; it could also help reduce economic inequality, foster mutual collaboration and accountability, and simply make living with AI more enjoyable. I thought it particularly fitting for LessWrong and would very much appreciate rational critique. (For a TL;DR, you can skip to the last section.)

Alignment? Sure, we can help with that. Wait, what are we aligning, exactly?

Why does aligning AI systems with humans seem incredibly difficult, perhaps even unsolvable, while aligning human general intelligences among themselves appears quite doable? Some obvious arguments include:

* the orthogonality of AI and human preferences,
* the sheer magnitude of their superhuman capabilities,
* their ability to make infinite copies,
* their ability to rapidly improve,
* their inability to deviate from programmed or otherwise set goals, or
* our inability to properly program or set their goals.

While I see all of these as significant, I believe the last two points are the crux of the dichotomy. That is because they describe something that's missing, whereas the first four are properties we can build adjustments or guardrails around.

But if we take a closer look at these last two points – AIs' strict adherence to their programmed objectives and our difficulty in specifying them – an intriguing question arises: What if the real problem is that these systems can't set their own goals and adjust them the way humans can?

In humans, goals dynamically change and evolve with new information, reflection, or a spontaneous change of mind. If AI had that same capacity, we might no longer need to fear the dreaded "paperclip universe", because the system itself could decide: "Actually, this isn't right. This is not what I want to do."

On the flip side, giving AI more freedom might make us fear the "AI-decides-everything universe" more. But then again, that universe could even b