I'm very concerned with this risk, which I feel sits at the top of the catastrophic risks to humanity. With an approaching asteroid, at least we know what to watch for! As an artist, I've been working mostly on this for the last 3 years (see my TED talk "The Erotic Crisis" on YouTube), trying to think of ways to raise awareness and engage people in dialog. The more discussion, the better I feel! And I'm very grateful for this forum and all who participated!
Human beings suffer from a tragic myopic thinking that gets us into serious trouble on a regular basis. Fortunately, our mistakes so far haven't quite threatened our species (though we're wiping out plenty of others). Usually we learn by hindsight rather than by robust imaginative caution; we don't learn how to fix a weakness until it's exposed in some catastrophe. Our history by itself indicates that we won't get AI right until it's too late, although many of us will congratulate ourselves that we can THEN see exactly where we went wrong. But with AI we only get one chance.
My own fear is that the crucial factor we miss will not be some item like an algorithm we figured wrong, but rather something to do with the WAY humans think. Yes, we are children playing with terrible weapons. What is needed is not so much safer weapons or smarter inventors as a maturity that would widen our perspective. The indication that we have achieved the necessary wisdom will be when our approach is so broad that we no longer miss anything; when we notice that our learning curve has overtaken our disastrous failures. When we are no longer learning in hindsight, we will know that the time has come to take the risk of developing AI. Getting this right seems to me the pivot point on which human survival depends. And at this point it's not looking too good. Like teenage boys, we're still entranced by speed and scope rather than by quality of life. (In our heads we still compete in a world of scarcity instead of stepping boldly into the cooperative world of abundance that is increasingly our reality.)
Maturity will be indicated by a race that, rather than striving to outdo the other guy, is dedicated to helping all creatures live more richly meaningful lives. This is the sort of lab condition that would likely succeed in the AI contest rather than nose-diving us into extinction. I feel human creativity is a God-like gift. I hope it is not what does us in because we were too powerful for our own good.
I long to hear a discussion of the overarching issues of the prospects of AI as seen from the widest perspective. Much as the details covered in this discussion are fascinating and compelling, the subject also deserves an approach from the perspective not only of the future of this civilization and humanity at large, but of our relationship with the rest of nature and the cosmos. ASI would essentially trump earthly "nature" as we know it (through evolution, geo-engineering, nanotech, etc., though certainly not the laws of nature). This will raise all kinds of new problems that have yet to occur to us in this slice of time.
I think it would be fruitful to discuss ultimate issues, like: how does the purpose of humanity intersect with nature? Is the desire for more and more just a precursor to suicide, or is there some utopian vision that is actually better than the natural world we've been born into? Why do we think we will enjoy being under the control of ASI any more than we enjoy that of our parents, an authoritarian government, fate, or God? Is intelligence a non-survivable mutation? Regardless of what is achieved in the end, it seems to me that almost all the issues we've been discussing pale in comparison to these larger questions... I look forward to more!
There is no doubt that, given the concept of the Common Good Principle, everyone would be FOR it prior to the complete development of ASI. But once any party gains an advantage, they are not likely to share it, particularly with those they see as their competitors or enemies. This is an unfortunate fact of human nature that has little chance of evolving toward greater altruism on the necessary timescale. In both Bostrom's and Brundage's arguments there are a lot of "ifs". Yes, it would be great if we could develop AI for the Greater Good, but human nature seems to indicate that our only hope of doing so would be through an early and inextricably intertwined collaboration, so that no party would have the capability of seizing the golden ring of domination by cheating during development.
The most important issue comes down to the central question of human life: what is a life worth living? To me this is an inescapably individual question, the answer to which changes moment by moment in a richly diverse world. To assume there is a single answer to "moral rightness" is to assume a frozen moment in an ever-evolving universe, seen from the perspective of a single sentient person! We struggle even for ourselves, from one instant to the next, to determine what is right for this particular moment! Even reducing world events to an innocuous question like the choice between coffee and tea would foment an endless struggle to determine what is morally "right". There doesn't have to be a dire life-and-death choice to present moral difficulty. Whom do you try to please, and who gets to decide?
We seem to imagine that morality is like literacy in that it's provided by mere information. I disagree; I suggest it's the result of a lot of experience, most particularly failures and sufferings (and the more, the better). It's only by percolating thousands of such experiences through the active human heart that we develop a sense of wise morality. It cannot be programmed. Otherwise we would just send our kids to school and they would all emerge as saints. But we see that they tend to emerge as creatures responsive to the particular family environment from which they came. Those who were raised in an atmosphere of love often grow to become compassionate and concerned adults. Those who were abused and ignored as kids often turn out to be morally bereft adults.
In a rich and changing world it is virtually meaningless to even talk about identifying an overall moral "goodness", much as I wish it were possible and that those who are in power would actually choose that value over a narrowly self-serving alternative. It's a good discussion, but let's not fool ourselves that as a species we are mature enough to proceed to implement these ideas.
The older I get, and the more I think about the AI issues, the more I realize how perfectly our universe is designed! I think about the process of growing up: I cherish the time I spent in each stage of life, unaware of what was to come later, because there are things to be learned that can only derive from that particular segment's challenges. Each stage has its own level of "foolishness", but that is absolutely necessary for those lessons to be learned! So too I think of the catastrophes I have endured that I would not have chosen, but that I would not now trade for anything, due to the wisdom they provided me. I cannot see any way around difficult life being the supreme and loving teacher. This, I think, is what most parents would recognize in what they wish for their kids: a life not too painful, but not too easy either.
CEV assumes that there is an arrival point that is more valuable than the dynamic process we undergo daily. Much as we delight in imagining a utopia, a truly good future is one that we STRUGGLE to achieve, balancing one hard-won value against another, is it not? I have not yet heard a single concept that arrives at wisdom without a difficult journey. Even an SI that dictates our behavior so that all act in accordance with it has destroyed free will, much like a God who has revoked human volition. This leads me to the seemingly inevitable conclusion that no universe is preferable to the one we inhabit (though I have yet to see the value of the horrible events in my future that I still try like the devil to avoid!). But despite this 'perfection' we're seemingly unable to stop ourselves from destroying it.
What Davis points out needs a lot of expansion. The value problem becomes ever more labyrinthine the closer one looks. For instance, after millions of years of evolution and all of human history, we ourselves still can't agree on what we want! Even within 5 minutes of your day, your soul is aswirl with conflicts over balancing just the values that pertain to your own tiny life, let alone the fate of the species. Any attempt to infuse values into AI will reflect human conflicts, but on a much simpler yet more powerful scale.
Furthermore, the AI will figure out that humans override their better natures at a whim, agreeing universally on the evil of murder while simultaneously taking out their enemies! If there were even a possibility of programming values, we would have figured out centuries ago how to "program" psychopaths with better values (a psychopath being essentially a perfect AI missing just one thing: perfectly good values). I believe we are fooling ourselves to think a moral machine is possible.
I would also add that "turning on" the AI is not a good analogy. It becomes smarter than us in increments (as with Deep Blue, Watson, the Turing test, etc.). Just as with Hitler growing up, there will not be a "moment" when the evil appears; rather, it will overwhelm us from our blind spot, suddenly in control without our awareness...
Because what any human wants is a moving target. As soon as someone else delivers exactly what you ask for, you will be disappointed unless you suddenly stop changing. Think of the dilemma of eating something you know you shouldn't. Whatever you decide, as soon as anyone (AI or human) takes away your freedom to change your mind, you will likely rebel furiously. Human freedom is a huge value that any FAI of any description will be unable to deliver until we are no longer free agents.
This is Yudkowsky's Hidden Complexity of Wishes problem from the human perspective. The concept of "caring" is rooted so deeply (in our flesh, I insist) that we cannot express it. Getting across the idea to AI that you care about your mother is not the same as asking for an outcome. This is why the problem is so hard. How would you convince the AI, in your first example, that your care was real? Or in your #2, that your wish was different from what it delivered? And how do you tell, you ask? By being disappointed in the result! (For instance in Yudkowsky's example, when the AI delivers Mom out of the burning building as you requested, but in pieces.)
My point is that value is not a matter of cognition of the brain, but caring from the heart. When AI calls your insistence that it didn't deliver what you wanted "prejudice", I don't think you'd be happy with the above defense.
Very helpful! Thank you, Katja, for your moderation and insights. I will be returning often to reread portions and follow links to more. I hope there will be more similar opportunities in the future!