Point of clarification: if human ascension to Mind status is possible, and speeding that ascension is within the power of the NFSI, how are you avoiding having at least one human mind ascend to main character status well ahead of the rest of the species?
At least one of the current six billion squishy things is going to want to enter the godmode code and ascend immediately, and if not them then one of the other trillions of Earth organisms that could be uplifted. Even if the NFSI limits the rate of ascension to the eudaimonic rate, that will vary between people; given six billion rolls of the dice (and more rolls every day), someone will have the value "really really fast" for his/her personal eudaimonic rate. Anything worth waiting for is worth having right now.
The effect seems like passing the recursive buck a very short distance. Humans create a computer that can but will not make all human efforts superfluous; the computer can and does uplift a human to equal capacities; at least one human can and may make all human efforts superfluous. Perhaps CEV includes something like, "No one should be able to get (much) smarter (much) faster than the rest of us," but restricting your intelligence because I am not ready for anyone that smart is an odd moral stance.
This post seems to be missing one important thing about the Culture universe (unless I missed it): in that universe "high-grade transhumanism", if I understand the term correctly, is possible and, if anything, common. The Culture is an aberration, one of the very few civilizations in that universe which is capable of Sublimation and yet remains in its human form. The only reason for that must be very strong cultural reasons, which are constantly reinforced, because all those who do not agree with them sublimate into incomprehensibility before they can significantly influence anything.
I think sublimation is a big literary dodge of the whole problem of recursive self-improvement, and it doesn't make much sense, either as a plot device or as an explanation.
My guess is that Eliezer will be horrified at the results of CEV-- despite the fact that most people will be happy with it.
This is obvious given the degree to which Eliezer's personal morality diverges from the morality of the human race.
Zubon, I thought of that possibility, and one possible singleton-imposed solution is "Who says that subjective time has to run at the same rate for everyone?" You could then do things fast or slow, as you preferred, without worrying about being left behind, or for that matter, worrying about pulling ahead. To look at it another way: if others can increase in power arbitrarily fast on your playing field, you may have to increase in power faster than you prefer, just to keep up with them; this is a coordination/competition problem, and two singlet...
"(b) my being a mutant,"
It looks like (especially young) humans have quite a lot of ability to pick up a wide variety of basic moral concerns, in a structured fashion, e.g. assigning ingroups, objects of purity-concerns, etc. Being raised in an environment of science-fiction and Modern Orthodox Judaism may have given you quite unusual terminal values without mutation (although personality genetics probably play a role here too). I don't think you would characterize this as an instance of c), would you?
Presumably, because if they increased their intelligence they would realize that the war is stupid and go home, which leaves fighting only those that did not. This starts to look like a rationalization rather than a serious reason, but then I always thought that the Culture books are carefully constructed to retain as much as possible of "classic" science fiction (starships! lasers! aliens!) in the face of the Singularity.
Carl, I would indeed call that an "uncommon but self-consistent attractor" if we assume that it is neither convergent, mistaken, nor mutated. As far as I can tell, those four possibilities seem to span the set - am I missing anything?
I'm just confused by your distinction between mutation and other reasons to fall into different self-consistent attractors. I could wind up in one reflective equilibrium rather than another because I happened to consider one rational argument before another, because of early exposure to values, genetic mutations, infectious diseases, nutrition, etc, etc. It seems peculiar to single out the distinction between genetic mutation and everything else. I thought 'mutation' might be a shorthand for things that change your starting values or reflective processes before extensive moral philosophy and reflection, and so would include early formation of terminal values by experience/imitation, but apparently not.
"If you already had the lifespan and the health and the promise of future growth, would you want new powerful superintelligences to be created in your vicinity, on your same playing field?"
Yes, definitely. If nothing else, it means diversity.
"Or would you prefer that we stay on as the main characters in the story of intelligent life, with no higher beings above us?"
I do not care, as long as the story continues.
And yes, I would like to hear the story - which is about the same thing I would get in case Minds are prohibited. I will not be th...
I saw it from the other side: "why on earth would humans not choose to uplift" - given the quite reasonable contextual expectation that they could just ask and receive. The real problem with that universe is not a lack of things for humans to do, but a lack of things for anybody to do. Minds are hardly any better placed. I could waste my time as a human dabbling uselessly in obsolete skills, or as a Mind acting as a celestial truck driver and bored tinkerer on the edges of other people's civilizations - what a worthless choice.
Julian Morrison:
Or you can invert the issue once again. You can enjoy spending your time on obsolete skills (like sports, arts, or carving table legs...).
There is no shortage of things to do; there is only a problem with your definition of "worthless".
Eliezer (about Sublimation):
"Ramarren, Banks added on that part later, and it renders a lot of the earlier books nonsensical - why didn't the Culture or the Idirans increase their intelligence to win their war, if it was that easy? I refuse to regard Excession as canon; it never happened."
Just a technical (or fandom?) note:
A Sublimed civilization is central to the plot of Consider Phlebas (Schar's World, where the Mind escapes, is "protected" by a Sublimed civilization; that is why direct military action by either the Idirans or the Culture is impossible).
luzr, in Consider Phlebas, the term "Sublimed" is never used. It is implied that the Dra'Azon are simply much older than the Culture and hence more powerful - a very standard idiom in SF which makes no mention of deliberately refraining from progress at higher speeds. In Consider Phlebas, the Culture is implied to be advancing its technology as fast as possible in order to fight the war.
Julian, what in any possible reality would count as "something to do"?
Eliezer:
It is really off-topic, and I do not have a copy of Consider Phlebas at hand now, but
http://en.wikipedia.org/wiki/Dra%27Azon
Even if Banks had not mentioned 'Sublimed' in the first novel, the concept exactly fits the Dra'Azon.
Besides, the Culture is not really advancing its 'base' technology, but rather rebuilding its infrastructure into a war machine.
"And I will not, if at all possible, give any other human being the least cause to think that someone else might spark a better Singularity. I can make no promises upon the future, but I will at least not close off desirable avenues through my own actions."
A possible problem here is that your high entry requirements may well allow others with lower standards to create a superintelligence before you do.
So: since you seem to think that would be pretty bad, and since you say you are a consequentialist - and belie...
I do have a copy of Consider Phlebas on hand, and reread it, along with Player of Games, before writing this post. Wikipedia can say what it likes, but the term "Sublimed" is certainly never used, nor is anything like the concept of "deliberately refused hard takeoff" implied. The Culture is advancing its base technology level, as implied by the notion of an unusually advanced Mind prototype, capable of feats thought to be impossible at the Culture's technology level, being lost on Schar's World. "Subliming" is an obvious later ...
Eliezer, I'm confused what you're asking. Read literally, you're asking for a summary description of reachable fun space, which you can make better than I can. All the other parses I can see are more confusing than that. Plain text doesn't carry tone. Please could you elaborate?
Consider Phlebas is subpar Culture, and Player of Games is the perfect introductory book but still not full-power Banks. Use of Weapons, Look to Windward, Inversions... and Feersum Endjinn is my favourite non-Culture.
More to the point, however, Look to Windward discusses some of the points you raise. I'm just going by memory here, but one of the characters, Cr. Ziller, a brilliant and famous non-human composer, asks a Mind whether it could create symphonies as beautiful as his and how hard it would be. The Mind answers that yes, it could (and we get the impression tha...
David:
"asks a Mind whether it could create symphonies as beautiful as his and how hard it would be"
On a somewhat related note, there are still human chess players and competitions...
I agree with Unknown. It seems that Eliezer's intuitions about desirable futures differ greatly from many of the rest of us here at this blog, and most likely even more from the rest of humanity today. I see little evidence that we should explain this divergence as mainly due to his "having moved further toward reflective equilibrium." Without a reason to think he will have vastly disproportionate influence, I'm having trouble seeing much point in all these posts that simply state Eliezer's intuitions. It might be more interesting if he argued for those intuitions, engaging with existing relevant literatures, such as in moral philosophy. But what is the point of just hearing his wish lists?
Robin, it's not clear to me what further kind of argument you think I should offer. I didn't just flatly state "the problem with the Culture is the Minds", I described what my problem was, and offered Narnia as a simplified case where the problem is especially stark.
It's not clear to me what constitutes an "argument" beyond sharing the mental images that invoke your preferences, in this matter of terminal values. What other sort of answer could I give to "Why don't you think that's fun?" Would you care to briefly state a co...
I have a lot of sympathy for what Unknown said here:
"My guess is that Eliezer will be horrified at the results of CEV-- despite the fact that most people will be happy with it."
And Carl Shulman has a very good point here:
"It looks like (especially young) humans have quite a lot of ability to pick up a wide variety of basic moral concerns, in a structured fashion, e.g. assigning ingroups, objects of purity-concerns, etc. Being raised in an environment of science-fiction and Modern Orthodox Judaism may have given you quite unusual terminal v...
If there's nothing more to life than eliminating suffering, you might as well eliminate life and be done.
I nominate this for the next "Rationality Quotes".
Doesn't this line of thinking make the case for Intelligence Augmentation (IA) over that of FAI? And let me qualify that when I say IA, I really mean friendly intelligence augmentation relative to friendly artificial intelligence. If you could 'level up' all of humanity to the wisdom and moral ethos of 'friendliness', wouldn't that be the most important step to take first and foremost? If you could reorganize society and reeducate humans in such a way to make a friendly system at our current level of scientific knowledge and technology, that would almo...
Eliezer,
I have to question your literary interpretation of the Culture. Is Banks' intention really to show an idealized society? I think the problem of the Minds that you describe is used by Banks to show the existential futility of the Culture's activities. The Culture sans Minds would be fairly run-of-the-mill sci-fi. With all of its needs met (even thinking), it throws into question every action the Culture takes, particularly the meddlesome ones. That's the difference between Narnia and the Culture; Aslan has a wonderful plan for the children's lives, ...
haig, one might also believe that Friendly Artificial Intelligence is easier than Friendly Biological Intelligence. We have relatively few examples of FBI and no consistent, reliable way to reproduce it. FAI, if it works, works on better hardware with software that is potentially provably correct, and you can copy that formula.
AI is often mocked because it has been "almost there" for about 50 years, and FAI is a new subset of that. Versions of FBI have been attempted for at least 4000 years, suggesting that the problem may be difficult.
Eliezer, what do you have against "Excession"? It's been a while since I last read them, but I thought it was the 2nd best of the Culture books after "Use of Weapons". I do agree that "Player of Games" is the best place to start though (I started with Consider Phlebas but found it a little dry).
Anyway, as for your actual point, I think it sounds reasonable at least on the surface, but considering this stuff too deeply may be putting the cart before the horse somewhat when we're not even very sure what causes consciousness in the first place, or what the details of its workings are, and therefore to what extent a non-conscious yet correctly working FAI is even possible or desirable.
Eliezer:
"Narnia as a simplified case where the problem is especially stark."
I believe there are at least two significant differences:
Aslan was not created by humans; he does not represent the "story of intelligence" (quite the contrary: lesser intelligences were created by Aslan, if you interpret him as God).
There is only a single Aslan with a single predetermined "goal", while there are millions of Culture Minds with no single "goal".
(Actually, the second point is what I dislike so much about the idea of a singleton - it can turn into something like a benevolent but oppressive God too easily. Aslan IS the Narnia Singleton.)
The concern expressed above over the consistency of the Culture universe seems unnecessary. The quality of construction of the Culture universe and its stories is non-trivial, and hence, as with all things, one absorbs what is useful and progresses forward.
I read Amputation of Destiny and your subsequent replies with interest, Eliezer; here's my contribution.
The Problem With The Minds could also read The Entire Reason For The Culture-Idiran War. The Idirans consider sentient machines an abomination, or to quote Consider Phlebas:
"The fools in the Culture couldn't see that one day the Minds would start thinking how wasteful and inefficient the humans in the Culture themselves were."
It's not a plot flaw, it's a plot device and it occurs throughout the series.
Your Living By Your Own Strength Point I don't agr...
"If there's nothing more to life than eliminating suffering, you might as well eliminate life and be done."
This only applies if non-existence is considered a preferable state to existence. Obviously the Culture AIs consider existence preferable, and thus strive to make human existence as suffering-free as possible.
If you need to live in a world where you are needed, then you go ahead and live there, but please send me to the Culture (I haven't read these books so I'm only going off your initial quote).
Or if the very existence of that option strips the meaning from your life, then you modify yourself. Not me.
So I wonder if when you're really good at something and you die and go to heaven, if there is some dude who was doing it 2000 years ago, who's been doing it heaven the whole time, who's like.. 2000 years better at it then you
and like.. you try to catch up, but it's like.. he's always 2000 years better
so you get really depressed and try to kill yourself, but you're already dead
reincarnation solves a lot of these problems
-- #perl
I think the proposed solution presented here is suboptimal and would lead to a race to the bottom, or alternatively lead to most people being excluded from the potential to ever do anything that they get to feel matters (and I think a much better solution exists):
If people can enhance themselves then it becomes impossible to earn any real status except via luck. Essentially it's like a modified version of that Syndrome quote "When everyone is exceptional and talented, then no one will be".
Alternatively if you restrict people's ability to self modify ...
Followup to: Nonsentient Optimizers, Can't Unbirth a Child
From Consider Phlebas by Iain M. Banks:
In practice as well as theory the Culture was beyond considerations of wealth or empire. The very concept of money—regarded by the Culture as a crude, over-complicated and inefficient form of rationing—was irrelevant within the society itself, where the capacity of its means of production ubiquitously and comprehensively exceeded every reasonable (and in some cases, perhaps, unreasonable) demand its not unimaginative citizens could make. These demands were satisfied, with one exception, from within the Culture itself. Living space was provided in abundance, chiefly on matter-cheap Orbitals; raw material existed in virtually inexhaustible quantities both between the stars and within stellar systems; and energy was, if anything, even more generally available, through fusion, annihilation, the Grid itself, or from stars (taken either indirectly, as radiation absorbed in space, or directly, tapped at the stellar core). Thus the Culture had no need to colonise, exploit, or enslave.
The only desire the Culture could not satisfy from within itself was one common to both the descendants of its original human stock and the machines they had (at however great a remove) brought into being: the urge not to feel useless. The Culture's sole justification for the relatively unworried, hedonistic life its population enjoyed was its good works; the secular evangelism of the Contact Section, not simply finding, cataloguing, investigating and analysing other, less advanced civilizations but—where the circumstances appeared to Contact to justify so doing—actually interfering (overtly or covertly) in the historical processes of those other cultures.
Raise the subject of science-fictional utopias in front of any halfway sophisticated audience, and someone will mention the Culture. Which is to say: Iain Banks is the one to beat.
Iain Banks's Culture could be called the apogee of hedonistic low-grade transhumanism. Its people are beautiful and fair, as pretty as they choose to be. Their bodies have been reengineered for swift adaptation to different gravities, and also reengineered for greater sexual endurance. Their brains contain glands that can emit various euphoric drugs on command. They live, in perfect health, for generally around four hundred years before choosing to die (I don't quite understand why they would, but this is low-grade transhumanism we're talking about). Their society is around eleven thousand years old, and held together by the Minds, artificial superintelligences decillions of bits big, that run their major ships and population centers.
Consider Phlebas, the first Culture novel, introduces all this from the perspective of an outside agent fighting the Culture—someone convinced that the Culture spells an end to life's meaning. Banks uses his novels to criticize the Culture along many dimensions, while simultaneously keeping the Culture a well-intentioned society of mostly happy people—an ambivalence which saves the literary quality of his books, avoiding either utopianism or dystopianism. Banks's books vary widely in quality; I would recommend starting with Player of Games, the quintessential Culture novel, which I would say achieves greatness.
From a fun-theoretic perspective, the Culture and its humaniform citizens have a number of problems, some already covered in this series, some not.
The Culture has deficiencies in High Challenge and Complex Novelty. There are incredibly complicated games, of course, but these are games—not things with enduring consequences, woven into the story of your life. Life itself, in the Culture, is neither especially challenging nor especially novel; your future is not an unpredictable thing about which to be curious.
Living By Your Own Strength is not a theme of the Culture. If you want something, you ask a Mind how to get it; and they will helpfully provide it, rather than saying "No, you figure out how to do it yourself." The people of the Culture have little use for personal formidability, nor for a wish to become stronger. To me, the notion of growing in strength seems obvious, and it also seems obvious that the humaniform citizens of the Culture ought to grow into Minds themselves, over time. But the people of the Culture do not seem to get any smarter as they age; and after four hundred years or so, they displace themselves into a sun. These two literary points are probably related.
But the Culture's main problem, I would say, is...
...the same as Narnia's main problem, actually. Bear with me here.
If you read The Lion, the Witch, and the Wardrobe or saw the first Chronicles of Narnia movie, you'll recall—
—I suppose that if you don't want any spoilers, you should stop reading here, but since it's a children's story and based on Christian theology, I don't think I'll be giving away too much by saying—
—that the four human children who are the main characters, fight the White Witch and defeat her with the help of the great talking lion Aslan.
Well, to be precise, Aslan defeats the White Witch.
It's never explained why Aslan ever left Narnia a hundred years ago, allowing the White Witch to impose eternal winter and cruel tyranny on the inhabitants. Kind of an awful thing to do, wouldn't you say?
But once Aslan comes back, he kicks the White Witch out and everything is okay again. There's no obvious reason why Aslan actually needs the help of four snot-nosed human youngsters. Aslan could have led the armies. In fact, Aslan did muster the armies and lead them before the children showed up. Let's face it, the kids are just along for the ride.
The problem with Narnia... is Aslan.
C. S. Lewis never needed to write Aslan into the story. The plot makes far more sense without him. The children could show up in Narnia on their own, and lead the armies on their own.
But is poor Lewis alone to blame? Narnia was written as a Christian parable, and the Christian religion itself has exactly the same problem. All Narnia does is project the flaw in a stark, simplified light: this story has an extra lion.
And the problem with the Culture is the Minds.
"Well..." says the transhumanist SF fan, "Iain Banks did portray the Culture's Minds as 'cynical, amoral, and downright sneaky' in their altruistic way; and they do, in his stories, mess around with humans and use them as pawns. But that is mere fictional evidence. A better-organized society would have laws against big Minds messing with small ones without consent. Though if a Mind is truly wise and kind and utilitarian, it should know how to balance possible resentment against other gains, without needing a law. Anyway, the problem with the Culture is the meddling, not the Minds."
But that's not what I mean. What I mean is that if you could otherwise live in the same Culture—the same technology, the same lifespan and healthspan, the same wealth, freedom, and opportunity—
"I don't want to live in any version of the Culture. I don't want to live four hundred years in a biological body with a constant IQ and then die. Bleah!"
Fine, stipulate that problem solved. My point is that if you could otherwise get the same quality of life, in the same world, but without any Minds around to usurp the role of main character, wouldn't you prefer—
"What?" cry my transhumanist readers, incensed at this betrayal by one of their own. "Are you saying that we should never create any minds smarter than human, or keep them under lock and chain? Just because your soul is so small and mean that you can't bear the thought of anyone else being better than you?"
No, I'm not saying—
"Because that business about our souls shriveling up due to 'loss of meaning' is typical bioconservative neo-Luddite propaganda—"
Invalid argument: the world's greatest fool may say the sun is shining but that doesn't make it dark out. But in any case, that's not what I'm saying—
"It's a lost cause! You'll never prevent intelligent life from achieving its destiny!"
Trust me, I—
"And anyway it's a silly question to begin with, because you can't just remove the Minds and keep the same technology, wealth, and society."
So you admit the Culture's Minds are a necessary evil, then. A price to be paid.
"Wait, I didn't say that -"
And I didn't say all that stuff you're imputing to me!
Ahem.
My model already says we live in a Big World. In which case there are vast armies of minds out there in the immensity of Existence (not just Possibility) which are far more awesome than myself. Any shrivelable souls can already go ahead and shrivel.
And I just talked about people growing up into Minds over time, at some eudaimonic rate of intelligence increase. So clearly I'm not trying to 'prevent intelligent life from achieving its destiny', nor am I trying to enslave all Minds to biological humans scurrying around forever, nor am I etcetera. (I do wish people wouldn't be quite so fast to assume that I've suddenly turned to the Dark Side—though I suppose, in this day and era, it's never an implausible hypothesis.)
But I've already argued that we need a nonperson predicate—some way of knowing that some computations are definitely not people—to avert an AI from creating sentient simulations in its efforts to model people.
And trying to create a Very Powerful Optimization Process that lacks subjective experience and other aspects of personhood is probably—though I still confess myself somewhat confused on this subject—substantially easier than coming up with a nonperson predicate.
This being the case, there are very strong reasons why a superintelligence should initially be designed to be knowably nonsentient, if at all possible. Creating a new kind of sentient mind is a huge and non-undoable act.
Now, this doesn't answer the question of whether a nonsentient Friendly superintelligence ought to make itself sentient, or whether an NFSI ought to immediately manufacture sentient Minds first thing in the morning, once it has adequate wisdom to make the decision.
But there is nothing except our own preferences, out of which to construct the Future. So though this piece of information is not conclusive, nonetheless it is highly informative:
If you already had the lifespan and the health and the promise of future growth, would you want new powerful superintelligences to be created in your vicinity, on your same playing field?
Or would you prefer that we stay on as the main characters in the story of intelligent life, with no higher beings above us?
Should existing human beings grow up at some eudaimonic rate of intelligence increase, and then eventually decide what sort of galaxy to create, and how to people it?
Or is it better for a nonsentient superintelligence to exercise that decision on our behalf, and start creating new powerful Minds right away?
If we don't have to do it one way or the other—if we have both options—and if there's no particular need for heroic self-sacrifice—then which do you like?
"I don't understand the point to what you're suggesting. Eventually, the galaxy is going to have Minds in it, right? We have to find a stable state that allows big Minds and little Minds to coexist. So what's the point in waiting?"
Well... you could have the humans grow up (at some eudaimonic rate of intelligence increase), and then when new people are created, they might be created as powerful Minds to start with. Or when you create new minds, they might have a different emotional makeup, which doesn't lead them to feel overshadowed if there are more powerful Minds above them. But we, who already exist—we might prefer to stay on as the main characters, for now, if given a choice.
"You are showing far too much concern for six billion squishy things who happen to be alive today, out of all the unthinkable vastness of space and time."
The Past contains enough tragedy, and has seen enough sacrifice already, I think. And I'm not sure that you can cleave off the Future so neatly from the Present.
So I will set out as I mean the future to continue: with concern for the living.
The sound of six billion faces being casually stepped on, does not seem to me like a good beginning. Even the Future should not be assumed to prefer that another chunk of pain be paid into its price.
So yes, I am concerned for those currently alive, because it is that concern—and not a casual attitude toward the welfare of sentient beings—which I wish to continue into the Future.
And I will not, if at all possible, give any other human being the least cause to think that someone else might spark a better Singularity. I can make no promises upon the future, but I will at least not close off desirable avenues through my own actions. I will not, on my own authority, create a sentient superintelligence which may already determine humanity as having passed on the torch. It is too much to do on my own, and too much harm to do on my own—to amputate someone else's destiny, and steal their main character status. That is yet another reason not to create a sentient superintelligence to start with. (And it's part of the logic behind the CEV proposal, which carefully avoids filling in any moral parameters not yet determined.)
But to return finally to the Culture and to Fun Theory:
The Minds in the Culture don't need the humans, and yet the humans need to be needed.
If you're going to have human-level minds with human emotional makeups, they shouldn't be competing on a level playing field with superintelligences. Either keep the superintelligences off the local playing field, or design the human-level minds with a different emotional makeup.
"The Culture's sole justification for the relatively unworried, hedonistic life its population enjoyed was its good works," writes Iain Banks. This indicates a rather unstable moral position. Either the life the population enjoys is eudaimonic enough to be its own justification, an end rather than a means; or else that life needs to be changed.
When people are in need of rescue, this is a goal of the overriding-static-predicate sort, where you rescue them as fast as possible, and then you're done. Preventing suffering cannot provide a lasting meaning to life. What happens when you run out of victims? If there's nothing more to life than eliminating suffering, you might as well eliminate life and be done.
If the Culture isn't valuable enough for itself, even without its good works—then the Culture might as well not be. And when the Culture's Minds could do a better job and faster, "good works" can hardly justify the human existences within it.
The human-level people need a destiny to make for themselves, and they need the overshadowing Minds off their playing field while they make it. Having an external evangelism project, and being given cute little roles that any Mind could do better in a flash, so as to "supply meaning", isn't going to cut it.
That's far from the only thing the Culture is doing wrong, but it's at the top of my list.