luzr

Eliezer:
"Narnia as a simplified case where the problem is especially stark."
I believe there are at least two significant differences:
Aslan was not created by humans and does not represent the "story of intelligence" (quite the contrary: the lesser intelligences were created by Aslan, if you interpret him as God).
There is only a single Aslan with a single predetermined "goal", while there are millions of Culture Minds with no single "goal".
(Actually, the second point is what I dislike so much about the idea of a singleton - it can turn into something like a benevolent but oppressive God too easily. Aslan IS the Narnia singleton.)
David:
"asks a Mind whether it could create symphonies as beautiful as it and how hard it would be"
On a somewhat related note, there are still human chess players and competitions...
Eliezer:
It is really off-topic, and I do not have a copy of Consider Phlebas at hand now, but
http://en.wikipedia.org/wiki/Dra%27Azon
Even if Banks did not mention 'sublimed' in the first novel, the concept fits Dra'Azon exactly.
Besides, the Culture is not really advancing its 'base' technology, but rather rebuilding its infrastructure into a war machine.
Eliezer (about Sublimation):
"Ramarren, Banks added on that part later, and it renders a lot of the earlier books nonsensical - why didn't the Culture or the Idarans increase their intelligence to win their war, if it was that easy? I refuse to regard Excession as canon; it never happened."
Just a technical (or fandom?) note:
A Sublimed civilization is central to the plot of Consider Phlebas (Schar's World, where the Mind escapes to, is "protected" by a Sublimed civilization - that is why direct military action by either the Idirans or the Culture is impossible).
Julian Morrison:
Or you can turn the issue around once again. You can enjoy your time on obsolete skills (like sports, arts, or carving table legs...).
There is no shortage of things to do; there is only a problem with your definition of "worthless".
"If you already had the lifespan and the health and the promise of future growth, would you want new powerful superintelligences to be created in your vicinity, on your same playing field?"
Yes, definitely. If nothing else, it means diversity.
"Or would you prefer that we stay on as the main characters in the story of intelligent life, with no higher beings above us?"
I do not care, as long as the story continues.
And yes, I would like to hear the story - which is about the same thing I would get if Minds were prohibited anyway. I will not be the main character of the story either way, so why should I care?
"Should existing human beings... (read more)
anon: "The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not."
I am quite aware of that. Anyway, using "cheesecake" as a placeholder adds a bias to the whole story.
"Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings."
Indeed. So what? In reality, I am quite interested in what a superintelligence would really consider valuable. But I am pretty sure that "big cheesecake" is unlikely.
Thinking about it, AFAIK Eliezer considers himself a rationalist. Is not a big part of rationalism about questioning values that are merely consequences of our long history?
"these trillions of people also cared, very strongly, about making giant cheesecakes."
Uh oh. IMO, that is a fallacy. You introduce a quite reasonable scenario, then inject some nonsense, without any logic or explanation, to make it look bad.
You should better explain when, on the way from a single sentient AI to voting rights for trillions, cheesecakes came into play. Is it that all sentient beings are automatically programmed to like creating big cheesecakes? Or something equally bizarre?
Subtract the cheesecakes and your scenario is quite OK with me, including 0.1% of the galaxy for humans and 99.9% for AIs. 0.1% of the galaxy is about 200 million stars...
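(A quick sanity check, assuming the Milky Way has on the order of 2×10^11 stars: 0.1% of that is 0.001 × 2×10^11 = 2×10^8, i.e. roughly 200 million stars.)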
BTW, it is most likely that without sentient AI there will be no human (or human-originated) presence outside the solar system anyway.
Well, so far, my understanding is that your suggestion is to create a nonsentient utility maximizer programmed to stop research in certain areas (especially research into creating sentient AI, right?). Thanks, but I believe I have a better idea.
Uhm, maybe it is naive, but if you have a problem that your own mind is too weak to decide, and you have a really strong (Friendly) superintelligent GAI, would it not be logical to use the GAI's strong mental processes to resolve the problem?
Eliezer:
I am starting to be sort of frightened by your premises - especially considering that there is a non-zero probability of creating some nonsentient singleton that tries to realize your values.
Before going any further, I STRONGLY suggest that you think AGAIN about what might be interesting in carving wooden legs.
Yes, I like to WATCH MOVIES with strong main characters going through hell. But I would not want any of that for myself.
It does not matter that an AI can do everything better than me. Right now, I am not the best at carving wood either. But working with wood is still fun. So is swimming, skiing, playing chess (despite the fact that a computer can beat you every time), caring for animals, etc.
I do not need to do dangerous things to be happy. I am definitely sure about that.