Comments

TGGP2 · 16y

Humans having this kind of tendency is a predictable result of what their design was optimized to do, and as such them having it doesn't imply much for minds from a completely different part of mind design space.
Eliezer seems to be saying his FAI will emulate his own mind as it would be if it were far more knowledgeable and had heard all the arguments.

TGGP2 · 16y

Then they don't know the true difference between the two possible lives, do they?
"True difference" gets me thinking of "no true Scotsman". Has there ever been anybody who truly knew the difference between two possible lives? Even if someone could be reincarnated and retain memories the order would likely alter their perceptions.

I'm very interested in how Eliezer gets from his meta-ethics to utilitarianism
He's not a strict utilitarian in the "happiness alone" sense. He has an aversion to wireheading, which would maximize utility in its classic sense.

I know you frown upon mentioning evolutionary psychology, but is it really a huge stretch to surmise that the more even-keeled, loving and peaceful tribes of our ancestors would out-survive the wilder warmongers who killed each other out?
Yes, it is. The peaceful ones would be vulnerable to being wiped out by the more warlike ones. Or, more accurately (group selection isn't much of a factor, since variance between groups is smaller than variance within them), the members of the peaceful tribe more prone to violence would achieve dominance, as hawks among doves do. Among the Yanomamö we find high reproductive success among men who have killed: the higher the body count, the more children. War and murder appear to be human universals.
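(Editorial aside: that hawks-among-doves point can be made concrete with a small replicator-dynamics sketch of the standard hawk-dove game. The payoff numbers V, for the contested resource, and C, for the cost of a fight, are illustrative assumptions, not anything from the comment: when C > V, a rare violent strategy out-earns the population average and spreads until hawks make up roughly a fraction V/C of the tribe.)

```python
# Minimal replicator-dynamics sketch of the hawk-dove game, illustrating
# why a few violence-prone "hawks" can invade a peaceful tribe of "doves".
# The payoff numbers V (resource value) and C (cost of a fight) are
# illustrative assumptions, not taken from the comment.
V, C = 2.0, 6.0  # requires C > V for a mixed equilibrium

def payoffs(p):
    """Expected payoffs for hawk and dove when a fraction p plays hawk."""
    hawk = p * (V - C) / 2 + (1 - p) * V  # fights other hawks, exploits doves
    dove = (1 - p) * V / 2                # yields to hawks, shares with doves
    return hawk, dove

p = 0.01  # start with 1% hawks in an otherwise peaceful population
for _ in range(200):
    hawk, dove = payoffs(p)
    mean = p * hawk + (1 - p) * dove
    p = p * hawk / mean  # strategies with above-average payoff grow

print(f"equilibrium hawk fraction ~ {p:.2f} (theory: V/C = {V/C:.2f})")
```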

Eliezer's obvious awareness of rationalization is encouraging
Awareness of biases can increase errors, so it's not encouraging enough given the stakes.

Finally, I would think there would be more than one AI programmer, reducing the risk of deliberate evil
I'm not really worried about that. No one is a villain in their own story, and people we would consider deviants would likely be filtered out of the Institute and would probably be attracted to other career paths anyway. The problem exists, but I'm more concerned with well-meaning designers creating something that goes off in directions we can't anticipate.

Caledonian, Eliezer never said anything about not bothering to look for arguments. His idea is to find out how he would respond if he were confronted with all the arguments. He seems to assume that he (or the simulation of him) will correctly evaluate arguments. His point about there being no universal arguments is that he has to start with himself rather than with some ghostly ideal behind a veil of ignorance or the like.

TGGP2 · 16y

I'm near Unknown's position. I don't trust any human being with too much power. No matter how nice they seem at first, history indicates to me that they inevitably abuse it. We've been told that a General AI will have power beyond any despot known to history. Am I supposed to have that much reliance on the essential goodness within Eliezer's heart? And in case anyone brings this up, I certainly don't trust the tyranny of the majority either. I don't recognize any moral obligation to stop it because I don't recognize any obligations at all. Also, I might not live to see him or his followers immanentize the Eschaton.

Female circumcision is commonly carried out by women who've undergone the procedure themselves. So I don't think the Pygmy father will be convinced.

TGGP2 · 16y

Hal Finney, I am reminded of Steven Pinker's discussion of love between two individuals whose interests exactly coincide. He says that the two would come to form one organism, and they would be like multiple organs or cells within that organism, and so would not have to experience "love".

TGGP2 · 16y

I was once impressed by the ability of natural selection to create incredibly complicated functioning living things that can even repair and make copies of themselves. I realized that this was the result of its having vast amounts of time and material to work with, and of its relentlessly following a fitness-maximizing algorithm that a human being, with all his biases, would be apt to deviate from if pursuing it consciously; but I still felt impressed. I have never felt that way about beauty or emotion.

TGGP2 · 16y

Since that's already what I believe, it wouldn't be a change at all. I must admit though that I didn't tip even when I believed in God, but I was different in a number of ways.

I think the world would change on the margin, and that Voltaire was right when he warned of the servants stealing the silverware. The servants might also change their behavior in more desirable ways, but I don't know whether I'd prefer the result on net, and since it doesn't seem a likely possibility in the foreseeable future, I am content to remain ignorant.

TGGP2 · 16y

Sorry I'm late, but this is really a great opportunity to plug "For the law, neuroscience changes nothing and everything".

TGGP2 · 16y

Since when does science contain morality?

TGGP2 · 16y

TGGP, are you familiar with the teachings of Jesus?
Yes, I was raised Christian and I've read the Gospels. I don't think they provide an objective standard of morality, just the Jewish Pharisaic tradition filtered through a Hellenistic lens.

Matters of preference are entirely subjective, but for any evolved agent they are far from arbitrary, and subject to increasing agreement to the extent that they reflect increasingly fundamental values in common.
That is relevant to what ethics people may favor, but not to any truth or objective standard. Agreement among people is the result of subjective judgment.
