Eliezer, after you realized that attempting to build a Friendly AI is harder and more dangerous than you thought, how far did you back-track in your decision tree? Specifically, did it cause you to re-evaluate general Singularity strategies to see if AI is still the best route? You wrote the following on Dec 9 2002, but it's hard to tell whether it's before or after your "late 2002" realization.

I for one would like to see research organizations pursuing human intelligence enhancement, and would be happy to offer all the ideas I thought up for human enhancement when I was searching through general Singularity strategies before specializing in AI, if anyone were willing to cough up, oh, at least a hundred million dollars per year to get started, and if there were some way to resolve all the legal problems with the FDA.

Hence the Singularity Institute "for Artificial Intelligence". Humanity is simply not paying enough attention to support human enhancement projects at this time, and Moore's Law goes on ticking.

Aha, a light bulb just went off in my head. Eliezer did reevaluate, and this blog is his human enhancement project!

My view is similar to Robin Brandt's, but I would say that technological progress has caused the appearance of moral progress, because we responded to past technological progress by changing our moral perceptions in roughly the same direction. But different kinds of future technological progress may cause further changes in orthogonal or even opposite directions. It's easy to imagine, for example, that slavery might make a comeback if a perfect mind-control technology were invented.

Aaron, statistical mechanics also depends on particle physics being time-reversible, meaning that two different microstates at time t never evolve into the same microstate at time t+1. If this assumption is violated, then entropy can decrease over time.
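To illustrate with a toy sketch (Python, using a made-up 8-state system rather than anything from real statistical mechanics): an injective update rule preserves the Shannon entropy of a distribution over microstates, while a many-to-one update can shrink it.

```python
# Toy sketch (hypothetical 8-state system): entropy under a
# reversible (injective) update vs. a many-to-one update.
import math

def entropy(dist):
    """Shannon entropy, in bits, of a distribution given as {state: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def evolve(dist, step):
    """Push the distribution over microstates forward one tick through `step`."""
    out = {}
    for state, p in dist.items():
        out[step(state)] = out.get(step(state), 0.0) + p
    return out

N = 8
uniform = {s: 1.0 / N for s in range(N)}

reversible = lambda s: (s + 1) % N   # a permutation: no two states collide
many_to_one = lambda s: s // 2       # pairs of distinct states merge into one

print(entropy(uniform))                         # 3.0 bits
print(entropy(evolve(uniform, reversible)))     # 3.0 bits: entropy preserved
print(entropy(evolve(uniform, many_to_one)))    # 2.0 bits: entropy decreased
```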

Is there some reason why time-reversibility has to be true?

If we can imagine a universe where entropy can be made to decrease, then living beings in it will certainly evolve to take advantage of this. Why shouldn't it be the case that human beings are especially good at this, and that is what they are being used for by the machines?

Constant, if moral truths were mathematical truths, then ethics would be a branch of mathematics. There would be axiomatic formalizations of morality that do not fall apart when we try to explore their logical consequences. There would be mathematicians proving theorems about morality. We don't see any of this.

Isn't it simpler to suppose that morality was a hypothesis people used to explain their moral perceptions (such as "murder seems wrong") before we knew the real explanations, but now we find it hard to give up the word due to a kind of memetic inertia?

For those impatient to know where Eliezer is going with this series, it looks like he gave us a sneak preview a little more than a year ago. The answer is morality-as-computation.

Eliezer, hope I didn't upset your plans by giving out the ending too early. When you do get to morality-as-computation, can you please explain what exactly is being computed by morality? You already told us what the outputs look like: "Killing is wrong" and "Flowers are beautiful", but what are the inputs?

Constant wrote: So one place where one could critique your argument is in the bit that goes: "conditioned on X being the case, then our beliefs are independent of Y". The critique is that X may in fact be a consequence of Y, in which case X is itself not independent of Y.

Good point, my argument did leave that possibility open. But, it seems pretty obvious, at least to me, that game theory, evolutionary psychology, and memetics are not contingent on anything except mathematics and the environment that we happened to evolve in.

So if I were to draw a Bayesian net diagram, it would look like this:

math --------\   /--- game theory ------------\
              \ /                              \
               ------ evolutionary psychology --- moral perceptions
              / \                              /
environment -/   \--- memetics ---------------/
Ok, one could argue that each node in this diagram actually represents thousands of nodes in the real Bayesian net, and each edge is actually millions of edges. So perhaps the following could represent a simplification, for a suitable choice of "morality":
math --------\                /--- game theory ------------\
              \                /                          \
               --- morality ----- evolutionary psychology --- moral perceptions
              /                \                          /
environment -/                \--- memetics -----------/
Before I go on, do you actually believe this to be the case?

And to answer Obert's objection that Subhan's position doesn't quite add up to normality: before we knew game theory, evolutionary psychology, and memetics, nothing screened off our moral perceptions/intuitions from a hypothesized objective moral reality, so that was perhaps the best explanation available, given what we knew back then. And since that was most of human history, it's no surprise that morality-as-given feels like normality. But given what we know today, does it still make sense to insist that our meta-theory of morality add up to that normality?

Subhan: "You're not escaping that easily! How does a universe in which murder is wrong, differ from a universe in which murder is right? How can you detect the difference experimentally? If the answer to that is 'No', then how does any human being come to know that murder is wrong?" ... Obert: "Because it seems blue, just as murder seems wrong. Just don't ask me what the sky is, or how I can see it."

But we already know why murder seems wrong to us. It's completely explained by a combination of game theory, evolutionary psychology, and memetics. These explanations screen off our apparent moral perceptions from any other influence. In other words, conditioned on these explanations being true, our moral perceptions are independent of (i.e. uncorrelated with) any possible morality-as-given, even if it were to exist.

So there is a stronger argument against Obert than the one Subhan makes. It's not just that we don't know how we can know about what is right, but rather that we know we can't know, at least not through these apparent moral perceptions/intuitions.
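To make the screening-off claim concrete, here is a toy sketch (Python, with made-up conditional probability tables and three binary variables) of the chain morality-as-given → explanations → moral perceptions: once the middle node is conditioned on, the perception node no longer depends on the morality node.

```python
# Toy sketch (hypothetical numbers): in a chain
#   morality-as-given (M) -> explanations (E) -> moral perceptions (P),
# conditioning on E makes P independent of M ("screening off").
from itertools import product

p_m = {0: 0.5, 1: 0.5}                         # P(M)
p_e_given_m = {0: {0: 0.8, 1: 0.2},            # P(E | M)
               1: {0: 0.3, 1: 0.7}}
p_p_given_e = {0: {0: 0.9, 1: 0.1},            # P(P | E): depends on E only
               1: {0: 0.2, 1: 0.8}}

def joint(m, e, p):
    return p_m[m] * p_e_given_m[m][e] * p_p_given_e[e][p]

def prob_p(p, e, m=None):
    """P(P=p | E=e) or, if m is given, P(P=p | E=e, M=m), by enumeration."""
    if m is None:
        num = sum(joint(mm, e, p) for mm in (0, 1))
        den = sum(joint(mm, e, pp) for mm, pp in product((0, 1), repeat=2))
    else:
        num = joint(m, e, p)
        den = sum(joint(m, e, pp) for pp in (0, 1))
    return num / den

# Once E is fixed, additionally conditioning on M changes nothing:
for e, p in product((0, 1), repeat=2):
    assert abs(prob_p(p, e) - prob_p(p, e, m=0)) < 1e-12
    assert abs(prob_p(p, e) - prob_p(p, e, m=1)) < 1e-12
print("P is screened off from M by E")
```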

Why is it a mystery (on the morality-as-preferences position) that our terminal values can change, and specifically can be influenced by arguments? Since our genes didn't design us with terminal values that coincide with their own (i.e., "maximize inclusive fitness"), there is no reason why they would have made those terminal values unchangeable.

We (in our environment of evolutionary adaptation) satisfied our genes' terminal value as a side-effect of trying to satisfy our own terminal values. The fact that our terminal values respond to moral arguments simply means that this side-effect was stronger if our terminal values could change in this way.

I think the important question is not whether persuasive moral arguments exist, but whether such arguments form a coherent, consistent philosophical system, one that should be amenable to logical and mathematical analysis without falling apart. The morality-as-given position implies that such a system exists. I think the fact that we still haven't found this system is a strong argument against this position.

Why doesn't Zaire just divide himself in half, let each half get 1/4 of the pie, then merge back together and be in possession of half of the pie?

Or, Zaire might say: Hey guys, my wife just called and told me that she made a blueberry pie this morning and put it in this forest for me to find. There's a label on the bottom of the plate if you don't believe me. Do you still think 'fair' = 'equal division'?

Or maybe Zaire came with his dog, and claims that the dog deserves an equal share.

I appreciate the distinction Eliezer is trying to draw between the object level and the meta level. But why the assumption that the object-level procedure will be simple?
