Eliezer, exactly how many decibels of evidence would it require to persuade you that there is magic in the universe?
For example, see this claim of magic: http://www.clairval.com/lettres/en/2006/12/08/2061206.htm
How many times would a coin have to come up heads (if there were some way to test this) before there would be a chance you wouldn't defy the data in a case like this? If you saw 20 heads in a row, would you expect more of them? Or 40?
Basically, everyone knows that the probability of the LHC destroying the earth is greater than one in a million, but no one would do anything to stop the thing from running, for the same reason that no one would pay Pascal's Mugger. (My interests evidently haven't changed much!)
In fact, a superintelligent AI would easily see that the Pebble people are talking about prime numbers even if they didn't see that themselves, so as long as they programmed the AI to make "correct" heaps, it certainly would not make heaps of 8, 9, or 1957 pebbles. So if anything, this supports my position: if you program an AI that can actually communicate with human beings, you will naturally program it with a similar morality, without even trying.
Apart from that, this post seems to support TGGP's position. Even if there is some computation (i.e. primeness) which is actually determining the Pebble people's behavior, there is no particular reason to use that computation instead of some other. So if a random AI were programmed that purposely made non-prime heaps, there would be no objective problem with this. So Allan Crossman's claim that "it's positively dangerous to believe in an objective account of morality" is a completely subjective statement. It's dangerous in comparison to your subjective idea of which heaps are correct, yes, but objectively there is nothing dangerous about non-prime heaps. So there's no reason not to program an AI without regard for Friendliness. If there's something that matters, it will find it, and if nothing matters, well then nothing matters, not even being made into paperclips.
Roko: it's good to see that there is at least one other human being here.
Carl, thanks for that answer, that makes sense. But actually I suspect that normal humans have bounded utility functions that do not increase indefinitely with, for example, cheesecakes. Instead, their functions have an absolute maximum which is actually reachable, and nothing done beyond that point will increase it further.
Michael Vassar: Actually in real life I do some EXTREMELY counterintuitive things. Also, I would be happy to know the actual consequences of my beliefs. I'm not afraid that I would have to act in any particular way, because I am quite aware that I am a human being and do not have to act according to the consequences of my beliefs unless I want to. I often hold beliefs without acting on them, in fact.
If there is a 90% chance that utility maximization is correct, and a 10% chance that Roko is correct (my approximate estimates), how should one act? You cannot simply "use the math", as you suggest, because conditional on the 10% chance, you shouldn't be using the math at all.
Nick, can you explain how that happens with bounded utility functions? I was thinking basically something like this: if your maximum utility is 1000, then something that has a probability of one in a million can't have a high expected value or disvalue, because that probability can't be multiplied by a utility of more than 1000, and so the expected value can't be more than 0.001.
This seems to me the way humans naturally think, and the reason that sufficiently low-probability events are simply ignored.
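A minimal arithmetic sketch of that point, using the same illustrative numbers from the comment above (a utility cap of 1000 and a one-in-a-million event; neither figure is canonical, they are just the ones I used):

    # Sketch of the bound: with a bounded utility function, a rare event's
    # contribution to expected utility can never exceed probability * cap.
    UTILITY_CAP = 1000.0   # assumed absolute maximum of the bounded utility function
    p_event = 1e-6         # one-in-a-million event

    max_contribution = p_event * UTILITY_CAP
    print(max_contribution)  # 0.001 -- too small to move any real decision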
From Nick Bostrom's paper on infinite ethics:
"If there is an act such that one believed that, conditional on one’s performing it, the world had a 0.00000000000001% greater probability of containing infinite good than it would otherwise have (and the act has no offsetting effect on the probability of an infinite bad), then according to EDR one ought to do it even if it had the certain side‐effect of laying to waste a million human species in a galactic‐scale calamity. This stupendous sacrifice would be judged morally right even though it was practically certain to achieve no good. We are confronted here with what we may term the fanaticism problem."
Later:
"Aggregative consequentialism is often criticized for being too “coldly numerical” or too revisionist of common morality even in the more familiar finite context. Suppose that I know that a certain course of action, though much less desirable in every other respect than an available alternative, offers a one‐in‐a‐million chance of avoiding catastrophe involving x people, where x is finite. Whatever else is at stake, this possibility will overwhelm my calculations so long as x is large enough. Even in the finite case, therefore, we might fear that speculations about low‐probability‐high‐stakes scenarios will come to dominate our moral decision making if we follow aggregative consequentialism."
Exactly. Utility maximization together with an unbounded utility function necessarily leads to what Nick calls fanaticism. This is the usual use of the term: people call other people fanatics when their utility functions seem to be unbounded.
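To make the fanaticism point concrete, here is a rough sketch contrasting the unbounded and bounded cases; every number in it (the probability of the mugger's threat, the astronomical payoff, the cost of paying) is invented purely for illustration:

    # Sketch: a Pascal's-Mugging-style offer under unbounded vs. bounded utility.
    # All figures below are made up for illustration only.
    p_threat = 1e-20         # tiny probability that the mugger's threat is real
    huge_payoff = 1e30       # stand-in for an astronomically large (unbounded) utility
    cost_of_paying = 5.0     # utility lost by handing over the wallet

    # Unbounded utility: the tiny probability times the huge payoff still
    # dwarfs the cost, so the expected-utility maximizer pays.
    unbounded_gain = p_threat * huge_payoff - cost_of_paying   # 1e10 - 5 > 0

    # Bounded utility (capped at, say, 1000): the same probability can
    # contribute at most p_threat * 1000, so the cost wins and the offer is refused.
    bounded_gain = p_threat * 1000.0 - cost_of_paying          # about -5 < 0

    print(unbounded_gain > 0, bounded_gain > 0)  # True False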
As Eliezer has pointed out, it is a dangerous sign when many people agree that something is wrong without agreeing why; we see this happening in the case of Pascal's Wager and Pascal's Mugging. In reality, a utility maximizer with an unbounded utility function would accept both. The readers of this blog, being human, are not utility maximizers. But they are unwilling to admit it because certain criteria of rationality seem to require being such.
The "mistake" Michael is talking about it the belief that utility maximization can lead to counter intuitive actions, in particular actions that humanly speaking are bound to be useless, such as accepting a Wager or a Mugging.
This is in fact not a mistake at all, but a simple fact (as Carl Shulman and Nick Tarleton suspect). The belief that it does not is simply a result of Anthropomorphic Optimism as Eliezer describes it, i.e. "This particular optimization process, especially because it satisfies certain criteria of rationality, must come to the same conclusions I do." Have you ever considered the possibility that your conclusions do not satisfy those criteria of rationality?
After thinking more about it, I might be wrong: actually the calculation might end up giving the same result for every human being.
Caledonian: what kind of motivations do you have?
As I've stated before, we are all morally obliged to prevent Eliezer from programming an AI. For according to this system, he is morally obliged to make his AI instantiate his personal morality. But it is quite impossible that the complicated calculation in Eliezer's brain should be exactly the same as the one in any of us: and so by our standards, Eliezer's morality is immoral. And this opinion is subjectively objective, i.e. his morality is immoral and would be even if all of us disagreed. So we are all morally obliged to prevent him from inflicting his immoral AI on us.