"3WC would be a terrible movie. 'There's too much dialogue and not enough sex and explosions', they would say, and they'd be right."
Hmmm... Maybe we should put together a play version of 3WC; plays can't have sex and explosions in any real sense, and dialogue is a much larger driver.
In case that wasn't a rhetorical question, you almost certainly did: your Introduction to Bayesian Reasoning is the fourth Google hit for "Bayesian", the third Google hit for "Bayes", and has a pagerank of 5, the same as the Cryonics Institute's main website.
"Would they take the next step, and try to eliminate the unbearable pain of broken hearts, when someone's lover stops loving them?"
We already have an (admittedly limited) counterexample to this, in that many Westerners choose to seek out and do somewhat painful things (eg., climbing Everest), even when they are perfectly capable of choosing to avoid them, and even at considerable monetary cost.
"Some ordinary young man in college suddenly decides that everyone around them is staring at them because they're part of the conspiracy."
I don't think that this is at all crazy, assuming that "they" refers to you (people are staring at me because I'm part of the conspiracy), rather than everyone else (people are staring at me because everyone in the room is part of the conspiracy). Certainly it's happened to me.
"Poetry aside, a human being isn't the seed of a god."
A human isn't, but one could certainly argue that humanity is.
"But with a sufficient surplus of power, you could start doing things the eudaimonic way. Start rethinking the life experience as a road to internalizing new strengths, instead of just trying to keep people alive efficiently."
It should be noted that this doesn't make the phenomenon of borrowed strength go away, it just outsources it to the FAI. If anything, given the kind of perfect recall and easy access to information that an FAI would have, the ratio of cached historical information to newly created information should be much higher than that... (read more)
"By now, it's probably true that at least some people have eaten 162,329 potato chips in their lifetimes. That's even less novelty and challenge than carving 162,329 table legs."
Nitpick: it takes much less time and mental energy to eat a potato chip than to carve a table leg, so the total quantity of sphexishness is much smaller.
"Or, to make it somewhat less strong, as if I woke up one morning to find that banks were charging negative interest on loans?"
They already have, at least for a short while.
"We are currently living through a crisis that is in large part due to this lack of appreciation for emergent behavior. Not only people in general but trained economists, even Nobel laureates like Paul Krugman, lack the imagination to understand the emergent behavior of free monetary systems."
"Emergence", in this instance, is an empty buzzword; see http://lesswrong.com/lw/iv/the_futility_of_emergence/. "Imagination" also seems likely to be an empty buzzword, in the sense of http://lesswrong.com/lw/jb/applause_lights/.
"pre... (read more)
"It is not clear this can be shown to be true. 'Improvement' depends on what is valued, and what the context permits. In the real world, the value of an algorithm depends on not only its abstract mathematical properties but the costs of implementing it in an environment for which we have only imperfect knowledge."
Eliezer specifically noted this in the post:
"Sometimes it is too expensive to take advantage of all the knowledge that we could, in theory, acquire from previous tests. Moreover, a complete enumeration or interval-skipping algorith... (read more)
"This may not sound like a profound insight, since it is true by definition. But consider - how many comic books talk about "mutation" as if it were a source of power? Mutation is random. It's the selection part, not the mutation part, that explains the trends of evolution."
I think this is a specific case of people treating optimization power as if it just drops out of the sky at random. This is certainly true for some individual humans (eg., winning the lottery), but as you point out, it can't be true for the system as a whole.
"... (read more)
I will not be there due to a screwup by Continental Airlines, my apologies.
See everyone there.
"As far as my childhood goes I created a lot of problems for myself by trying to force myself into a mold which conflicted strongly with the way my brain was setup."
"It's interesting that others have shared this experience, trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it. I hadn't read of anyone else having this experience, until people started posting here."
For some mysterious reason, my younger self was so oblivious to the world that I never experienced (to my recollection) a massiv... (read more)
"Would you kill babies if it was the right thing to do? If no, under what circumstances would you not do the right thing to do? If yes, how right would it have to be, for how many babies?"
I would have answered "yes"; eg., I would have set off a bomb in Hitler's car in 1942, even if Hitler was surrounded by babies. This doesn't seem to be a case of corruption by unethical hardware; the benefit to me from setting off such a bomb is quite negative, as it greatly increases my chance of being tortured to death by the SS.
"But what if you were "optimistic" and only presented one side of the story, the better to fulfill that all-important goal of persuading people to your cause? Then you'll have a much harder time persuading them away from that idea you sold them originally - you've nailed their feet to the floor, which makes it difficult for them to follow if you yourself take another step forward."
Hmmm... if you don't need people following you, could it help you (from a rationality standpoint) to lie? Suppose that you read about AI technique X. Techniq... (read more)
"Human beings, who are not gods, often fail to imagine all the facts they would need to distort to tell a truly plausible lie."
One of my pet hobbies is constructing metaphors for reality which are blatantly, factually wrong, but which share enough of the deep structure of reality to be internally consistent. Suppose that you have good evidence for facts A, B, and C. If you think about A, B, and C, you can deduce facts D, E, F, and so forth. But given how tangled reality is, it's effectively impossible to come up with a complete list of humanly-de... (read more)
"I am willing to admit of the theoretical possibility that someone could beat the temptation of power and then end up with no ethical choice left, except to grab the crown. But there would be a large burden of skepticism to overcome."
If all people, including yourself, become corrupt when given power, then why shouldn't you seize power for yourself? On average, you'd be no worse than anyone else, and probably at least somewhat better; there should be some correlation between knowing that power corrupts and not being corrupted.
Two reasons occur to me:
First, you may be able to avoid anyone getting the power. When Eliezer decided against insulting the reporter, he did not leave the position open for someone else. When Washington was offered the crown, refusing it did not result in its going to someone else; accepting it would have (eventually).
Second, it's possible that you can do more good while neither corrupt nor in power than you could if you are corrupt and in power.
I volunteer to be the Gatekeeper party. I'm reasonably confident that no human could convince me to release them; if anyone can convince me to let them out of the box, I'll send them $20. It's possible that I couldn't be convinced by a transhuman AI, but I wouldn't bet $20 on it, let alone the fate of the world.
"To accept this demand creates an awful tension in your mind, between the impossibility and the requirement to do it anyway. People will try to flee that awful tension."
More importantly, at least in my case, that awful tension causes the brain to seize up and start panicking; do you have any suggestions on how to calm down, so one can think clearly?
"Eliezer2000 lives by the rule that you should always be ready to have your thoughts broadcast to the whole world at any time, without embarrassment."
I can understand most of the paths you followed during your youth, but I don't really get this. Even if it's a good idea for Eliezer_2000 to broadcast everything, wouldn't it be stupid for Eliezer_1200, who just discovered scientific materialism, to broadcast everything?
"If everyone were to live for others all the time, life would be like a procession of ants following each other around in a ci... (read more)
"And I wonder if that advice will turn out not to help most people, until they've personally blown off their own foot, saying to themselves all the while, correctly, "Clearly I'm winning this argument.""
I fell into this pattern for quite a while. My basic conception was that, if everyone presented their ideas and argued about them, the best ideas would win. Hence, arguing was beneficial for both me and the people on transhumanist forums- we both threw out mistaken ideas and accepted correct ones. Eliezer_2006 even seemed to support my p... (read more)
"Before anyone posts any angry comments: yes, the registration costs actual money this year."
For comparison: The Singularity Summit at Stanford cost $110K, all of which was provided by SIAI and sponsors. Singularity Summit 2007 undoubtedly cost more, and only $50K of that was raised through ticket sales. All ticket purchases for SS08 will be matched 2:1 by Peter Thiel and Brian Cartmell.
My apologies, but my browser screwed up my comment's formatting; could an admin please fix it, and then delete this? Thanks.
"Ask anyone, and they'll say the same thing: they're pretty open-minded, though they draw the line at things that are really wrong."
I generally find myself arguing against open-mindedness; because "open-mindedness" is a social virtue, a lot of people apply it indiscriminately, and so they wind up wasting time on long-debunked ideas.
"In the same way that we need statesmen to spare us the abjection of exercising power, we need scholars to spare us the abjection of learning."
How many people want to exercise government-type power ... (read more)
"In fact, if you're interested in the field, you should probably try counting the ways yourself, before I continue. And score yourself on how deeply you stated a problem, not just the number of specific cases."
I got #1, but I mushed #2 and #3 together into "The AI will rewire our brains into computationally cheap super-happy programs with humanesque neurology", as I was thinking of failure modes and not reasons for why failure modes would be bad.
"The real question is when "Because Eliezer said so!" became a valid moral argument."
You're confusing the algorithm Eliezer is trying to approximate with the real, physical Eliezer. If Eliezer was struck by a cosmic ray tomorrow and became a serial killer, then you, Eliezer, and I would all agree that this doesn't make being a serial killer right.
"Tom McCabe: speaking as someone who morally disapproves of murder, I'd like to see the AI reprogram everyone back, or cryosuspend them all indefinitely, or upload them into a sub-matrix where they can think they're happily murdering each other without all the actual murder. Of course your hypothetical murder-lovers would call this immoral, but I'm not about to start taking the moral arguments of murder-lovers seriously."
Beware shutting yourself into a self-justifying memetic loop. If you had been born in 1800, and just recently moved here via ti... (read more)
"You perceive, of course, that this destroys the world."
If the AI modifies humans so that humans want whatever happens to already exist (say, diffuse clouds of hydrogen), then this is clearly a failure scenario.
But what if the Dark Lords of the Matrix reprogrammed everyone to like murder, from the perspective of both the murderer and the murderee? Should the AI use everyone's prior preferences as morality, and reprogram us again to hate murder? Should the AI use prior preferences, and forcibly stop everyone from murdering each other, even if this... (read more)
"However, those objective values probably differ quite a lot from most of what most human beings find important in their lives; for example our obsessions with sex, romance and child-rearing probably aren't in there."
Several years ago, I was attracted to pure libertarianism as a possible objective morality for precisely this reason. The idea that, eg., chocolate tastes good can't possibly be represented directly in an objective morality, as chocolate is unique to Earth and objective moralities need to apply everywhere. However, the idea of immora... (read more)
"is true except where general intelligence is at work. It probably takes more complexity to encode an organism that can multiply 7 by 8 and can multiply 432 by 8902 but cannot multiply 6 by 13 than to encode an organism that can do all three,"
This is just a property of algorithms in general, not of general intelligence specifically. Writing a Python/C/assembler program to multiply A and B is simpler than writing a program that multiplies A and B except when A % B = 340. It depends on whether you're thinking of multiplication as an algorithm or a giant lookup table (http://lesswrong.com/lw/l9/artificial_addition/).
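A toy Python illustration of the point (the function names and the particular carved-out exception are mine, purely for concreteness): specifying a general rule takes less code than specifying the same rule plus an exception, because the exception has to be written down on top of the rule.

```python
# General multiplication: one rule covers every case.
def multiply(a, b):
    return a * b

# Multiplication with a hole carved out (as in the quote's organism
# that "can multiply 7 by 8 and 432 by 8902 but cannot multiply 6 by 13"):
# strictly more code, because the exception is extra complexity.
def multiply_with_hole(a, b):
    if (a, b) == (6, 13):
        raise ValueError("this organism can't do 6 x 13")
    return a * b

print(multiply(6, 13))            # 78
print(multiply_with_hole(7, 8))   # 56
```

The lookup-table view reverses this: a table with one entry deleted is smaller than the full table, which is why intuitions about "complexity" flip depending on the representation.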
"Eventually, the good guys capture an evil alien ship, and go exploring inside it. The captain of the good guys finds the alien bridge, and on the bridge is a lever. "Ah," says the captain, "this must be the lever that makes the ship dematerialize!" So he pries up the control lever and carries it back to his ship, after which his ship can also dematerialize."
This type of thing is known to happen in real life, when technology gaps are so large that people have no idea what generates the magic. See http://en.wikipedia.org/wiki/Cargo_cult.
"You will find yourself saying, "If I wanted to kill someone - even if I thought it was right to kill someone - that wouldn't make it right." Why? Because what is right is a huge computational property- an abstract computation - not tied to the state of anyone's brain, including your own brain."
Coherent Extrapolated Volition (or any roughly similar system) protects against this failure for any specific human, but not in general. Eg., suppose that you use various lawmaking processes to approximate Right(x), and then one person tries to... (read more)
Wow, there's a lot of ground to cover. For everyone who hasn't read Eliezer's previous writings, he talks about something very similar in Creating Friendly Artificial Intelligence, all the way back in 2001 (link = http://www.singinst.org/upload/CFAI/design/structure/external.html). With reference to Andy Wood's comment:
"What claim could any person or group have to landing closer to the one-place function?"
Next obvious question: For purposes of Friendly AI, and for correcting mistaken intuitions, how do we approximate the rightness function? How d... (read more)
"I don't think I've ever touched anything that has endured in the world for longer than that church tower."
Nitpick: This probably holds true for things of human construction, but there are obviously rocks, bits of dirt, etc. that have endured for far longer than a thousand years.
"What concrete state of the world - which quarks in which positions - corresponds to "There are three apples on the table, and there could be four apples on the table"? Having trouble answering that? Next, say how that world-state is different from "There are three apples on the table, and there couldn't be four apples on the table.""
For the former: An ordinary kitchen table with three apples on it. For the latter: An ordinary kitchen table with three apples on it, wired to a pressure-sensitive detonator that will set off 10... (read more)
"But if we assume that Lenin made his decisions after the fashion of an ordinary human brain, and not by virtue of some alien mechanism seizing and overriding his decisions, then Lenin would still be exactly as much of a jerk as before."
I must admit that I still don't really understand this. It seems to violate what we usually mean by moral responsibility.
"When, in a highly sophisticated form of helpfulness, I project that you would-want lemonade if you knew everything I knew about the contents of the refrigerator, I do not thereby create a ... (read more)
"One of the things that always comes up in my mind regarding this is the concept of space relative to these other worlds. Does it make sense to say that they're "ontop of us" and out of phase so we can't see them, or do they propagate "sideways", or is it nonsensical to even talk about it?"
It's nonsensical. The space that we see is just an artifact of a lower level of reality. See http://www.acceleratingfuture.com/tom/?p=124.
"And you should always take joy in discovery, as long as you personally don't know a thing."
I... (read more)
I really, really hope that you aren't going to try to publish a theory of quantum gravity, for practical reasons; even if it's more elegant than every other theory yet proposed, the lack of experimental evidence and your lack of credentials will make you seem like a crackpot.
First of all, to Eliezer: Great post, but I think you'll need a few more examples of how stupid chimps are compared to VIs and how stupid Einsteins are compared to Jupiter Brains to convince most of the audience.
"Maybe he felt that the difference between Einstein and a village idiot was larger than between a village idiot and a chimp. Chimps can be pretty clever."
We see chimps as clever because we have very low expectations of animal intelligence. If a chimp were clever in human terms, it would be able to compete with humans in at least some area... (read more)
"Celeriac, the distinction is that Tom McCabe seemed to me to be suggesting that the search space was small to begin with - rather than realizing the work it took to cut the search space itself down."
The search space, within differential geometry, was fairly small by Einstein's day. It was a great deal of work to narrow the search space, but most of it was done by others (Conservation of Energy, various mathematical theorems, etc., were all known in 1910). The primary difficulties were in realizing that space could be described by differential ge... (read more)
"IIRC, Einstein wasn't the first to try to develop a curvature theory of gravity. Riemann himself apparently tried. And, IIRC, Einstein was one of Riemann's students. Einstein brought to the table the whole thing about having to deal with spacetime rather than space."
Riemann died in 1866, Einstein was born in 1879. Riemann was a mathematician: he developed the math of differential geometry, among a great deal of other things, so a lot of stuff is named after him. Einstein applied Riemann's geometry to the physical universe. So far as I know, none... (read more)
"Science tolerates errors, Bayescraft does not. Nobel laureate Robert Aumann, who first proved that Bayesians with the same priors cannot agree to disagree, is a believing Orthodox Jew."
I think there's a larger problem here. You can obviously make a great deal of progress by working with existing bodies of knowledge, but when some fundamental assumption breaks down, you start making nonsensical predictions if you can't get rid of that assumption gracefully. Aumann learned Science, and Science worked extremely well when applied to probability the... (read more)
"I figure that anyone who wants to paint me as a lunatic already has more than enough source material to misquote. Let them paint and be damned!"
The problem isn't individual nutcases wanting to paint you as a lunatic; their cause would be better served by SitS or other Singularity-related material. It's that people who haven't heard your ideas before- the largest audience numerically, if you publish this in book form- might classify you as a lunatic and then ignore the rest of your work. Einstein, when writing about SR, did not go on about how th... (read more)
"This is insanity. Does no one know what they're teaching?"
I doubt any systematic study has been done on the difference in curricula between MIT and Generic State U., even though it would be much easier, and MIT has 78 affiliated Nobel laureates while State U. probably has zero. You can argue from first principles (http://www.paulgraham.com/colleges.html) or experimental data (http://www.csis.gvsu.edu/~mcguire/worth_college_leagues.html) that elite colleges are selecting Nobel Prize winners rather than creating them, but I don't know how accurate... (read more)
"If scientific reasoning is merely Bayesian,"
Scientific reasoning is an imperfect approximation of Bayesian reasoning. Using your geometric analogy, science is the process of sketching a circle, while Bayesian reasoning is a compass.
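For concreteness, here is what the "compass" looks like in the simplest discrete case: a generic two-hypothesis Bayesian update. The numbers are mine and purely illustrative.

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
prior_h = 0.01          # P(H): prior probability the hypothesis is true
p_e_given_h = 0.9       # P(E|H): how likely the evidence is if H is true
p_e_given_not_h = 0.05  # P(E|~H): how likely the evidence is if H is false

# Total probability of seeing the evidence at all.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

posterior_h = p_e_given_h * prior_h / p_e
print(posterior_h)  # ~0.154
```

Science gives you heuristics for when to update and by roughly how much; the formula above is the exact answer those heuristics are approximating.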
"It seems to me that it is easy to represent strict standards of evidence within looser ones, but not vice versa."
If you already understand the strict standard, it's usually easy to understand the looser standard, but not vice-versa. Physicists would have a much easier time writing literature papers than lit... (read more)
"Of course I had more than just one reason for spending all that time posting about quantum physics. I like having lots of hidden motives, it's the closest I can ethically get to being a supervillain."
Your work on FAI is still pretty supervillain-esque to most SL0 and SL1 people. You are, essentially, talking about a human-engineered end to all of civilization.
"I wanted to present you with a nice, sharp dilemma between rejecting the scientific method, or embracing insanity. Why? I'll give you a hint: It's not just because I'm evil. If yo... (read more)
This. Is. Awesome. If you weren't busy with FAI, you could make a fortune selling this stuff to universities.
"The idea that density matrices summarize locally invariant entanglement information is certainly helpful, but I still don't know how to start with a density matrix and visualize some physical situation, nor can I take your proof and extract back out an argument that would complete the demonstration in this blog post. I confess this is strictly a defect of my own education, but..."
From what I understand (which is admittedly not much; I could well be wrong), a density matrix is the thingy that describes the probability distribution of the quantum ... (read more)
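A minimal numerical sketch of the standard textbook construction (nothing here is specific to the quoted post; variable names are mine): a pure state |ψ⟩ has density matrix |ψ⟩⟨ψ|, a mixed state is a probability-weighted sum of such projectors, and the two can agree on measurement probabilities in one basis while differing in purity.

```python
import numpy as np

# Pure state |+> = (|0> + |1>) / sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())  # |+><+|

# Mixed state: 50/50 classical mixture of |0> and |1>
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
rho_mixed = 0.5 * np.outer(zero, zero) + 0.5 * np.outer(one, one)

# Same diagonal, i.e. same measurement probabilities in this basis...
print(np.diag(rho_pure))   # both ~[0.5 0.5]
print(np.diag(rho_mixed))

# ...but the purity tr(rho^2) tells them apart:
print(np.trace(rho_pure @ rho_pure))    # ~1.0 (pure)
print(np.trace(rho_mixed @ rho_mixed))  # ~0.5 (maximally mixed)
```

The off-diagonal terms of rho_pure are what carry the coherence that the classical mixture lacks; tracing out part of an entangled system produces exactly this kind of mixed matrix, which is why density matrices summarize locally available information.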
"If you furthermore had any thoughts about a particular "helium atom" being a factor in a subspace of an amplitude distribution that happens to factorize that way,"
If a helium atom is just an accidential, temporary factorization of an amplitude distribution, then why does it keep appearing over and over again when we look at the universe? If you throw a thousand electrons together, let them interact, zap them with laser radiation, etc., etc., at the end of the day you will still see a bunch of electrons with 511 keV rest mass and -1 cha... (read more)
"I was simply trying to figure out that if so, what's the "actual reality"?"
There is none, at least not in those terms. There is no "actual positional configuration space", any more than there's an "actual inertial reference frame" or "actual coordinate system"; they are all equivalent in the experimental world. Feel free to use whichever one you like.
"I'd thought the Hilbert space was uncountably dimensional because the number of functions of a real line is uncountable."
The number of functions of... (read more)