Eliezer Yudkowsky and Scott Aaronson - Percontations: Artificial Intelligence and Quantum Mechanics

Sections of the diavlog:

  • When will we build the first superintelligence?
  • Why quantum computing isn’t a recipe for robot apocalypse
  • How to guilt-trip a machine
  • The evolutionary psychology of artificial intelligence
  • Eliezer contends many-worlds is obviously correct
  • Scott contends many-worlds is ridiculous (but might still be true)


103 comments

Upvoted, but it wasn't nearly as fascinating as I'd hoped, because it was all on our home turf. Eliezer reiterated familiar OB/LW arguments, Aaronson fought a rearguard action without saying anything game-changing. Supporting link for the first (and most interesting to me) disagreement: Aaronson's "The Singularity Is Far".

I have a significant disagreement with this from that link: Since destroying things is MUCH easier than building, if humans weren't substantially inclined toward helpful and constructive values, civilization would never have existed in the first place nor could it continue to exist at all.
Maybe I'm the only one, but I'd like to see a video of Eliezer alone. Just him talking about whatever he finds interesting these days. I'm suggesting this because so far all the 2-way dialogs I've seen end up with Eliezer talking about 1/4 of the time, and most of what he's saying is correcting what the other person has said. So we end up with not much original Eliezer, which is what I'd really be interested in hearing.
I agree. I stopped watching about five minutes into it when it became clear that EY and Scott were just going to spend a lot of time going back-and-forth. Nothing game-changing indeed. Debate someone who substantially disagrees with you, EY.
5 points · Eliezer Yudkowsky · 15y
Sorry about that. Our first diavlog was better, IMHO, and included some material about whether rationality benefits a rationalist - but that diavlog was lost due to audio problems. Maybe we should do another for topics that would interest our respective readers. What would you want me to talk about with Scott?

I'd like you to talk about subjects that you firmly disagree on but think the other party has the best chance of persuading you of. To my mind, debates are more useful (and interesting) when arguments are conceded than when the debaters agree to disagree. Plus, I think that when smart, rational people are disadvantaged in a discussion, they are more likely to come up with fresh and compelling arguments. Find out where your weaknesses and Scott's strengths coincide (and vice versa) and you'll both come out of the debate stronger for it. I wouldn't suggest this to just anyone but I know that (unlike most debaters, unlike most people) you're both eager to admit when you're wrong.

(I dearly love to argue, and I'm probably too good at it for my own good, but oh how difficult it can be to admit defeat at the end of an argument even when I started silently agreeing with my opponent halfway through! I grew up in an argumentative household where winning the debate was everything and it was a big step for me when I started admitting I was wrong, and even bigger when I started doing it when I knew it, not a half hour and two-thousand words of bullshit later. I was having an argument with my fa... (read more)

Ok, that's a weird side-effect of watching the diavlog, now when I read your comments I can hear your voice in my mind.
I would like to see more discussion on the timing of artificial super intelligence (or human level intelligence). I really want to understand the mechanics of your disagreement.
It's okay. What do you disagree with Scott over? I don't regularly read Shtetl-Optimized, and the only thing I associate with him is a deep belief that P != NP. I don't really know much about his FAI/AGI leanings. I guess I'll go read his blog a bit.

At one point in the dialog, Scott raises what I think is a valid objection to the "nine people in the basement" picture of FAI's development. He points out that it's not how science progresses, and so not how he expects this novel development to happen.

If we consider FAI as a mathematical problem that requires a substantial depth of understanding beyond what's already there to get right, any isolated effort becomes likely hopeless. Mathematical progress is a global effort. I can sorta expect a basement scenario if most of the required math happen... (read more)

I would like to hear more from Eliezer on just how likely he thinks the 'nine people in the basement' development scenario is. My own impression would be that a more gradual development of GAI is more likely, but that 'basement development' is the only way there is even a remote possibility that the development will not lead to rapid human extinction. That would make the 'nine people in the basement' picture either wishful thinking or 'my best plan of action', depending on whether or not we are Eliezer.
"Breakthroughs" are not really how synthetic intelligence has progressed so far. Look at speech recognition, for example. So far, that has mostly been a long, gradual slog. Maybe we are doing it wrong - and there is an easier way. However, that's not an isolated example - and if there are easier ways, we don't seem to be very good at finding them.
Of course, "breakthroughs" is a cumulative impression: now you don't know how to solve the problem or even how to state it, and 10 years later you do.
The idea of a "breakthrough" denotes a sudden leap forwards. There have been some of those. One might cite back propagation, for example - but big breakthroughs seem rare, and most progress seems attributable to other factors - much as Robin Hanson claims happens in general: "in large systems most innovation value comes from many small innovations".

Well, that was interesting, if a little bland. I think the main problem was that Scott is the kind of guy who likes to find points of agreement more than points of disagreement, which works fine for everyday life, but not so well for this kind of debate.

By the way, I noticed that this was "sponsored" by the Templeton Foundation, which I and many other people who care about the truth find deeply repulsive.

In response to this comment, I just spent some time on the Templeton Foundation web page to see why you don't like them. Wow, interesting. It's clear why you wouldn't like them. They're like the Less Wrong antithesis. They seem to have a completely opposite POV, but judging from the comments I've read so far, are quite intellectual as well. I spent the summer reading Less Wrong... I think I'll give these guys some time (and form my own conclusion).
Why be repulsed at the Templeton Foundation? It seems like they're mostly on the up-and-up.
The purpose of the Templeton Foundation is to blur the line (in people's minds) between science and religion. I'm sure you know it goes: Science and religion are Different Ways Of Knowing The Same Truth, Blah Blah Blah™. A few years ago they were fairly straightforward about it (it was practically spelled out on their website), but after being subjected to a lot of criticism by secular scientists and philosophers, they've been going about it much more sneakily. They sponsor respectable events and fund respectable science to earn credibility, and spend that credibility on stuff like this and other sneaky attempts to make it seem like among first-rate scientists/philosophers/epistemic authority figures, there's only a small minority that views religion as utter hogwash. There's also the Templeton Prize, that rewards scientists who've said something appropriately respectful about religion, and many other lesser brib... I mean gifts. All of this hidden behind an interest in what they call "The Big Questions", by which they mean, "Questions to which the answer is God".
It doesn't seem to me they're doing anything terribly subversive. Even the thing you linked to didn't look too bad - they even have Christopher Hitchens up there. It seems like some sort of newagey softboiled ecumenical pantheism might just be the way to cut the knot between angry atheists and angry theists. Pragmatism moves me to think they're on the right side here.
Like I said, they're a sneaky bunch. Out of 13 contributors, they invite three or four forthright atheists, just to make it seem like they're being fair. The rest are theists (one Muslim and lots of Christians) or 'faitheists', agnostics and pantheists who believe in belief. First, the Templeton Foundation's current president, John Templeton Jr., is an evangelical Christian. The softboiled pantheism you think you're seeing is Christianity hidden by prodigious volumes of smoke. Second, whatever happened to caring about the truth? Would you also say that belief in a cube-shaped Earth might just be the way to cut the knot between angry round-Earthers and angry flat-Earthers?

It's interesting to compare the 1996 Templeton site:

The Templeton Prize for Progress in Religion (especially spiritual information through science) is awarded each year to a living person who shows extraordinary originality in advancing humankind's understanding of God.

to the current site:

The Prize is intended to recognize exemplary achievement in work related to life's spiritual dimension.

Another one. Old:

  • Create and fund projects forging stronger relationships and insights linking the sciences and all religions
  • Apply scientific methodology to the study of religious and spiritual subjects
  • Support progress in religion by increasing the body of spiritual information through scientific research
  • Encourage a greater appreciation of the importance of the free enterprise system and the values that support it
  • Promote character and value development in educational institutions


Established in 1987, the Foundation’s mission is to serve as a philanthropic catalyst for discovery in areas engaging life’s biggest questions. These questions range from explorations into the laws of nature and the universe to questions on the nature of love, gratitude, forgiveness and creativity.

... (read more)
If you look at the history of the Templeton Prize and their other endeavors, you will find that they never gave an award or a grant to anybody who came up with the "wrong answers". I mean, if they were really interested in "engaging life's biggest questions" they would have given a Templeton to Dawkins for "The God Delusion".
Thank you!!! That's exactly what I've been looking for (on and off) for the last 20 minutes.
I did a little poking on Wikipedia.

  • An atheist, culturally Jewish
  • A Dominican friar
  • A Methodist
  • A possible Muslim, although the Wikipedia page doesn't come out and actually say it and there's some evidence that he is a non-theist and critical of Islam
  • A non-theist with a Christian upbringing and general theist sympathies
  • An atheist raised Orthodox Jewish
  • Christopher-freakin'-Hitchens
  • A Church of England priest
  • Another atheist
  • Unclear what Jerome Groopman is
  • Another atheist
  • A Catholic
  • A guy with a very nontraditional definition of God, sort of reminiscent of what byrnema has said

Given the demographics of the population at large and the content of the question the contributors were answering, I think four actual Christians out of thirteen contributors is very modest.
Look at the past winners of the Templeton Prize. If you look at the winners before 2000, a lot of them were evangelists who had nothing to do with science+religion: Pandurang Shastri Athavale, Bill Bright, Billy Graham, Chuck Colson, Kyung-Chik Han, Mother Teresa.
Like I said, three or four forthright atheists (depending on what you think of Michael Shermer), the rest are theists or faitheists. I mean, just take a quick look at the essays (not the titles). Only three answer the question, "Does science make belief in God obsolete?" with a clear Yes. Shermer is less clear, but let's count him as a Yes. The remaining nine answer with No.
I must say, I'd answer "No" straightforwardly to that question. While it may be the case that belief in God is 'obsolete', I think what that question means at least needs some unpacking (How is a belief obsolete? Is that a category mistake?), and I don't think science is necessarily what makes that belief 'obsolete'. Reason, perhaps, or good philosophy, might do the trick.
The question was not, "Does science make it clear that it is an error to believe in God?" I have not read the essays, but if I were answering the question about whether religion is obsolete, I doubt my answer would be interpreted as an unambiguous Yes. Obsolescence isn't about accuracy, it's about consensus of historicity over contemporary usefulness.
Well most of the pantheism I've encountered comes from the Christian worldview. And that sounds like an ad-hominem to me... the Foundation doesn't seem to be coming from an evangelical Christian viewpoint in general, and it's certainly not its stated mission. If nothing really turned on the question of the Earth's shape, then sure. To give the classic Pragmatist example, people used to kill each other over the question of transubstantiation of the Eucharist. One side said that the Eucharist is just bread, symbolizing the body and blood of Christ. The other side said that the Eucharist is really the body and blood of Christ, but for all practical purposes (and under any scientific scrutiny) is indistinguishable from bread. It seems like insisting that one side or the other was wrong on this question is the wrong way to go, as nothing really turns on it and they're both saying roughly the same thing. Better to just 'live and let live' and let 'truth' go this time, in favor of actually making things better. If people do end up making 'God' mean something vacuous, then there's no harm in letting them say it.
Taking a person's most fundamental beliefs into account when trying to figure out their true intentions is not an ad hominem; it's common sense. That's short-sighted. Nothing may really turn on the question of transubstantiation, but there's a lot that turns on the cognitive processes that led millions of people to believe that a cracker is the body of a magical Jewish half-deity. I'm all in favor of "actually making things better", but the middle-of-the-road solution that the Templeton Foundation is (outwardly, deceitfully) espousing won't do that. Middle-of-the-road solutions are easy; they allow us to avoid sounding shrill, strident, and militant, but easiness is not effectiveness. There is harm, because people who don't mean something vacuous by 'God' like to give the impression that they do, to shield themselves against criticism. And thanks to 'pragmatism', it usually works.
If theists need to pretend to be atheists to be taken seriously, then we've already won.
I didn't think that by a vacuous God you meant a non-existent God. Obviously, theists don't need to pretend to be atheists: Theism is respected by everyone except a small minority of neo-militant ultra-materialist fundamentalist atheists. To be taken seriously, theists merely need to be (or pretend to be, in the presence of critics) moderates, i.e. believers in a God that acts in a very subtle way and conforms to modern secular morality. So no, "we" haven't won. The limited form of insanity we call faith is still the norm and is still respected.
They seem like dark forces to me - the more dangerous for conveying an innocuous appearance. Religion in scientific clothing.
If 'seem like dark forces' is the best you can come up with, then it sounds like you're on no better ground than the theists. It doesn't seem to me that they're "religion in scientific clothing", but rather an institution that cares about lots of big questions, some of which have traditionally been (and are still) answered primarily by religious sources. You can't just excise a whole part of the human experience and not expect to lose something good. Diversity is sometimes far more valuable than optimality.

Saying that something is better than optimality is abuse of the term "optimality". There's an idea missing -- optimal what, exactly?

Right, well, I have limited resources to spend on criticising their particular perversion of science. The purpose of the Templeton Foundation is to blur the line between straightforward science and explicitly religious activity, making it seem like the two enterprises are part of one big undertaking. It's an enterprise I find noxious.
It was whaaaat? Where are you getting that from?
Click on the link and look to the right side of the video.
Thanks! I see! So that's these videos: http://www.bloggingheads.tv/percontations/ Ironically, the participants discuss the Templeton Foundation 18 minutes in - did they know? ;-) John Horgan explains how he rationalises taking the Templeton Foundation's money here: http://www.edge.org/3rd_culture/horgan06/horgan06_index.html
Wow! Are these folks all on the Templeton Foundation's payroll? http://www.templeton.org/evolution/ I wondered why Robert Wright had bothered to write a whole book about god! ;-)

I liked the discussion, especially the final part on the many world interpretation (MWI).

I had the impression that Eliezer had a better understanding of quantum mechanics (QM), however I found one of his remarks very misleading (and it also confused Scott rightly): Eliezer seemed to argue that MWI somehow resolves the difficulty of unifying QM with general relativity (GR) by resolving non-locality.

It is true that non-locality is resolved by Everett's interpretation, but the real problem with QM+GR is that the renormalization of the gravity wave function do... (read more)

5 points · Eliezer Yudkowsky · 15y
"relativity" was meant to refer to SR not GR
Sorry, it seems I was too sloppy; I must even revise my opinion of Scott, who seemed to represent a very reasonable point of view, although (I agree with you) he tries to conform a bit too much for my taste as well. Still, I have a very particular intuitive suspicion about the MWI: if physics is so extremely generous and powerful that it spits out all those universes with ease, why does it not allow us to solve exponential problems? How come our world has such very special physics that it allows us to construct machines that are slightly more powerful than Turing machines (in an asymptotic sense), while still not making exponential (or even NP-complete) problems tractable? It looks like a strange twist of nature that we have this really special physics that allows us to construct computational processes only in this very narrow middle ground of asymptotic complexity: generating all those exponentially many universes, but not allowing their inhabitants to exploit them algorithmically to the full extent. Could it be that our world still has to obey certain complexity limits, and some of the universes have to be pruned away for some reason?
3 points · Eliezer Yudkowsky · 15y
This is a fascinating way of looking at it. My first thought was to reply, "Yes, most worlds may need to be pruned a la Hanson's mangled worlds, but that doesn't mean you can end up with a single global world without violating Special Relativity, linearity, unitarity, continuity, CPT invariance, etc." But on second thought this seems to be arguing even further, for the sort of deep revolution in QM that Scott wants - a reformulation that would nakedly expose the computational limits, and make the ontology no more extravagant than the fastest computation it can manage within a single world's quantum computer. So this would have to reduce the proliferation of worlds to sub-exponential, if I understand it correctly, based on the strange reasoning that if we can't do exponential computations in one world then this should be nakedly revealed in a sub-exponential global universe. But you still cannot end up with a single world, for all the reasons already given - and quantum computers do not seem to be merely as powerful as classical computers, they do speed things up. So that argues that the ontology should be more than polynomial, even if sub-truly-exponential.
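The gap being discussed can be made concrete with a toy query-count comparison (my own illustration, not anything from the diavlog): Grover's algorithm gives a provably quadratic speedup for unstructured search, roughly sqrt(N) oracle queries instead of N, which is more than classical but far short of making exponential search spaces tractable.

```python
import math

def classical_queries(n_items: int) -> int:
    """Worst-case oracle queries for classical brute-force search."""
    return n_items

def grover_queries(n_items: int) -> int:
    """Approximately optimal number of Grover iterations, ceil((pi/4) * sqrt(N))."""
    return math.ceil((math.pi / 4) * math.sqrt(n_items))

# A 40-bit search space: classically ~10^12 queries, with Grover ~10^6 --
# a huge saving, yet the query count still grows exponentially in the bit-width.
for bits in (20, 40):
    n = 2 ** bits
    print(f"{bits}-bit space: classical {classical_queries(n)}, Grover {grover_queries(n)}")
```

This is the "more than polynomial, even if sub-truly-exponential" middle ground: the quantum advantage is real but does not collapse NP-complete problems.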
Thanks. I was not aware that Scott has the same concerns based on computational complexity that I have. I am not even sure that the ontology needs to rely on non-classical capabilities. If our multiverse is a super-sophisticated branch-and-bound type algorithm for some purpose, then it could still be the fastest, albeit super-polynomial, algorithm.
2 points · Eliezer Yudkowsky · 15y
Don't know if he does. I just mean that Scott wants a deep revolution in general, not that particular deep revolution.
Some other thoughts about the MWI that came to my mind after a bit more thinking: Here is a version of the Schroedinger's cat experiment that would let anyone test the MWI for himself: devise a quantum process that has a 99 percent probability of releasing, into a room, a nerve gas that kills humans without any pain. If I were really sure of the MWI, I would have no problem going into the room and pressing the button to start the experiment. In my own experience I would simply come out of the room unscathed for certain, as that would be the only world I would experience. OTOH, if I really did get out of the room as if nothing had happened, I could deduce with high probability that the MWI is correct. (If not: just repeat the experiment a couple of times...) I must admit, I am not really keen on doing the experiment. Why? Am I really so unconvinced about the MWI? What are my reasons not to perform it, even if I were 100% sure? Another variation on the above line of thought: suppose it is 2020 and, year after year since 2008, the Large Hadron Collider has had all kinds of random-looking technical defects that prevented it from performing the planned experiments at the 7 TeV scale. Finally a physicist comes up with a convincing calculation showing that the probability of the collider producing a black hole is much, much higher than anticipated and the chances that the Earth is destroyed are significant. Would that be a convincing demonstration of the MWI? Even without the calculation, should we insist on trying to fix the LHC if we experience this pattern of breakdowns for years?
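The arithmetic behind the thought experiment is worth making explicit (a toy sketch with my own numbers; it says nothing about whether the anticipated-experience reasoning itself is valid): under an ordinary single-world reading, surviving n independent runs has probability 0.01^n, so each survival multiplies the odds in favour of "I was guaranteed to observe survival" by a factor of 100.

```python
def survival_probability(p_death: float, rounds: int) -> float:
    """Chance of surviving `rounds` independent runs under a single-world reading."""
    return (1.0 - p_death) ** rounds

def likelihood_ratio(p_death: float, rounds: int) -> float:
    """How strongly n observed survivals favour 'survival is guaranteed' over chance."""
    return 1.0 / survival_probability(p_death, rounds)

# With a 99%-lethal process, 3 survivals are a one-in-a-million coincidence
# on the single-world reading.
for n in (1, 3, 5):
    print(n, survival_probability(0.99, n), likelihood_ratio(0.99, n))
```

Note this evidence is only available to the experimenter; outside observers almost always see a corpse either way, which is one standard objection to quantum suicide as a test.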
See also: Wikipedia: quantum suicide LW/OB: LHC failures

I picked up a copy of Jaynes off of ebay for a good price ($35.98). There are 2 copies left in that auction. Someone here might be interested:


No need to vote this comment up or down.

I note that the Born probabilities were claimed to have been derived from decision theory for the MWI in 2007 by Wallace and Deutsch:

“Probabilities used to be regarded as the biggest problem for Everett, but ironically, they are now its most powerful success” - David Deutsch.

"In a September 2007 conference David Wallace reported on what is claimed to be a proof by Deutsch and himself of the Born Rule starting from Everettian assumptions. The status of these arguments remains highly controversial."

Robot ant: http://www.youtube.com/watch?v=0jyBiECoS3Q

I would say real ants are currently waaay ahead of robot ant controllers.

On the other hand - like EY says - there's a whole bunch of things that we can do which ants can't. So it is not trivial to compare.

And robot ant controllers are not examples of our most powerful AI creations to date.

Thumbs up to Eliezer Yudkowsky for getting around to giving some actual timescales. They are incredibly vague timescales - but it is still a tricky thing to estimate the difficulty of - so that's OK, I guess.

This is gonna be so awesome, I just took a moment to try to make sure I'm not dreaming.

Good stuff. I was surprised that SA seemed so uncomfortable with the thought that reality should have a many-worlds structure, and I thought Eliezer made a powerful reply by pointing out that we might be simulated on multiple computers.

On the issue of many-worlds, I must just be slow because I can't see how it is "obviously" correct. It certainly seems both self-consistent and consistent with observation, but I don't see how this in particular puts it so far ahead of other ways of understanding QM as to be the default view. If anyone knows of a really good summary for somebody who's actually studied physics on why MWI is so great (and sadly, Eliezer's posts here and on overcomingbias don't do it for me) I would greatly appreciate the pointer.

In particular, two things that I ha... (read more)

What if instead of talking about "many worlds" we just said "no collapse"? There's just this state, and it evolves according to Schroedinger's equation. Then of course there's conservation of energy.
Sure, I'm certainly not saying that the Copenhagen interpretation is correct, and my understanding is that a decoherence view is both more useful and simpler. MWI (at least as I understand it) is a significantly stronger claim. When we take the probabilities that come from wave state amplitudes as observed frequencies among actually existing "worlds" then we are claiming that there are many different versions of me that actually exist. It's this last part that I find a bit of a stretch.
If many different versions of you existing bothers you, does Schroedinger's cat bother you? The extent to which MWI is a stronger claim than "no collapse," it's purely interpretative. It certainly doesn't posit any "splitting" beyond vanilla QM. Questions about conservation of energy suggest that you don't get this.
For energy conservation see: http://www.hedweb.com/manworld.htm#violate The main reason for following the MWI is Occam's razor: http://www.hedweb.com/manworld.htm#ockham%27s
Thank you, this is exactly the type of linking that I was looking for. Unfortunately, the FAQ that you so kindly provided isn't providing the rigor that I'm looking for. In fact, for the energy conservation portion, I think (although I'm by no means certain) that the argument has been simplified to the point that the explanation being offered isn't true. I guess what I'd really like is an explanation of MWI that actually ties the math and the explanations together closely. (I think that I'm expressing myself poorly, so I'm sorry if my point seems muddled, but I'd actually like to really understand what Eliezer seems to find so obvious.)
The first sentence lays out the issue: "the law of conservation of energy is based on observations within each world. All observations within each world are consistent with conservation of energy, therefore energy is conserved." Conservation of energy takes place within worlds, not between them. FWIW, I first learned about the MWI from Paul C.W. Davies' book "Other Worlds" - waay back in the 1980s. It was quite readable - and one of the better popular books on QM from that era. It succeeded in conveying the "Occam" advantage of the theory.
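The underlying mathematical point can be checked in a toy calculation (my own sketch, not from the FAQ): unitary evolution generated by a Hamiltonian H exactly conserves the expectation value ⟨ψ|H|ψ⟩ of the global state, i.e. the amplitude-squared-weighted average of energies across branches, even though individual branches can carry different energies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 4x4 Hermitian "Hamiltonian" and a normalised global state.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

def energy(state: np.ndarray) -> float:
    """Expectation value <state|H|state>: the branch-weighted average energy."""
    return float(np.real(state.conj() @ H @ state))

# Build the unitary U = exp(-iHt) from the eigendecomposition of H.
w, V = np.linalg.eigh(H)
e0 = energy(psi)
for t in (0.5, 1.0, 2.0):
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    assert np.isclose(energy(U @ psi), e0)  # global expectation conserved
```

Since U shares H's eigenbasis, U†HU = H, which is why the expectation never drifts; no conservation law is being applied "between" branches.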
Ok, but this isn't actually making the case for MWI better to my mind. Instead of mass and energy being conserved in any closed system it is now only conserved on closed systems up to the "size" of a "world". I don't see how this loss of generality (especially since "worlds" tend to "split" into things that must now be treated independently despite coming from the same source) is a good thing. I actually want to understand this correctly and I strongly suspect that I'm missing something basic. Unfortunately, I don't really have the time to express my ignorance well in a public forum, but if anyone is willing to discuss privately, I'd be delighted.
You say Eliezer's posts didn't do it for you, but how much of it did you read? In particular, the point about parsimony favoring MWI is explained in "Decoherence is Simple". As for the mechanism of world divergence, I think the answer is that "worlds" are not an ontologically basic element of the theory. Rather, the theory is about complex amplitude in configuration space, and then from our perspective embedded within the physics, the evolution of the wavefunction seems like "worlds" "splitting."
I have read every post on overcomingbias and I'm pretty sure I've read every top-level post by Eliezer on less wrong. Although I very much enjoyed Eliezer's posts on the issue, they were intended for a wide audience and I'm looking for a technical discussion.
I think that the many world hypothesis is aesthetic because it doesn't break symmetry. Suppose that in some set-up a particle can move down one path to the right or another path to the left and there are exactly equal probabilities of either path being taken. Choosing one of the paths -- by any mechanism -- seems arbitrary. It is more logical that both paths are taken. But the two possibilities can't interact: two different worlds. In the world we experience, objects do occasionally move to the right. If there is not an alternate reality in which the object moved to the left, eventually, with either that object's movement, or the object that pushed it, or the object that pushed that, and so on, you have to explain how symmetry was ever broken in the first place. Physicists don't like spontaneous breaking of symmetry. So much so, that the idea of many worlds suddenly seems totally reasonable. Later edit: This is similar to the argument Eliezer made, in more detail and with more physics here.
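The "both paths are taken" intuition shows up in standard beam-splitter arithmetic (my own toy illustration): a Hadamard matrix splits one input into two equal-amplitude branches, and recombining them makes the branches interfere, which is hard to explain if only one path was "really" taken.

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # 50/50 beam splitter

psi = np.array([1.0, 0.0])   # particle enters on one port
split = H @ psi              # equal amplitude on both paths: no symmetry breaking
recombined = H @ split       # the two paths interfere

print(np.abs(split) ** 2)       # [0.5, 0.5]: each path has probability 1/2
print(np.abs(recombined) ** 2)  # [1.0, 0.0]: interference restores certainty
```

Neither branch is singled out at the split; the definite final outcome comes from interference between both of them, not from one path having been arbitrarily chosen.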
In my understanding, what you have presented is an argument for why MWI is interesting (is has strong aesthetic appeal) and why it's worth looking into seriously (it doesn't seem to have spontaneous breaking of symmetry). What I'm looking for is a compilation of reasons that I should believe that it is true, basically a list of problems with other interpretations and how MWI fixes it along with refutations of common objections to MWI. I should also note that I'm explicitly asking for rigorous arguments (I actually am a physicist and I'd like to see the math) and not just casual arguments that make things seem plausible.
Many worlds is an interpretation of quantum mechanics. QM stays exactly the same; mathematics, evidence and everything. Whether an interpretation is plausible really just depends on what is aesthetic and what makes sense to you. I explained why some other physicists find Many Worlds reasonable. It's always going to be this nebulous opinion-based "support" because it's not a matter of empirical fact -- unless it ever turned out there is some way the worlds interact. You've made a distinction between MWI being aesthetic and MWI being worth looking into seriously, which makes it sound like you view the argument to avoid spontaneous breaking of symmetry as more than just an aesthetic one. Can you pinpoint the physical reason why we like to avoid it? (I was wondering before.) And then a question for the physical materialists: Why do you feel comfortable discussing multiple worlds, with it being an interpretation rather than an empirical fact? Or do you think there could ever be evidence one way or the other? (I just read Decoherence is Falsifiable and Testable and I believe Eliezer is saying that Many Worlds is a logical deduction of QM, so that having a non-many-worlds theory would require additional postulates and evidence.)
Uh huh. See: "What unique predictions does many-worlds make?" * http://www.hedweb.com/manworld.htm#unique "Could we detect other Everett-worlds?" * http://www.hedweb.com/manworld.htm#detect "Many worlds is often referred to as a theory, rather than just an interpretation, by those who propose that many worlds can make testable predictions (such as David Deutsch) or is falsifiable (such as Everett)" * http://en.wikipedia.org/wiki/Many-worlds_interpretation
OK, thanks. I see that many-worlds could be falsifiable, if the many-worlds interact (or interfere). I really didn't know that was on the table.
It mostly revolves around the idea of collapse. There's no experimental evidence for a collapse. In the MWI, there's no collapse. If we find evidence for a collapse someday, we will have to discard the MWI. However, people have been looking for a while now - and there's no sign of a collapse so far. So, applying Occam's razor, you get the MWI - or something similar.

Dennett and Hofstadter have "extremely large" estimates of the time to intelligent machines as well. I expect such estimates will prove to be wrong - but it is true that we don't know much about the size of the target in the search space - or how rough that space is - so almost any estimate is defensible.

Time symmetry is probably not a big selling point of the classical formulation of the MWI. What with all those worlds in the future that don't exist in the past.

OK - no information is created or destroyed - so it's technically reversible - but that's not quite the same thing as temporal symmetry.

It would be better if it were formulated so there were lots of worlds in the past too. You don't lose anything that way - AFAICS.

The discussion got a bit sidetracked around about when EY asked something like:

If you are assuming that you can give the machine one value and have it stable, why assume that there are all these other values coming into it which you can't control.

...about 27 minutes in.

Scott said something about that being how humans work. That could be expanded on a bit:

In biology, it's hard to build values in explicitly, since the genes have limited control over the brain - since the brain is a big self-organising system. It's as though the genes can determine the initia... (read more)

I'm not sure the halved doubling time for quantum computers is right.

Maybe I'm not getting into the spirit of accepting the proposed counterfactuals - but is quantum computer performance doubling regularly at all? It seems more as though it is jammed up against decoherence problems already.

It's a purely theoretical counterfactual about the combination of Moore's law and Grover's algorithm. Moore's law says that the computer becomes twice as efficient in 18 months. Grover's algorithm says that the time taken by a quantum computer to solve SAT is the square root of the time required by a classical computer. Thus in 18 months, Moore's law of hardware should make the quantum computer 4 times as fast.
Assume the number of quantum gate-ops per second doubles every 18 months. Assume SAT is O(2^n) on a classical computer and O(2^(n/2)) by Grover's. Then the maximum feasible problem size on a classical computer increases by 1 every 18 months, and on a quantum computer increases by 2. No factors of anything involved. Alternately, if you measure a fixed problem size, then by assumption speed doubles for both. So where does 4x come from?
It just comes from treating classical computers as the correct measuring stick. It would be more precise to refer, as you do, to 18 months as the "add one" time rather than the doubling time. But if you do call it the doubling time, then for quantum computers it becomes the 4x time. Of course, it's not uniform -- it doesn't apply to problems in P.
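The "add one" vs. doubling arithmetic above can be sketched in a few lines. This is a toy model under the stated counterfactual assumptions (SAT costs 2^n classical operations, 2^(n/2) quantum operations via Grover, and the ops budget doubles every 18-month period); the starting budget is an arbitrary made-up number, not real hardware data:

```python
# Toy model of the counterfactual above; the ops budget is hypothetical.
# Assume SAT costs 2**n classical ops and 2**(n/2) quantum ops (Grover),
# and that the available ops budget doubles every 18-month period.

def max_feasible_n(ops_budget, cost):
    """Largest instance size n whose cost fits within the budget."""
    n = 0
    while cost(n + 1) <= ops_budget:
        n += 1
    return n

classical = lambda n: 2.0 ** n
grover = lambda n: 2.0 ** (n / 2.0)

for period in range(4):                     # four 18-month periods
    ops = 1e12 * 2 ** period                # budget doubles each period
    print(period,
          max_feasible_n(ops, classical),   # grows by 1 per period
          max_feasible_n(ops, grover))      # grows by 2 per period
```

Measured in feasible problem size, the classical machine gains 1 per period and the quantum machine gains 2; measured against the classical yardstick, that "+2" is where the 4x-per-period framing comes from.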
With classical computers, Moore's law improves serial and parallel performance simultaneously - by making components smaller. With quantum computers, serial and parallel performance are decoupled - more qubits improves parallel performance, while miniaturisation has no effect on the number of qubits but improves serial processing performance. So there are two largely independent means of speeding up quantum computing. Which one supposedly doubles twice as fast as classical computers? Neither - AFAICS.
Sorry, my original response should have been "yes, you aren't getting into the spirit of the counterfactual."
Well, I can see what math was done. The problem is the false assertion. I learned in math classes that if you accept one false thing, you can prove anything, and consequently your understanding of the difference between what's true and what's not dwindles to zero. You can't just believe one false thing.

If we actually "switched to quantum computers", it isn't clear we would get an exponential trajectory at all - due to the proximity of physical limits. If we did get an exponential trajectory, I can see no coherent reason for thinking the doubling time would relate to that of classical computers - because the technology is quite different. Currently, quantum computers grow mostly by adding qubits - not by the shrinking in component size that drives Moore's law in classical computers. That increases their quantum parallelism, but doesn't affect their speed.
I guess that quantum computers halve the doubling time, compared to a classical computer, because every extra qubit doubles the dimension of the available state space. This could give the factor of two in the exponent of Moore's law. Quantum computing performance currently isn't doubling, but it isn't jammed either. Decoherence is no longer considered a fundamental limit; it's more a practical inconvenience. The change that brought this about was the invention of quantum error-correcting codes. However, experimental physicists are still searching for the ideal practical implementation. You might compare the situation to that of the pre-silicon days of classical computing. Until this gets sorted out, I doubt there will be any Moore's-law-type growth.
I looked at: http://en.wikipedia.org/wiki/Quantum_error_correction

The bit about the threshold theorem looks interesting. However, I would be more impressed by a working implementation ;-)
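The threshold idea can be illustrated with the simplest possible stand-in: a classical 3-bit repetition code with majority vote. This is only a sketch of the principle, not the quantum codes the theorem is actually about - but it shows the key feature that below some physical error rate, encoding makes the logical error rate lower than the raw one:

```python
# Toy illustration of the error-correction threshold idea, using a classical
# 3-bit repetition code with majority vote as a stand-in for quantum codes.
# (Purely illustrative; real quantum thresholds are far more involved.)

def logical_error_rate(p):
    """Probability that a majority vote over 3 independent copies fails,
    given each copy is flipped with probability p."""
    return 3 * p**2 * (1 - p) + p**3   # exactly two flips, or all three

for p in (0.3, 0.1, 0.01):
    print(p, logical_error_rate(p))    # below p = 0.5, encoding helps
```

For this code the "threshold" is p = 0.5: below it the encoded error rate is smaller than the physical one (0.028 vs. 0.1, for instance), and concatenating the code drives it down further; above it, encoding makes things worse.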

Scott cites the Doomsday Argument in his "The Singularity Is Far":


Surely that is a mistake. The Doomsday Argument may suggest that the days of humans like us are numbered, but it doesn't say much more than that - in particular, it can't be used to argue against a long and rich future filled with angelic manifestations. So it is poor evidence against a relatively near era of transcension.
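For reference, the arithmetic behind the vanilla (Gott-style) Doomsday Argument is tiny. This sketch uses a commonly quoted order-of-magnitude figure for past human births; the numbers are illustrative, not an endorsement of the argument:

```python
# Sketch of the Gott-style Doomsday Argument (illustrative figures only).
# If our birth rank r is equally likely to fall anywhere in (0, N], then
# with confidence c the total number of humans N is at most r / (1 - c).

def doomsday_upper_bound(rank, confidence):
    """Upper bound on the total number of humans at the given confidence."""
    return rank / (1 - confidence)

past_births = 1e11   # rough, commonly quoted estimate of humans born so far
print(doomsday_upper_bound(past_births, 0.95))   # about 2e12 humans in total
```

Note that the bound counts observers in our reference class ("humans like us"), which is exactly the loophole the comment above points at: it says little about what kind of future comes after them.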

Am I missing something here? EY and SA were discussing the advance of computer technology, the end of Moore's rule-of-thumb, quantum computing, Big Blue, etc. It seems to me that AI is an epistemological problem, not an issue of more computing power. Getting Big Blue to go down all the possible branches is not really intelligence at all. Don't we need a theory of knowledge first? I'm new here, so this has probably already been discussed, but what about free will? How do AI researchers address that issue?

I'm with SA on the MWI of QM. I think EY is throwing ... (read more)

Neither consciousness nor mind are primary in the MWI - so I can't see where you are getting that from.
It's not an explicit form of Primacy of Consciousness like prayer or wishing. It's implicit in QM and its basic premises. One example of an implicit form of PoC is to project properties or aspects of consciousness onto reality, treating them as metaphysical rather than epistemological factors. I think the ancient philosophers got hung up on this when debating whether a color like "red" was in the object or the subject. This went round and round for a few hundred years until someone pointed out that it's both (form/object distinction). Jaynes covers a similar idea in his book and articles, where he ascribes this error to traditional frequentists who hold probabilities to be a property of things (a metaphysical concept) instead of a measure of our lack of knowledge (an epistemological, Bayesian concept). Moreover, committing the PoC error will lead you to supernaturalism eventually, so MWI is just a logical outcome of that error.
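Jaynes' point about probabilities being epistemic rather than physical can be made concrete with a toy setup (the urn scenario below is an invented example, not from Jaynes directly): two agents with different information correctly assign different probabilities to the very same flip.

```python
# Tiny illustration of the Jaynesian point above: a probability describes an
# agent's information, not a physical property of the coin. Same flip,
# different knowledge, different (correct) probabilities.

from fractions import Fraction

# An urn holds one two-headed coin and one fair coin; one coin is drawn at
# random and flipped. Agent A knows only the setup; Agent B also saw that
# the fair coin was drawn.
p_heads_agent_a = Fraction(1, 2) * 1 + Fraction(1, 2) * Fraction(1, 2)
p_heads_agent_b = Fraction(1, 2)

print(p_heads_agent_a, p_heads_agent_b)   # 3/4 vs 1/2
```

Neither agent is wrong; the numbers differ because the probabilities live in the agents' states of knowledge, which is the "epistemological, not metaphysical" distinction being drawn.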
You mean like collapse?
So: you know all about the mind projection fallacy - but don't seem to be able to find a coherent way to link it to the MWI, even though you seem to want to do that. I don't know what your motives are - and so don't see the point.
Of course my motives are irrelevant here, but for the record, I am trying to understand epistemology and its application to myself and, ultimately, to AI. How about you, what are your motives? Not knowing the exact details of where the PoC flaw is in QM is not a devastating criticism of my point, though your tone seems to suggest that you think it is. Why does the USPTO no longer accept applications for perpetual motion machines? Because they violate the first and/or second laws of thermodynamics; no need to dig further into the details. This is just how principles work, and once a fundamental error is identified then that's it, end of discussion... unless I were a physicist and wanted to dig in and take a crack at resolving the QM quandaries, which I do not. Jaynes left us a pretty large clue that the PoC error probably lies in the misuse of probability theory, as he described. As a non-physicist, that's all (and more) than I need to know.
If you can't tell us why Primacy of Consciousness is necessary for MWI, then we have no grounds for doubting MWI on the basis of your argument. It's like saying that X is a perpetual motion machine and therefore impossible, and then when asked in what way is X a perpetual motion machine, replying that it's implicitly a perpetual motion machine and you can't relate the exact details.
What principle do you believe that MWI is violating that is analogous to a perpetual motion machine violating conservation of energy? In the case of the perpetual motion machine, it is easy to see that the described system violates energy conservation, because you can compare the energy in the system at different times. From this global violation, one can deduce that there was a mistake somewhere in the calculations that predicted it for a system that follows the physical laws that imply conservation of energy. So, what is the global problem with MWI that leads you to believe that it has a PoC flaw?
Probably mostly to learn things - though you would have to consult with my shrink for more details. Of course, I'm not doing that in this thread - I guess that here I'm trying to help you out on this issue while showing that I know what I'm talking about. Maybe someday someone can return the favour - if they see me talking nonsense. Or maybe it's just a case of: http://mohel.dk/grafik/andet/Someone_Is_Wrong_On_The_Internet.jpg Jaynes' criticism doesn't apply to the MWI. The MWI doesn't involve probabilities - it's a deterministic theory: http://www.hedweb.com/manworld.htm#deterministic
[comment deleted]