Scott Aaronson has a new 85-page essay up, titled "The Ghost in the Quantum Turing Machine". (Abstract here.) In Section 2.11 (Singulatarianism) he explicitly mentions Eliezer as an influence. But that's just a starting point, and he then moves in a direction that's very far from any kind of LW consensus. Among other things, he suggests that a crucial qualitative difference between a person and a digital upload is that the laws of physics prohibit making perfect copies of a person. Personally, I find the arguments completely unconvincing, but Aaronson is always thought-provoking and fun to read, and this is a good excuse to read about things like (I quote the abstract) "the No-Cloning Theorem, the measurement problem, decoherence, chaos, the arrow of time, the holographic principle, Newcomb's paradox, Boltzmann brains, algorithmic information theory, and the Common Prior Assumption". This is not just a shopping list of buzzwords; these are all important components of the author's main argument. The argument unfortunately still seems weak to me, but the time spent reading it is not wasted at all.
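For readers who want the physics behind the "no perfect copies" claim, here is a minimal sketch of the textbook No-Cloning argument (my paraphrase of the standard form only, not Aaronson's full argument, which goes well beyond it):

```latex
% A minimal sketch of the standard No-Cloning Theorem (a paraphrase of the
% textbook argument, not a quotation from the essay). Suppose one fixed
% unitary U could clone every pure state:
\[
  U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
  = \lvert\psi\rangle \otimes \lvert\psi\rangle
  \qquad \text{for all } \lvert\psi\rangle.
\]
% Unitaries preserve inner products, so for any two states:
\[
  \langle\psi\vert\phi\rangle
  = \bigl(\langle\psi\rvert \otimes \langle 0\rvert\bigr)\, U^{\dagger} U\,
    \bigl(\lvert\phi\rangle \otimes \lvert 0\rangle\bigr)
  = \langle\psi\vert\phi\rangle^{2},
\]
% forcing the inner product to be 0 or 1: a single device can only copy
% states that are already mutually orthogonal, i.e. effectively classical.
```

Whether anything in the brain actually depends on unclonable quantum states is, of course, exactly the empirical question the essay raises.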
The main disagreement between Aaronson's idea and LW ideas seems to be this:
Even if Aaronson's speculation that human minds are not copyable turns out to be correct, that doesn't rule out copyable minds being built in the future, either de novo AIs or what he (on page 58) calls "mockups" of human minds that are functionally close enough to the originals to fool their close friends. The philosophical problems with copyable minds will still be an issue for those minds, and therefore minds not being copyable can't be the only hope of avoiding these difficulties.
To put this another way, suppose Aaronson definitively shows that according to quantum physics, minds of biological humans can't be copied exactly. But how does he know that he is actually one of the original biological humans, and not for example a "mockup" living inside a digital simulation, and hence copyable? I think that is reason enough for him to directly attack the philosophical problems associated with copyable minds instead of trying to dodge them.
Wei, I completely agree that people should "directly attack the philosophical problems associated with copyable minds," and am glad that you, Eliezer, and others have been trying to do that! I also agree that I can't prove I'm not living in a simulation --- nor that that fact won't be revealed to me tomorrow by a being in the meta-world, who will also introduce me to dozens of copies of myself running in other simulations. But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates? What if the very empirical facts that we could copy a program, trace its execution, predict its outputs using an abacus, run the program backwards, in heavily-encrypted form, in one branch of a quantum computation, at one step per millennium, etc. etc., were to count as reductios that there's probably nothing that it's like to be that program --- or at any rate, nothing comprehensible to beings such as us?
Again, I certainly don't know that this is a reasonable way to think. I myself would probably have ridiculed it, before I realized that various things that confused me for years and that I dis...
If that turns out to be the case, I don't think it would much diminish either my intellectual curiosity about how problems associated with mind copying ought to be solved or the practical importance of solving such problems (to help prepare for a future where most minds will probably be copyable, even if my own isn't).
It seems likely that in the future we'll be able to build minds that are very human-like, but copyable. For example, we could take someone's gene sequence, put it inside a virtual embryo inside a digital simulation, let it grow into an infant, and then raise it in a virtual environment similar to a biological human child's. I'm assuming that you don't dispute this will be possible (at least in principle), but are saying that...
I really don't like the term "LW consensus" (isn't there a LW post about how you should separate out bundles of ideas and consider them separately, because there's no reason to expect the truth of one idea in a bundle to correlate strongly with the truth of the others? If there isn't, there should be). I've been using "LW memeplex" instead, to emphasize that these ideas have been bundled together for reasons that aren't necessarily systematically good.
I think that last paragraph you quote needs the following extra bit of context:
... because otherwise it looks as if Aaronson is saying something really silly, which he isn't.
If we could fax ourselves to Mars, or undergo uploading, and still wondered whether we're really "us" -- just as we wonder now, when such capabilities are merely theoretical -- that should count as a strong indication that such questions are not very practically relevant, contrary to Aaronson's assertion. Surely we'd need some legal rules, but the basis for those wouldn't be much different from any basis we have now -- we'd still be none the wiser about what identity means, even standing around with our clones.
For example, if we were to wonder about the question "what effect will a foom-able AI have on our civilization", surely asking after the fact would yield different answers than asking before. With copies, uploads, etc., you and your perfect copy could hold a meeting to decide who stays married to the wife, and you'd still start from the same basis, with the same difficulty of finding the "true" answer, as if you'd discussed the topic today with a friend roleplaying your clone.
This paper has some useful comments on methodology that seem relevant to recent criticism of MIRI's research, e.g. the discussion in Section 2.2 about replacing questions with other questions, which is arguably what both the Löb paper and the prisoner's dilemma paper do.
In particular:
I'm not a perfect copy of myself from one moment to the next, so I just don't see the force of his objection.
Fundamentally, those willing to teleport themselves will and those unwilling won't. Intertemporal solidarity is just as much a choice today as it will be should teleporters arrive. Practically, it will be convenient for both the teleporters and the nonteleporters to treat the teleporters as if they have continuous identity.
Sometimes you don't need copying to get a tricky decision problem; amnesia or invisible coinflips are enough. For example, we have the Sleeping Beauty problem, the Absent-Minded Driver (which is a good test case for LW ideas), or Psy-Kosh's problem, which doesn't even need amnesia.
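For concreteness, here is a small worked sketch of the Absent-Minded Driver in Python; the payoffs (exit at the first intersection: 0, exit at the second: 4, never exit: 1) are the standard ones from the literature, and the brute-force search is just my illustration, not something from the comment above.

```python
# Absent-Minded Driver: the driver can't distinguish the two intersections,
# so the only available policy is a single probability p of continuing at
# whichever intersection he finds himself at.
# Payoffs (standard in the literature, assumed here for illustration):
#   exit at the first intersection  -> 0
#   exit at the second intersection -> 4
#   continue past both              -> 1
def expected_payoff(p):
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# Brute-force the planning-optimal policy over a grid.
# Analytically, 4p - 3p^2 is maximized at p = 2/3 with expected payoff 4/3.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))  # ~0.667 ~1.333
```

The puzzle, of course, is not the arithmetic but whether the driver should reason any differently once he actually finds himself at an intersection; that is what makes it a useful test case for the decision theories discussed here.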
But it's not. (In the link, I use fiction to defang the bugbear and break the intuition pumps associating prediction and unfreedom.) ETA: Aaronson writes
But that's not a problem for Bob's freedom or free will, even if Bob finds it annoying. That's the point of my story.
"Knightian freedom" is a misnomer, in something like the way "a ... (read more)
"But calling this Knightian unpredictability 'free will' just confuses both issues."
torekp, a quick clarification: I never DO identify Knightian unpredictability with "free will" in the essay. On the contrary, precisely because "free will" has too many overloaded meanings, I make a point of separating out what I'm talking about, and of referring to it as "freedom," "Knightian freedom," or "Knightian unpredictability," but never free will.
On the other hand, I also offer arguments for why I think unpredictability IS at least indirectly relevant to what most people want to know about when they discuss "free will" -- in much the same way that intelligent behavior (e.g., passing the Turing Test) is relevant to what people want to know about when they discuss consciousness. It's not that I'm unaware of the arguments that there's no connection whatsoever between the two; it's just that I disagree with them!
A better summary of Aaronson's paper:
EY is mentioned once, for his work in popularizing cryonics, and not for anything fundamental to the paper. Several other LW luminaries like Silas Barta and Jaan Tallinn show up in the acknowledgements.
If you have...
That Aaronson mentions EY isn't exactly a surprise; the two shared a well-known discussion on AI and MWI several years ago. EY mentions it in the Sequences.
Rancor commonly arises when STEM discussions in general, and discussions of quantum mechanics in particular, focus on personal beliefs or personal aesthetic sensibilities rather than on verifiable mathematical arguments, experimental evidence, or practical applications.
In this regard, a pertinent quotation is the self-proclaimed "personal belief" that Scott asserts on page 46:
I feel that his rebuttal of the Libet-like experiments (Section 2.12) is strikingly weak, exactly where it should have been one of the strongest points. Scott says:
What? Just because predicting human behaviour one minute before it happens with 99% accuracy is more impressive, that doesn't mean it involves a different kind of process than predicting human behaviour 5 seconds before with 60% accuracy. Admittedly, it might involve a different kind, maybe ...
I like his causal answer to Newcomb's problem:
If he says:
"In this essay I’ll argue strongly for a different perspective: that we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question [scanning of minds possible?] is yes, and other such worlds where the answer is no."
and he's right, then LW consensus is religion (in other words, you made up your mind too early).
I'm not quite sure what you mean here. Do you mean that if he's right, then LW consensus is wrong, and that makes LW consensus a religion?
That seems both wrong and rather mean to both LW consensus and religion.
Absolutely, here's the relevant quote:
"The question also has an “empirical core” that could turn out one way or another, depending on details of the brain’s physical organization that are not yet known. In particular, does the brain possess what one could call a clean digital abstraction layer: that is, a set of macroscopic degrees of freedom that
(1) encode everything relevant to memory and cognition,
(2) can be accurately modeled as performing a classical digital computation, and
(3) “notice” the microscopic, quantum-mechanical degrees of freedom at most as pure random-number sources, generating noise according to prescribed probability distributions?"
You could do worse things with your time than read the whole thing, in my opinion.