Followup to: Is That Your True Rejection?

I expected from the beginning, that the difficult part of two rationalists reconciling a persistent disagreement, would be for them to expose the true sources of their beliefs.

One suspects that this will only work if each party takes responsibility for their own end; it's very hard to see inside someone else's head.  Yesterday I exhausted myself mentally while out on my daily walk, asking myself the Question "What do you think you know, and why do you think you know it?" with respect to "How much of the AI problem compresses to large insights, and how much of it is unavoidable nitty-gritty?"  Trying to either understand why my brain believed what it believed, or else force my brain to experience enough genuine doubt that I could reconsider the question and arrive at a real justification that way.  It's hard to see how Robin Hanson could have done any of this work for me.

Presumably a symmetrical fact holds about my lack of access to the real reasons why Robin believes what he believes.  To understand the true source of a disagreement, you have to know why both sides believe what they believe - one reason why disagreements are hard to resolve.

Nonetheless, here's my guess as to what this Disagreement is about:

If I had to pinpoint a single thing that strikes me as "disagree-able" about the way Robin frames his analyses, it's that there are a lot of opaque agents running around, little black boxes assumed to be similar to humans, but there are more of them and they're less expensive to build/teach/run.  They aren't even any faster, let alone smarter.  (I don't think that standard economics says that doubling the population halves the doubling time, so it matters whether you're making more minds or faster ones.)

This is Robin's model for uploads/ems, and his model for AIs doesn't seem to look any different.  So that world looks like this one, except that the cost of "human capital" and labor is dropping according to (exogenous) Moore's Law, and it ends up that economic growth doubles every month instead of every sixteen years - but that's it.  Being, myself, not an economist, this does look to me like a viewpoint with a distinctly economic zeitgeist.

In my world, you look inside the black box.  (And, to be symmetrical, I don't spend much time thinking about more than one box at a time - if I have more hardware, it means I have to figure out how to scale a bigger brain.)

The human brain is a haphazard thing, thrown together by idiot evolution, as an incremental layer of icing on a chimpanzee cake that never evolved to be generally intelligent, adapted in a distant world devoid of elaborate scientific arguments or computer programs or professional specializations.

It's amazing we can get anywhere using the damn thing.  But it's worth remembering that if there were any smaller modification of a chimpanzee that spontaneously gave rise to a technological civilization, we would be having this conversation at that lower level of intelligence instead.

Human neurons run at less than a millionth the speed of transistors, transmit spikes at less than a millionth the speed of light, and dissipate around a million times the heat per synaptic operation as the thermodynamic minimum for a one-bit operation at room temperature.  Physically speaking, it ought to be possible to run a brain at a million times the speed without shrinking it, cooling it, or invoking reversible computing or quantum computing.
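
A back-of-the-envelope check of the first two ratios (a sketch; the spike rate and conduction velocity are rough orders of magnitude I'm assuming, not figures from the post):

```python
# Rough speed comparison; the biological numbers are assumed orders of magnitude.
NEURON_SPIKE_RATE_HZ = 200.0    # sustained firing rate of a fast neuron (assumed)
TRANSISTOR_CLOCK_HZ = 2e9       # a 2 GHz processor core

AXON_SIGNAL_SPEED = 100.0       # m/s, fast myelinated axon (assumed)
SPEED_OF_LIGHT = 3e8            # m/s

print(TRANSISTOR_CLOCK_HZ / NEURON_SPIKE_RATE_HZ)  # ~1e7: clock ticks per spike
print(SPEED_OF_LIGHT / AXON_SIGNAL_SPEED)          # ~3e6: light vs. axonal conduction
```

Both ratios comfortably exceed a million, which is all the paragraph needs.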

There's no reason to think that the brain's software is any closer to the limits of the possible than its hardware, and indeed, if you've been following along on Overcoming Bias this whole time, you should be well aware of the manifold known ways in which our high-level thought processes fumble even the simplest problems.

Most of these are not deep, inherent flaws of intelligence, or limits of what you can do with a mere hundred trillion computing elements.  They are the results of a really stupid process that designed the retina backward, slapping together a brain we now use in contexts way outside its ancestral environment.

Ten thousand researchers working for one year cannot do the same work as a hundred researchers working for a hundred years; a chimpanzee's brain is one-fourth the volume of a human's, but four chimps do not equal one human; a chimpanzee shares 95% of our DNA but a chimpanzee cannot understand 95% of what a human can.  The scaling law for population is not the scaling law for time is not the scaling law for brain size is not the scaling law for mind design.

There's a parable I sometimes use, about how the first replicator was not quite the end of the era of stable accidents, because the pattern of the first replicator was, of necessity, something that could happen by accident.  It is only the second replicating pattern that you would never have seen without many copies of the first replicator around to give birth to it; only the second replicator that was part of the world of evolution, something you wouldn't see in a world of accidents.

That first replicator must have looked like one of the most bizarre things in the whole history of time - this replicator created purely by chance.  But the history of time could never have been set in motion, otherwise.

And what a bizarre thing a human must be, a mind born entirely of evolution, a mind that was not created by another mind.

We haven't yet begun to see the shape of the era of intelligence.

Most of the universe is far more extreme than this gentle place, Earth's cradle.  Cold vacuum or the interior of stars; either is far more common than the temperate weather of Earth's surface, where life first arose, in the balance between the extremes.  And most possible intelligences are not balanced, like these first humans, in that strange small region of temperate weather between an amoeba and a Jupiter Brain.

This is the challenge of my own profession - to break yourself loose of the tiny human dot in mind design space, in which we have lived our whole lives, our imaginations lulled to sleep by too-narrow experiences.

For example, Robin says:

Eliezer guesses that within a few weeks a single AI could grow via largely internal means from weak and unnoticed to so strong it takes over the world [his italics]

I suppose that to a human a "week" sounds like a temporal constant describing a "short period of time", but it's actually 10^49 Planck intervals, or enough time for a population of 2GHz processor cores to perform 10^15 serial operations one after the other.
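
A quick check of that arithmetic (a sketch; the week length and the 2 GHz clock are from the post, and the Planck time is the standard ~5.39e-44 s):

```python
# Back-of-the-envelope verification of the figures above.
WEEK_SECONDS = 7 * 24 * 3600       # 604,800 seconds in a week
PLANCK_TIME = 5.39e-44             # seconds
CLOCK_HZ = 2e9                     # a 2 GHz processor core

print(WEEK_SECONDS / PLANCK_TIME)  # ~1.1e49 Planck intervals per week
print(CLOCK_HZ * WEEK_SECONDS)     # ~1.2e15 serial clock ticks per week
```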

Perhaps the thesis would sound less shocking if Robin had said, "Eliezer guesses that 10^15 sequential operations might be enough to..."

One should also bear in mind that the human brain, which is not designed for the primary purpose of scientific insights, does not spend its power efficiently on having many insights in minimum time, but this issue is harder to understand than CPU clock speeds.

Robin says he doesn't like "unvetted abstractions".  Okay.  That's a strong point.  I get it.  Unvetted abstractions go kerplooie, yes they do indeed.  But something's wrong with using that as a justification for models where there are lots of little black boxes just like humans scurrying around, and we never pry open the black box and scale the brain bigger or redesign its software or even just speed up the damn thing.  The interesting part of the problem is harder to analyze, yes - more distant from the safety rails of overwhelming evidence - but this is no excuse for refusing to take it into account.

And in truth I do suspect that a strict policy against "unvetted abstractions" is not the real issue here.  I constructed a simple model of an upload civilization running on the computers their economy creates:  If a non-upload civilization has an exponential Moore's Law, y = e^t, then, naively, an upload civilization ought to have dy/dt = e^y -> y = -ln(C - t).  Not necessarily up to infinity, but for as long as Moore's Law would otherwise stay exponential in a biological civilization.  I walked through the implications of this model, showing that in many senses it behaves "just like we would expect" for describing a civilization running on its own computers.
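
A minimal numeric sketch of that contrast (my own illustration; the constant and initial condition are chosen arbitrarily for display):

```python
# The upload economy's dy/dt = e^y integrates to y(t) = -ln(C - t); choosing
# y(0) = 0 gives C = 1, so the curve blows up in finite time at t = 1, while
# the ordinary exponential e^t just keeps doubling on its fixed timescale.
# The shapes, not the particular units, are the point of the comparison.
import math

for t in [0.0, 0.5, 0.9, 0.99, 0.999]:
    biological = math.exp(t)        # exogenous Moore's Law: y = e^t
    upload = -math.log(1.0 - t)     # economy running on its own computers
    print(f"t={t:5.3f}   e^t={biological:7.3f}   -ln(1-t)={upload:7.3f}")
```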

Compare this to Robin Hanson's "Economic Growth Given Machine Intelligence", which Robin describes as using "one of the simplest endogenous growth models to explore how Moore's Law changes with computer-based workers.  It is an early but crude attempt, but it is the sort of approach I think promising."  Take a quick look at that paper.

Now, consider the abstractions used in my Moore's Researchers scenario, versus the abstractions used in Hanson's paper above, and ask yourself only the question of which looks more "vetted by experience" - given that both are models of a sort that haven't been used before, in domains not actually observed, and that both give results quite different from the world we see and that would probably cause the vast majority of actual economists to say "Naaaah."

Moore's Researchers versus Economic Growth Given Machine Intelligence - if you didn't think about the conclusions in advance of the reasoning; and if you also neglected that one of these has been written up in a way that is more impressive to economics journals; and you just asked the question, "To what extent is the math used here, constrained by our prior experience?" then I would think that the race would at best be even.  Or possibly favoring "Moore's Researchers" as being more simple and intuitive, and involving less novel math as measured in additional quantities and laws introduced.

I ask in all humility if Robin's true rejection is a strictly evenhandedly applied rule that rejects unvetted abstractions.  Or if, in fact, Robin finds my conclusions, and the sort of premises I use, to be objectionable for other reasons - which, so far as we know at this point, may well be valid objections - and so it appears to him that my abstractions bear a larger burden of proof than the sort of mathematical steps he takes in "Economic Growth Given Machine Intelligence".  But rather than offering the reasons why the burden of proof appears larger to him, he says instead that it is "not vetted enough".

One should understand that "Your abstractions are unvetted!" makes it difficult for me to engage properly.  The core of my argument has to do with what happens when you pry open the black boxes that are your economic agents, and start fiddling with their brain designs, and leave the tiny human dot in mind design space.  If all such possibilities are rejected on the basis of their being "unvetted" by experience, it doesn't leave me with much to talk about.

Why not just accept the rejection?  Because I expect that to give the wrong answer - I expect it to ignore the dominating factor in the Future, even if the dominating factor is harder to analyze.

It shouldn't be surprising if a persistent disagreement ends up resting on that point where your attempt to take into account the other person's view, runs up against some question of simple fact where, it seems to you, you know that can't possibly be right.

For me, that point is reached when trying to visualize a model of interacting black boxes that behave like humans except they're cheaper to make.  The world, which shattered once with the first replicator, and shattered for the second time with the emergence of human intelligence, somehow does not shatter a third time.  Even in the face of blowups of brain size far greater than the size transition from chimpanzee brain to human brain; and changes in design far larger than the design transition from chimpanzee brains to human brains; and simple serial thinking speeds that are, maybe even right from the beginning, thousands or millions of times faster.

That's the point where I, having spent my career trying to look inside the black box, trying to wrap my tiny brain around the rest of mind design space that isn't like our small region of temperate weather, just can't make myself believe that the Robin-world is really truly actually the way the future will be.

There are other things that seem like probable nodes of disagreement:

Robin Hanson's description of Friendly AI development as "total war" that is harmful to even discuss, or his description of a realized Friendly AI as "a God to rule us all".  Robin must be visualizing an in-practice outcome very different from what I do, and this seems like a likely source of emotional fuel for the disagreement as well.

Conversely, Robin Hanson seems to approve of a scenario where lots of AIs, of arbitrary motives, constitute the vast part of the economic productivity of the Solar System, because he thinks that humans will be protected under the legacy legal system that grew continuously out of the modern world, and that the AIs will be unable to coordinate to transgress the legacy legal system for fear of losing their own legal protections.  I tend to visualize a somewhat different outcome, to put it mildly; and would symmetrically be suspected of emotional unwillingness to accept that outcome as inexorable.

Robin doesn't dismiss Cyc out of hand and even "hearts" it, which implies that we have an extremely different picture of how intelligence works.

Like Robin, I'm also feeling burned on this conversation, and I doubt we'll finish it; but I should write at least two more posts to try to describe what I've learned, and some of the rules that I think I've been following.

Comments:

I'm going to nitpick (mainly because of how much reading I've been doing about thermodynamics and information theory since your engines of cognition post):

Human neurons ... dissipate around a million times the heat per synaptic operation as the thermodynamic minimum for a one-bit operation at room temperature. ... it ought to be possible to run a brain at a million times the speed without ... invoking reversible computing or quantum computing.

I think you mean neurons dissipate a million times the thermodynamic minimum for an irreversible one-bit operation at room temperature, though perhaps it was clear you were talking about irreversible operations from the next sentence. A reversible operation can be made arbitrarily close to dissipating zero heat.

Even then, a million might be a low estimate. By Landauer's Principle a one-bit irreversible operation requires only kTln2 = 2.9e-21 J at 25 degrees C. Does the brain use more than 2.9e-15 J per synaptic operation?
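
For what it's worth, a rough estimate (a sketch; the Landauer figure is standard, but the brain-side numbers are commonly cited ballpark values I'm assuming, not figures from this thread):

```python
# Rough check of that question; brain power, synapse count, and firing rate
# are assumed order-of-magnitude values, not numbers from the thread.
import math

k_B = 1.38e-23                        # Boltzmann constant, J/K
T = 298.15                            # 25 degrees C, in kelvin
landauer = k_B * T * math.log(2)      # ~2.9e-21 J per irreversible bit erasure

brain_power = 20.0                    # W, whole-brain metabolic estimate (assumed)
synapses = 1e14                       # total synapses (assumed order of magnitude)
mean_rate = 1.0                       # average events per synapse per second (assumed)

energy_per_event = brain_power / (synapses * mean_rate)
print(energy_per_event)               # ~2e-13 J per synaptic event
print(energy_per_event / landauer)    # ~7e7 -- well above "a million times"
```

On those assumptions the answer is yes by a couple of orders of magnitude, which fits the suspicion that a million is a low estimate.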

Also, how can a truly one-bit digital operation be irreversible? The only such operations that both input and output one bit are the identity and inversion gates, both of which are reversible.

I know, I know, tangential to your point...

...we never pry open the black box and scale the brain bigger or redesign its software or even just speed up the damn thing.
How would this be done? In our current economy, humans all run similar software on similar hardware. Yet we still have difficulty understanding each other, and even two tradesmen of the same culture and gender, who grew up in the same neighborhood, have knowledge of their trade that the other likely cannot even understand (or that they may not even know they have). We're far from being able to tinker around in each other's heads. Even if we had the physical ability to alter others' thought processes, it's not clear that doing so (outside of increasing connectivity; I would love to have my brain wired up to a PC and the Internet) would produce good results. Even cognitive biases which are so obviously irrational have purposes which are often beneficial (even if only individually), and I don't think we could predict the effects of eliminating them en masse.

AI could presumably be much more flexible than human minds. An AI which specialized in designing new computer processors probably wouldn't have any concept of the wind, sunshine, biological reproduction or the solar system. Who would improve upon it? Being that it specialized in computer hardware, it would likely be able to make other AIs (and itself) faster by upgrading their hardware, but could it improve upon their logical processes? Beyond improving speed, could it do anything to an AI which designed solar panels?

In short, I see the amount of "Hayekian" knowledge in an AI society to be far, far greater than a human one, due to the flexibility of hardware and software that AI would allow over the single blueprint of a human mind. AIs would have to agree on a general set of norms in order to work together, norms most might not have any understanding of beyond the need to follow them. I think this could produce a society where humans are protected.

Though I don't know anything about the plausibility of a single self-improving AGI being able to compete with (or conquer?) a myriad of specialized AIs. I can't see how the AGI would be able to make other AIs smarter, but I could see how it might manipulate or control them.

But it's worth remembering that if there were any smaller modification of a chimpanzee that spontaneously gave rise to a technological civilization, we would be having this conversation at that lower level of intelligence instead.

I do not think that follows. The human brain has some probability per unit time of spontaneously evolving high technology, if conditions are right for it. There could plausibly be less-intelligent brains with some lower probability per unit time; at the same time they have some probability of mutating into the smarter human configuration. If you adjust the probabilities right, you could make it so that, starting with the less-intelligent brains, there are equal probabilities of getting human-level brains before technology, or vice-versa.
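
A minimal sketch of that race, treating the two events as independent constant-rate processes (the rates are arbitrary illustrative numbers, not anything from the comment):

```python
# If technology arises at rate r_tech and the smarter-brain mutation at rate
# r_mut, the chance technology comes first is r_tech / (r_tech + r_mut);
# equal rates make it a coin flip, the "adjust the probabilities right" case.
import random

def tech_comes_first(r_tech, r_mut, trials=100_000):
    wins = 0
    for _ in range(trials):
        t_tech = random.expovariate(r_tech)   # waiting time until technology
        t_mut = random.expovariate(r_mut)     # waiting time until the smarter brain
        wins += t_tech < t_mut
    return wins / trials

print(tech_comes_first(1.0, 1.0))   # ~0.50: even odds
print(tech_comes_first(0.1, 1.0))   # ~0.09: the smarter brain almost always comes first
```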

Not that this has anything to do with the point you were making, I just thought it was interesting.

They aren't even any faster, let alone smarter.

I definitely thought Robin held that EMs could be run at faster than human speeds, or would be once Moore's law/software optimizations caught up. I don't see how being able to scale only the number of EMs makes any sense.

Like Robin, I'm also feeling burned on this conversation, and I doubt we'll finish it;

I understand this, but it annoys me. I'm the kind of person who takes arguments all the way to the end. (which isn't always a good thing.)

And what a bizarre thing a human must be, a mind born entirely of evolution, a mind that was not created by another mind.

Minds were involved on the causal pathway leading to human minds. Trivially, if you scoop out a mother's brain at birth, or a father's brain at birth, they stop being mothers and fathers - since they fail to reproduce. Minds are evidently causally involved in the process of creating minds - as this simple experiment illustrates - so this idea seems like bad poetry to me - or is at least very misleading.

The idea that humans were created by some kind of mindless process is totally wrong.

OK, Tim Tyler's link is interesting. I don't know very much about evolution (basically what I've read here plus a little bit); can someone who knows more say whether this is an idea worth paying attention to? And if it's not, why is it confused?

Tim: Obviously an AI also would not be able to create a new mind, because it doesn't have a beating heart. Obviously hearts are required to do mind design.

Re: Obviously hearts are required to do mind design.

Huh? I can't see an analogy or point here that isn't terminally busted. If there was one, maybe spell it out?

Michael: "the problem of consciousness is inherently unsolvable." ??

Not sure if you've been following along on this blog, but Eliezer has, a while back, quite thoroughly taken apart zombieism/property dualism/epiphenomenalism.

The point is: "we really don't understand it right now. But that doesn't give us justification for thinking it's beyond understanding. Further, although it seems completely nonobvious as to how this could be, there does seem to be plenty of reason to believe that it really is tied nontrivially to physical processes, in a way that you couldn't simply have a 'zombie world' that's the same except for lack of consciousness."

Tim:

His point is that if you scoop out the hearts of the mother and father, they also won't do any reproducing. Which may or may not be just a cute rejoinder to your earlier post, the cuteness of which is also hard to descry.

But, Tim, honestly, brother, aren't we arguing semantics? Whether we call the process of evolution "blind" or "intelligent" up to this point, certainly it will be a new event in evolutionary history if the kind of self-improving AI that Eliezer is talking about takes off... "10^49 Planck intervals, or enough time for a population of 2GHz processor cores to perform 10^15 serial operations one after the other."

10^15 deliberate serial operations in a week is a VASTLY new way of self-evolving. Now, if you think Eliezer is wrong in thinking this will happen, I am interested to hear why... I certainly don't know if he is right or wrong and want to consider more arguments. But your argument (repeated many, many times on this blog now) is clever, but there's no relevant content.

At what point does raising a clever objection cross over into trolling, self-promotion, and being a crank? Sorry for the cross-talk everyone, but it's grating.

"""it is quite likely that the universe may not be able to support vast orders of magnitude of intelligence"""

Why?

Daniel, good point. Tim Tyler, you've made your point about sexual selection. Enough.

Misc points:

  • I guessed a week to month doubling time, not six months,
  • I've talked explicitly about integrated communities of faster ems,
  • I used a learning-by-doing modeling approach to endogenize Moore's law,
  • any model of minds useable for forecasting world trends must leave out detail,
  • most people complain economists using game theory to model humans ignores too much human detail; what excess human detail do you think economists retain?
  • research labs hiring workers, e.g., Intel, are willing to trade off worker speed, i.e., hours per week, for worker salary, experience, etc.; a model that says Intel cares only about worker speed misses an awful lot.

I personally find the comparison between spike frequency and clockspeed unconvincing. It glosses over all sorts of questions of whether the system can keep all the working memory it needs in 2MB or whatever processor cache it has. Neurons have the advantage of having local memory, no need for the round trip off chip.

We also have no idea how neurons really work, there has been recent work on the role of methylation of dna in memory. Perhaps it would be better to view neural firing as communication between mini computers, rather than processing in itself.

I'm also unimpressed with large numbers: 10^15 operations is not enough to process the positions of the atoms in 1 gram of hydrogen; in fact it would take 20 million years for it to do so (assuming one op per atom). So this is what we have to worry about planning to atomically change our world to the optimal form. Sure it is far more than we can consciously do, and quite possibly a lot more than we can do unconsciously as well. But it is not mindbogglingly huge compared to the real world.
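
For what it's worth, the arithmetic at the 10^15-operations-per-week rate from earlier in the thread (a sketch; it lands in the same ballpark as the commenter's 20 million years):

```python
AVOGADRO = 6.022e23          # atoms in one gram of hydrogen (molar mass ~1 g/mol)
OPS_PER_WEEK = 1e15          # the serial-operation budget discussed above
WEEKS_PER_YEAR = 52.18

weeks_needed = AVOGADRO / OPS_PER_WEEK     # one operation per atom
print(weeks_needed / WEEKS_PER_YEAR)       # ~1.2e7 -- on the order of ten million years
```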

Psy-K, try as I might to come up with a way to do it, I can see no possibility of an objective test for subjective experience.

There's really no way to defeat dualism except through faith in materialism. If you don't take it as a given that science can be complete (despite Godel's lesson that mathematics can't be) you can't actually reach the point where the case is proven. I've seen dozens of different approaches to handwaving this problem away and they are all deeply irrational appeals to "emergent processes" that mean nothing beyond "I choose to beg the question."

The methods of science work because they are objective. The problem of consciousness is about the existence or non-existence of subjective experience. The gap cannot be bridged by objective methods. It's all guesswork, not because we haven't found the secret clue yet, but because even if we could find it, we could never prove it was the right clue.

Tiiba challenges my suggestion that "it is quite likely that the universe may not be able to support vast orders of magnitude of intelligence".

Tiiba, I am not the one making the extravagant claim here. What would vast orders of magnitude of greater intelligence be like? What purpose would it serve?

Things are as predictable as they are and not more so. What purpose would the additional processing power serve? It seems to me that if one would reach the limits of computability of a given practical dilemma in a nanosecond rather than, say, in an hour, the only advantage would be 59.999999... minutes of leisure, not a better decision.

Robin, I found different guesses at the doubling time listed in different places, so I just used one from "Economic Growth Given Machine Intelligence." I'll change the text.

"""Things are as predictable as they are and not more so."""

Michael, Eliezer has spent the last two years giving example after example of humans underusing the natural predictability of nature.

"""Psy-K, try as I might to come up with a way to do it, I can see no possibility of an objective test for subjective experience."""

I bet it's because you don't have a coherent definition for it. It's like looking for a hubgalopus.

The idea that subjective experience can be dismissed as a hubgalopus strikes me as pretty weird. Maybe there's nobody home at the Tiiba residence but that's not how it is around here.

Subjective experience is defined by Daniel Dennett as the existence of an answer to the "what-is-it-like-to-be-a" question. That is, I know from direct observation that it is like something to be Michael Tobis. I suspect from extrapolation that it is like something to be Tiiba or to be Eliezer Yudkowsky, or, indeed, a cat. I suspect from observation that it is not like anything to be a chair or a football or a coffee cup, or, to use Dennett's favorite example, a cheeseburger.

I am uncertain about whether it is like something to be a Turing-Test-passing device. Oddly, I find it fairly easy to construct reasonably compelling arguments either way. I consider the question indeterminate, and I consider its indeterminacy to present serious ethical problems for anyone proposing to construct such a device.

That I cannot define this property of subjective experience in an objectively testable way is not just conceded but claimed. That is part of my point; objective methods are necessarily incomplete regarding a very salient aspect of the universe.

But it is certainly subjectively testable (by me in the case of the entity which I, in fact, am, and by you in the case of the entity which you yourself are, presuming in fact you are one). At least one subjectivity exists, by subjective observation. Yes, you in there. That perceived whiteness on the screen around these symbols. That feeling of wooliness in your socks. The taste of stale coffee in that mug by your side. All that sensation. Those "qualia". If they don't exist for you, I am talking to myself, but I promise you I've got a bunch of experiences happening right now.

I find no grand theory compelling which cannot account for this central and salient fact. Yet this fact, the existence of at least one subjective experience, cannot actually be accounted for objectively, because, again there can be no objective test for subjectivity.

Accordingly I find no grand theory compelling at all.

I am comfortable with this undecidability of the nature of the universe, (in fact I find it rather attractive and elegant) but many people are not. It really wouldn't matter much and I would happily keep this model of the fundamental incompleteness of reason to myself, except that all you transhumanists are starting to play with fire as if you knew what is going on.

I believe you do not know and I believe you cannot know.

Strong AI is consequently, in my view, a very bad idea and so I devoutly hope you fail miserably; this despite the fact that I have a lot in common with you people and that I rather like most of you.

Robin: Most people complain economists using game theory to model humans ignores too much human detail; what excess human detail do you think economists retain?

Humans have a Coordination Problem. AIs have a Coordination Option. (How do I include a link to 'True Prisoner's Dilemma'?) The assumptions embedded in that simple label are huge.

Tobis: That which makes you suspect that bricks don't have qualia is probably the objective test you're looking for.

Eliezer had a post titled "How An Algorithm Feels From Inside": http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/

Its subject was different, but in my opinion, that's what qualia are - what it feels like from the inside to see red. You cannot describe it because "red" is the most fundamental category that the brain perceives directly. It does not tell you what that means. With a different mind design, you might have had qualia for frequency. Then that would feel like something fundamental, something that could never be explained to a machine.

But the fact is that if you tell the machine under what circumstances you say that you see red, that is all the information it needs to serve you or even impersonate you. It doesn't NEED anything else, it hasn't lost anything of value. Which is, of course, what the Turing Test is all about.

Come to think of it, it seems that with this definition, it might even be possible - albeit pointless - to create a robot that has exactly a human's qualia. Just make it so it would place colors into discrete buckets, and then fail to connect these buckets with its knowledge of the electromagnetic spectrum.
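
A toy sketch of that design (entirely my own illustration, not anything from the thread): the perceiver reports only a discrete bucket, and its interface gives no way back to the wavelength the bucket came from.

```python
# Hypothetical illustration: colors are sorted into discrete, unanalyzable
# buckets, and the numeric wavelength is discarded at the boundary.
class BucketedColorSense:
    _BUCKETS = [(380, 450, "violet"), (450, 495, "blue"), (495, 570, "green"),
                (570, 590, "yellow"), (590, 620, "orange"), (620, 750, "red")]

    def perceive(self, wavelength_nm: float) -> str:
        # Only the bucket label survives this call; the wavelength does not.
        for low, high, name in self._BUCKETS:
            if low <= wavelength_nm < high:
                return name
        return "invisible"

print(BucketedColorSense().perceive(650))   # "red" -- the 650 is gone
```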

Also, what I meant by "hubgalopus" is not that subjective experience is one. I meant that when you find yourself unable to decide whether an object has a trait, it's probably because you have no inklink what the hell you're looking for. Is it a dot? Or is it a speck? When it's underwater, does it get wet?.. Choose a frickin definition, and then "does it exist?" will be a simple boolean-valued question.

"inklink" = "inkling"

Something I forgot. Eliezer will probably have me arrested if I just tell you to come up with a definition. He advises that you "carve reality at its joints":

http://lesswrong.com/lw/o0/where_to_draw_the_boundary/

(I wish, I wish, O shooting star, that OB permitted editing.)

Nitpick nitpick:

"Also, how can a truly one-bit digital operation be irreversible? The only such operations that both input and output one bit are the identity and inversion gates, both of which are reversible."

Setting or clearing a bit register regardless of what was there before is a one-bit irreversible operation (the other two one-bit input, one-bit output functions are constant 1 and constant 0).
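
A tiny enumeration of that point (my own illustration): of the four one-bit-in, one-bit-out functions, only the two constant ones fail to be invertible.

```python
# The four functions from {0,1} to {0,1}; a gate is logically reversible
# exactly when the function is a bijection, i.e. its two outputs differ.
functions = {
    "identity":   lambda b: b,
    "NOT":        lambda b: 1 - b,
    "constant 0": lambda b: 0,
    "constant 1": lambda b: 1,
}

for name, f in functions.items():
    reversible = f(0) != f(1)
    print(f"{name:10s}  reversible={reversible}")
```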

Tiiba,

Existence or nonexistence of subjectivity in a given system is a well-defined and well-posed boolean-valued question. It is admittedly an undecidable question in cases other than one's own.

We already know from Godel that well-posed questions can be undecidable. Decidability is not identical to well-posedness.

As for nature's joints, I am not alone in the view that the mind/matter dichotomy is the clearest and most crucial one of all. The problem is not that the line is drawn in the wrong place. The problem is that the relationship between the subjective and the objective is mysterious.

Where I'm a bit idiosyncratic, I think, is this:

I don't think the persistence of the mind/matter dichotomy is some peculiar and minor gap in science that will soon be filled in. I strongly suspect, rather, that it's a deeply fundamental fact of the universe. In other words, the phenomenon of mind, i.e., subjectivity itself, cannot be fully reduced to objectively observable processes, and any attempt to do so is necessarily logically flawed.

Mine is not a widely held opinion, but I'm not sure why not. If there were a purely objective explanation of how subjective experience arises from an inanimate universe, what could it possibly look like?

Michael: Yes, we don't know how to test for it quite yet. So?

Consider this: Our very physical brains control stuff like, well, us talking about consciousness.

Am I to assume then that my conscious experiences, and what I say about them, have no connection? My consciousness isn't in the chain of causality leading to me, well, typing about it? If it is beyond the material, then how is it that my physical brain managed to notice it, hrmmm?

It seems reasonable to suppose that consciousness, among other things, can be in part described as some of the processes involved in, well, leading us to talk about consciousness.

I fully admit, I can't really imagine even what form the actual explanation of consciousness may take. I fully admit that I'm profoundly confused about this, that there doesn't seem to be any type of explanation that I can imagine.

And yet there seems a rather compelling argument in favor of the idea that somehow it arises from physical processes, and not in a removable property-dualism/epiphenomenalism way.

Michael Tobis, suppose a whole brain emulation of someone is created. You have a long, involved, normal-seeming conversation with the upload, and she claims to have qualia. Even if it is conceded that there's no definitive objective test for consciousness, doesn't it still seem like a pretty good guess that the upload is conscious? Like, a really really good guess?

ZM, it's clear where your biases are. What do you propose to do to overcome them?

The stakes are very high for this "guess". The ethical implications of getting it wrong are huge. There are strong arguments both ways, as I expect you know. (Chinese room argument, for instance.)

The designers of the simulation or emulation fully intend to pass the Turing test; that is, it is the explicit purpose of the designers of the software to fool the interviewer. That alone makes me doubt the reliability of my own judgment on the matter.

By the way, I have been talking about this stuff with non-AI-obsessed people the last few days. Several of them independently pointed out that some humans would fail the Turing test. Does that mean that it is OK to turn them off?

Ultimately the Turing test is about the interviewer as much as the interviewee; it's about what it takes to kick off the interviewer's empathy circuits.

The idea of asking anyone besides an AI-obsessed person whether they "have qualia" is amusing by the way. The best Turing-Test passing answer, most likely, is "huh?"

Psy-K: "And yet there seems a rather compelling argument in favor of the idea that somehow it arises from physical processes, and not in a removable property-dualism/epiphenominalism way."

Probably, I guess.

Maybe it's chemical, as Penrose suggests, which would fit in to your constraints. For the purposes of considering the ethics of strong AI, even if I accept your "seemingly compelling argument" it's not obviously algorithmic.

I simply say it's an undecidable proposition, though.

Which doesn't make me an epiphenomenalist but an epiphenom-agnostic. It still leaves me as a diehard dualist. I cannot imagine a reduction of consciousness to physics that is even coherent, never mind correct.

I see a lot of handwaving but nothing resembling a testable hypothesis anywhere. Surprise me and show me some science.

I don't see why the burden of proof should be on me. You guys are the ones who want to plug this damned thing in and see what it does. I want to see more than wild guesses and vague gestures mimicking "emergent processes" before I think that is anything but a very bad idea.

So, then, how is my reduction flawed? (Oh, there are probably holes in it... But I suspect it contains a kernel of the truth.)

You know, we haven't had a true blue, self-proclaimed mystic here in a while. It's kind of an honor. Here's the red carpet: [I originally posted a huge number of links to Eliezer's posts, but the filter thought they're spam. So I'll just name the articles. You can find them through Google.] "Mysterious Answers to Mysterious Questions"; "Excluding the Supernatural"; "Trust in Math"; "Explain/Worship/Ignore?"; "Mind Projection Fallacy"; "Wrong Questions"; "Righting a Wrong Question".

I have read the Chinese Room paper and concluded that it is a POS. Searle runs around, points at things that are obviously intelligent, asks "is that intelligence?", and then answers, matter-of-factly, "no, it isn't". Bah.

What Searle's argument amounts to

The Turing test is not claimed as a necessary precondition for consciousness, but a sufficient one.

"You guys are the ones who want to plug this damned thing in and see what it does."

That's just plain false. Eliezer dedicated his life to making this not so.

"The stakes are very high for this 'guess'. The ethical implications of getting it wrong are huge." True. "The designers of the simulation or emulation fully intend to pass the Turing test; that is, it is the explicit purpose of the designers of the software to fool the interviewer."

To clarify, I'm talking about something like a Moravec transfer, not a chatbot. Maybe a really sophisticated chatbot could pass the Turing test, but if we know that a given program was designed simply to game the Turing test, then we won't be impressed by its passing the test. The designers aren't trying to fool the interviewer; they're trying to build a brain (or something that does the same kind of thing). We know that brains exist.

"I don't see why the burden of proof should be on me."

The reason is that the human brain is not magic. It's doing something, and whatever that something is, it would be incredibly surprising if it's the only structure in the vastness of the space of all possible things that could do it. Yes, consciousness is a mystery unto me, and I'm waving my hands. I don't know how to build a person. But the burden of proof is still on you.

Michael, it seems that you are unaware of Eliezer's work. Basically, he agrees with you that vague appeals to "emergence" will destroy the world. He has written a series of posts that show why almost all possible superintelligent AIs are dangerous. So he has created a theory, called Coherent Extrapolated Volition, that he thinks is a decent recipe for a "Friendly AI". I think it needs some polish, but I assume that he won't program it as it is now. He's actually holding off getting into implementation, specifically because he's afraid of messing up.

Tiiba, you're really overstating Eliezer and SIAI's current abilities. CEV is a sketch, not a theory, and there's a big difference between "being concerned about Friendliness" and "actually knowing how to build a working superintelligence right now, but holding back due to Friendliness concerns."

It's true that - according to my "scooping experiment" - hearts contribute to making the next generation of hearts in much the same way that brains contribute to making the next generation of brains. I'm fine with that.

There are also some dis-analogies, though. Selection by brains did much of the selective work in human evolution - and so they are pretty intimately involved in the transmission of design information about all aspects of humans from one generation to the next. Whereas selection by hearts mostly only affects design information about hearts, and it mostly affects that in a rather negative way: by killing their owners when there are malfunctions.

IOW, the brain is in a more pivotal position to affect the broad course of human evolution - due to its functional role as sensory-motor nexus.

We need an officially endorsed forum.

Cameron, I have no idea what you are talking about.

Eliezer, most readers of this blog are not in a position to evaluate which model looks more vetted. The whole point is that a community of thousands of specialists has developed over decades vetting models of total system growth, and they are in the best position to judge. I have in fact not just talked about vetting, but have offered more detailed reasons why your model seems unsatisfactory.

"Tiiba, you're really overstating Eliezer and SIAI's current abilities. CEV is a sketch, not a theory, and there's a big difference between "being concerned about Friendliness" and "actually knowing how to build a working superintelligence right now, but holding back due to Friendliness concerns.""

That's what I meant.

Please shift the consciousness comments to any one of the appropriate posts linked.

Robin, should we ask James Miller then? I have no problem with the detailed reasons you offer, it's just the "insufficiently vetted" part of the argument that I find difficult to engage with - unless I actually find members of this community and ask them which specific pieces are "vetted" in their view, by what evidence, and which not. I wouldn't necessarily trust them, to be frank, because it was never a condition of their profession that they should deal with nonhumans. But at least I would have some idea of what those laws were under which I was being judged.

It's hard for me to accept as normative the part of this argument that is an appeal to authority (professional community that has learned good norms about constructing growth models) rather than an appeal to evidence (look at how well the evidence fits these specific growth models). It's not that I reject authority in general, but these people's professional experience is entirely about humans, and it's hard for me to believe that they have taken into account the considerations involved in extrapolating narrow experience to non-narrow experience when various basic assumptions are potentially broken. I would expect them to have norms that worked for describing humans, full stop.

Eliezer, I'm not sure James Miller has done much econ growth research. How about my colleague Garrett Jones, who specializes on intelligence and growth?

It would be more worthwhile to add to an old thread, if there was some way to let people know there was something going on there. The New Comments list is far too short for the number of comments the blog gets if someone only checks once or twice a day.

Robin, I'd be interested, but I'd ask whether you've discussed this particular issue with Jones before. (I.e., the same reason I don't cite Peter Cheeseman as support for e.g. the idea that general AI mostly doesn't work if you don't have all the parts, and then undergoes something like a chimp->human transition as soon as all the parts are in place. So far as I can tell, Cheeseman had this idea before I met him; but he still wouldn't be an unbiased choice of referee, because I already know many of his opinions and have explicitly contaminated him on some points.)

Khyre: Setting or clearing a bit register regardless of what was there before is a one-bit irreversible operation (the other two one-bit input, one-bit output functions are constant 1 and constant 0).

face-palm I can't believe I missed that. Thanks for the correction :-)

Anyway, with that in mind, Landauer's principle has the strange implication that resetting anything to a known state, in such a way that the previous state can't be retrieved, necessarily releases heat, and the more information the erased state conveyed to the observer, the more heat is released. Okay, end threadjack...

Eliezer, Garrett has seen and likes my growth paper, but he and I have not talked at all about your concepts. I sent him a link once to this post of yours; I'll email you his reply.

I can't help but smirk when Robin asks Eliezer to incorporate "models" into his extrapolations. Robin, do you agree with Eliezer on the point of the immensity of mind-design space?

Tim, honestly, brother, aren't we arguing semantics? Whether we call the process of evolution "blind" or "intelligent" up to this point, certainly it will be a new event in evolutionary history if the kind of self-improving AI that Eliezer is talking about takes off...

The issue I was addressing was how best to view such a phenomenon.

Is it simply intelligence "looping back on itself", creating a strange loop?

I would say no: intelligence has been "looping back on itself" for hundreds of millions of years.

In the past, we have had "intelligent selection". Look closely, and there have also been "intelligent mutations" - and deductive and inductive inference. Those are the essential elements of intelligent design. So, what is new? Previously, any "intelligent mutations" have had to be transferred into the germ line via the Baldwin effect - and that is a slow process - since direct germ line mutations have been effectively undirected.

In my view, the single biggest and most significant change involved is the origin of new writable heritable materials: human cultural inheritance - the new replicators. That led to big brains, language, farming, society, morality, writing, science, technology - and soon superintelligence.

Even if you paint intelligent design as the important innovation, that's been going on for thousands of years. Engineering computing machinery is not really something new - we've been doing it for decades. Machines working on designing other machines isn't really new either.

These changes all lie in the past. What will happen in the foreseeable future is mostly the ramifications of shifts that started long ago gradually playing themselves out.

Tim, you've been going on about this through multiple posts, you have been requested to stop, please do so.

Robin, email reply looks fine.

Tim: There is a huge difference between an organ being a necessary part of the chain of causality leading to reproduction, and that organ doing optimization work on something.

Choosing a mate who is good at making things is a very different kind of thinking than choosing how to make things. Yes, sexual selection affected our brain design, but I think the amount of information that could have been created by mate selection choices is dwarfed by the information that comes from being able to make things better.

Tim, you've been going on about this through multiple posts, you have been requested to stop, please do so.

Eliezer, er, what are you talking about?!? I have received no such request.

If you are now telling me that you are not prepared to have me discuss some issue here, please be more specific - what are you forbidding me from saying - and perhaps more significantly - why are you doing that?

Friend, yes of course, the space of "minds" is large.

OK - I hadn't seen that. One problem that I see is that the conversation continues - from my perspective - as though the point has not been made. Also, the second post you objected to contains barely a hint of sexual selection.

You had better ban me from mentioning The Baldwin Effect as well - or else, I'll be tempted to shift to those grounds - since that provides similar mechanisms.

Indeed, plain natural selection is often selection by intelligence as well - male combat leaves you dead, for example, not just unable to reproduce - and males get to choose whether they stand and fight or run and hide, with their brains - and information relating to the consequences of their choices thus winds up in the germ line.

Intelligence is so deeply embedded into the history of human evolution, it's hard to discuss the phenomenon sensibly at all without paying the idea lip service - but I'll give it a whirl for a while.