"But you can't prove it's impossible for my mind to spontaneously generate a belief that happens to be correct!"
Whether the belief happens to be true is irrelevant. What matters is whether the person can justify the belief. If the conviction is spontaneously generated, the person doesn't have a rational argument that shows how the claim arises from previously-accepted statements. Thus, asserting that claim is wrong, regardless of whether it happens to be true or not.
It's not about truth! It's about justification!
I mean, there's got to be more to it than inputs and outputs.
Otherwise even a GLUT would be conscious, right?
Eliezer, I suspect you are not being 100% honest here. I don't have any problems with a GLUT being conscious.
"Otherwise even a GLUT would be conscious, right?"
I have to admit that this sounds crazy, and that I don't really understand what's going on. But it looks like it's logically necessary that lookup tables can be conscious. As far as we know, the Universe, and everything in it, can be simulated on a giant Turing machine. What is a Turing machine, if not a lookup table? Granted, most Turing machines use a much smaller set of symbols than a GLUT- base 5 or base 10 instead of base 10^10^50- but how would that change a system from being "non-conscious" to being "conscious"? And while a Turing machine has a state register, this can be simulated by just using N lookup tables instead of one lookup table. It seems like we have to believe that 1), the mathematical structure of a UTM relative to a giant lookup table, which is very minimal indeed, is the key element required for consciousness, or 2), the Universe is not Turing-computable, or 3), consciousness does not exist.
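For concreteness, here is a minimal sketch in Python of a Turing machine driven by nothing but a lookup table over (state, symbol) pairs; the toy machine and its rules are made up, but any TM has the same shape:

    # Minimal sketch: a Turing machine's transition function really is a lookup table.
    # The machine below is an arbitrary toy example (it flips bits until it hits a blank).
    table = {
        # (state, symbol) -> (new state, symbol to write, head movement)
        ("flip", "0"): ("flip", "1", +1),
        ("flip", "1"): ("flip", "0", +1),
        ("flip", "_"): ("halt", "_", 0),
    }

    tape, head, state = list("0110_"), 0, "flip"
    while state != "halt":
        state, tape[head], move = table[(state, tape[head])]
        head += move

    print("".join(tape))  # -> 1001_

Folding the state register into the table's keys is exactly the "N lookup tables instead of one" move described above.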
"Eliezer, I suspect you are not being 100% honest here. I don't have any problems with a GLUT being conscious."

I have problems with a GLUT being conscious. (Actually, the GLUT fails dramatically to satisfy the graph-theoretic requirements for consciousness that I alluded to but did not describe earlier today, but I wouldn't believe that a GLUT could be conscious even if that weren't the case.)
Hrm... as far as no one actually being willing to jump in and say "a GLUT can be/is conscious"... What about Moravec and Egan? (Egan in Permutation City, Moravec in Simulation, Consciousness, Existence)... I don't recall them explicitly coming out and saying it, but it does seem to have been implied.
Anyways, I think I'm about to argue it... Or at least argue that there's something here that's seriously confusing me:
Okay, so you say that it's the generating process of the GLUT that has the associated consciousness, rather than the GLUT itself. Fine...
But exactly where is the breakdown between that and, say, the process that generates a human-equivalent AI? Why not say that process is where the consciousness resides rather than the AI itself? If one takes at least some level of functionalism, allowing some optimizations and so on in the internal computations, then the internal "levers" can end up looking algorithmically very, very different from the external ones, even if the behavior is identical.
In other words, as I start with the "correct" rods and levers to produce consciousness, then optimize various bits of it incrementally... when does the optimization proces...
Hi Caledonian. Hi Stephen. If I remember correctly, this is where the program that is the three of us having college bull sessions goes HALT and we never get any further, is it not? Once again, Eliezer says clearly what Caledonian was thinking and articulated through metaphor in one-on-one conversations (namely "Well, then it wouldn't be conscious. IMHO." ) but is predictably not understood by same, while I am far from sure. Eliezer: You don't know how much I wanted to see you type essentially the line "Ordinarily, when we're talking to...
"The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing."
You begin by saying that you are using "zombie" in a broader-than-usual sense, to denote something that "behave[s] exactly like a human without being conscious". The GLUT was con...
Isn't the state-space of such problems known to exceed the number of atoms in the Universe? There is a term for problems which are rendered unsolvable because there just isn't enough possible state-storing matter to represent them, but I can't think of it now.
Pardon me if this is a stupid question, my experience with AI is limited. Funny Eliezer should mention Haskell, I've got to get back to trying to wrap my brain around 'monads'.
I'm not sure what you mean by a GLUT? A static table obviously wouldn't be conscious, since whatever the details consciousness is obviously a process. But, the way you use GLUT suggests that you are including algorithms for processing the look-ups, how would that be different from other algorithmic reasoning systems using stored data (memories)?
"And while a Turing machine has a state register, this can be simulated by just using N lookup tables instead of one lookup table. It seems like we have to believe that 1), the mathematical structure of a UTM relative to a giant lookup table, which is very minimal indeed, is the key element required for consciousness, ..."

TMs also have the notable ability to not halt for some inputs. And if you wanted to precompute those results, writing NULL values into your GLUT, I'd really like to know where the heck you got your Halting Oracle from. The mathematical str...
There was something like a random-yet-working GLUT picked out by sheer luck - abiogenesis. And it did eventually become conscious. The original improbability is a small jump (comparatively) and the rest of the improbability was pumped in by evolution. Still, it's an existence proof of sorts - I don't think you can argue conscious origin as necessary for consciousness. There needs to be an optimizer, or enough time for luck. There doesn't really need to be any mind per se.
A simple GLUT cannot be conscious and/or intelligent because it has no working memory or internal states. For example, suppose the GLUT was written at t = 0. At t = 1, the system has to remember that "x = 4". No operation is taken since the GLUT is already set. At t = 2 the system is queried "what is x?". Since the GLUT was written before the information that "x = 4" was supplied, the GLUT cannot know what x is. If the GLUT somehow has the correct answer then the GLUT goes beyond just having precomputed outputs to precomputed ...
The rule of the rationalist's game is that every improbable-seeming belief needs an equivalent amount of evidence to justify it.
Aren't you already breaking it by allowing what you consider improbable GLUTs with no evidence?
Also how would you play this game with someone with a vastly different prior?
Any process can be replaced by a sufficiently-large lookup table with the right elements.
If you accept that a process can be conscious, you must acknowledge that lookup tables can be.
There is no alternative. Resistance is useless.
Let me be the first in this thread to suggest that, for the purposes of GLUTs, we should taboo the word "conscious." This post, in my opinion, is a shining example of Eliezer’s ability to verbally carve reality at its joints. After a remarkably clear discussion of the real problem, the question of “conscious” GLUTs seems like a silly near-boundary case.
Is there a technical reason I should think otherwise?
PK is right. I don't think a GLUT can be intelligent, since it can't remember what it's done. If you let it write notes in the sand and then use those notes as part of the future stimulus, then it's a Turing machine.
The notion that a GLUT could be intelligent is predicated on the good-old-fashioned AI idea that intelligence is a function that computes a response from a stimulus. This idea, most of us in this century now believe, is wrong.
Wow, a lot of things to say at this point.
Eliezer Yudkowsky: First, as I started reading, I was going to correct you and point out that Daniel Dennett thinks a GLUT can be conscious, as that is exactly his response to Searle's Chinese Room argument, thinking that I didn't need to read further. Fortunately, I did read the whole thing and find out, when I look at the substance of what the two of you believe, it's the same. While Dennett would say that the GLUT running in the Chinese Room is conscious, what you were really asking was, what is the source of ...
"Any process can be replaced by a sufficiently-large lookup table with the right elements."
That misses my point. A process is needed to do the look-ups or the table just sits there.
If you abstract away the low-level details of how neurons work, couldn't the brain be considered a very large, multidimensional look-up table with a few rules regarding linkages and how to modify strengths of connections?
Phil: GLUTs can certainly learn. A GLUT's program is this:

while (true) {
    x = sensory input
    y, z = GLUT(y, x)
    muscle control output = z
}

Everything a GLUT has learned is encoded into y. Human GLUTs are so big that even their indices are huge.
Is the entity that results from gerrymandering together neural firings from different people's brains, so as to produce a pattern of neural firings similar to a brain but not corresponding to any "real person" in this Everett branch, conscious? How about gerrymandering together instructions occurring in different CPUs? Atomic motions in random rocks?
Consider a tiny look-up table mapping a few (input sentence, state) pairs to (output sentence, state) pairs - one small enough to practically be constructed, even. So long as you stick to the few sentences it accepts in the current state, it behaves exactly like a GLUT. If a GLUT is conscious, either this smaller table is conscious too, or it's the never activated entries that make the GLUT conscious.
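For concreteness, a sketch of such a table (the sentences and state names below are invented for illustration):

    # A tiny (input sentence, state) -> (output sentence, state) table, small enough
    # to actually build. Sentences and state names are made-up placeholders.
    tiny_table = {
        ("Hello.",             "start"):   ("Hi there.",                 "greeted"),
        ("How are you?",       "greeted"): ("Fine, thanks.",             "greeted"),
        ("Are you conscious?", "greeted"): ("I certainly feel like it.", "greeted"),
    }

    state = "start"
    for sentence in ["Hello.", "How are you?", "Are you conscious?"]:
        reply, state = tiny_table[(sentence, state)]
        print(reply)

Whether the entries that never get activated contribute anything is exactly the question above.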
Personally my response to the one would be similar to Caledonian's; perhaps more extreme. I think the linguistic analysis of philosophers is essentially worthless. Language is a means of communication, and the referents a word has are a matter of convention; meaning is a psychological property of no particular value. What concerns me is the person doing the communication. Where have they been and what have they done? You can, of course, follow the improbability on that. But my maxim is just,
Maxim: Language is a means of communication.
If somebody comes to you wi...
"But suppose someone actually did reach into a GLUT-bin and by genuinely pure chance pulled out a GLUT that wrote philosophy papers?"
That misses my point. A process is needed to do the look-ups or the table just sits there.
Ah, I see you're not familiar with the works of Jorge Luis Borges. Permit me to hyperlink: The Library of Babel
PK, Phil Goetz, and Larry D'Anna are making a crucial point here but I'm afraid it is somewhat getting lost in the noise. The point is (in my words) that lookup tables are a philosophical red herring. To emulate a human being they can't just map external inputs to external outputs. They also have to map a big internal state to the next version of that big internal state. (That's what Larry's equations mean.)
If there was no internal state like this, a GLUT couldn't emulate a person with any memory at all. But by hypothesis, it does emulate a person (pe...
Internal state is not necessary. Consider a function f mapping strings to strings by means of a lookup table. Here are some examples of f evaluated with well-chosen inputs:
f("Hi, Dr. S here, how are you now that you're a lookup table?") = "Very well, thank you. I notice no difference."
f("Hi, Dr. S here, how are you now that you're a lookup table? Really, none at all?") = "Yes, really no differences at all."
f("Hi, Dr. S here, how are you now that you're a lookup table? You have insulted my entire family!") = "I know you well enough to know that my last reply could not possibly have insulted you; someone must be feeding me fake input histories again."
There should probably be timestamps in the input histories but that's an implementation detail. For what it's worth, I hold that f is conscious.
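A minimal sketch of f as a literal table, keyed on the whole input history rather than on any stored state (the entries are just the examples above; the name f_table is mine):

    # The function f above, as a literal table keyed on the entire input history.
    # No internal state is stored anywhere; the history itself plays that role.
    f_table = {
        "Hi, Dr. S here, how are you now that you're a lookup table?":
            "Very well, thank you. I notice no difference.",
        "Hi, Dr. S here, how are you now that you're a lookup table? Really, none at all?":
            "Yes, really no differences at all.",
        "Hi, Dr. S here, how are you now that you're a lookup table? You have insulted my entire family!":
            "I know you well enough to know that my last reply could not possibly have insulted you; "
            "someone must be feeding me fake input histories again.",
    }

    def f(history: str) -> str:
        return f_table[history]

    print(f("Hi, Dr. S here, how are you now that you're a lookup table?"))

The history is doing all the work that internal state would otherwise do.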
Of course a GLUT can be conscious. A problem some may have with it would be that it is not self-modifying, for the table is set in stone, right? Well, consider it from this perspective:
First of all, I assume that all or some of the output is fed back into the input, directly or indirectly (or is that cheating? why?). Then, we can divide the GLUT in two parts, A and B, that differ only in one input: the fact that the "zombie" has previously heard a particular phrase, for example "You are not conscious, you ugly zombie!".
There is no need ...
People who want to read more about this topic online may find that it is sometimes referred to as a "humongous" (slang for huge) lookup table or HLUT. Googling on that term will find some additional hits.
Psy-Kosh's point about implementations that use lookup tables internally of various sizes I think echoes Moravec's point in Mind Children. The idea is that you could replace various sub-parts of your conscious AI with LUTs, ranging all the way from trivial substitutions up to a GLUT for the whole thing. Then as he says, when and where is the consc...
The more I think about it, the more I am convinced that if any GLUT could ever be made it would be an unspeakably horrible abomination. To explicitly represent the brain states of all the worst things that could happen to a person is a terrible thing. Whether the "internal state" variable is actually pointing at one doesn't seem to make a big moral difference. GLUTs are torture. They are the worst form of torture I've ever heard of. I'm glad they're almost certainly impossible.
I recall several years back Eliezer writing on these topics and at the time he saw this as a major stumbling block for functionalism. I would be interested in hearing how his thoughts have evolved, and I hope he can write about this soon.
Very, very strongly seconded.
Larry gives me another idea. Say the GLUT is implemented as a giant book with a person following instructions a la the Chinese Room. In the course of looking up the current (sentence, state) pair in the book, many other entries will inevitably impinge on the operator's retinas and enter their m...
Hal: Yeah, I actually am inclined toward thinking that something like Permutation City style cosmology/consciousness is actually valid... HOWEVER
If so, that seems to separate consciousness and material reality to the point that one may as well say "what material reality?"
But then, one could say
"hrm, okay, so let's say that physics as we know it is the wrong reduction, and instead there's some other principle that ends up implying/producing consciousness, and something about that fundamental principle and so on causes statistical patterns/reg...
Greg Egan says, in the Permutation City FAQ:
I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.
Nick: oh, hey, cool, thanks. Didn't know about the existence of such a FAQ
Yeah, the uniformity thing (which I thought of in terms of existence of structure in experience) does seem to be a hit against it, and something I've spent time thinking about, still without conclusion though.
On the other hand, the chain of reasoning leading to it seems hard to argue against.
ie, what would have to be true for something like the dust theory to be false? I have trouble thinking of any way of having the dust theory be false and yet also keeping anything like zombies disallowed.
Incidentally, I note that the uniformity/structure problem is also, near as I can tell, a hit against the Tegmark-style "all possible mathematical structures" multiverse.
Not necessarily. Tegmark suggests that mathematical structures with higher algorithmic complexity [in what encoding?] have lower weight [is there a Mangled Worlds-like phenomenon that turns this weight into discrete objective frequencies?], and that laws producing an orderly universe have lower complexity than chaotic universes or especially encodings of specific chaotic experiences.
Does Tegmark provide any justification for the lower weight thing or is it a flat out "it could work if in some sense higher complexity realities have lower weight"?
For that matter, what would it even mean for them to be lower weight?
I'd, frankly, expect the reverse. The more "tunable parameters", the more patterns of values they could take on, so...
For that matter, if some means of different weights/measures/whatever could be applied to the different algorithms, why disallow that sort of thing being applied to different "dust interpretations"?
And any thoughts at all on why it seems like I'm not (at least, most of me seemingly isn't) a Boltzmann brain?
Well, the first point is to discard the idea that orderly perceptions are less probable than chaotic ones in the Dust.
The second is to recognize that probability doesn't matter to the anthropic principle at all. You don't exist in the chaotic perspectives, so you never see them.
Psy-Kosh:
Does Tegmark provide any justification for the lower weight thing or is it a flat out "it could work if in some sense higher complexity realities have lower weight"?
It's the same justification as for the Kolmogorov prior: if you use a prefix-free code to generate random objects, less complex objects will come up more frequently. Descriptions of worlds with more tunable parameters must include those parameters, which adds complexity. (But, yes, if complexity/weight/frequency is ignored, there are infinitely more worlds above any complexit...
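A toy illustration of that justification (the four-entry codebook below is an arbitrary prefix-free code, nothing specific to Tegmark): feed fair coin flips into a prefix-free decoder, and an object whose codeword has length L comes up with probability about 2^-L, so lower-complexity objects dominate.

    # Feed random bits into a prefix-free decoder and count which "objects" come out.
    # Shorter codewords (lower complexity) are decoded far more often.
    import random

    codebook = {"0": "simple world", "10": "medium world",
                "110": "complex world", "111": "very complex world"}  # prefix-free

    counts = {name: 0 for name in codebook.values()}
    for _ in range(100_000):
        bits = ""
        while bits not in codebook:          # read random bits until a codeword completes
            bits += random.choice("01")
        counts[codebook[bits]] += 1

    print(counts)   # roughly 1/2, 1/4, 1/8, 1/8 of the samples, i.e. weight ~ 2^-length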
Psy-Kosh : "Yeah, the uniformity thing (which I thought of in terms of existance of structure in experience) does seem to be a hit against it, and something I've spent time thinking about, still without conclusion though.
On the other hand, the chain of reasoning leading to it seems hard to argue against.
ie, what would have to be true for something like the dust theory to be false? I have trouble thinking of any way of having the dust theory be false and yet also keeping anything like zombies disallowed."
Psy-Kosh, that isn't a chain of reasoni...
It seems that the dust should generate observer-moments with probability according to their algorithmic complexity, which would produce many more chaotic than normal ones.
The full version of the Library of Babel can be generated by "walking" through the versions with a limited number of texts, each of finite length. It contains every possible string that can be composed of a given set of symbols - infinitely many strings, each infinitely long. Any finite string that can appear in the Library, does appear - infinitely many times.
In the English version, in any of the truncated (and sufficiently long) versions of the Library, the sequence "AB" is much more common than "CDEFG". It doesn't matter whether the texts are ten thousand letters long, or ten billion - the first is less complex and thus more probable than the second.
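A minimal sketch of one such truncated Library (the alphabet and length cap are chosen arbitrarily), which also shows the shorter sequence turning up far more often than the longer one:

    # A finite "truncated Library": every string over a small alphabet, up to length 6.
    from itertools import product

    alphabet = "ABCDEFG"
    texts = ("".join(p) for n in range(1, 7) for p in product(alphabet, repeat=n))

    count_ab = count_cdefg = 0
    for t in texts:
        count_ab    += t.count("AB")
        count_cdefg += t.count("CDEFG")

    print(count_ab, count_cdefg)   # the shorter sequence "AB" occurs far more often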
It's interesting that Eliezer never heard anyone say that a GLUT is conscious before now, but now nearly all the commenters are saying that GLUT is conscious. What is the meaning of this?
Unknown: I was unclear. I meant "rejecting the assumptions involved in the chain of reasoning that leads to the dust hypothesis would seem to require accepting things very much like zombies, and in ways that seem rather preposterous, at least to me"
Yes, obviously if ~zombie -> dust, then ~dust->zombie. Either way, I know I'm very confused about this whole matter.
Caledonian: Yes, AB will be more common than CDEFG as a substring. But ABABABABABAB will be less common than AB(insert-random-sequence-here).

In other words, the number of "me"...
In the FULL version, "AB" and "CDEFG" are equally probable. Each appears infinitely often, but the order of the category of infinities that they belong to is the same.
Would you argue that odd numbers are as probable as even numbers in the set of natural numbers, because the order of the category of infinities that they belong to is the same?
How about squares (1, 4, 9, 16, 25, ...) versus non-square numbers? Prime numbers versus composite numbers?
It depends on how you order it. With the natural numbers in ascending order, squares are less common. Interleaving them like {1, 2, 4, 3, 9, 5, 16, 6, 25, 7, ...}, they're equally common. With a different order type like {2, 3, 5, 6, 7, ..., 1, 4, 9, 16, 25, ...}, I have no idea. This is a problem.
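A quick numerical check of that order-dependence, using the interleaving from the example above (the cutoff of 100,000 terms is arbitrary):

    # Limiting frequency of squares depends on how the naturals are ordered:
    # ascending order vs. the interleaved order {1, 2, 4, 3, 9, 5, 16, 6, 25, 7, ...}.
    from itertools import count, islice
    from math import isqrt

    def is_square(n): return isqrt(n) ** 2 == n

    N = 100_000
    ascending = range(1, N + 1)

    squares     = (k * k for k in count(1))
    non_squares = (n for n in count(1) if not is_square(n))
    interleaved = islice((x for pair in zip(squares, non_squares) for x in pair), N)

    frac = lambda seq: sum(is_square(n) for n in seq) / N
    print(frac(ascending), frac(interleaved))   # ~0.003 vs ~0.5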
See also Nick Bostrom's Infinite Ethics [PDF].
Would you argue that odd numbers are as probable as even numbers in the set of natural numbers, because the order of the category of infinities that they belong to is the same? How about squares (1, 4, 9, 16, 25, ...) versus non-square numbers? Prime numbers versus composite numbers?
As far as I understand, the sets of odd numbers, squares, and primes are all countable.
As such, a one-to-one correspondence can be established between them and the counting numbers. Therefore, considered across infinity, there are just as many primes as there are odd numbers...
Caledonian,
The part I have a problem with is where you go from the cardinality of the sets to a judgment of "equally probable".
Let me put it this way: you wrote,
In the English version, in any of the truncated (and sufficiently long) versions of the Library, the sequence "AB" is much more common than "CDEFG". It doesn't matter whether the texts are ten thousand letters long, or ten billion - the first is less complex and thus more probable than the second.
The "any" is the problem. I can construct a truncated versio...
My statement doesn't hold in ANY truncated version of the Library - it's not difficult to construct an example, because any finite version automatically serves.
But we're not DEALING with a finite version of the Library. We are dealing with the infinite version. And infinity wreaks some pretty serious havoc on conventional concepts of probability.
So why do you say that all sentences have equal probability, rather than that the probability is undefined, which would seem to be the default option?
Hmmmm...
The set of Turing machines is countably infinite.
If I ran a computer program that systematically emulated every Turing machine, would I thereby create every possible universe?
For example:
n = 1;
max = 1;
while (1) {
    emulate_one_instruction(n);
    n = n + 1;
    if (n > max) {
        max = max + 1;
        n = 1;
    }
}
(In other words, the pattern of execution goes 1,1,2,1,2,3,1,2,3,4, and so on. If you wait long enough, this sequence will eventually repeat any number you specify as many times as you specify.)
Of course, you'd need infinite resources to run this for an infinite number of steps...
Some of those instructions won't halt, so eventually you'll get hung up in an infinite loop without outputting anything. And the Halting Problem has no general solution...
a "logically possible" but fantastic being â a descendent of Ned Block's Giant Lookup Table fantasy...
First, I haven't seen how this figures into an argument, and I see that Eliezer has already taken this in another direction, but...
What immediately occurs to me is that there's a big risk of a faulty intuition pump here. He's describing, I assume, a lookup table large enough to describe your response to every distinguishable sensory input you could conceivably experience during your life. The number of entries is unimaginable. But I sus...
Cyan: not true. As you can see, the non-halting processes don't prevent the others from running; they slow them down, but who cares when you have an infinite computer?
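A sketch of that point, with Python generators standing in for the machines (the "machines" below are toy stand-ins): two of them never halt, yet stepping everything one instruction at a time still lets the halting one finish.

    # Generators standing in for Turing machines: each next() is "one instruction".
    def loops_forever():
        while True:
            yield "still running"

    def counts_to(n):
        for i in range(n):
            yield i
        # the generator exhausts here: this "machine" halts

    machines = [loops_forever(), counts_to(3), loops_forever()]
    finished = set()

    for step in range(10):                      # a finite budget, for the demo
        for i, m in enumerate(machines):
            if i in finished:
                continue
            try:
                next(m)                          # execute one instruction of machine i
            except StopIteration:
                finished.add(i)
                print(f"machine {i} halted after its own finite run")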
Tom: what do you think of my previous comment about a tiny look-up table?
As far as infinities, well, I think I'll for now stick with the advice of only bringing in infinities via well defined limits unless absolutely needed otherwise.
That's a good strategy and I recommend you stick to it.
The infinities are absolutely needed, here.
In "The Unimagined Preposterousness of Zombies", Daniel Dennett says:
A Giant Lookup Table, in programmer's parlance, is when you implement a function as a giant table of inputs and outputs, usually to save on runtime computation. If my program needs to know the multiplicative product of two inputs between 1 and 100, I can write a multiplication algorithm that computes each time the function is called, or I can precompute a Giant Lookup Table with 10,000 entries and two indices. There are times when you do want to do this, though not for multiplication—times when you're going to reuse the function a lot and it doesn't have many possible inputs; or when clock cycles are cheap while you're initializing, but very expensive while executing.
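For instance, the multiplication case above as a literal precomputed table (a minimal Python sketch):

    # Precompute a 100 x 100 lookup table once, then answer every later query
    # with a single table access instead of a multiply.
    product_table = {(a, b): a * b for a in range(1, 101) for b in range(1, 101)}  # 10,000 entries

    def multiply(a: int, b: int) -> int:
        return product_table[(a, b)]      # no arithmetic at call time, just a lookup

    print(multiply(37, 42))               # -> 1554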
Giant Lookup Tables get very large, very fast. A GLUT of all possible twenty-ply conversations with ten words per remark, using only 850-word Basic English, would require 7.6 * 10^585 entries.
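A quick check of that figure, treating each of the ten word slots per remark as having 850 possible values:

    # Back-of-envelope check: 850 choices per word slot, ten words per remark,
    # twenty remarks deep.
    vocab, words_per_remark, plies = 850, 10, 20
    entries = vocab ** (words_per_remark * plies)              # 850^200, exact integer
    digits = str(entries)
    print(f"~{digits[0]}.{digits[1]} * 10^{len(digits) - 1}")  # ~7.6 * 10^585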
Replacing a human brain with a Giant Lookup Table of all possible sense inputs and motor outputs (relative to some fine-grained digitization scheme) would require an unreasonably large amount of memory storage. But "in principle", as philosophers are fond of saying, it could be done.
The GLUT is not a zombie in the classic sense, because it is microphysically dissimilar to a human. (In fact, a GLUT can't really run on the same physics as a human; it's too large to fit in our universe. For philosophical purposes, we shall ignore this and suppose a supply of unlimited memory storage.)
But is the GLUT a zombie at all? That is, does it behave exactly like a human without being conscious?
The GLUT-ed body's tongue talks about consciousness. Its fingers write philosophy papers. In every way, so long as you don't peer inside the skull, the GLUT seems just like a human... which certainly seems like a valid example of a zombie: it behaves just like a human, but there's no one home.
Unless the GLUT is conscious, in which case it wouldn't be a valid example.
I can't recall ever seeing anyone claim that a GLUT is conscious. (Admittedly my reading in this area is not up to professional grade; feel free to correct me.) Even people who are accused of being (gasp!) functionalists don't claim that GLUTs can be conscious.
GLUTs are the reductio ad absurdum to anyone who suggests that consciousness is simply an input-output pattern, thereby disposing of all troublesome worries about what goes on inside.
So what does the Generalized Anti-Zombie Principle (GAZP) say about the Giant Lookup Table (GLUT)?
At first glance, it would seem that a GLUT is the very archetype of a Zombie Master—a distinct, additional, detectable, non-conscious system that animates a zombie and makes it talk about consciousness for different reasons.
In the interior of the GLUT, there's merely a very simple computer program that looks up inputs and retrieves outputs. Even talking about a "simple computer program" is overshooting the mark, in a case like this. A GLUT is more like ROM than a CPU. We could equally well talk about a series of switched tracks by which some balls roll out of a previously stored stack and into a trough—period; that's all the GLUT does.
A spokesperson from People for the Ethical Treatment of Zombies replies: "Oh, that's what all the anti-mechanists say, isn't it? That when you look in the brain, you just find a bunch of neurotransmitters opening ion channels? If ion channels can be conscious, why not levers and balls rolling into bins?"
"The problem isn't the levers," replies the functionalist, "the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling... Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it's possible to program a conscious being in Haskell."
"I don't know about that," says the PETZ spokesperson, "all I know is that this so-called zombie writes philosophical papers about consciousness. Where do these philosophy papers come from, if not from consciousness?"
Good question! Let us ponder it deeply.
There's a game in physics called Follow-The-Energy. Richard Feynman's father played it with young Richard:
When you get a little older, you learn that energy is conserved, never created or destroyed, so the notion of using up energy doesn't make much sense. You can never change the total amount of energy, so in what sense are you using it?
So when physicists grow up, they learn to play a new game called Follow-The-Negentropy—which is really the same game they were playing all along; only the rules are mathier, the game is more useful, and the principles are harder to wrap your mind around conceptually.
Rationalists learn a game called Follow-The-Improbability, the grownup version of "How Do You Know?" The rule of the rationalist's game is that every improbable-seeming belief needs an equivalent amount of evidence to justify it. (This game has amazingly similar rules to Follow-The-Negentropy.)
Whenever someone violates the rules of the rationalist's game, you can find a place in their argument where a quantity of improbability appears from nowhere; and this is as much a sign of a problem as, oh, say, an ingenious design of linked wheels and gears that keeps itself running forever.
The one comes to you and says: "I believe with firm and abiding faith that there's an object in the asteroid belt, one foot across and composed entirely of chocolate cake; you can't prove that this is impossible." But, unless the one had access to some kind of evidence for this belief, it would be highly improbable for a correct belief to form spontaneously. So either the one can point to evidence, or the belief won't turn out to be true. "But you can't prove it's impossible for my mind to spontaneously generate a belief that happens to be correct!" No, but that kind of spontaneous generation is highly improbable, just like, oh, say, an egg unscrambling itself.
In Follow-The-Improbability, it's highly suspicious to even talk about a specific hypothesis without having had enough evidence to narrow down the space of possible hypotheses. Why aren't you giving equal air time to a decillion other equally plausible hypotheses? You need sufficient evidence to find the "chocolate cake in the asteroid belt" hypothesis in the hypothesis space—otherwise there's no reason to give it more air time than a trillion other candidates like "There's a wooden dresser in the asteroid belt" or "The Flying Spaghetti Monster threw up on my sneakers."
In Follow-The-Improbability, you are not allowed to pull out big complicated specific hypotheses from thin air without already having a corresponding amount of evidence; because it's not realistic to suppose that you could spontaneously start discussing the true hypothesis by pure coincidence.
A philosopher says, "This zombie's skull contains a Giant Lookup Table of all the inputs and outputs for some human's brain." This is a very large improbability. So you ask, "How did this improbable event occur? Where did the GLUT come from?"
Now this is not standard philosophical procedure for thought experiments. In standard philosophical procedure, you are allowed to postulate things like "Suppose you were riding a beam of light..." without worrying about physical possibility, let alone mere improbability. But in this case, the origin of the GLUT matters; and that's why it's important to understand the motivating question, "Where did the improbability come from?"
The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (Thereby creating uncounted googols of human beings, some of them in extreme pain, the supermajority gone quite mad in a universe of chaos where inputs bear no relation to outputs. But damn the ethics, this is for philosophy.)
In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.
"All right," says the philosopher, "the GLUT was generated randomly, and just happens to have the same input-output relations as some reference human."
How, exactly, did you randomly generate the GLUT?
"We used a true randomness source—a quantum device."
But a quantum device just implements the Branch Both Ways instruction; when you generate a bit from a quantum randomness source, the deterministic result is that one set of universe-branches (locally connected amplitude clouds) see 1, and another set of universes see 0. Do it 4 times, create 16 (sets of) universes.
So, really, this is like saying that you got the GLUT by writing down all possible GLUT-sized sequences of 0s and 1s, in a really damn huge bin of lookup tables; and then reaching into the bin, and somehow pulling out a GLUT that happened to correspond to a human brain-specification. Where did the improbability come from?
Because if this wasn't just a coincidence—if you had some reach-into-the-bin function that pulled out a human-corresponding GLUT by design, not just chance—then that reach-into-the-bin function is probably conscious, and so the GLUT is again a cellphone, not a zombie. It's connected to a human at two removes, instead of one, but it's still a cellphone! Nice try at concealing the source of the improbability there!
Now behold where Follow-The-Improbability has taken us: where is the source of this body's tongue talking about an inner listener? The consciousness isn't in the lookup table. The consciousness isn't in the factory that manufactures lots of possible lookup tables. The consciousness was in whatever pointed to one particular already-manufactured lookup table, and said, "Use that one!"
You can see why I introduced the game of Follow-The-Improbability. Ordinarily, when we're talking to a person, we tend to think that whatever is inside the skull, must be "where the consciousness is". It's only by playing Follow-The-Improbability that we can realize that the real source of the conversation we're having, is that-which-is-responsible-for the improbability of the conversation—however distant in time or space, as the Sun moves a wind-up toy.
"No, no!" says the philosopher. "In the thought experiment, they aren't randomly generating lots of GLUTs, and then using a conscious algorithm to pick out one GLUT that seems humanlike! I am specifying that, in this thought experiment, they reach into the inconceivably vast GLUT bin, and by pure chance pull out a GLUT that is identical to a human brain's inputs and outputs! There! I've got you cornered now! You can't play Follow-The-Improbability any further!"
Oh. So your specification is the source of the improbability here.
When we play Follow-The-Improbability again, we end up outside the thought experiment, looking at the philosopher.
That which points to the one GLUT that talks about consciousness, out of all the vast space of possibilities, is now... the conscious person asking us to imagine this whole scenario. And our own brains, which will fill in the blank when we imagine, "What will this GLUT say in response to 'Talk about your inner listener'?"
The moral of this story is that when you follow back discourse about "consciousness", you generally find consciousness. It's not always right in front of you. Sometimes it's very cleverly hidden. But it's there. Hence the Generalized Anti-Zombie Principle.
If there is a Zombie Master in the form of a chatbot that processes and remixes amateur human discourse about "consciousness", the humans who generated the original text corpus are conscious.
If someday you come to understand consciousness, and look back, and see that there's a program you can write which will output confused philosophical discourse that sounds an awful lot like humans without itself being conscious—then when I ask "How did this program come to sound similar to humans?" the answer is that you wrote it to sound similar to conscious humans, rather than choosing on the criterion of similarity to something else. This doesn't mean your little Zombie Master is conscious—but it does mean I can find consciousness somewhere in the universe by tracing back the chain of causality, which means we're not entirely in the Zombie World.
But suppose someone actually did reach into a GLUT-bin and by genuinely pure chance pulled out a GLUT that wrote philosophy papers?
Well, then it wouldn't be conscious. IMHO.
I mean, there's got to be more to it than inputs and outputs.
Otherwise even a GLUT would be conscious, right?
Oh, and for those of you wondering how this sort of thing relates to my day job...
In this line of business you meet an awful lot of people who think that an arbitrarily generated powerful AI will be "moral". They can't agree among themselves on why, or what they mean by the word "moral"; but they all agree that doing Friendly AI theory is unnecessary. And when you ask them how an arbitrarily generated AI ends up with moral outputs, they proffer elaborate rationalizations aimed at AIs of that which they deem "moral"; and there are all sorts of problems with this, but the number one problem is, "Are you sure the AI would follow the same line of thought you invented to argue human morals, when, unlike you, the AI doesn't start out knowing what you want it to rationalize?" You could call the counter-principle Follow-The-Decision-Information, or something along those lines. You can account for an AI that does improbably nice things by telling me how you chose the AI's design from a huge space of possibilities, but otherwise the improbability is being pulled out of nowhere—though more and more heavily disguised, as rationalized premises are rationalized in turn.
So I've already done a whole series of posts which I myself generated using Follow-The-Improbability. But I didn't spell out the rules explicitly at that time, because I hadn't done the thermodynamic posts yet...
Just thought I'd mention that. It's amazing how many of my Overcoming Bias posts would coincidentally turn out to include ideas surprisingly relevant to discussion of Friendly AI theory... if you believe in coincidence.