Interesting, and a very compelling point of view. 

My first thought is that this is nothing like what we've been doing lately.

In the most celebrated corners of our society the word "disruption" is uttered these days with eagerness and ambition.

Psy-K: "And yet there seems a rather compelling argument in favor of the idea that somehow it arises from physical processes, and not in a removable property-dualism/epiphenomenalism way."

Probably, I guess.

Maybe it's quantum-mechanical, as Penrose suggests, which would fit into your constraints. For the purposes of considering the ethics of strong AI, even if I accept your "rather compelling argument," it's not obviously algorithmic.

I simply say it's an undecidable proposition, though.

Which doesn't make me an epiphenomenalist but an epiphenom-agnostic. It still leaves me as a diehard dualist. I cannot imagine a reduction of consciousness to physics that is even coherent, never mind correct.

I see a lot of handwaving but nothing resembling a testable hypothesis anywhere. Surprise me and show me some science.

I don't see why the burden of proof should be on me. You guys are the ones who want to plug this damned thing in and see what it does. I want to see more than wild guesses and vague gestures mimicking "emergent processes" before I think that is anything but a very bad idea.

ZM, it's clear where your biases are. What do you propose to do to overcome them?

The stakes are very high for this "guess". The ethical implications of getting it wrong are huge. There are strong arguments both ways, as I expect you know. (Chinese room argument, for instance.)

The designers of the simulation or emulation fully intend their system to pass the Turing test; that is, the explicit purpose of the software is to fool the interviewer. That alone makes me doubt the reliability of my own judgment on the matter.

By the way, I have been talking about this stuff with non-AI-obsessed people the last few days. Several of them independently pointed out that some humans would fail the Turing test. Does that mean that it is OK to turn them off?

Ultimately the Turing test is about the interviewer as much as the interviewee; it's about what it takes to kick off the interviewer's empathy circuits.

The idea of asking anyone besides an AI-obsessed person whether they "have qualia" is amusing, by the way. The best Turing-Test-passing answer, most likely, is "huh?"


Existence or nonexistence of subjectivity in a given system is a well-defined and well-posed boolean-valued question. It is admittedly an undecidable question in cases other than one's own.

We already know from Gödel that well-posed questions can be undecidable. Decidability is not identical to well-posedness.
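A concrete instance may sharpen the point; this is the standard textbook example, not anything drawn from the thread above. Turing's halting problem is boolean-valued and well-posed for every program and input, yet provably undecidable:

```latex
H(P, x) \;=\;
\begin{cases}
1 & \text{if program } P \text{ halts on input } x,\\
0 & \text{otherwise.}
\end{cases}
```

Every instance H(P, x) has a definite answer, but Turing (1936) showed by diagonalization that no single algorithm computes H: a supposed decider D yields a program G that halts on its own description if and only if D says it doesn't. Well-posedness and decidability come apart.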

As for nature's joints, I am not alone in the view that the mind/matter dichotomy is the clearest and most crucial one of all. The problem is not that the line is drawn in the wrong place. The problem is that the relationship between the subjective and the objective is mysterious.

Where I'm a bit idiosyncratic, I think, is this:

I don't think the persistence of the mind/matter dichotomy is some peculiar and minor gap in science that will soon be filled in. I strongly suspect, rather, that it's a deeply fundamental fact of the universe. In other words, the phenomenon of mind, i.e., subjectivity itself, cannot be fully reduced to objectively observable processes, and any attempt to do so is necessarily logically flawed.

Mine is not a widely held opinion, but I'm not sure why not. If there were a purely objective explanation of how subjective experience arises from an inanimate universe, what could it possibly look like?

The idea that subjective experience can be dismissed as a hubgalopus strikes me as pretty weird. Maybe there's nobody home at the Tiiba residence but that's not how it is around here.

Subjective experience is defined by Thomas Nagel as the existence of an answer to the "what-is-it-like-to-be-a" question. That is, I know from direct observation that it is like something to be Michael Tobis. I suspect from extrapolation that it is like something to be Tiiba or to be Eliezer Yudkowsky, or, indeed, a cat. I suspect from observation that it is not like anything to be a chair or a football or a coffee cup, or, to use Dennett's favorite example, a cheeseburger.

I am uncertain about whether it is like something to be a Turing-Test-passing device. Oddly, I find it fairly easy to construct reasonably compelling arguments either way. I consider the question indeterminate, and I consider its indeterminacy to present serious ethical problems for anyone proposing to construct such a device.

That I cannot define this property of subjective experience in an objectively testable way is not just conceded but claimed. That is part of my point; objective methods are necessarily incomplete regarding a very salient aspect of the universe.

But it is certainly subjectively testable (by me in the case of the entity which I, in fact, am, and by you in the case of the entity which you yourself are, presuming in fact you are one). At least one subjectivity exists, by subjective observation. Yes, you in there. That perceived whiteness on the screen around these symbols. That feeling of wooliness in your socks. The taste of stale coffee in that mug by your side. All that sensation. Those "qualia". If they don't exist for you, I am talking to myself, but I promise you I've got a bunch of experiences happening right now.

I find no grand theory compelling which cannot account for this central and salient fact. Yet this fact, the existence of at least one subjective experience, cannot actually be accounted for objectively, because, again, there can be no objective test for subjectivity.

Accordingly I find no grand theory compelling at all.

I am comfortable with this undecidability of the nature of the universe (in fact I find it rather attractive and elegant), but many people are not. It really wouldn't matter much, and I would happily keep this model of the fundamental incompleteness of reason to myself, except that all you transhumanists are starting to play with fire as if you knew what is going on.

I believe you do not know and I believe you cannot know.

Strong AI is consequently, in my view, a very bad idea and so I devoutly hope you fail miserably; this despite the fact that I have a lot in common with you people and that I rather like most of you.

Tiiba challenges my suggestion that "it is quite likely that the universe may not be able to support vast orders of magnitude of intelligence".

Tiiba, I am not the one making the extravagant claim here. What would vast orders of magnitude of greater intelligence be like? What purpose would it serve?

Things are as predictable as they are and not more so. What purpose would the additional processing power serve? It seems to me that if one would reach the limits of computability of a given practical dilemma in a nanosecond rather than, say, in an hour, the only advantage would be 59.999999... minutes of leisure, not a better decision.

Psy-K, try as I might to come up with a way to do it, I can see no possibility of an objective test for subjective experience.

There's really no way to defeat dualism except through faith in materialism. If you don't take it as a given that science can be complete (despite Gödel's lesson that mathematics can't be), you can't actually reach the point where the case is proven. I've seen dozens of different approaches to handwaving this problem away, and they are all deeply irrational appeals to "emergent processes" that mean nothing beyond "I choose to beg the question."

The methods of science work because they are objective. The problem of consciousness is about the existence or non-existence of subjective experience. The gap cannot be bridged by objective methods. It's all guesswork, not because we haven't found the secret clue yet, but because even if we could find it, we could never prove it was the right clue.

As usual your terrifying position is defended well and with very clever observations. "But it's worth remembering that if there were any smaller modification of a chimpanzee that spontaneously gave rise to a technological civilization, we would be having this conversation at that lower level of intelligence instead." Quite so, and a perspicacious observation indeed.

I content myself with the belief that what you are proposing is likely impossible; it is quite likely that the universe may not be able to support vast orders of magnitude of intelligence. Failing this hope, I can fall back on a hope that because you are such a clever philosopher you won't have enough energy left over for the actual implementation of your ideas. Please reassure me that you sleep sometimes.

In any case, what you are describing as your "profession" is a very very (limit N->inf very^N) bad idea, because the problem of consciousness is inherently unsolvable. This being the case, you and your crowd risk replacing humanity with an unconscious entity; the risk of such a prospect is even more astonishingly huge than your ambitions. If our planet is the sole location of consciousness in the universe you run the risk of snuffing out existence entirely!

A very interesting analysis, though I hope your overstatement is for effect...

It is in fact an overstatement, though the tendencies you describe are surely strong. It is interesting that my field, climatology, is often accused of drumming up an existential threat to preserve our funding, which would be almost as bad a moral failure as ignoring one. Of course, the situation is somewhat different, as physicists advocate bizarre experiments while we are suggesting a bizarre experiment come to an end as soon as possible...