Why is it that philosophical zombies are unlikely to exist? In Eliezer's article Zombies! Zombies?, the argument seemed mostly to be directed against epiphenomenalism. In other words, if a philosophical zombie existed, there would likely be evidence that it was a philosophical zombie, such as it not talking about qualia. However, there are individuals who outright deny the existence of qualia, such as Daniel Dennett. Is it not possible that individuals like Dennett are themselves philosophical zombies?

Also, what are LessWrong's views on the idea of a continuous consciousness? CGPGrey brought up this issue in The Trouble with Transporters. Does a continuous self exist at all, or is our perception of being a continuous conscious entity existing throughout time just an illusion?

Vaniver (1y, 13 points): Nope, your "in other words" summary is incorrect. A philosophical zombie is not any entity without consciousness; it is an entity without consciousness that falsely perceives itself as having consciousness. An entity that perceives itself as not having consciousness (or not having qualia or whatever) is a different thing entirely.
ImmortalRationalist (1y, −2 points): This is mostly just arguing over semantics. Just replace "philosophical zombie" with whatever your preferred term is for a physical human who lacks any qualia.
> This is mostly just arguing over semantics.

If an argument is about semantics, this is not a good response. That is...

> Just replace "philosophical zombie" with whatever your preferred term is for

An important part of normal human conversations is error correction. Suppose I say "three, as an even number, ..."; the typical thing to do is to silently think "probably he meant odd instead of even; I will simply edit my memory of the sentence accordingly and continue to listen." But in technical contexts, this is often a mistake; if...

Open and Welcome Thread December 2018

by Ben Pace · 4th Dec 2018 · 1 min read · 22 comments


If it’s worth saying, but not worth its own post, then it goes here.

Also, if you are new to LessWrong and want to introduce yourself, this is the place to do it. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, and seeing if there are any meetups in your area.

In addition to trying out combining the welcome thread and the open thread, I thought I'd try highlighting some frontpage comments I found especially insightful over the last month, for further discussion:

  • Scott Garrabrant wrote a comment on how Embedded Agency and Agent Foundations research are like science, in relation to ML approaches to AI alignment, which are more like engineering. The comment helped me think about how I go about formalising and solving problems more generally.
  • Rohin Shah wrote a comment on basic definitions of the alignment problem, contrasting a motivation-competence split with a definition-optimization split. (It is then followed by a conversation on definitions between Paul and Wei that gets pretty deep into the weeds; I'd love to read a summary here from anyone who followed along.)