Another month has passed and here is a new rationality quotes thread. The usual rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.
  • No more than 5 quotes per person per monthly thread, please.

Hofstadter on the necessary strangeness of scientific explanations:

It is no accident, I would maintain, that quantum mechanics is so wildly counterintuitive. Part of the nature of explanation is that it must eventually hit some point where further probing only increases opacity rather than decreasing it. Consider the problem of understanding the nature of solids. You might wonder where solidity comes from. What if someone said to you, "The ultimate basis of this brick's solidity is that it is composed of a stupendous number of eensy weensy bricklike objects that themselves are rock-solid"? You might be interested to learn that bricks are composed of micro-bricks, but the initial question - "What accounts for solidity?" - has been thoroughly begged. What we ultimately want is for solidity to vanish, to dissolve, to disintegrate into some totally different kind of phenomenon with which we have no experience. Only then, when we have reached some completely novel, alien level will we feel that we have really made progress in explaining the top-level phenomenon.


I first saw this thought expressed in the stimulating book Patterns of Discovery by Norwood Russell Hanson.


Why Opium produces sleep: ... Because there is in it a dormitive power.

Molière, Le Malade Imaginaire (1673), Act III, sc. iii.

A lesson here is that if you ask "Why X?" then any answer of the form "Because [synonym for X]" is not actually progress toward understanding.

Synonyms are not good for explaining... because there is no explanatory power in them.

I found your post funny... because it amused me.

I upvoted your comment, because I wished for it to have more upvotes.
"greeness" -> "greenness"
Does this imply that there's no bottom level, just layer after layer of explanations with each layer being very different from the ones above? If there is a bottom level below which no further explanation is possible, can you tell whether you've reached it?
I want to point out that in this post, you were quoting sediment quoting Hofstadter who was referencing Hanson's quoting of Heisenberg. Pretty sure even Inception didn't go that deep.
The principle here is that an attribute x of an entity A is not explained by reference to a constituent entity B that has the same property. The strength of an arch is a property of arches, for example, not of the things from which arches are constituted. That doesn't imply that there must be a B in the first place, merely that whether there is or not, referring to B.x in order to explain A.x leaves x unexplained. (Of course, if there is no B, referring to B.x has other problems as well.)

I suspect the "top"/"bottom"/"level" analogy is misleading here. I would be surprised if there were a coherent "bottom level," actually. But if there is, I suppose the sign that I've reached it is that all the observable attributes it has are fully explainable without reference to other "levels," and all the observable attributes of other "levels" are fully (if impractically) explainable in terms of it. At any level of description, there are observable attributes of entities that are best explained by reference to other levels of description, but I'm not sure there's always a clear rank-ordering of those levels.

Why is there that knee-jerk rejection of any effort to "overthink" pop culture? Why would you ever be afraid that looking too hard at something will ruin it? If the government built a huge, mysterious device in the middle of your town and immediately surrounded it with a fence that said, "NOTHING TO SEE HERE!" I'm pretty damned sure you wouldn't rest until you knew what the hell that was -- the fact that they don't want you to know means it can't be good.

Well, when any idea in your brain defends itself with "Just relax! Don't look too close!" you should immediately be just as suspicious. It usually means something ugly is hiding there.

Ah, David Wong. A few movies in the post-9/11 era begin using terrorism and asymmetric warfare as a plot point? Proof that Hollywood no longer favors the underdog. Meanwhile he ignores... Daredevil, Elektra, V for Vendetta, X-Men, Kickass, Punisher, and Captain America, just to name the superhero movies I've seen which buck the trend he references, and within the movies he himself mentions, he intentionally glosses over 90% of the plots in order to make his point "stick." In some cases (James Bond, Sherlock Holmes) he treats the fact that the protagonists win as the proof that they weren't the underdog at all (something which would hold in reality but not in fiction, and a standard which he -doesn't- apply when it suits his purpose, a la his comments about the first three Die Hard movies being about an underdog whereas the most recent movie isn't).

Yeah. Not all that impressed with David Wong. His articles always come across as propaganda, carefully and deliberately choosing what evidence to showcase. And in this case he's deliberately treating the MST3K Mantra as some kind of propaganda-hiding tool? Really?

These movies don't get made because Hollywood billionaires...

Not that this directly relates to your quote, but I find David Wong to be consistently so deliberate about producing propaganda out of nothing that I cannot take him seriously as a champion of rationality.

It is worth pointing out that this page is about quotes, not people, or even articles. I thought the quote was worth upvoting for:

Well, when any idea in your brain defends itself with "Just relax! Don't look too close!" you should immediately be just as suspicious. It usually means something ugly is hiding there.

Why is there that knee-jerk rejection of any effort to "overthink" pop culture? Why would you ever be afraid that looking too hard at something will ruin it?

I think it's because enjoying fiction involves being in a trance, and analyzing the fiction breaks the trance. I suspect that analysis is also a trance, but it's a different sort of trance.

The term for that is suspension of disbelief.
Any chance you could expand on "analysis is also a trance"?
I don't know about anyone else, but if I'm analyzing, my internal monologue is the main thing in my consciousness.

Your what?

No, I'm not letting it go this time. I've heard people talking about internal monologues before, but I've never been quite sure what those are - I'm pretty sure I don't have one. Could you try to define the term?

Gosh. New item added to my list of "Not everyone does that."

...I have difficulty imagining what it would be to be like someone who isn't the little voice in their own head, though. Seriously, who's posting that comment?

I may be in a somewhat unique position to address this question, as one of the many many many weird transient neurological things that happened to me after my stroke was a period I can best describe as my internal monologue going away.

So I know what it's like to be the voice in my head, and what it's like not to be.

And it's still godawful difficult to describe the difference in words.

One way I can try is this: have you ever experienced the difference between "I know what I'm going to say, and here I am saying it" and "words are coming out of my mouth, and I'm kind of surprised by what I'm hearing myself say"?

If so, I think I can say that losing my "little voice" is similar to that difference.
If not, I suspect the explanation will be just as inaccessible as the phenomenon it purported to explain, but I can try again.

I haven't. I'm always in the state of "I know what I'm going to say, and here I am saying it" (sometimes modified very soon afterwards by "on second thoughts, that was a very poor way to phrase it and I've probably been misunderstood").

...what? Wow!

I'm dying to know whether we're stumbling on a difference in the way we think or the way we describe what we think, here. To me, the first state sounds like rehearsing what I'm going to say in my head before I say it, which I only do when I'm racking my brains on e.g. how to put something tactfully, whereas the latter sounds like what I do in conversation all the time, which is simply to let the words fall out of my mouth and find out what I've said.

My internal monologue is a lot faster than the words can get out of my mouth (when I was younger, I tried to speak as fast as I think, with the result that no-one could understand me; of course, to speak that fast, I needed to drop significant parts of most of the words, which didn't help). I don't always plan out every sentence in advance; but thinking about it, I think I do plan out every phrase in advance, relying on the speed of my internal monologue to produce the next phrase before or at worst very shortly after I complete the current phrase. (It often helps to include a brief pause at the end of a phrase in any case). It's very much a just-in-time thing. If I'm making a special effort to be tactful, then I'll produce and consider a full sentence inside my head before saying it out loud.

Incidentally, I'm also a member of Toastmasters, and one thing that Toastmasters has is impromptu speaking, when a person is asked to give a one-to-two minute speech and is told the topic just before stepping up to give the speech. The topic could be anything (I've had "common sense", "stick", and "nail", among others). Most people seem to be scared of this, apparently seeing it as an opportunity to stand up and be embarrassed; I find that I enjoy it. I often start an impromptu speech with very little idea of how it's going to end; I usually make some sort of pun about the topic (I changed 'common sense' into a very snooty, upper-crust type of person complaining about commoners with money - 'common cents'), and often talk more-or-less total nonsense.

But, through the whole speech, I always know what I am saying. I am not surprised by my own words (no matter how surprised other people may be by the idea of 'common cents'). I don't think I know how to be surprised at what I am saying. (Of course, my words are not always well-considered, in hindsight; and sometimes I will be surprised at someone else's interpretation of my words, and be forced to explain that that's not what I
I'm the same - except occasionally, when I'm 'flowing' in conversation, I'll find that my inner monologue fails to produce what I think it can, and my mouth just halts without input from it.
I find that happens to me sometimes when I talk in Afrikaans; my Afrikaans vocabulary is poor enough that I often get halfway through a sentence and find that I can't remember the word for what I want to say.
It occasionally happens to me in any language. I usually manage to rephrase the sentence on the fly or to replace the word with something generic like "thing" and let the listener figure it out from the context, without much trouble.
Something that occurred to me on this topic; reading has a lot to do with the inner monologue. Writing is, in my view, a code of symbols on a piece of paper (or a screen) which tell the reader what their inner monologue should say. Reading, therefore, is the voluntary (and temporary) replacement of the reader's internal monologue with an internal monologue supplied and encoded by the author. At least, that's what happens when I read. Do other people have the same experience?
Inner monologue test: I. like. how. when. you. read. this. the. little. voice. in. your. head. takes. pauses.. Does anyone find that the periods don't make the sentence sound different?
Let's make it a poll: When you read NancyLebovitz's sentence (quoted above) do the periods make it sound different? [pollid:470] (If anyone picks any option except 'Yes' or 'No', could you please elaborate?)
Hypothesis: Since I am more used to reading sentences without a full stop after each word than sentences like that, of course I will read the former more quickly -- because it takes less effort. Experiment to test this hypothesis: Ilikehowwhenyoureadthisthelittlevoiceinyourheadspeaksveryquickly. Result of the experiment: at least for me, my hypothesis is wrong. YMMV.
As far as I can tell, I started reading the test phrase more slowly than normal, then "shifted gears" and sped up, perhaps to faster than normal.
Same here, for both test sentences.
The little voice in my head speaks quickly for that experimental phrase, yes. It should be taking slightly longer to decode - since the information on word borders is missing - which suggests that the voice in my head is doing special effects. I think that that is becausewordslikethis can be used in fiction as the voice of someone who is speaking quickly; so if the voice in my head speeds up when reading it, then that makes the story more immersive.
Hypothesisconfirmedforme.Perhapstoomanyhourslisteningtoaudiobooksatfivetimesspeed. Normalspeedheadvoicejustseemssoslow.
That sounds in my head like the voice in Italian TV ads for medicines reading the disclaimers required (I guess) by law (ultra-fast words, but pauses between sentences of nearly normal length).
I can parse it both ways. Actually, on further experimentation, it appears to be tied directly to my eye-scanning speed! If I force my eyes to scan over the line quickly from left-to-right, I read it without pause; if I read the way I normally do (by staring at the 'When' to take a "snapshot" of I, like, how, when, you, and read all at once; then staring at the space between "little" and "voice" to take a snapshot of this, the, little, voice, in, and your all at once, then staring at the "pauses" to take a snapshot of head, takes, and pauses), then the pauses get inserted - but not as normal sentence stops; more like... a clipped robot.
Huh. You read in a different way to what I do; I normally scan the line left-to-right. And I insert the pauses when I do so. It sounds like a clipped robot to me too.
Yeah, something clicked while I was reading an old encyclopedia sometime around age 7; I remember it quite vividly. My brain started being able to process chunks of text at a time instead of single words, so I could sort of focus on the middle of a short sentence or phrase and read the whole thing at once. I went from reading at about one-quarter conversation speed, to about ten times conversation speed, over the course of a few minutes. I still don't quite understand what the process was that enabled the change; I just sort of experienced it happening. One trade-off is that I don't have full conscious recall of each word when I read things that quickly - but I do tend to be able to pull up a reasonable paraphrasing of the information later if I need to.
I can see both pros and cons to this talent. The pro is obvious; faster reading. The con is that it may cause trouble parsing subtly-worded legal contracts; the sort where one misplaced word may potentially end up with both parties arguing the matter in court. Or anything else where exact wording is important, like preparing a wish for a genie. Of course, since it seems that you can choose when to use this, um, snapshot reading and when not to, you can gain the full benefit of the pros most of the time while carefully removing the cons in any situation where they become important.
I call that "skimming", but maybe that's something else?
Assuming you're literally talking about subvocalization, it depends on what I'm reading (I do it more with poetry than with academic papers), on how quickly I'm reading (I don't do that as much when skimming), on whether I know what the author's voice sounds like (in which case I subvocalize in their voice -- which slows me down a great deal if I'm reading stuff by someone who speaks slowly and with a strong foreign accent e.g. Benedict XVI), and possibly on something else I'm overlooking at the moment.
I do not notice that I am subvocalising when I read, even when I am looking for it (I tested this on the wiki page that you linked to). I do notice, however, that it mentions that subvocalising is often not detectable by the person doing the subvocalising. More specifically, if I place my hand lightly on my throat while reading, I feel no movement of the muscles; and I am able to continue reading while swallowing. So, no, I don't think I'm talking about subvocalising. I'm talking about an imaginary voice in my head that narrates my thought processes.

Hmmm... my inner monologue does not tend to speak in the voice of someone whose voice I know. I can get it to speak in other people's voices, or in what I imagine other people's voices to sound like, if I try to, but it defaults to a sort of neutral gear which, now that I think about it, sounds like a voice but not quite like my (external) voice. Similar, but not the same. (And, of course, the way that I hear my voice when I speak differs from how I hear it when recorded on tape - my inner monologue sounds more like the way I hear my voice, but still somewhat different)

...this is strange. I don't know who my inner monologue sounds like, if anyone.
Mine usually sounds more or less like I'm whispering.
My inner monologue definitely doesn't sound like whispering; it's a voice, speaking normally. I think I can best describe it by saying that it sounds more like I imagine myself sounding than like I actually sound to myself; but I suspect that's recursive, i.e. I imagine myself sounding like that because that's what my inner monologue sounds like.
Does your inner voice sound different depending on your mood or emotional state?
Yes. If my mood or emotional state is sufficiently severe, then my inner voice will sound different; both in choice of phrasing and in tone of voice. It's not an audible voice, as such; I think the best way that I can describe it is to say that it's very much like a memory of a voice, except that it's generated on-the-fly instead of being, well, remembered. As such, it has most of the properties of an audible voice (except actual audibility) - including such markers as 'tone of voice'. This tone changes with my emotional state in reasonable ways; that is, if I am sufficiently angry, then my inner voice may take on an angry, menacing tone. If my emotional state is not sufficiently severe, then I am unable to notice any change in my inner-voice tone. I also note that my spoken voice shows a noticeable change of tone at significantly lower emotional severity than my inner voice does.
I was about to say that it's the same for me, but then I remember that at least for me actual memories of voices can be very vivid (especially in hypnagogic state or when I'm reading stuff written by that person), whereas my inner voice seldom is. (And memories of voices can also be generated on-the-fly -- I can pick a sentence and imagine a bunch of people I know each saying it, even if I can't remember hearing any of them actually ever saying that sentence.)
Huh. Either my memories of voices are less vivid than yours, or my inner monologue is more vivid. Quite possibly both. Of course, when I remember someone saying something, it can include information aside from the voice (e.g. where it happened, the surroundings at the time) which is never included in my inner monologue. I consider these details to be separate from the voice-memory; the voice-memory is merely a part of the whole "what-he-said" memory.
BTW, I think I have one kind of memory for people's timbre, rate of speech, volume, accent, etc., and one for sequences of phonemes, and when recalling what a person sounded like when saying a given sentence I combine the two on the fly.
My experience is that I generally have some kind of fuzzy idea of what I'm going to say before I say it. When I actually speak, sometimes it comes out as a coherent and streamlined sentence whose contents I figure out as I speak it. At other times - particularly if I'm feeling nervous, or trying to communicate a complicated concept that I haven't expressed in speech before - my fuzzy idea seems to disintegrate at the moment I start talking, and even if I had carefully rehearsed a line many times in my mind, I forget most of it. Out comes either what feels to me like an incoherent jumble, or a lot of "umm, no, wait". Writing feels a lot easier, possibly because I have the stuff-that-I've-already-written right in front of me and I only need to keep the stuff that I'm about to say in memory, instead of also needing to constantly remind myself about what I've said so far. ETA: Here's an earlier explanation of what writing sometimes feels like to me.
The parts of your brain that generate speech and the parts that generate your internal sense-of-self are less integrated than CCC's. An interesting experiment might be to stop ascribing ownership to your words when you find yourself surprised by them - i.e., instead of framing the phenomenon as "I said that", frame it as "my brain generated those words". Learn to recognize that the parts of your brain that handle text generation and output are no more "you" than the parts of your brain that handle motor reflex control. EDIT: Is there a problem with this post?

Learn to recognize that the parts of your brain that handle text generation and output are no more "you" than the parts of your brain that handle motor reflex control.

No! The parts of my brain that handle text generation are the only parts that... *slap*... Ow. Nevermind. It seems we have reached an 'understanding'.

I mean, I do realize you're being funny, but pretty much exactly this.

I don't recommend aphasia as a way of shock-treating this presumption, but I will admit it's effective. At some point I had the epiphany that my language-generating systems were offline but I was still there; I was still thinking the way I always did, I just wasn't using language to do it.

Which sounds almost reasonable expressed that way, but it was just about as creepy as the experience of moving my arm around normally while the flesh and bone of my arm lay immobile on the bed.

A good way I've found to reach this state is to start to describe a concept in your internal monologue but "cancel" the monologue right at the start - the concept will probably have been already synthesized and will just be hanging around in your mind, undescribed and unspoken but still recognizable. [edit] Afaict the key step is noticing that you've started a monologue, and sort of interrupting yourself mentally.
So, FWIW, after about 20 minutes spent trying to do this I wasn't in a recognizably different state than I was when I started. I can kind of see what you're getting at, though.
Right, I mean as a way of realizing that there's something noticeable going on in your head that precedes the internal monologue. I wrote that comment wrong. Sorry for wasting your time.
Ah! I get you now. (nods) Yeah, that makes sense.
That's... hm. I'm not sure I know what you mean. I'll experiment with behaving as if I did when I'm not in an airport waiting lounge and see what happens.
I've had this happen to me semi-accidentally, the resulting state is extremely unpleasant.
A smash equilibrium.
It's a bit rude to try to change others' definition of themselves unasked.
1. Where does that intersect with "that which can be destroyed by the truth, should be"?
2. "I'm dying to know whether we're stumbling on a difference in the way we think or the way we describe what we think, here." wasn't asking?
1. The problem is that "what is part of you" at the interconnectedness-level of the brain is largely a matter of preference, imo; that is, treating it as truth implies taking a more authoritative position than is reasonable. Same goes for 2) - there's a difference between telling somebody what you think and outright stating that their subjective self-image is factually incorrect.
I appear to be confused. Are you implying that subjective self-image is something that we should respect rather than analyze?
I think there's a difference between analysis and authoritative-sounding statements like "X is not actually a part of you, you are wrong about this", especially when it comes to personal attributes like selfness, especially in a thread demonstrating the folly of the typical-mind assumption.
Interesting. It was not my intent to sound any more authoritative than typical. Are there particular signals that indicate abnormally authoritarian-sounding statements that I should watch out for? And are there protocols that I should be aware of here that determine who is allowed to sound more or less authoritarian than whom, and under what circumstances?
I should have mentioned this earlier, but I did not downvote you so this is somewhat conjectured. In my opinion it's not a question of who but of topic - specifically, and this holds in a more general sense, you might want to be cautious when correcting people about beliefs that are part of their self-image. Couch it in terms like "I don't think", "I believe", "in my opinion", "personally speaking". That'll make it sound less like you think you know their minds better than they do.
FWIW, I understood you in the first place to be saying that this was a choice, and it was good to be aware of it as a choice, rather than making authoritarian statements about what choice to make.
I'd certainly call them much more significant to my identity than, e.g., my deltoid muscle, or some motor function parts of my brain.
It may be useful to recognize that this is a choice, rather than an innate principle of identity. The parts that speak are just modules, just like the parts that handle motor control. They can (and often do) run autonomously, and then the module that handles generating a coherent narrative stitches together an explanation of why you "decided" to cause whatever they happened to generate.
This sounds like a theory of identity as epiphenomenal homunculus. A module whose job is to sit there weaving a narrative, but which has no effect on anything outside itself (except to make the speech module utter its narrative from time to time). "Mr Volition", as Greg Egan calls it in one of his stories. Is that your view?
More or less, yes. It does have some effect on things outside itself, of course, in that its 'narrative' tends to influence our emotional investment in situations, which in turn influences our reactions.
It seems to me that the Mr. Volition theory suffers from the same logical flaw as p-zombies. How would a non-conscious entity, a p-zombie, come to talk about consciousness? And how does an epiphenomenon come to think it's in charge, how does it even arrive at the very idea of "being in charge", if it was never in charge of anything? An illusion has to be an illusion of something real. Fake gold can exist only because there is such a thing as real gold. There is no such thing as fake mithril, because there is no such thing as real mithril.
By that analogy, then, fake gods can exist only because there is such a thing as real gods; fake ghosts can only exist because there is such a thing as real ghosts; fake magic can only exist because there is such a thing as real magic. It's perfectly possible to be ontologically mistaken about the nature of one's world.
Indeed. There is real agency, so people have imagined really big agents that created and rule the world. People's consciousness persists, even after the interruptions of sleep, and they imagine it persists even after death. People's actions appear to happen purely by their intention, and they imagine doing arbitrary things purely by intention. These are the real things that the fakes, pretences, or errors are based on. But how do the p-zombie and the homunculus even get to the point of having their mistaken ontology?
The p-zombie doesn't, because the p-zombie is not a logically consistent concept. Imagine if there was a word that meant "four-sided triangle" - that's the level of absurdity that the 'p-zombie' idea represents. On the other hand, the epiphenomenal consciousness (for which I'll accept the appellation 'homunculus' until a more consistent and accurate one occurs to me) is simply mistaken in that it is drawing too large a boundary in some respects, and too small a boundary in others. It's drawing a line around certain phenomena and ascribing a causal relationship between those and its own so-called 'agency', while excluding others. The algorithm that draws those lines doesn't have a particularly strong map-territory correlation; it just happens to be one of those evo-psych things that developed and self-reinforced because it worked in the ancestral environment. Note that I never claimed that "agency" and "volition" are nonexistent on the whole; merely that the vast majority of what people internally consider "agency" and "volition", aren't.

EDIT: And I see that you've added some to the comment I'm replying to, here. In particular, this stood out: I don't believe that "my" consciousness persists after sleep. I believe that a new consciousness generates itself upon waking, and pieces itself together using the memories it has access to as a consequence of being generated by "my" brain; but I don't think that the creature that will wake up tomorrow is "me" in the same way that I am. I continue to use words like "me" and "I" for two reasons:

1. Social convenience - it's damn hard to get along with other hominids without at least pretending to share their cultural assumptions.
2. It is, admittedly, an incredibly persistent illusion.

However, it is a logically incoherent illusion, and I have upon occasion pierced it and seen others pierce it, so I'm not entirely inclined to give it ontological reality with p=1.0 anymore.
Do you believe that the creature you are now (as you read this parenthetical expression) is "you" in the same way as the creature you are now (as you read this parenthetical expression)? If so, on what basis?
Yes(ish), on the basis that the change between me(expr1) and me(expr2) is small enough that assigning them a single consistent identity is more convenient than acknowledging the differences. But if I'm operating in a more rigorous context, then no; under most circumstances that appear to require epistemological rigor, it seems better to taboo concepts like "I" and "is" altogether.
(nods) Fair enough. I share something like this attitude, but in normal non-rigorous contexts I treat me-before-sleep and me-after-sleep as equally me in much the same way as you do me(expr1) and me(expr2). More generally, my non-rigorous standard for "me" is such that all of my remembered states when I wasn't sleeping, delirious, or younger than 16 or so unambiguously qualify for "me"dom, despite varying rather broadly amongst themselves. This is mostly because the maximum variation along salient parameters among that set of states seems significantly smaller than the minimum variations between that set and the various other sets of states I observe others demonstrating. (If I lived in a community seeded by copies of myself-as-of-five-minutes ago who could transfer memories among one another, I can imagine my notion of "I" changing radically.)
Nice! I like that reasoning. I personally experience a somewhat less coherent sense of self, and what sense of self I do experience seems particularly maladaptive to my environment, so we definitely seem to have different epistemological and pragmatic goals - but I think we're applying very similar reasoning to arrive at our premises.
So in the following sentence... "I am a construction worker" Can you taboo 'I' and 'am' for me?
This body works construction. Jobs are a particularly egregious case where tabooing "is" seems like a good idea - do you find the idea that people "are" their jobs a particularly useful encapsulation of the human experience? Do you, personally, find yourself fully encapsulated by the ritualized economic actions you perform?
But if 'I' differ day to day, then doesn't this body differ day to day too? I am fully and happily encapsulated by my job, though I think I may have the only job where this is really possible.
Certainly. How far do you want to go? Maps are not territories, but some maps provide useful representations of territories for certain contexts and purposes. The danger represented by "I" and "is" comes from their tendency to blow away the map-territory relation, and to convince the reader that an identity exists between a particular concept and a particular phenomenon.
Is the camel's nose the same thing as his tail? Are the nose and the tail parts of the same thing? What needs tabooing is "same" and "thing".
I have also found that process useful (although, like 'I', there are contexts where it is very cumbersome to get around using them).
Suppose I am standing next to a wall so high that I am left with the subjective impression that it just goes on forever and ever, with no upper bound. Or next to a chasm so deep that I am left with the subjective impression that it's bottomless. Would you say these subjective impressions are impossible? If possible, would you say they aren't illusory? My own answer would be that such subjective impressions are both illusory and possible, but that this is not evidence of the existence of such things as real bottomless pits and infinitely tall walls. Rather, they are indications that my imagination is capable of creating synthetic/composite data structures.
Mesh mail "mithril" vest, $335 []. Setting aside the question of whether this [] is fake iron man armor, or a real costume of the fake iron man, or a fake costume designed after the fake iron man portrayed by special effects artists in the movies, I think an illusion can be anything that triggers a category recognition [] by matching some of the features strongly enough to trigger the recognition, while failing to match on a significant amount of the other features that are harder to detect at first.
That's not fake mithril, it's pretend mithril []. To have the recognition, there must already have been a category to recognise.
A tape recorder is a non-conscious entity. I can get a tape recorder to talk about consciousness quite easily. Or are you asking how it would decide to talk about consciousness? It's a bit ambiguous.
I think it's not an epiphenomenon, it's just wired in more circuitously than people believe. It has effects; it just doesn't have some effects that we tend to ascribe to it, like decision-making and high-level thought.
How would a non-conscious entity, a p-zombie, come to talk about consciousness? By functional equivalence. A zombie Chalmers is bound to utter sentences asserting its possession of qualia, a zombie Dennett will utter sentences denying the same. [] The only get-out is to claim that it is not really talking at all.
The epiphenomenal homunculus theory claims that there's nothing but p-zombies, so there are no conscious beings for them to be functionally equivalent to. After all, as the alien that has just materialised on my monitor has pointed out to me, no humans have zardlequeep (approximate transcription), and they don't go around insisting that they do. They don't even have the concept to talk about.
The theory that there is nothing but zombies runs into the difficulty of explaining why many of them would believe they are non-zombies. The standard p-zombie argument, that you can have qualia-less functional duplicates of non-zombies, does not have that problem.
The theory that there is nothing but zombies runs into the much bigger difficulty of explaining to myself why I'm a zombie. When I poke myself with a needle, I sure as hell have the quale of pain. And don't tell me it's an illusion - any illusion is a quale by itself.
Don't tell me, tell Dennett.
The standard p-zombie argument still has a problem explaining why p-zombies claim to be conscious. It leaves no role for consciousness in explaining why conscious humans talk of being conscious. It's a short road (for a philosopher) to then argue that consciousness plays no role, and we're back with consciousness as either an epiphenomenon or non-existent, and the problem of why -- especially when consciousness is conceded to exist, but cause nothing -- the non-conscious system claims to be conscious.
Even worse, the question of how the word "conscious" can possibly even refer to this thing that is claimed to be epiphenomenal, since the word can't have been invented in response to the existence or observations of consciousness (since there aren't any observations). And in fact there is nothing to allow a human to distinguish between this thing, and every other thing that has never been observed, so in a way the claim that a person is "conscious" is perfectly empty. ETA: Well, of course one can argue that it is defined intensionally, like "a unicorn is a horse with a single horn extending from its head, and [various magical properties]" which does define a meaningful predicate even if a unicorn has never been seen. But in that case any human's claim to have a consciousness is perfectly evidence-free, since there are no observations of it with which to verify that it (to the extent that you can even refer to a particular unobservable thing) has the relevant properties.
Yes. That's the standard epiphenomenalism objection. Often put a bit too briefly.
I scrawl on a rock "I am conscious." Is the rock talking about consciousness?
No, you are.
I run a program that randomly outputs strings. One day it outputs the string "I am conscious." Is the program talking about consciousness? Am I?
No, see nsheppard's comment [].
Maybe I'm being unnecessarily cryptic. My point is that when you say that something is "talking about consciousness," you're assigning meaning to what is ultimately a particular sequence of vibrations of the air (or a particular pattern of pigment on a rock, or a particular sequence of ASCII characters on a screen). I don't need a soul to "talk about souls," and I don't need to be conscious to "talk about consciousness": it just needs to happen to be the case that my mouth emits a particular sequence of vibrations in the air that you're inclined to interpret in a particular way (but that interpretation is in your map, not the territory). In other words, I'm trying to dissolve the question you're asking. Am I making sense?
Not yet. I really think you need to read the GLUT [] post that nsheppard linked to. You do need to have those concepts, though, and concepts cannot arise without there being something that gave rise to them. That something may not have all the properties one ascribes to it (e.g. magical powers), but discovering that that one was mistaken about some aspects does not allow one to conclude that there is no such thing. One still has to discover what the right account of it is. If consciousness is an illusion, what experiences the illusion? This falls foul of the GAZP v. GLUT thing. It cannot "just happen to be the case". When you pull out for attention the case where a random process generates something that appears to be about consciousness, out of all the other random strings, you've used your own concept of consciousness to do that.
I've read GLUT. Have you read The Zombie Preacher of Somerset []?
I think so; at least, I have now. (I don't know why someone would downvote your comment, it wasn't me.) So, something went wrong in his head, to the point that asking "was he, or was he not, conscious" is too abstract a question to ask. Nowadays, we'd want to do science [] to someone like that, to try to find out what was physically going on.
Sure, I'm happy with that interpretation.
That is not obvious. You do need to be a language-user to use language, you do need to know English to communicate in English, and so on. If consciousness involves things like self-reflection and volition, you do need to be conscious to intentionally use language to express your reflections on your own consciousness.
In the same way that a philosophy paper does... yes. Of course, the rock is just a medium for your attempt at communication.
I write a computer program that outputs every possible sequence of 16 characters to a different monitor. Is the monitor which outputs 'I am conscious' talking about consciousness in the same way the rock is? Whose attempt at communication is it a medium for?
Your decision to point out the particular monitor displaying this message as an example of something imparts information about your mental state in exactly the same way that your decision to pick a particular sequence of 16 characters out of platonia to engrave on a rock does. See also: on GLUTs [].
The reader's. Pareidolia is a signal-processing system's attempt to find a signal. On a long enough timeline, all random noise generators become hidden word puzzles.
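The monkeys-at-typewriters point in this exchange can be made quantitative. Here is a toy sketch (assuming, purely for illustration, a uniform draw over the 95 printable ASCII characters) of just how unlikely a blind random process is to emit the exact string "I am conscious" - which is why, when a reader singles out the one monitor displaying it, essentially all the information comes from the reader's pattern-matching rather than from the generator:

```python
import string

# Toy estimate: probability that a uniformly random string of the
# right length happens to spell out "I am conscious".
# (Alphabet of 95 printable ASCII characters assumed for illustration.)
target = "I am conscious"
alphabet = string.printable[:95]  # digits, letters, punctuation, space

p = (1 / len(alphabet)) ** len(target)
print(f"P(random {len(target)}-char string == {target!r}) = {p:.3e}")
# On the order of 1e-28: the "meaning" is supplied by the reader,
# not by the generator.
```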
Why would we have these modules that seem quite complex, and likely to negatively affect fitness (thinking's expensive), if they don't do anything? What are the odds of this becoming prevalent without a favourable selection pressure?
High, if they happen to be foundational. Sometimes you get spandrels, and sometimes you get systems built on foundations that are no longer what we would call "adaptive", but that can't be removed without crashing systems that are adaptive.
Evo-psych just-so stories are cheap. Here's one: it turns out that ascribing consistent identity to nominal entities is a side-effect of one of the most easily constructed implementations of "predict the behavior of my environment." Predicting the behavior of my environment is enormously useful, so the first mutant to construct this implementation had a huge advantage. Pretty soon everyone was doing it, and competing for who could do it best, and we had foreclosed the evolutionary paths that allowed environmental prediction without identity-ascribing. So the selection pressure for environmental prediction also produced (as an incidental side-effect) selection pressure for identity-ascribing, despite the identity-ascribing itself being basically useless, and here we are. I have no idea if that story is true or not; I'm not sure what I'd expect to see differentially were it true or false. My point is more that I'm skeptical of "why would our brains do this if it weren't a useful thing to do?" as a reason for believing that everything my brain does is useful.
(nods) Yeah, OK. Take 2. It's also broadly similar to the difference between explicit and implicit knowledge. Have you ever practiced a skill enough that it goes from being something where you hold the "outline" of the skill in explicit memory as you perform it, to being something where you simply perform it without that "outline"? For example, driving to an unfamiliar location and thinking "ok, turn right here, turn left here" vs. just turning in the correct direction at each intersection, or something similar to that?
Yes, I have. Driving is such a skill; when I was first learning to drive, I had to think about driving ("...need to change gear, which was the clutch again? Ordered CBA, so on the left..."). Now that I am more practiced, I can just think about changing gear and change gear, without having to examine my actions in so much detail. Which allows my internal monologue to wander off in other directions. On a couple of occasions, as a result of this thread, I've tried just quietening down my internal monologue - just saying nothing for a bit - and observing my own thought processes. I find that the result is that I pay a lot more attention to audio cues - if I hear a bird in the distance, I picture a bird. There are associations going on inside my head that I'd never paid much attention to before.
Is this still true under significant influence of alcohol?
I wouldn't know, I don't drink alcohol.
Well, if you ever did want to experience what TheOtherDave describes, that might be a good way to induce it.
I've found I can quiet my internal monologue if I try. (It's tricky, though; the monologue starts up again at the slightest provocation - I try to observe my own thought processes without the monologue, and as soon as something odd happens, the internal monologue says "That's odd... ooops.") I'm not sure if I can talk without the monologue automatically starting up again, but I'll try that first.
I wanted to add another data point, but I'm not sure the one I got can even be called that: I have no consistent memory on this subject. I am notoriously horrible at luminosity and introspection. When I do try to ask my brain, I receive a model/metaphor based on what I already know of neuroscience, which may or may not contain data I couldn't access otherwise, and which is presented as a machine I can manipulate in the hopes of manipulating the states of distant brains. The machine is clearly based on whatever concepts happen to be primed, and the results would probably be completely different in every way if I tried this an hour later. Note that the usage of the word "I" here is inconsistent and ill-defined. This might be related to the fact that this brain is self-diagnosed with possible ego-death (in the good way). Edit: it is also noticed that, as seems to be the case with most attempts at introspection, the act of observation strongly and adversely influences the functioning of the relevant circuitry, in this case heavily altering my speech patterns.
Huh. The way you describe attempting introspection is exactly the way our brain behaves when we try to access any personal memories outside of working memory. This doesn't seem to be as effective as whatever the typical way is, as our personal memory is notoriously atrocious compared with others'. I don't seem to have any sort of ego-death. Vigil might have something similar, though.
Hmm, this seems related to another datapoint: reportedly, when I'm asked about my current mood while distracted, I answer "I can't remember". A more tenuously related datapoint is that in fiction, I try to design BMIs around emulating having memorized GLUTs. And one more thing, come to think of it: I do have abnormal memory function in a bunch of various ways. Basically: maybe a much larger chunk of my cognition passes through memory machinery for some reason?
What are GLUTs? I'm guessing you're not talking about Glucose Transporters. This seems like a plausible hypothesis. Alternatively, perhaps your working memory is less differentiated from your long-term memory. Hm. I have the same reaction if I'm asked what I'm thinking about, but I don't think it's because my thoughts are running through my long-term memory, so much as my train of thought usually gets flushed out of working memory when other people are talking.
GLUT = Giant Look-Up Table. Basically, implementing multiplication by memorizing the multiplication tables up to 2 147 483 647. Hmm, that's an interesting theory. They are not necessarily mutually exclusive. And no, I'm not talking about trying to remember what happened a few seconds ago. I mean direct sensory experiences; as in, someone holds up 3 fingers in the darkness and asks "how many fingers am I holding up right now" and I answer "I can't remember" instead of "I can't see".
Giant Look-Up Table []
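To make the GLUT idea concrete, here is a minimal sketch (with a deliberately tiny table; the thought experiment's table up to 2 147 483 647 would be astronomically large, which is rather the point). Every answer is memorized up front, so no arithmetic happens at lookup time:

```python
# Toy Giant Look-Up Table (GLUT): "multiply" by recalling precomputed
# answers rather than computing anything when asked.
N = 20  # tiny, so the table is actually buildable

# Precompute every product once, up front.
glut = {(a, b): a * b for a in range(N) for b in range(N)}

def multiply_by_lookup(a, b):
    """Pure recall: no arithmetic is performed at call time."""
    return glut[(a, b)]

print(multiply_by_lookup(6, 7))  # prints 42
```

A full GLUT is behaviourally identical to the computed version while doing no "thinking" at lookup time, which is what makes it a useful intuition pump in the GAZP v. GLUT discussion above.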
What are BMIs? I'm guessing you're not talking about body mass indexes. :-)
Brain-machine interface.
BTW, my internal monologue usually sounds quite different from what I actually say in most casual situations: for example, it uses less dialectal/non-standard language and more technical terms. (IOW, it resembles the way I write more than the way I speak. So, "I know what I'm going to say, and here I am saying it" is my default state when writing, and "words are coming out of my mouth, and I'm kind of surprised by what I'm hearing myself say" is the state I'm most often in when speaking.) Anyone else find the same?
That's pretty close to how I operate, except the words are more like the skeletons of the thoughts than the thoughts themselves, stripped of all the internal connotation and imagery that provided 99% of the internal meaning.
Well, which one do you prefer?
Oh, that's hard. The latter was awful, but of course most of that was due to all the other crap that was going on at the time. If I take my best shot at adjusting for that... well, I am most comfortable being the voice in my head. But not-being the voice in my head has an uncomfortable gloriousness associated with it. I doubt the latter is sustainable, though.

When you're playing a sport... wait, maybe you don't... okay, when you're playing an instrum—hm. Surely there is a kinesthetic skill you occasionally perform, during which your locus of identity is not in your articulatory loop? (If not, fixing that might be high value?) And you can imagine being in states similar to that much of the time? I would imagine intense computer programming sessions would be more kinesthetic than verbal. Comment linked to hints at what my default thinking process is like.

When I'm playing music or martial arts, and I'm doing it well, I'm usually in a state of flow - not exactly self-aware in the way I usually think of it. When I'm working inside a computer or motorcycle, I think I'm less self-aware, and what I'm aware of is my manipulating actuators, and the objects that I need to manipulate, and what I need to do to them. When I'm sitting in my armchair, thinking "who am I?", this is almost entirely symbolic, and I feel more self-aware than at the other times. So, I think having my locus of identity in my articulatory loop is correlated with having a strong sense of identity. I'm not sure whether my sense of identity would be weaker there, and stronger in a state of kinesthetic flow, if I spent more time sparring than sitting.
I wouldn't want to identify with the voice in my head. It can only think one thought at a time; it's slow.
How many things can you think of at once? I'm curious now.
I'm not sure how to answer that question. But when I think verbally I often lose track of the bigger picture of what I'm doing and get bogged down on details or tangents.
I play other people's voices through my head as I imagine what they would say (or are saying, when I interpret text,) but I don't have my own voice in my head as an internal monologue, and I think of "myself" as the conductor, which directs all the voices.
What happens when you are not thinking about what anyone else is saying or would say?
I think in terms of ideas and impulses, not voices. I can describe an impulse as if it had been expressed in words, but when it's going through my head, it's not. I'd be kind of surprised if people who have internal monologues need an inner voice telling them "I'm so angry, I feel like throwing something!" in order to recognize that they feel angry and have an urge to throw something. I just recognize urges directly, including ones which are more subtle and don't need to be expressed externally, without needing to mediate them through language. It definitely hasn't been my experience that not thinking in terms of a distinct inner "voice" makes it hard for me to pin down my thoughts; I have a much easier time following my own thought processes than most people I know.
In our case at least, you are correct that we don't need to vocalize impulses. Emotions and urges seem to run on a different, concurrent modality. Do ideas and impulses both use the same modality for you?
Maybe not quite the same, but the difference feels smaller than that between impulse and language. To me, words are what I need to communicate with other people, not something I need to represent complex ideas within my own head. I can represent a voice in my head if I choose to, but I don't find much use for it.
Not quite the same thing, but I've discovered that "I feel ragged around the edges" is my internal code for "I need B12". One part of therapy for some people is giving them a vocabulary for their emotions.
I can recognise that I'm angry without the voice. When I'm angry, the inner voice will often be saying unflattering things about the object of my anger; something along the lines of "Aaaaaargh, this is so frustrating! I wish it would just work like it's supposed to!" Wordless internal angry growls may also happen.
It's something like watching a movie. You can see hands typing and words appearing on the screen, but you aren't precisely thinking them. You can feel lips moving and hear words forming in the air, but you aren't precisely thinking them. They're just things your body is doing, like walking. When you walk, you don't consciously think of each muscle to move, do you? Most of the time you don't even think about putting one foot in front of the other; you just think about where you're going (if that) and your motor control does the rest. For some people, verbal articulation works the same way. Words get formed, maybe even in response to other people's words, but it's not something you're consciously acting on; those processes are running on their own without conscious input.
I find this very strange. When I walk, yes, I don't consciously think of every muscle; but I do decide to walk. I decide my destination, I decide my route. (I may, if distracted, fall by force of habit into a default route; on noticing this, I can immediately override). So... for someone without the internal monologue... how much do you decide about what you say? Do you just decide what subject to speak about, what opinions to express, and leave the exact phrasing up to the autopilot? Or do you not even decide that - do you sit there and enjoy the taste of icecream while letting the conversation run entirely by itself?

Didn't think this was going to be my first contribution to LessWrong, but here goes (hi, everybody, I'm Phil!)

I came to what I like to think was a realisation useful to my psychological health a few months ago when I was invited to realise that there is more to me than my inner monologue. That is, I came to understand that identifying myself as only the little voice in my head was not good for me in any sense. For one thing, my body is not part of my inner monologue, ergo I was a fat guy, because I didn't identify with it and therefore didn't care what I fed it on. For another, one of the things I explicitly excluded from my identity was the subprocess that talks to people. I had (and still have) an internal monologue, but it was at best only advisory to the talking process, so you can count me as one of the people for whom conversation is not something I'm consciously acting on. Result: I didn't consider the person people meet and talk to to be "me", but (as I came to understand), nevertheless I am held responsible for everything he says and does.

My approach to this was somewhat luminous avant (ma lecture de) la lettre: I now construe my identity as consisting of at... (read more)

Single data point, but: I can alternate between inner monologue (heard [in somebody else's voice, not mine(!)]) and no monologue (mainly social activity - say stuff, then catch myself saying it and keep going) - stuff just happens. When the inner monologue is present, it seems I'm in real time constructing what I imagine the future to be and then adapting to that. I can feel as if my body moved without moving it, but don't use it for thinking (mainly kinesthetic imagination or whatever). I can force myself to see images, and, at the fringe, close to sleep, can make up symphonies in my mind, but don't use them to think.
Who's speaking the voice in your head? Seems like another layer of abstraction.
Obviously the speaker is the homunculus that makes Eliezer conscious rather than a p-zombie.
A collective of neural hardware collectively calling itself "Baughn". Everyone gets some input.
I have an internal monologue. It's a bit like a narrator in my head, narrating my thoughts. I think - and this is highly speculative on my part - that it's a sign of thinking mainly with the part of the brain that handles language. Whenever I take one of those questionnaires designed to tell whether I use mainly the left or right side of my brain, I land very heavily on the left side - analytical, linguistic, mathematical. I can use the other side if I want to; but I find it surprisingly easy to become almost a caricature of a left-brain thinker. My internal monologue quite probably restricts me to (mainly) ideas that are easily expressed in English. Up until now, I could see this as a weakness, but I couldn't see any easy way around it. (One advantage of the internal monologue, on the other hand, is that I usually find it easy to speak my thoughts out loud, because they're already in word form.) But now, you tell me that you don't seem to have an internal monologue. Does this mean that you can easily think of things that are not easily expressed in English?
Well... I can easily think of things I subsequently have serious trouble expressing in any language, sure. Occasionally through reflection via visuals (or kinesthetics, or...), but more often not using such modalities at all. (See sibling post)
Okay, visual I can understand. I don't use it often, but I do use it on occasion. Kinesthetic, I use even less often, but again I can more-or-less imagine how that works. (Incidentally, I also have a lot of trouble catching a thrown object. This may be related.) But this 'no modalities at all'... this intrigues me. How does it work? All I know is some ways in which it doesn't work.
I can't speak for Baughn, but as for myself, sometimes it feels like I know ahead of time what I'm going to say as my inner voice, and sometimes this results in me not actually bothering to say it.
I went on vacation during this discussion, and completely lost track of it in the process - oops. It's an interesting question, though. Let me try to answer. First off, using a sensory modality for the purpose of thinking. That's something I do, sure enough; for instance, right now I'm "hearing" what I'm saying at the same time as I'm writing it. Occasionally, if I'm unsure of how to phrase something, I'll quickly loop through a few options; more often, I'll do that without bothering with the "hearing" part. When thinking about physical objects, sometimes I'll imagine them visually. Sometimes I won't bother. For planning, etc. I never bother - there's no modality that seems useful. That's not to say I don't have an experience of thinking. I'm going to explain this in terms of a model of thought[1] that's been handy for me (because it seems to fit me internally, and also because it's handy for models in fiction-writing where I'm modifying human minds), but keep in mind that there is a very good chance it's completely wrong. You might still be able to translate it to something that makes sense to you. ..basically, the workspace model of consciousness combined with a semi-modular brain architecture. That is to say, where the human mind consists of a large number of semi-independent modules, and consciousness is what happens when those modules are all talking to each other using a central workspace. They can also go off and do their own thing, in which case they're subconscious. Now, some of the major modules here are sensory. For good reason; being aware of your environment is important. It's not terribly surprising, then, that the ability to loop information back - feeding internal data into the sensory modules, using their (massive) computational power to massage it - is useful, though it also involves what would be hallucinations if I wasn't fully aware it's not real. It's sufficiently useful that, well, it seems like a lot of people don't notice there's anyth
Okay, let me summarise your statement so as to ensure that I understand it correctly. In short, you have a number of internal functional modules in the brain; each module has a speciality. There will be, for example, a module for sight; a module for hearing; a module for language, and so on. Your thoughts consist - almost entirely - of these modules exchanging information in some sort of central space. The modules are, in effect, having a chat. Now, you can swap these modules out quite a bit. When you're planning what to type, for example, it seems you run that through your 'hearing' module, in order to check that the word choice is correct; you know that this is not something which you are actually hearing, and thus are in no danger of treating it as a hallucination, but as a side effect of this your hearing module isn't running through the actual input from your ears, and you may be missing something that someone else is saying to you. (I imagine that sufficiently loud or out-of-place noises are still wired directly to your survival subsystem, though, and will get your attention as normal). But you don't have to use your hearing module to think with. Or your sight module. You have other modules which can do the thinking, even when those modules have nothing to do. When your sensory modules have nothing to add, you can and do shut them out of the main circuit, ignoring any non-urgent input from those modules. Your modules communicate by some means which are somehow independent of language, and your thoughts must be translated through your hearing module (which seems to have your language module buried inside it) in order to be described in English. -------------------------------------------------------------------------------- This is very different to how I think. I have one major module - the language module (not the hearing module, there's no audio component to this, just a direct language model) which does almost all my thinking. Other modules can be us
Your analysis is pretty much spot on. It's interesting to me that you say your hearing and language modules are independent. I mean, it's reasonably obvious that this has to be possible - deaf people do have language - but it's absolutely impossible for me to separate the two, at least in one direction; I can't deal with language without 'hearing' it. And I just checked; it doesn't appear I can multitask and examine non-language sounds while I'm using language, either. For comparison, I absolutely can (re)use e.g. visual modules while I'm writing this, although it gets really messy if I try to do so while remaining conscious of what they're doing - that's not actually required, though. Well... my introspection isn't really good enough to tell, and it's really more of a zeroth-approximation model than something I have a lot of confidence in. That said, I suspect the question doesn't have an answer even in principle; that there's no clear border between two adjacent subsystems, so it depends on where you want to draw the line. It doesn't help that some elements of my thinking almost certainly only exist as a property of the communication between other systems, not as a physical piece of meat in itself, and I can't really tell which is which. I think if it was just one, I wouldn't really be conscious of it. But that's not what you asked, so the answer is "Probably yes". I'm very tempted to say "conscious experience", here, but I have no real basis for that other than a hunch. I'm not sure I can give you a better answer, though. Feelings, visual input (or "hallucinations"), predictions of how people or physical systems will behave, plans - not embedded in any kind of visualization, just raw plans - etc. etc. And before you ask what that's like, it's a bit like asking what a Python dictionary feels like.. though emotions aren't much involved, at that level; those are separate. The one common theme is that there's always at least one meta-level of thought associated
This may be related to the fact that I learnt to read at a very young age; when I read, I run my visual input through my language module; the visual module pre-processes the input to extract the words, which are then run through the language module directly. At least, that's what I think is happening. Running the language module without the hearing module a lot, and from a young age, probably helped quite a bit to separate the two. Hmph. Disappointing, but thanks for answering the question. I think I was hoping for more clearly defined modules than appears to be the case. Still, what's there is there. Now, this is interesting. I'm really going to have to go and think about this for a while. You have a kind of continual meta-commentary in your mind, thinking about what you're thinking, cross-referencing with other stuff... that seems like a useful talent to have. It also seems that, by concentrating more on the individual modules and less on the inter-module communication, I pretty much entirely missed where most of your thinking happens. One question comes to mind; you mention 'raw plans'. You've correctly predicted my obvious question - what raw plans feel like - but I still don't really have much of a sense of it, so I'd like to poke at that a bit if you don't mind. So: how are these raw plans organised? Let us say, for example, that you need to plan... oh, say, to travel to a library, return one set of books, and take out another. Would the plan be a series of steps arranged in order of completion, or a set of subgoals that need to be accomplished in order (subgoal one: find the car keys); or would the plan be simply a label saying 'LIBRARY PLAN' that connects to the memory of the last time you went on a similar errand? -------------------------------------------------------------------------------- As for me, I have a few different ways that I can formulate plans. For a routine errand, my plan consists of the goal (e.g. "I need to go and buy bread") an
Obligatory link to Yvain's article [] on the topic.
A very high proportion of what I call thinking is me talking to myself. I have some ability to imagine sounds and images, but it's pretty limited. I'm better with kinesthesia, but that's mostly for thinking about movement. What's your internal experience composed of?
That varies.. quite a lot. While I'm writing fiction there'll be dialogue, the characters' emotions and feelings, visuals of the scenery, point-of-view visuals (often multiple angles at the same time), motor actions, etc. It's a lot like lucid dreaming, only without the dreaming. Occasionally monologues, yes, but those don't really count; they're not mine. While I'm writing this there is, yes, a monologue. One that's just-in-time, however; I don't normally bother to listen to a speech in my head before writing it down. Not for this kind of thing; more often for said fiction, where I'll do that to better understand how it reads. Mostly I'm not writing anything, though. Most of the time, I don't seem to have any particular internal experience at all. I just do whatever it is I'm doing, and experience that, but unless it's relatively complex there doesn't seem to be much call for pre-action reflections. (Well, of course I still feel emotions and such, but.. nothing monologue-like, in any modality. Hope that makes sense.) A lot of the time I have (am conscious of) thoughts that don't correspond to any sensory modality whatsoever. I have no idea how I'd explain those. If I'm working on a computer program.. anything goes, but I'll typically borrow visual capacity to model graph structures and such. A lot of the modalities I'd use there, I don't really have words for, and it doesn't seem worthwhile to try inventing them; doing so usefully would turn this into a novel.
That's the internal monologue. Mine is also often just-in-time (not always, of course). I can listen to it in my head a whole lot faster than I can talk, type, or write, so sometimes I'll start out just-in-time at the start of the sentence and then my internal monologue has to regularly wait for the typing/writing/speaking to catch up before I can continue. For example, in this post, when I clicked the 'reply' button I had already planned out the first two sentences of the above post (before the first bracket). The contents of the first bracket were added when I got to the end of the second sentence, and then edited to add the 'of course'. The next sentence was added in sections, built up and then put down and occasionally re-edited as I went along (things like replacing 'on occasion' with 'sometimes'). Hmmm. Living in the moment. I'm curious; how would you go about (say) planning for a camping trip? Not so much 'what would you do', but 'how would you think about it'?
Can't speak for Nancy, but I think I know what she refers to. Different people have different thought... processes, I guess is the word. My brother's thought process is, by his description, functional; he assigns parts of his mind tasks, and gets the results back in a stack. (He's pretty good at multi-tasking, as a result.) My own thought process is, as Nancy specifies, an internal monologue; I'm literally talking to myself. (Although the conversation is only partially English. It's kind of like... 4Chan. Each "line" of dialogue is associated with an "image" (in some cases each word, depending on the complexity of the concept encoded in it), which is an abstract conceptualization. If you've ever read a flow-of-consciousness book, that's kind of like a low-resolution version of what's going on in my brain, and, I presume, hers.) I've actually discovered at least one other "mode" I can switch my brain into - I call it Visual Mode. Whereas normally my attention is very tunnel vision-ish (I can track only one object reliably), I can expand my consciousness (at the cost of eliminating the flow-of-consciousness that is usually my mind) and be capable of tracking multiple objects in my field of vision. (I cannot, for some reason, actually move my eyes while in this state; it breaks my concentration and returns me to a "normal" mode of thought.) I'm capable of thinking in this state, but oddly, incapable of tracking or remembering what those thoughts are; I can sustain a full conversation which I will not remember, at all, later.
Hm, the obvious question there is: "How do you know you can sustain a full conversation, if you don't remember it at all later?" (..edit: With other people? Er, right. Somehow I was assuming it was an internal conversation.) I've got some idea what you're talking about, though - focusing my consciousness entirely on sensory input. More useful outside of cities, and I don't have any kind of associated amnesia, but it seems similar to how I'd describe the state otherwise. Neither your brother's nor your own thought processes otherwise seem to be any kind of match for mine. It's interesting that there's this much variation, really. Otherwise.. see sibling post for more details.
I can do a weaker version of this - basically, by telling my brain to "focus on the entire field of your perception" as if it was a single object. As far as I am aware, it doesn't do any of the mental effects you describe for me. It's very relaxing though.
Add one to the sample size. My thought process is also mostly lacking in sensory modality. My thoughts do have a large verbal component, but they are almost exclusively for planning things that I could potentially say or write. Rather than trying to justify how this works to the others, I will instead ask my own questions: How can words help in creating thoughts? In order to generate a sentence in your head, surely you must already know what you want to say. And if you already know what you have to say, what's the point of saying it? I presume you cannot jump to the next thought without saying the previous one in full. With my own ability to generate sentences, that would be a crippling handicap.
My thoughts are largely made up of words. Although some internal experimentation has shown that my brain can still work when the internal monologue is silent, I still associate 'thoughts' very, very strongly with 'internal monologue'. I think that, while thoughts can exist without words, the words make the thoughts easier to remember; thus, the internal monologue is used as part of a 'write-to-long-term-storage' function. (I can write images and feelings as well; but words seem to be my default write-mode). Also, the words - how shall I put this - the words solidify the thought. They turn the thought into something that I can then take and inspect for internal integrity. Something that I can check for errors; something that I can think about, instead of something that I can just think. Images can do the same, but take more working-memory space to hold and are thus harder to inspect as a whole. I don't think I've ever tried. I can generate sentences fast enough that it's not a significant delay, though. I suspect that this is simply due to long practice in sentence construction. (Also, if I'm not going to actually say it out loud, I don't generally bother to correct it if it's not grammatically correct).
Personally, I can do this to degrees. I can skip verbalizing a concept completely, but it feels like inserting a hiccup into my train of thought (pardon the mixed analogy). I can usually safely skip verbalizing all of it; that is, it feels like I have a mental monologue but upon reflection it went by too fast to actually be spoken language so I assume it was actually some precursor that did not require full auditory representation. I usually only use full monologues when planning conversations in advance or thinking about a hard problem. As far as I can tell, the process helps me ensure consistency in my thoughts by making my train of thought easier to hold on to and recall, and also enables coherence checking by explicitly feeding my brain's output back into itself.
Now I'm worrying that I might have been exaggerating. Although you are implicitly describing your thoughts as being verbal, they seem to work in a way similar to mine. ETA: More information: I still believe I am less verbal than you. In particular, I believe my thoughts become less verbal when thinking about hard problems, rather than becoming more so as in your case. However, my statement about my verbal thoughts being "almost exclusively for planning things that I could potentially say or write" is a half-truth; a lot of it is more along the lines that sometimes when I have an interesting thought I imagine explaining it to someone else. Some confounding factors: * There is a continuum here from completely nonverbal to having connotations of various words and grammatical structures to being completely verbal. I'm not sure when it should count as having an internal monologue. * Asking myself whether a thought was verbal naturally leads me to create a verbalization of it, while not asking myself this creates a danger of not noticing a verbal thought. * I'm basing this a lot on introspection done while thinking about this discussion, which would make my thoughts more verbal.
Wikipedia article []. I'm really curious how you would describe your thoughts if you don't describe them as an internal monologue. Are you more of a visual thinker?
When I think about stuff, often I imagine a voice speaking some of the thoughts. This seems to me to be a common, if not nearly universal, experience.
I only really think using voices. Whenever I read, if I'm not 'hearing' the words in my head, nothing stays in.
Do you actually hear the voice? I often have words in my head when I think about things, but there isn't really an auditory component. It's just words in a more abstract form.
I wouldn't say I literally hear the voice; I can easily distinguish it from sounds I'm actually hearing. But the experience is definitely auditory, at least some of the time; I could tell you whether the voice is male or female, what accent they're speaking in (usually my own), how high or low the voice is, and so on. I definitely also have non-auditory thoughts as well. Sometimes they're visual, sometimes they're spatial, and sometimes they don't seem to have any sensory-like component at all. (For what it's worth, visual and spatial thoughts are essential to the way I think about math.)
If you want to poke at this a bit, one way could be to test what sort of interferences disrupt different activities for you, compared to a friend. I'm thinking of the bit in "Surely you're joking" where Feynman finds that he can't talk and maintain a mental counter at the same time, while a friend of his can -- because his friend's mental counter is visual.
Neat. I can do it both ways... actually, I can name at least four different ways of counting: * "Raw" counting, without any sensory component; really just a sense of magnitude. Seems to be a floating-point, with a really small number of bits; I usually lose track of the exact number by, oh, six. * Verbally. Interferes with talking, as you'd expect. * Visually, using actual 2/3D models of whatever I'm counting. No interference, but a strict upper limit, and interferes with seeing - well, usually the other way around. The upper limit still seems to be five-six picture elements, but I can arrange them in various ways to count higher; binary, for starters, but also geometrically or.. various ways. * Visually, using pictures of decimal numbers. That interferes with speaking when updating the number, but otherwise sticks around without any active maintenance, at least so long as I have my eyes closed. I'm still limited to five-six digits, though... either decimal or hexadecimal works. I could probably figure out a more efficient encoding if I worked at it.
I, for one, actually hear the voice. It's quite clear. Not loud like an actual voice but a "so loud I can't hear myself think" moment has never literally happened to me since the voice seems more like its on its own track, parallel to my actual hearing. I would never get it confused with actual sounds, though I can't really separate the hearing it to the making it to be sure of that.
That's interesting! Because I have definitely had "so loud I can't hear myself think" moments (even though I don't literally hear thoughts) - just two days ago, I had to ask somebody to stop talking for a while so that I could focus.
Being distracted is one thing - I mean literally not being able to hear my thoughts in the manner that I might not be able to hear what you said if a jet was taking off nearby. This was to emphasize that even though I perceive them as sounds there is 'something' different about them than sounds-from-ears that seems to prevent them from audibly mingling. Loud noises can still make me lose track of what I was thinking and break focus.
Hmm. Now that I think of it, I'm not sure to what extent it was just distraction and to what extent a literal inability to hear my thoughts. Could've been exclusively one, or parts of both.
I added more detail in a sibling post, but it can't be that universal; I practically never do that at all, basically only for thoughts that are destined to actually be spoken. (Or written, etc.) Actually, I believe I used to do so most of the time (..about twenty years ago, before the age of ten), but then made a concerted effort to stop doing so on the basis that pretending to speak aloud takes more time. Those memories may be inaccurate, though.
Hijacking this thread to ask if anybody else experiences this - when I watch a movie told from the perspective of a single character or with a strong narrator, my internal monologue/narrative will be in that character's/narrator's tone of voice and expression for the next hour or two. Anybody else?
Did it work for you?
I find that sometimes, after reading for a long time, the verbal components of my thoughts resemble the writing style of what I read.
Sometimes, after reading something with a strong narrative voice, I'll want to think in the same style, but realize I can't match it.
Not exactly what you are asking for, but I've found that if I spend an extended period of time (usually around a week) heavily interacting with a single person or group of people, I'll start mentally reading things in their voice(s).
While reading books. Always particular voices for every character. So much so, I can barely sit through adaptations of books I've read. And my opinion of a writer always drops a little bit when I meet him/her, and the voice in my head just makes more sense for that style.
Sure. Or after listening to a charismatic person for some time.
Maybe the social signaling sensitive unconsciously translate it into "I thought up this unobvious thing about this thing because I am smarter than you", and then file it off as being an asshole about stuff that's supposed to be communal fun?
It is not healthy to believe that every curtain hides an Evil Genius (I speak here as a person who lived in the USSR). Given the high failure rate of EVERY human work, I'd say that most secrets in the movie industry have to do with saving bad writing and poor execution with clever marketing and setting up other conflicts people could watch besides the pretty explosions. It's not about selling Imperialism and Decadence to a country that's been accused of both practically since its formation (sorry if you're American and only noticed these accusations now, in the 21st century), or trying to force people into some new world order-style government where a dictator takes care of every need. Though, I must admit that I wonder about Michael Bay's agenda sometimes... Tony Stark isn't JUST a rich guy with a WMD. He messes up. He fails his friends and loved ones. He is in some way the lowest point in each of our lives, given some nobility. In spite of all those troubles, the fellow stands up and goes on with his life, gets better and tries to improve the world. David Wong seems to have missed the POINT of a couple of movies (how about the message of empowerment-through-determination in Captain America? The fellow must still earn his power as a "runt"), and even worse tries to raise conspiracy theory thinking up as rationality. So, maybe, the knee-jerk reaction is wise, because overanalyzing something made to entertain tends to be somewhat similar to seeing shapes in the clouds. Sometimes, Iron Man is just Iron Man.
You don't need to believe there was an intent to spread negative values in order to analyse whether spreading negative values is bad.
Hopefully, the positive values are greater in number than the negative ones, if one is not certain which ones are which--and I see quite a few positive values in recent superhero movies.
Seems to me that the problem is, well, precisely as stated: overthinking. It's the same problem as with close reading: look too close at a sample of one and you'll start getting noise, things the author didn't intend and were ultimately caused by, oh, what he had for breakfast on a Tuesday ten months ago and not some ominous plan.
On the other hand, where do you draw the line between reasonable analysis and overthinking? I mean, you can read into a text things which only your own biases put there in the first place, but on the other hand, the director of Birth of a Nation [] allegedly didn't intend to produce a racist film. I've argued plenty of times myself that you can clearly go too far, and critics often do, but on the other hand, while the creator determines everything that goes into their work, their intent, as far as they can describe it, is just the rider on the elephant, and the elephant leaves tracks where it pleases.
Well, this is hardly unique to literary critique. If/When we solve the general problem of finding signal in noise we'll have a rigorous answer; until then we get to guess.
If someone intends to draw an object with three sides, but they don't know that an object with three sides is a triangle, have they intended to draw a triangle? Whether the answer is yes or no is purely a matter of semantics.
Yes, but the question "should we censure this movie/book because it causes harm to (demographic)" is not a question of semantics.
Well, I really enjoy music, but I made the deliberate choice to not learn about music (in terms of notes, chords, etc.). The reason being that what I get from music is a profound experience, and I was worried that knowledge of music in terms of reductionist structure might change the way I experience hearing music. (Of course some knowledge inevitably seeps in.)

Akin's Laws of Spacecraft Design are full of amazing quotes. My personal favourite:

6) (Mar's Law) Everything is linear if plotted log-log with a fat magic marker.

(See also an interesting note from HN's btilly on this law)

The movie “Apollo 13” does a fair job of showing how rapidly the engineers in Houston devised the kludge and documented it, but because of time constraints of course they can’t show you everything. NASA is a stickler for details. (Believe me, I’ve worked with them!) They don’t just rapid prototype something that people’s lives will depend upon. Overnight, they not only devised the scrubber adapter built from stuff in the launch manifest, they also tested it, documented it, and sent up stepwise instructions for constructing it. In a high-maturity organization, once you get into the habit of doing that, it doesn’t really take that long. Something that always puzzles me when I meet cowboy engineers who insist that process will just slow them down unacceptably. I tell them that hey, if NASA engineers could design, build, test, and document a CO2 scrubber adapter made from common household items overnight, you can damn well put in a comment when you check in your code changes.
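Mar's Law, meanwhile, is easy to demonstrate numerically. The sketch below (all data synthetic, chosen purely for illustration) fits a straight line in log-log space to a genuine power law and to a curve that is not a power law at all; both fits come out looking respectable, which is the "fat magic marker" at work.

```python
import math

def loglog_fit(xs, ys):
    """Least-squares fit of log y = a*log x + b; returns (slope, worst residual)."""
    us = [math.log(x) for x in xs]
    vs = [math.log(y) for y in ys]
    n = len(us)
    mu, mv = sum(us) / n, sum(vs) / n
    a = sum((u - mu) * (v - mv) for u, v in zip(us, vs)) / \
        sum((u - mu) ** 2 for u in us)
    b = mv - a * mu
    worst = max(abs(v - (a * u + b)) for u, v in zip(us, vs))
    return a, worst

xs = [10 ** (i / 24) for i in range(49)]  # log-spaced points on [1, 100]

# A genuine power law fits a log-log line essentially perfectly...
slope_pow, resid_pow = loglog_fit(xs, [x ** 2 for x in xs])
# ...but so, near enough, does something that is not a power law at all.
slope_not, resid_not = loglog_fit(xs, [x ** 2 * math.log(x + 1) for x in xs])

print(f"x^2:           slope {slope_pow:.2f}, worst log-residual {resid_pow:.3f}")
print(f"x^2 * ln(x+1): slope {slope_not:.2f}, worst log-residual {resid_not:.3f}")
```

The second curve's residuals stay within a "marker's width" of the fitted line over two decades, even though no exponent actually describes it.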

you can't wait around for someone else to act. I had been looking for leaders, but I realised that leadership is about being the first to act.

Edward Snowden, the NSA surveillance whistle-blower.

Imagine you are sitting on this plane now. The top of the craft is gone and you can see the sky above you. Columns of flame are growing. Holes in the sides of the airliner lead to freedom. How would you react?

You probably think you would leap to your feet and yell, "Let's get the hell out of here!" If not this, then you might assume you would coil into a fetal position and freak out. Statistically, neither of these is likely. What you would probably do is far weirder...

In any perilous event, like a sinking ship or towering inferno, a shooting rampage or a tornado, there is a chance you will become so overwhelmed by the perilous overflow of ambiguous information that you will do nothing at all...

about 75 percent of people find it impossible to reason during a catastrophic event or impending doom.

You Are Not So Smart by David McRaney p 55,56, and 58.

Considering the probability that I will encounter such a high-impact fast-acting disaster, and the expected benefit of acting on shallowly thought out gut reaction, I feel no need to remove from myself this bias.
Since you have taken the time to make a comment on this website I presume you get some pleasure from thinking about biases. The next time you are on an airplane perhaps you would find it interesting to work through how you should respond if the plane starts to burn.

Interestingly enough there is some evidence--or at least assertions by people who've studied this sort of thing--that doing this sort of problem solving ahead of time tends to reduce the paralysis.

When you get on a plane, go into a restaurant, when you're wandering down the street or when you go someplace new think about a few common emergencies and just think about how you might respond to them.

Yes, you're right. In fact, I did think about this situation. I think the best strategy is to enter the brace position recommended in the safety guide and to stay still, while gathering as much information as possible and obeying any person who takes on a leadership role. This sort of reasoning can be useful because it is fun to think about, because it makes for interesting conversation, or because it might reveal an abstract principle that is useful somewhere else. My point is to demonstrate a VOI calculation and to show that although this behavior seems irrational on its own, in the broader context the strategy of being completely unprepared for disaster is a good one. Still, the fact that people act in this particular maladaptive way is interesting, and so I got something out of your quote.
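For what it's worth, the VOI calculation being gestured at can be sketched in a few lines. Every number below is invented for illustration (the crash odds, the cost of a rehearsal, how much rehearsal helps); the point is only the shape of the comparison, and the verdict flips easily if you change the inputs.

```python
# Toy value-of-information calculation: is mentally rehearsing an
# evacuation plan on every flight worth the attention it costs?
# All numbers are made up for illustration.
p_emergency_per_flight = 1e-7   # survivable cabin emergency, per flight
flights_in_lifetime = 500
p_ever_needed = 1 - (1 - p_emergency_per_flight) ** flights_in_lifetime

value_of_surviving = 1e7        # arbitrary "utility dollars"
p_rehearsal_helps = 0.01        # chance the rehearsal changes the outcome
cost_per_rehearsal = 0.05       # a few cents' worth of attention per flight

expected_benefit = p_ever_needed * p_rehearsal_helps * value_of_surviving
total_cost = cost_per_rehearsal * flights_in_lifetime

print(f"expected benefit of rehearsing: ${expected_benefit:.2f}")
print(f"total cost of rehearsing:       ${total_cost:.2f}")
# With these inputs, staying unprepared narrowly wins; a larger
# p_rehearsal_helps or cheaper rehearsal would flip the conclusion.
```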

"When two planes collided just above a runway in Tenerife in 1977, a man was stuck, with his wife, in a plane that was slowly being engulfed in flames. He remembered making a special note of the exits, grabbed his wife's hand, and ran towards one of them. As it happened, he didn't need to use it, since a portion of the plane had been sheared away. He jumped out, along with his wife and the few people who survived. Many more people should have made it out. Fleeing survivors ran past living, uninjured people who sat in seats literally watching for the minute it took for the flames to reach them." -

Speaking as someone who's been through that, I don't think that the article gives a complete picture. Part of the problem in such instances appears to be (particularly according to reports from newer generations) the feeling of unreality, as the only times we tend to see such situations are when we're sitting comfortably, so a lot of us are essentially conditioned to sit comfortably during such events. However, this does tend to get better with some experience of such situations.
See, I thought the plane was still in the air. Now I understand that the brace position is useless. This is why "gathering as much information as possible" is part of my plan. Unfortunately, with such a preliminary plan, there's a good chance I won't realise this quickly enough and become one of the passive casualties. As I stated earlier, I don't mind this.

As I stated earlier, I don't mind this.

As things one could not mind go, literally dying in a fire seems unlikely to be a good choice.

So does leaving a box with $1,000 in it on the table.
What's involved here is dying in a fire in a hypothetical situation.

No. Please, just no. This is the worst possible form of fighting the hypothetical. If you're going to just say "it's all hypothetical, who cares!" then please do everyone a favor and just don't even bother to respond. It's a waste of everyone's time, and incredibly rude to everyone else who was trying to have a serious discussion with you. If you make a claim, and your reasoning is shown to be inconsistent, the correct response is never to pretend it was all just a big joke the whole time. Either own up to having made a mistake (note: having made a mistake in the past is way higher status than making a mistake now. Saying "I was wrong" is just another way to say "but now I'm right". You will gain extra respect on this site from noticing your own mistakes as well.) or refute the arguments against your claim (or ask for clarification or things along those lines). If you can't handle doing either of those then tap out of the conversation. But seriously, taking up everyone's time with a counter-intuitive claim and then laughing it off when people try to engage you seriously is extremely rude and a waste of everyone's time, including yours.

You're completely right. I retract my remark.

And then sometimes I'm reminded why I love this site. Only on LessWrong does a (well-founded) rant about bad form or habits actually end up accomplishing the original goal.
Actually, freezing up is precisely what I-here-in-my-room imagine I-on-a-plane-in-flames would do.
I find this confusing. Ambiguity is paralysing (though in what circumstances the freeze response isn't stupid, I have no idea), but it's hard to see what response other than "RUN" this would cause. It's not like having to find words that'll placate a hostile human, or reinvent first aid on the fly.
When you're hoping the saber-tooth tiger won't notice you.

Sorry? Of course he was sorry. People were always sorry. Sorry they had done what they had done, sorry they were doing what they were doing, sorry they were going to do what they were going to do; but they still did whatever it was. The sorrow never stopped them; it just made them feel better. And so the sorrow never stopped. ...

Sorrow be damned, and all your plans. Fuck the faithful, fuck the committed, the dedicated, the true believers; fuck all the sure and certain people prepared to maim and kill whoever got in their way; fuck every cause that ended in murder and a child screaming.

Against a Dark Background by Iain M. Banks.

I read this as a poetic invocation against utilitarian sacrifices. It seems to me simultaneously wise on a practical level and bankrupt on a theoretical level. What about the special case of people prepared to be maimed and killed in order to get in someone's way? I guess it depends whether you share goals with the latter someone.
If I don't share goals with someone, or more strongly, if I consider their goals evil... then I will see their meta actions differently, because in the end, the meta actions are just a tool for something else []. If some people build a perfect superintelligent paperclip maximizer, I will hate the fact that they were able to overcome procrastination, that they succeeded in overcoming their internal conflicts, that they made good strategic decisions about getting money and smart people for their project, etc. So perhaps the quote could be understood as a complaint against people in the valley of bad rationality. Smart enough to put their plans successfully into action; yet too stupid to understand that their plans will end up hurting people. Smart enough to later realize they made a mistake and feel sorry; yet too stupid to realize they shouldn't make a similar kind of plan with similar kinds of mistakes again.

The word gentleman originally meant something recognisable: one who had a coat of arms and some landed property. When you called someone 'a gentleman' you were not paying him a compliment, but merely stating a fact. If you said he was not 'a gentleman' you were not insulting him, but giving information. There was no contradiction in saying that John was a liar and a gentleman; any more than there now is in saying that James is a fool and an M.A. But then there came people who said- so rightly, charitably, spiritually, sensitively, so anything but usefully- 'Ah, but surely the important thing about a gentleman is not the coat of arms and the land, but the behaviour? Surely he is the true gentleman who behaves as a gentleman should? Surely in that sense Edward is far more truly a gentleman than John?' They meant well. To be honourable and courteous and brave is of course a far better thing than to have a coat of arms. But it is not the same thing. Worse still, it is not a thing everyone will agree about. To call a man 'a gentleman' in this new, refined sense, becomes, in fact, not a way of giving information about him, but a way of praising him: to deny that he is 'a gentleman' beco

... (read more)

When a word ceases to be a term of description and becomes merely a term of praise, it no longer tells you facts about the object; it only tells you about the speaker's attitude to that object.

This is because a speaker's attitude towards an object is not formed by the speaker's perception of the object; it is entirely arbitrary. Wait, no, that's not right.

And anyway, the previous use of the term "gentleman" was, in some sense, worse. Because while it had a neutral denotation ("A gentleman is any person who possesses these two qualities"), it had a non-neutral connotation.

That would be true if the word "gentle" meant the same thing then as it does now. Which it didn't [] The word originally comes from the ancient (not modern) meaning of Hebrew goy []: nation. EDIT: the last statement is incorrect, see replies.
From your link: Sense of "gracious, kind" (now obsolete) first recorded late 13c.; that of "mild, tender" is 1550s. This is, of course, exactly what the halo effect would predict; a word that means "good" in some context should come to mean "good" in other contexts. The same effect explains the euphemism treadmill, as a word that refers to a disfavored group is treated as an insult.
"Gentleman," "gentle" etc do not come from Hebrew. Maybe you are thinking about the fact that "gentile []" comes from the sense "someone from one of the nations (other than Israel)," just as Hebrew goy originally meant "nation" (including the nation of Israel or any other), and came to mean "someone from one of the (other) nations." "Gentile" was formed as a calque from Hebrew. But none of these come from a Hebrew root. Rather, they all come from the Latin gens, gentis [] "clan, tribe, people," thence "nation." Same root as gene, for that matter.
Right, my bad, it was translated from Hebrew [], but does not come directly from it:
You can make it correct but still informative by replacing “originally comes from” with “was originally a calque of”.
So Lewis grants that people really can be brave, honorable, and courteous, but then denies that calling someone so is descriptive? This passage doesn't make any sense.
I suspect his attitude is more along the lines of 'noise to signal ratio too high.'

The Baroque Cycle by Neal Stephenson proves to be a very good, intelligent book series.

“Why does the tide rush out to sea?”

“The influence of the sun and the moon.”

“Yet you and I cannot see the sun or the moon. The water does not have senses to see, or a will to follow them. How then do the sun and moon, so far away, affect the water?”

“Gravity,” responded Colonel Barnes, lowering his voice like a priest intoning the name of God, and glancing about to see whether Sir Isaac Newton were in earshot.

“That’s what everyone says now. ’Twas not so when I was a lad. We used to parrot Aristotle and say it was in the nature of water to be drawn up by the moon. Now, thanks to our fellow-passenger, we say ‘gravity.’ It seems a great improvement. But is it really? Do you understand the tides, Colonel Barnes, simply because you know to say ‘gravity’?”

Daniel Waterhouse and Colonel Barnes in Solomon’s Gold

Yes, because saying 'gravity' in fact refers to Newton's law of gravitation. Aristotle had no idea that, e.g., the product of the two masses is involved here.
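For reference, the law being invoked, in its standard textbook form (this notation is not from the novel or the comment):

```latex
% Newton's law of universal gravitation:
F = G \, \frac{m_1 m_2}{r^2}
```

Aristotle's "nature of water" supplies no such handle: it is the product of the masses and the inverse-square dependence on distance that turn the word "gravity" into testable predictions about tides and orbits.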

Does Colonel Barnes? If not, he is just repeating a word he has learned to say. Rather like some people today who have learned to say "entanglement", or "signalling", or "evolution", or...

Except in this case he's actually saying 'gravity' in the right context, and besides, it's not expected of people in general to know Newton's laws (or general relativity, etc) to know basically how gravity works. Although I'd like to know what his answer was to the last question...

I will gladly post the rest of the conversation because it reminds me of a question I pondered for a while.

"Do you understand the tides, Colonel Barnes, simply because you know to say ‘gravity’?”

“I’ve never claimed to understand them.”

“Ah, that is very wise practice.”

“All that matters is, he does,” Barnes continued, glancing down, as if he could see through the deck-planks.

“Does he then?”

“That’s what you lot have been telling everyone. Sir Isaac’s working on Volume the Third, isn’t he, and that’s going to settle the lunar problem. Wrap it all up.”

“He is working out equations that ought to agree with Mr. Flamsteed’s observations.”

“From which it would follow that Gravity’s a solved problem; and if Gravity predicts what the moon does, why, it should apply as well to the sloshing back and forth of the water in the oceans.”

“But is to describe something to understand it?”

“I should think it were a good first step.”

“Yes. And it is a step that Sir Isaac has taken. The question now becomes, who shall take the second step?”

After that they started to discuss differences between Newton's and Leibniz theories. Newton is unable to explain why gravity can go through the earth, like... (read more)

Stanislaw Lem wrote a short story about this. (I don't remember its name.)

In the story, English detectives are trying to solve a series of cases where bodies are stolen from morgues and are later discovered abandoned at some distance. There are no further useful clues.

They bring in a scientist, who determines that there is a simple mathematical relationship that relates the times and locations of these incidents. He can predict the next incident. And he says, therefore, that he has "solved" or "explained" the mystery. When asked what actually happens - how the bodies are moved, and why - he simply doesn't care: perhaps, he suggests, the dead bodies move by themselves - but the important thing, the original question, has been answered. If someone doesn't understand that a simple equation that makes predictions is a complete answer to a question, that someone simply doesn't understand science!

Lem does not, of course, intend to give this as his own opinion. The story never answers the "real" mystery of how or why the bodies move; the equation happens to predict that the sequence will soon end anyway.

Amusingly, I read this story, but completely forgot about it. The example here is perfect. Probably I should re-read it.

For those interested:

I think the situation happens because of bias. Demonstrating an empirical effect to be real takes work. Finding an explanation of an effect also takes work. It's very seldom in science that both happen at exactly the same time. There are a lot of drugs that were designed on the theory that they work by binding to specific receptors. Those explanations aren't very predictive for telling you whether a prospective drug works. And once a drug is shown to actually work, it's often the case that we don't fully understand why it works.
Interesting. I imagined a world where Wegener appeared, out of the blue, with all that data about geological strata and fossils (nobody had noticed any of it before), and declared that it's all because of continental drift. That was anticlimactic and unsatisfactory. Then I imagined a world with a great unsolved mystery: all that data about geological strata and fossils. For a century, nobody is able to explain it. Then Wegener appeared, pointed out that the shapes of the continents are similar, and suggested that perhaps it's all because of continental drift. That was more satisfactory, and I suspect that most traces of disappointment are due to hindsight bias. I think there are several factors causing that:

1) Story-mode thinking.

2) Suspicion of an unknown person claiming to solve a problem nobody has ever heard of.

3) (now my working hypothesis) The idea that some phenomena are 'hard' to reduce and some are 'easy': I know that the fall of an apple can be explained in terms of atoms, reduced to the fundamental interactions. Most things can. I know that we are unable to explain the fundamental interactions yet, so there equations-without-understanding are justified. So, if I learn about some strange phenomenon, I believe that it can easily be explained in terms of atoms. Now suppose it turns out to be a very hard problem, and nobody manages to reduce it to something more fundamental. Now I feel that I should be satisfied with bare equations, because doing anything more is hard. Maybe a century later. This isn't a complete explanation, but it feels like a step in the right direction.
"For whatever reason, " seems like it should be a legitimate hypothesis, as much as ", therefore ". The former technically being the disjunction of all variations of the latter with possible reasons substituted in. But, then again, at the point when we are saying "for whatever reason, ", we are saying that because we haven't been able to think of the correct explanation yet—that is, because we haven't been creative enough, a bounded rationality issue. So we're perhaps not really in a position to evaluate a disjunction of all possible reasons.
“Indeed, Sire, Monsieur Lagrange has, with his usual sagacity, put his finger on the precise difficulty with the hypothesis [of a Creator of the Universe]: it explains everything, but predicts nothing.”
It strikes me that his understanding of gravity is on the same level as saying that everything attracts everything else, which is after all not much of a step up from saying that it's in the nature of water to be attracted to the moon, just a more general phrasing. You can make more specific predictions if you know that everything attracts everything else, and you know more about the laws of planetary motion and so on, and the gravitational constant and the decay rate and so on, but the basic knowledge of gravity by itself doesn't let you do those things. If your predictions afterwards are the same as your predictions going in, can you really claim to understand something better? Seems to me you need to network ideas and start linking them up to data before you can really claim to understand stuff better.
Probably I should've added some context to this conversation. One of the themes of Baroque Cycle is that Newton described his gravitational law, but said nothing about why the reality is the way it is. This bugs Daniel, and he rests his hopes upon Leibniz who tries to explain reality on the more fundamental level (monads). This conversation is "Explain/Worship/Ignore" thing as well as "Teacher's password" thing.

The reason Newton's laws are an improvement over Aristotelian "the nature of water is etc." is that Newton lets you make predictions, while Aristotle does not. You could ask "but WHY does gravity work like so-and-so?", but that doesn't change the fact that Newton's laws let you predict orbits of celestial objects, etc., in advance of seeing them.

That's certainly the conventional wisdom, but I think the conventional wisdom sells Aristotle and his contemporaries a little short. Sure, speaking in terms of water and air and fire and dirt might look a little silly to us now, but that's rather superficial: when you get down to the experiments available at the time, Aristotelian physics ran on properties that genuinely were pretty well correlated, and you could in fact use them to make reasonably accurate predictions about behavior you hadn't seen from the known properties of an object. All kosher from a scientific perspective so far.

There are two big differences I see, though neither implies that Aristotle was telling just-so stories. The first is that Aristotelian physics was mainly a qualitative undertaking, not a quantitative one -- the Greeks knew that the properties of objects varied in a mathematically regular way (witness Eratosthenes' clever method of calculating Earth's circumference), but this wasn't integrated closely into physical theory. The other has to do with generality: science since Galileo has applied as universally as possible, though some branches reduced faster than others, but the Greeks and their medieval followers were much more willing to ascribe irreducible properties to narrow categories of object. Both end up placing limits on the kinds of inferences you'll end up making.

Bad things don't happen to you because you're unlucky. Bad things happen to you because you're a dumbass.

  • That 70s Show

Single bad things happen to you at random. Iterated bad things happen to you because you're a dumbass. Related: "You are the only common denominator in all of your failed relationships."
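The distinction between single and iterated bad things can be put quantitatively. A minimal sketch, assuming each bad event would strike by pure luck with some made-up probability p:

```python
# If each bad event would strike by pure luck with probability p,
# the chance of n such events in a row is p**n, which shrinks fast.
# Long streaks are therefore strong evidence of a common cause.
def chance_of_streak(p, n):
    return p ** n
```

With p = 0.1, one bad event is unremarkable (0.1), but three in a row has chance 0.001 by luck alone, so the "dumbass" hypothesis starts to dominate.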

Corollaries: The more of a dumbass you are, the less well you can recognize common features in iterated bad things. So dumbasses are, subjectively speaking, just unlucky.

The corollary is more useful than the theorem:-) If I wish to be less of a dumbass, it helps to know what it looks like from the inside. It looks like bad luck, so my first job is to learn to distinguish bad luck from enemy action. In Eliezer's specific example that is going to be hard because I need to include myself in my list of potential enemies.

6 · Eliezer Yudkowsky · 10y
(That's fair.)
Also, oxygen. (Edit: "You are the only common denominator in all of your failed relationships." is misleading, hiding all the other common elements.)

What we want to find is the denominator common to all of your failed relationships, but absent from the successful relationships that other people have (the presumed question being "why do all my relationships fail, but Alice, Bob, Carol, etc. have successful ones?"). Oxygen doesn't fit the bill.

It could also be that Alice, Bob, and Carol's relationships appear more successful than they are. We do tend to hide our failures when we can. I've heard the failed-relationships quote before, but hadn't seen it generalized to bad things in general. I like that one. Useful corollary: "Iterated bad things are evidence of a pattern of errors that you need to identify and fix."

Of course, "bad things", and even more so "iterated bad things", have to be viewed relative to expectations, and at the proper level of abstraction. Explanation:

Right level of abstraction

"I punched myself in the face six times in a row, and each time, it hurt. But this is not mere bad luck! I conclude that I am bad at self-face-punching! I must work on my technique, such that I may be able to punch myself in the face without ill effect." This is the wrong conclusion. The right conclusion is "abstain from self-face-punching".

Substitute any of the following for "punching self in face":

  • Extreme sports
  • Motorcycle riding
  • Fad diets
  • Prayer

Right expectations

"I've tried five brands of water, and none of them tasted like chocolate candy! My water-brand-selection algorithm must be flawed. I will have to be even more careful about picking only the fanciest brands of water." Again this is the wrong conclusion. The right conclusion is "This water is just fine and there was nothing wrong with my choice of brand. I simply shouldn't have such ridiculous expectations."

Substitute any of the following for "brands of water"... (read more)

Ah, I've been in that job. My favorite in the stupid-expectations department was a customer who expected us to lie about the cause of a failure on the work order, so that his insurance company would cover the repair. When we refused, he made his own edits to his copy of the work order....and a few days later brought the machine back (I forget why) and handed us the edited order. We photocopied it (without telling him) and filed it with our own copy. That was entertaining when the insurance company called.
This can be easily generalized as an algorithm:

  • Something repeatedly goes wrong
  • Correctly identify your prior hypothesis
  • Identify the variables involved
  • Check/change the variables
  • Observe the result (apply Bayes when needed)
  • Repeat if necessary

The scientific method applied to everyday life, if you want :)
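The trial-and-error loop described in the parent comment can be sketched in a few lines. The function names and structure below are my own, not the commenter's: `is_broken` reports whether the problem persists, and each candidate fix changes one variable.

```python
# Minimal sketch of the comment's debugging loop (illustrative only).
def fix_by_iteration(is_broken, candidate_fixes):
    for fix in candidate_fixes:
        if not is_broken():      # problem already gone
            return None
        fix()                    # check/change one variable
        if not is_broken():      # observe the result
            return fix           # this change fixed it
    return None                  # out of hypotheses: rethink the model
```

Returning the fix that worked identifies the culprit variable; returning None after exhausting the candidates is the signal to revisit the hypothesis itself.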
The thing is, some of the steps are very vague. If you have a bad case of insufficient clue, what's the cure?
I'm not sure I understood what you mean, but I guess you're thinking about cases where you can't have a "perfect experimental setup" to collect information. Well, in this case one should do the best with the information one has (though information can also be collected from other external sources of course). Sometimes there's simply not enough information to identify with sufficient certainty the best course of action, so you have to go with your best guess (after a risk/reward evaluation, if you want).
Sorry, I wasn't very clear. I meant: if you have a deep misunderstanding of what's going on, as here, what do you do about it?
Well, it's somewhat hidden in steps 2 and 3. You have to be able to correctly state your hypothesis and to identify all the possible variables. Consider chocolate water: your hypothesis is "There exist some brands of water that taste like chocolate candy". Let's say for whatever reason you start with a prior probability p for this hypothesis. You then try some brands, find that none tastes like chocolate candy, and should therefore apply Bayes and emerge with a lower posterior. What's much more effective, though, is evaluating the evidence you already had that induced you to believe the original hypothesis. What made you think that water could taste like chocolate? A friend told you? Did it appear in the news? In the more concrete cases:

  • Sex partners: Why did you expect them to be able to satisfy you without your input? What is your source? Porn movies?
  • Computer repair shops: Why did you expect people to work for free?
  • Diets: Have you talked to a professional? Gathered massive anecdotal evidence?
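The Bayes update mentioned above can be made concrete. The setup is my own toy model, not the commenter's: assume that if the hypothesis were true, each randomly tried brand would independently taste of chocolate with probability q, and that if it were false, no brand ever would.

```python
# Posterior for "some water tastes like chocolate" after trying n
# brands, none of which tasted like chocolate. Toy model (my own
# assumption): if the hypothesis is true, each tried brand tastes
# chocolatey with probability q, independently.
def posterior(prior, q, n):
    like_true = (1 - q) ** n    # P(n misses | hypothesis true)
    like_false = 1.0            # P(n misses | hypothesis false)
    return (prior * like_true) / (prior * like_true + (1 - prior) * like_false)
```

Each chocolate-free brand lowers the posterior (e.g. posterior(0.5, 0.5, 3) is 1/9), whereas re-examining the original evidence attacks the prior itself, which is usually the faster route.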
"You are the only common denominator in all of your failed relationships." != "Why do all my relationships fail?" Both you and others have relationships, both "failed" and "not-failed" (for some value of failed). The statement "You are the only common denominator in all of your failed relationships" is clearly false, even if comparing to others who have successful ones in search of differentiating factors. The "only" is the problem even then.
3 · Said Achmiz · 10y
The intended formulation, I should think, is "You are the only denominator guaranteed to be common to all of your failed relationships" (which is to say that it might be a contingent fact about your particular set of failed relationships that it has some more common denominators, but for any set of all of any particular person's failed relationships, that person will always, by definition, be common to them all). Even this might be false when taken literally... so perhaps we need to qualify it just a bit more: "You are the only interesting denominator guaranteed to be common to all of your failed relationships." (i.e. if we consider only those factors along which relationships-in-general differ from each other, i.e. those dimensions in relationship space which we can't just ignore). That, I think, is a reasonable, charitable reading of the original quote.
It's not nitpicking on my side, there are plenty of people who tend to blame themselves for anything going wrong, even when it was outside their control. Maybe they lived in a neighborhood incompatible to themselves, especially pre-social media. Think of 'nerds' stranded in classes without peers. Sure, their behavior was involved in the success or failure of their relationships (how could it not have been?). However, a mindset and pseudo-wise aphorisms such as "you are the only common denominator in all of your failed relationships" would be fueling an already destructive fire of gnawing self-doubt with more gasoline.
5 · Said Achmiz · 10y
I agree. This sort of thing can be viewed as a case of "wrong level of abstraction", as I alluded to here. I think what we have here is two possible sources of error, diametrically opposed to each other. Some people refuse to take responsibility for their failures, and it is at them that "you are the only common denominator ..." is aimed. Other people blame themselves even when they shouldn't, as you say. Let us not let one sort of error blind us to the existence of the other. When it comes to constructing or selecting rationality quotes, we should keep in mind that what we're often doing is attempting to point out and correct some bias, which means that the relevance of the quote is obviously constrained by whether we have that bias at all, or perhaps have the opposite bias instead.
There is such a thing as bad luck, though perhaps it's less in play in relationships than in most areas of life. I think that if you keep having relationships that keep failing in the same way, it's a stronger signal than if they just fail.
Alternatively, iterated bad things happen because someone is out to get you and messes constantly with what you are trying to do.

Stepan Arkadyevitch subscribed to a liberal paper, and read it. It was not extreme in those views, but advocated those principles the majority held. And though he was not really interested in science or art or politics, he strongly adhered to such views on all those subjects as the majority, including his paper, advocated, and he changed them only when the majority changed them; or more correctly, he did not change them, but they themselves imperceptibly changed in him.

Stepan Arkadyevitch never chose principles or opinions, but these principles and opinions came to him, just as he never chose the shape of a hat or coat, but took those that others wore. And, living as he did in fashionable society, through the necessity of some mental activity, developing generally in a man's best years, it was as indispensable for him to have views as to have a hat. If there was any reason why he preferred liberal views rather than the conservative direction which many of his circle followed, it was not because he found a liberal tendency more rational, but because he found it better suited to his mode of life.

The liberal party declared that everything in Russia was wretched; and the fact was that

... (read more)

Stepan is a smart chap. He has realized (perhaps unconsciously)

  • that one's political views are largely inconsequential,
  • that it's nonetheless socially necessary to have some,
  • that developing popular and coherent political views oneself is expensive,

and so has outsourced them to a liberal paper.

One might compare it to hiring a fashion consultant... except it's cheap to boot!

"Oh, you could do it all by magic, you certainly could. You could wave a wand and get twinkly stars and a fresh-baked loaf. You could make fish jump out of the sea already cooked. And then, somewhere, somehow, magic would present its bill, which was always more than you could afford.

That’s why it was left to wizards, who knew how to handle it safely. Not doing any magic at all was the chief task of wizards - not “not doing magic” because they couldn’t do magic, but not doing magic when they could do and didn’t. Any ignorant fool can fail to turn someone else into a frog. You have to be clever to refrain from doing it when you knew how easy it was.

There were places in the world commemorating those times when wizards hadn’t been quite as clever as that, and on many of them the grass would never grow again."

-- Terry Pratchett, Going Postal

It is said, for example, that a man ten times regrets having spoken, for the once he regrets his silence. And why? Because the fact of having spoken is an external fact, which may involve one in annoyances, since it is an actuality. But the fact of having kept silent! Yet this is the most dangerous thing of all. For by keeping silent one is relegated solely to oneself, no actuality comes to a man's aid by punishing him, by bringing down upon him the consequences of his speech. No, in this respect, to be silent is the easy way. But he who knows what the dreadful is, must for this very reason be most fearful of every fault, of every sin, which takes an inward direction and leaves no outward trace. So it is too that in the eyes of the world it is dangerous to venture. And why? Because one may lose. But not to venture is shrewd. And yet, by not venturing, it is so dreadfully easy to lose that which it would be difficult to lose in even the most venturesome venture, and in any case never so easily, so completely as if it were nothing: one's self. For if I have ventured amiss--very well, then life helps me by its punishment. But if I have not ventured at all--who then helps me?

--Soren Kierkegaard, The Sickness Unto Death

That's an interesting opening comment on regretting choosing to speak more than choosing not to speak. In particular, it brings to mind studies of the elderly's regrets in life and how most of those are not-having-done's versus having-done's. These two aren't incompatible: if we remain silent 20 times for every time we speak, then we still regret remaining silent more than we regret speaking even if we regret each having-spoken 10 times as much as a not-having-spoken. Still, though, there seems to be some disagreement.
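The parent comment's arithmetic checks out; the weights 20, 1, and 10 below are the comment's own illustrative numbers:

```python
# 20 silences for every speech; each regretted speech weighs 10x a
# regretted silence (illustrative numbers from the comment above).
silences_per_speech = 20
regret_per_silence = 1
regret_per_speech = 10

total_silence_regret = silences_per_speech * regret_per_silence  # 20
total_speech_regret = 1 * regret_per_speech                      # 10
# Aggregate regret over silences still dominates, so "we regret
# inaction more overall" and "each regretted action stings more"
# are compatible observations.
```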
Obviously the fact that it's translated complicates things, and I don't know anything about Danish. But I think the first sentence is meant to be a piece of folk wisdom akin to "Better to remain silent and be thought a fool, than to open your mouth and remove all doubt." That is, he's not really concerned with the relative proportions of regret, but with the idea that it's better (safer, shrewder) to keep your counsel than to stake out a position that might be contradicted. In light of the rest of the text, this is the reading of the line that makes the most sense to me: equivocation and bet-hedging in the name of worldly safety are a symptom of the sin of despair. Compare:
Reminds me of standards processes and project proposals that produce ever more elaborate specifications that no-one gets round to implementing.

...the machines will do what we ask them to do and not what we ought to ask them to do. In the discussion of the relation between man and powerful agencies controlled by man, the gnomic wisdom of the folk tales has a value far beyond the books of our sociologists.

-- Norbert Wiener

you would be foolish to accept what people believed for “thousands of years” in many domains of natural science. When it comes to the ancients or the moderns in science always listen to the moderns. They are not always right, but overall they are surely more right, and less prone to miss the mark. In fact, you may have to be careful about paying too much attention to science which is a generation old, so fast does the “state of the art” in terms of knowledge shift.

Razib Khan

Similar thought:

16) The previous people who did a similar analysis did not have a direct pipeline to the wisdom of the ages. There is therefore no reason to believe their analysis over yours. There is especially no reason to present their analysis as yours.

-- Akin's Laws of Spacecraft Design

"It’s actually hard to see when you’ve fucked up, because you chose all your actions in a good-faith effort and if you were to run through it again you’ll just get the same results. I mean, errors-of-fact you can see when you learn more facts, but errors-of-judgement are judged using the same brain that made the judgement in the first place." - Collin Street

"I call that 'the falling problem'. You encounter it when you first study physics. You realize that, if you were ever dropped from a plane without a parachute, you could calculate with a high degree of accuracy how long it's take to hit the ground, your speed, how much energy you'll deposit into the earth. And yet, you would still be just as dead as a particularly stupid gorilla dropped the same distance. Mastery of the nature of reality grants you no mastery over the behavior of reality. I could tell you your grandpa is very sick. I could tell you what each cell is doing wrong, why it's doing wrong, and roughly when it started doing wrong. But I can't tell them to stop."

"Why can't you make a machine to fix it?"

"Same reason you can't make a parachute when you fall from the plane."

"Because it's too hard?"

"Nothing is too hard. Many things are too fast."


"I think I could solve the falling problem with a jetpack. Can you try to get me the parts?"

"That's all I do, kiddo."
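The calculation the first speaker alludes to really is a few lines of physics; the height, mass, and g below are my own example numbers, not from the comic:

```python
import math

# Free-fall from height h with no air resistance: time to impact,
# impact speed, and kinetic energy deposited into the ground.
def fall(h_meters, mass_kg, g=9.8):
    t = math.sqrt(2 * h_meters / g)   # time to hit the ground (s)
    v = g * t                         # impact speed (m/s)
    e = 0.5 * mass_kg * v ** 2        # energy deposited (J), equals m*g*h
    return t, v, e
```

Which is exactly the point: the numbers are easy, but arresting the fall in the roughly fourteen seconds a kilometer of altitude buys you is not.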


IDG the punchline...

I wouldn't call it a punchline, exactly... I mean, it's not a joke. But in the comic it's likely a parent and child talking, and the subtext I infer is that parenting is a process of giving one's children the tools with which to construct superior solutions to life problems.

How I would love to quote you next month. This is pretty much my approach in a sentence.
For me, the real punchline is in the 'votey image' you get by hovering over the red dot at the bottom.

Th[e] strategy [of preferring less knowledge and intelligence due to their high cognitive costs] is exemplified by the sea squirt larva, which swims about until it finds a suitable rock, to which it then permanently affixes itself. Cemented in place, the larva has less need for complex information processing, whence it proceeds to digest part of its own brain (its cerebral ganglion). Academics can sometimes observe a similar phenomenon in colleagues who are granted tenure.

Nick Bostrom

It is perhaps worth noting that a similar comment was made by Dennett:

“The juvenile sea squirt wanders through the sea searching for a suitable rock or hunk of coral to cling to and make its home for life. For this task, it has a rudimentary nervous system. When it finds its spot and takes root, it doesn't need its brain anymore, so it eats it! It's rather like getting tenure.” 1991 or so.

7 · Eliezer Yudkowsky · 10y
I remember this as a famous proverb, it may predate Dennett.

Apparently it does... a few minutes of googling turned up a cite to Rodolfo Llinas (1987), who referred to it as "a process paralleled by some human academics upon obtaining university tenure."

Has the life cycle of the sea squirt ever been notably used to describe something other than the reaction of an academic to tenure?

Hah! Um... hm. A quick perusal of Google results for "sea squirt -tenure" gets me some moderately interesting stuff about their role as high-sensitivity harbingers for certain pollutants, and something about invasive sea-squirt species in harbors. But nothing about their life-cycle per se. I give a tentative "no."

From the remarkable opening chapter of Consciousness Explained:

One should be leery of these possibilities in principle. It is also possible in principle to build a stainless-steel ladder to the moon, and to write out, in alphabetical order, all intelligible English conversations consisting of less than a thousand words. But neither of these are remotely possible in fact and sometimes an impossibility in fact is theoretically more interesting than a possibility in principle, as we shall see.

--Daniel Dennett

While I agree with the general point that it's important to consider impossibilities in fact, I'm not quite sure I agree where he's drawing the line between fact and principle. Does the compressive strength of stainless steel, and the implied limit on the height of a ladder constructed of it, not count as a restriction in principle?
It just takes some imagination. Hollow out both the Earth and the Moon to reduce their gravitational pull; support the ladder with carbon nanotube filaments; stave off collapse by pushing it around with high-efficiency ion impulse engines; etc. I agree, though, that philosophers often make too much of the distinction between "logically impossible" and "physically impossible." There's probably no in principle possible way to hollow out the Earth significantly while retaining its structure; etc.
So basically, build a second ladder out of some other material that's feasible (unlike steel), and then just tie the steel ladder to it so it doesn't have to bear any weight.
I think that often "logically possible" means "possible if you don't think too hard about it". Which is exactly Dennett's point in context: the idea that you are a brain in a vat is only conceivable if you don't think about the computing power that would be necessary for a convincing simulation.
Dreams can be quite convincing simulations that don't need that much computing power. The worlds that people who do astral traveling perceive can be quite complex. Complex enough to convince people who engage in that practice that they really are on an astral plane. Does that mean that the people are really on an astral plane and aren't just imagining it?
The way I like to think about it is that convincingness is a 2-place function - a simulation is convincing to a particular mind/brain. If there's a reasonably well-defined interface between the mind and the simulation (e.g. the 5 senses and maybe a couple more) then it's cheating to bypass that interface and make the brain more gullible than normal, for example by introducing chemicals into the vat for that purpose. From that perspective, dreams are not especially convincing compared to experience while awake, rather dreamers are especially convincible. Dennett's point seems to be that a lot of computing power would be needed to make a convincing simulation for a mind as clear-thinking as a reader who was awake. Later in the chapter he talks about other types of hallucinations.
The 5 senses are brain events. There aren't direct input channels to the brain. Take taste. How many different tastes of food can you perceive through your taste sense? More than 5. Why? Your brain takes data from nose, tongue, and your memory, and fits them together into something that you perceive through your sense of taste. You have no direct access, through conscious qualia perception, to the data that your nose or tongue sends to your brain. If someone is open to receiving suggestions and you give him the hypnotic suggestion that an apple tastes like an orange, you can wake him, and if he eats the thing he will tell you that the apple is an orange. He might even get angry when someone tells him that the thing isn't an orange, because it obviously tastes like an orange. You don't need to introduce any chemicals. Millions of years of evolution have trained brains to have an extremely high prior for thinking that they aren't "brains in a vat". Doubting your own perception is an incredibly hard cognitive task. There are experiments where an experimenter uses a single electrode to trigger a subject to do a particular task like raising his arm. If the experimenter afterwards asks the subject why he raised the arm, the subject makes up a story and believes in that story. It takes effort for the leader of the experiment to convince the subject that he made up the story and there was no reason he raised his arm.
I suggest you read the opening chapter of Consciousness Explained. Someone's posted it online here [].
Dennett quotes no actual scientific paper in the paragraph, and doesn't otherwise really engage with what the brain does. You don't need to provide detailed feedback to the brain; Dennett should be well aware that humans have a blind spot in their eyes and that the brain makes up information to fill the blind spot. It's the same with suggesting to a brain in a vat that it's acting in the real world: the brain makes up the missing information to provide an experience of being in the real world. To produce a strong hallucination (as I understand Dennett, he equates strong hallucination with complex hallucination) you might need a channel through which you can insert information into the brain, but you don't need to provide every detail. Missing details get made up by the brain.
No, Dennett explicitly denies that the brain makes up information to fill the blind spot. This is central to his thesis. He creates a whole concept called 'figment' to mock this notion. His position is that nothing within the brain's narrative generators expects, requires, or needs data from the blind spot; hence, in consciousness, the blind spot doesn't exist. No gaps need to be filled in, any more than HJPEV can be aware that Eliezer has removed a line that he might, counterfactually, have spoken. For a hallucination to be strong does not require the hallucination to have great internal complexity. It suffices that the brain happens not to ask too many questions.
That's a question of how 'strong' is defined. But it seems I read Dennett too charitably for that purpose. He defines it as: Given that definition, Dennett just seems wrong. He continues, saying: I know multiple people in real life who report hallucinations of that strength. If you want an online source, the Tulpa forum [] has plenty of people who manage to have strong hallucinations of tulpas. The tulpa way seems to take months or a year. With a strongly hypnotically suggestible person, a good hypnotist can create such a hallucination in less than an hour.
I think I must be misreading you. I'm puzzled that you believe this about hallucinations - that it's possible for the brain to devote enough processing power to create a "strong" hallucination in the Dennettian sense - but upthread, you seemed to be saying that dreams did not require such processing power. Dreams are surely the canonical example for people who believe that whole swaths of world-geometry are actually being modelled, rendered and lit inside their heads? After all, there is nothing else occupying the brain's horsepower; no conflicting signal source. If I may share my own anecdote: when asleep, I often believe myself to be experiencing a fully sensory, qualia-rich environment. But often as I wake, there is an interim moment when I realise - it seems to be revealed - that there never was a dream. There was only a little voice making language-like statements to itself - "now I am over here now I am talking to Bob the scenery is so beautiful how rich my qualia are". I think Dennett's position is just this: that there never was a dream, only a series of answers to spurious questions, which don't have to be consistent because nothing was awake to demand consistency. Do you think he's wrong about dreams, too, or are you saying that waking hallucinations are importantly different? I had a quick look at the Tulpa forum and am unimpressed so far. Could you point to any examples you find particularly compelling? OK, so I flat out don't believe that. If waking consciousness were that unstable, a couple of hours of immersive video gaming would leave me psychotic; and all it would take to see angels would be a mildly well-delivered Latin Mass, rather than weeks of fasting and self-flagellation. I'll go read about it, though.
Rob Bensinger:
I don't think I've ever had an experience quite like that. I've perhaps had experiences that are transitional between images and propositions -- I'm thinking by visualizing a little story to myself, and the images themselves are seamlessly semantic, like I'm on the inside of a novel and the narration is a deep component of the concrete flow of events. But to my knowledge I've never felt a sudden revelation that my mental images were 'only a little voice making language-like statements to itself', à la Dennett's suggestion that all experiences are just judgments. Perhaps we're conceptualizing the same experience after-the-fact in different ways. Or perhaps we just have different phenomenologies []. A lot of people have suggested (sometimes tongue-in-cheek) that Dennett finds his own wilder hypotheses credible because he has an unusually linguistic, abstract, qualitatively impoverished phenomenology. (Personally, I wouldn't be surprised if that's a little bit true, but I think it's a small factor compared to Dennett's philosophical commitments.)
He is known to be a wine connoisseur. Sidney Shoemaker once asked him why he doesn't just read the label.
I've occasionally had dreams where elements have backstories - I just know something about something in my dream, without any way of having found it out.
This is common, I think, or at least I've seen other people discuss it before ( [] ), and it fits my own experience as well. From which I had the rather obvious-in-hindsight insight that the experience of knowledge is itself just another sort of experience, just another type of quale, like color or sound. In dreams, knowledge doesn't need to have an origin-via-discovery, just as dream images don't need to originate in our eyes, and dream sounds don't need to originate in vibrations of our eardrums...
Rob Bensinger:
Is this any different from how it feels to know something in waking life, in cases where you've forgotten where you learned it?
Probably way too late here, but I've had multiple experiences relevant to the thread. Once I had a dream and then, in the dream, I remembered I had dreamt this exact thing before, and wondered whether I was dreaming now; everything looked so real and vivid that I concluded I was not. I can create a kind of half-dream, where I see random images and moving sequences, each at most three seconds or so long, in succession. I am quite drowsy but not sleeping, and I am aware in the back of my head that they are only schematic and vague. I would say the backstories in dreams are different in that they can be clearly nonsensical. E.g. I hold and look at a glass relief; there is no movement at all, and I know it to be a movie. I know nothing of its content, and I don't believe the image of the relief to be in the movie.
It's hard to be sure, but I think dream elements have less of a feeling of context for me. On the other hand, is the feeling of context the side effect of having more connections to my web of memories, or is it just another tag?
(nods) Me too. I've also had the RPG-esque variation where I've had a split awareness of the dream... I am aware of the broader narrative context, but I am also experiencing being a character in the narrative who is not aware. E.g., I know that there's something interesting behind that door, and I'm walking around the room, but I can't just go and open that door because I don't actually know that in my walking-around-the-room capacity.
It is perfectly consistent to both believe that (some people) can have fully realistic mental imagery, and that (most people's) dreams tend to exhibit sub-realistic mental imagery. I have one friend who claims to have eidetic mental imagery, and I have no reason to doubt her. Thomas Metzinger discusses in Being No-One the notion of whether the brain can generate fully realistic imagery, and holds that it usually cannot, but notes the existence of eidetic imaginers as an exception to the rule [].
Thanks for the cite: sadly, on clicking through, I get a menacing error message in a terrifying language, so evidently you can't share it that way? You are quite right that it's consistent. It's just that it surprised my model, which was saying "if realistic mental imagery is going to happen anywhere, surely it's going to be dreams, that seems obviously the time-of-least-contention-for-visual-workspace." I'm beginning to wonder whether any useful phenomenology at all survives the Typical Mind Fallacy. Right now, if somebody turned up claiming that their inner monologue was made of butterscotch and unaccountably lapsed into Klingon from three to five PM on weekdays, I'd be all "cool story bro".
Hmmm. Well, I don't speak Klingon, but I am bilingual (English/Afrikaans); my inner monologue runs in English all the time in general but, after reading this, I decided to try running it in Afrikaans for a bit. Just to see what happens. Now, my Afrikaans is substantially poorer than my English (largely, I suspect, due to lack of practice). My inner monologue switches languages very quickly on command; however, there are some other interesting differences that happen. First of all, my inner monologue is rather drastically slowed down. I have a definite sense of having to wait for my brain to look up the right word to describe the concept I mean; that is, there is a definite sense that I know what I am thinking before I wrap it in the monologue. (This is absent when my internal monologue is in the default English; possibly because my English monologue is fast enough that I don't notice the delay). I think that that delay is the first time that I've noticed anticipatory thinking in my own head without the monologue. There's also grammatical differences between the two languages; an English sentence translated to Afrikaans will come out with a different word order (most of the time). This has its effect on my internal monologue as well; there's a definite sense of the meanings being delivered to my language centres (or at least to the word-looking-up part thereof) in the order that would be correct for an English sentence, and the language centre having to hold certain meanings in a temporary holding space (or something) until I get to the right part of the sentence. I also notice that my brain slips easily back into the English monologue; that's no doubt due mainly to force of habit, and did not come as a surprise.
That's odd, it works on three different browsers and two different machines for me. I guess there's some geographical restriction. Here's a PDF [] instead then, I was citing what's page 45 by the book's page numbering and page 60 by the PDF's.
Curiously, the first time I clicked the Google Books link, I got the "Yksi sormus hallitsemaan niitä kaikkia..." message ("One ring to rule them all..." - not an exact transcription), but the second time, it let me in.
My tulpa, which belongs to a Kardashev 3b civilization (but has its own penpal tulpas higher up) disagrees. For example, you can construct a gravitational shell around the earth to guard against collapse by compensating the gravity. Use superglue so the wabbits and stones don't start floating. Edit: This is incorrect [], stupid Tulpa. More like Kardashev F!
I think your tulpa is playing tricks on you. A shell around the Earth will have no effect on the interactions of bodies within it, or their interactions with everything outside the shell.
It could counteract the gravitational pull which would cause the surface of a hollow Earth to collapse otherwise. Edit: It would not [] :-(

A spherically symmetric shell has no effect on the gravitational field inside. It will not pull the surface of a hollow Earth outwards.

You're correct []. There's other ways to guard against collapse of an empty shell, it's a similar scenario to guarding against collapse of a Dyson sphere.
Hey, that's a great idea--lots of little black hole-fueled satellites in low-earth orbit, suspending the crust so it doesn't collapse in on itself. I think we can build this ladder, after all! edit: I think this falls prey to the shell theorem if they're in a geodesic orbit, but not if they're using constant acceleration to maintain their altitude, and vectoring their exhaust so it doesn't touch the Earth.
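The shell theorem the thread converges on is easy to check numerically: approximate a uniform spherical shell by many point masses and sum their pulls. The net field at an interior point cancels to (nearly) zero, while an exterior point feels the whole mass as if it sat at the centre. A minimal sketch with G = 1, unit total mass and unit radius; the Fibonacci-lattice sampling and the function name are illustrative choices of mine, not anything from the thread:

```python
import numpy as np

def shell_field(point, n=20000):
    """Net gravitational field at `point` due to a unit-mass spherical
    shell of radius 1, approximated by n point masses placed on a
    deterministic Fibonacci lattice (units chosen so G = 1)."""
    i = np.arange(n)
    z = 1 - 2 * (i + 0.5) / n                 # evenly spaced in z
    rho = np.sqrt(1 - z**2)
    theta = np.pi * (1 + np.sqrt(5)) * i      # golden-angle spiral
    pts = np.stack([rho * np.cos(theta), rho * np.sin(theta), z], axis=1)
    d = pts - np.asarray(point, dtype=float)  # vectors from test point to each mass
    dist = np.linalg.norm(d, axis=1)
    return (d / dist[:, None] ** 3).sum(axis=0) / n

inside = np.linalg.norm(shell_field([0.5, 0.0, 0.0]))
outside = np.linalg.norm(shell_field([2.0, 0.0, 0.0]))
print(inside)   # ~0: no net pull anywhere inside the shell
print(outside)  # ~0.25: same as a point mass at the centre, 1/2**2
```

So a shell can't hold a hollow Earth up from the inside; any compensating force has to be non-gravitational, as the thread concludes.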
I'm someone who still finds subjective experience mysterious, and I'd like to fix that. Does that book provide a good, gut-level, question-dissolving explanation?

I've had that conversation with a few people over the years, and I conclude that it does for some people and not others. The ones for whom it doesn't generally seem to think of it as a piece of misdirection, in which Dennett answers in great detail a different question than the one that was being asked. (It's not entirely clear to me what question they think he answers instead.)

That said, it's a pretty fun read. If the subject interests you, I'd recommend sitting down and writing out as clearly as you can what it is you find mysterious about subjective experience, and then reading the book and seeing if it answers, or at least addresses, that question.

He seems to answer the question of why humans feel and report that they are conscious; why, in fact, they are conscious. But I don't know how to translate that into an explanation of why I am conscious. The problem that many people (including myself) find mysterious is qualia. I know indisputably that I have qualia, or subjective experience. But I have no idea why that is, or what that means, or even what it would really mean for things to be otherwise (other than a total lack of experience, as in death). A perfect and complete explanation of the behavior of humans still doesn't seem to bridge the gap from "objective" to "subjective" experience. I don't claim to understand the question. Understanding it would mean having some idea of what possible answers or explanations might look like, and how to judge whether they are right or wrong. And I have no idea. But what Dennett writes doesn't seem to answer the question or dissolve it.
Here's how I got rid of my gut feeling that qualia are both real and ineffable.

First, phrasing the problem: Even David Chalmers [] thinks there are some things about qualia that are effable. Some of the structural properties of experience - for example, why colour qualia can be represented in a 3-dimensional space [] (hue, saturation, and brightness) - might be explained by structural properties of light and the brain, and might be susceptible to third-party investigation []. What he would call ineffable is the intrinsic properties of experience. With regard to colour-space, think of spectrum inversion []. When we look at a firetruck, the quale I see is the one you would call "green" if you could access it, but since I learned my colour words by looking at firetrucks, I still call it "red". If you think this is coherent, you believe in ineffable qualia: even though our colour-spaces are structurally identical, the "atoms" of experience additionally have intrinsic natures (I'll call these e.g. RED and GREEN) which are non-causal and cannot be objectively discovered. You can show that ineffable qualia (experiential intrinsic natures, independent of experiential structure) aren't real by showing that spectrum inversion (changing the intrinsic natures, keeping the structure) is incoherent.

An attempt at a solution: Take another experiential "spectrum": pleasure vs. displeasure. Spectrum inversion is harder, I'd say impossible, to take seriously in this case. If someone seeks out P, tells everyone P is wonderful, laughs and smiles when P happens, and even herself believes (by means of mental representations or whatever) that P is pleasant, then it makes no sense to me to imagine P really "ultimately" being UNPLEASANT for her. Anyway, if pleasure-displeasure can't be noncausally inverted
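The 3-dimensional structure of colour space discussed here is concrete enough to compute with. As a toy model, hedged accordingly: Python's standard colorsys module maps RGB triples into a (hue, saturation, value) space, and rotating every hue by half a turn is a crude stand-in for "spectrum inversion" - it permutes the colours while preserving the geometry of the space. The function name invert_hue is mine, not anything from the thread or from colorsys:

```python
import colorsys

def invert_hue(rgb):
    """Map a colour to its hue-opposite: a toy model of 'spectrum
    inversion' that swaps qualia while preserving the structure
    (distances, dimensions) of the 3-D colour space."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)

red = (1.0, 0.0, 0.0)
cyan = invert_hue(red)
print(cyan)               # (0.0, 1.0, 1.0): red's hue-opposite
print(invert_hue(cyan))   # (1.0, 0.0, 0.0): the map is an involution
```

The structural facts (which colours are adjacent, which are opposite) survive the swap, which is exactly why the inversion scenario is supposed to be undetectable from the outside.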
I'm not sure pleasure/pain is that useful, because 1) they have such an intuitive link to reaction/function, and 2) they might be meta-qualities: a similar sensation of pain can be strongly unpleasant, entirely tolerable, or even enjoyable depending on other factors. What you've done with colours is combine what feels like a somewhat arbitrary/ineffable quale with one that has direct behavioural terms involved, and declare them inextricably associated. Your talk of what's required to 'make the inversion successfully' is misleading: what if the monkey has GREEN and antsiness rather than RED and antsiness? It seems intuitive to assume 'red' and 'green' remain the same in normal conditions: but I'm left totally lost as to what 'red' would look like to a creature that could see a far wider or narrower spectrum than the one we can see. Or, for that matter, to someone with limited colour-blindness. There seems to me to be the Nagel 'what is it like to be a bat' problem, and I've never understood how that dissolves. It's been a long time since I read Dennett, but I was in the camp of 'not answering the question, while being fascinating around the edges and giving people who think qualia are straightforward pause for thought'. No-one's ever been able to clearly explain to me how his arguments work, to the point that I suspect that either I or they are fundamentally missing something. If the hard problem of consciousness has really been solved, I'd really like to know!

Consider the following dialog:
A: "Why do containers contain their contents?"
B: "Well, because they are made out of impermeable materials arranged in such a fashion that there is no path between their contents and the rest of the universe."
A: "Yes, of course, I know that, but why does that lead to containment?"
B: "I don't quite understand. Are you asking what properties of materials make them impermeable, or what properties of shapes preclude paths between inside and outside? That can get a little technical, but basically it works like this --"
A: "No, no, I understand that stuff. I've been studying containment for years; I understand the simple problem of containment quite well. I'm asking about the hard problem of containment: how does containment arise from those merely mechanical things?"
B: "Huh? Those 'merely mechanical things' are just what containment is. If there's no path X can take from inside Y to outside Y, X is contained by Y. What is left to explain?"
A: "That's an admirable formulation of the hard problem of containment, but it doesn't solve it."

How would you reply to A?

There's nothing left to explain about containment. There's something left to explain about consciousness.
Would you expect that reply to convince A? Or would you just accept that A might go on believing that there's something important and ineffable left to explain about containment, and there's not much you can do about it? Or something else?
If you were a container, you would understand the wonderful feeling of containment, the insatiable longing to contain, the sweet anticipation of the content being loaded, the ultimate reason for containing, and other incomparably wonderful and torturous qualia no non-container can enjoy. Not being one, all you can understand is the mechanics of containment, a pale shadow of the rich and true containing experience. OK, maybe I'm getting a bit NSFW here...
It is for A to state what the remaining problem actually is. And qualiaphiles can do that:

D: I can explain how conscious entities respond to their environments, process information and behave. What more is there?

C: How it all looks from the inside -- the qualia.
That's funny, David again and the other David arguing about the hard versus the "soft" problem of consciousness. Have you two lost your original? I think A and B are sticking different terminology on a similar thing. A laments that the "real" problem hasn't been solved; B points out that it has, to the extent that it can be solved. Yet in a way they tread common ground: A believes there are aspects of the problem of con(tainment|sciousness) that didn't get explained away by a "mechanistic" model. B believes that a (probably reductionist) model suffices: "this configuration of matter/energy can be called 'conscious'" is not fundamentally different from "this configuration of matter/energy can be called 'a particle'". If you're content with such an explanation for the latter, why not the former? ... However, with many Bs I find that even accepting a matter-of-fact workable definition of "these states correspond to consciousness" is used as a stop sign more so than as a starting point. Just as A insists that further questions exist, so should B, and many of those questions would be quite similar, to the point of practically dissolving the initial difference. Off the top of my head: If the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent, or does everything have it in some raw, unprocessed form? Is it just that the qualia we experience are modulated and processed by virtue of the relevant matter (the brain) being in a state which can organize memories, reflect on its experiences, etc.? Anthropic considerations apply: even if anything had a "value" for "subjective experience", we would know only about our own, and probably only ascribe that property to similar things (other humans or highly developed mammals). But is that just because those can reflect upon that property? Are waterfalls conscious, even if not sentient? "What an algorithm feels like on the inside" - any natural phenomenon is executing algorithms just th
I can no longer remember if there was actually an active David when I joined, or if I just picked the name on a lark. I frequently introduce myself in real life as "Dave -- no, not that Dave, the other one."
I always assumed that the name was originally to distinguish you from David Gerard.
Sure, I agree that there may be systems that have subjective experience but do not manifest that subjective experience in any way we recognize or understand. Or, there may not. In the absence of any suggestion of what might be evidence one way or the other, in the absence of any notion of what I would differentially expect to observe in one condition over the other, I don't see any value to asking the question. If it makes you feel better if I don't deny their existence, well, OK, I don't deny their existence, but I really can't see why anyone should care one way or the other. In any case, I don't agree that the B's studying conscious experience fail to explore further questions. Quite the contrary, they've made some pretty impressive progress in the last five or six decades towards understanding just how the neurobiological substrate of conscious systems actually works. They simply don't explore the particular questions you're talking about here. And it's not clear to me that the A's exploring those questions are accomplishing anything. So, A asks "If containment is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?" How would you reply to A? My response is something like "We know that certain configurations of physical objects give rise to containment. Sure, it's not impossible that "unprocessed containment" exists in other systems, and we just haven't ever noticed it, but why are you even asking that question?"
But I don't think conscious experience (qualia, if you like) has been explained. I think we have some pretty good explanations of how people act, but I don't see how they pierce through to consciousness as experienced, and to linked questions such as 'what is it like to be a bat?' or 'how do I know my green isn't your red?' It would help if you could sum up the merely mechanical things that are 'just what consciousness is' in Dennett's (or your!) sense. I've never been clear on what confident materialists are saying on this: I'm sometimes left with the impression that they're denying that we have subjective experience, sometimes that they're saying it's somehow an inherent quality of other things, sometimes that it's an incidental byproduct. All of these seem problematic to me.
I don't think it would, actually. The merely mechanical things that are 'just what consciousness is' in Dennett's sense are the "soft problem of consciousness" in Chalmers' sense; I don't expect any amount of summarizing or detailing the former to help anyone feel like the "hard problem of consciousness" has been addressed, any more than I expect any amount of explanation of materials science or topology to help A feel like the "hard problem of containment" has been addressed. But, since you asked: I'm not denying that we have subjective experiences (nor do I believe Dennett is), and I am saying that those experiences are a consequence of our neurobiology (as I believe Dennett does). If you're looking for more details of things like how certain patterns of photons trigger increased activation levels of certain neural structures, there are better people to ask than me, but I don't think that's what you're looking for. As for whether they are an inherent quality or an incidental byproduct of that neurobiology, I'm not sure I even understand the question. Is being a container an inherent quality of being composed of certain materials and having certain shape, or an incidental byproduct? How would I tell? And: how would you reply to A?
That's such a broad statement, it could cover some forms of dualism.
I may not remember Chalmers's soft problem well enough either for the reference to help! If experiences are a consequence of our neurobiology, fine. Presumably a consequence that itself has consequences: experiences can be used in causal explanations? But it seems to me that we could explain how a bat uses echolocation without knowing what echolocation looks like (sounds like? feels like?) to a bat. And that we could measure how well people distinguish wavelengths of light, etc., without knowing what the colour looks like to them. It seems subjective experience is just being ignored: we could identify that an AI could carry out all sorts of tasks that we associate with consciousness, but I have no idea when we'd say 'it now has conscious experiences'. Or whether we'd talk about degrees of conscious experience, or whatever. This is obviously ethically quite important, if not that directly pertinent to me, and it bothers me that I can't respond to it. With a container, you describe various qualities and that leaves the question 'can it contain things': do things stay in it when put there. You're adding a sort of purpose-based functional classification to a physical object. When we ask 'is something conscious?', we're not asking about a function that it can perform. On a similar note, I don't think we're trying to reify something (as in the case where we have a sense of objects having ongoing identity, which we then treat as a fundamental thing and end up asking if a ship is the same after you replace every component of it one by one). We're not chasing some over-abstracted ideal of consciousness; we're trying to explain an experienced reality. So to answer A, I'd say 'there is no fundamental property of 'containment'. It's just a word we use to describe one thing surrounded by another in circumstances X and Y. You're over-idealising a useful functional concept'. The same is not true of consciousness, because it's not (just) a function. It might help if you
I'm splitting up my response to this into several pieces because it got long. Some other stuff:

The process isn't anything special, but OK, since you ask. Let's assert for simplicity that "I" has a relatively straightforward and consistent referent, just to get us off the ground. Given that, I conclude that I am at least sometimes capable of subjective experience, because I've observed myself subjectively experiencing. I further observe that my subjective experiences reliably and differentially predict certain behaviors. I do certain things when I experience pain, for example, and different things when I experience pleasure. When I observe other entities (E2) performing those behaviors, that's evidence that they, too, experience pain and pleasure. Similar reasoning applies to other kinds of subjective experience.

I look for commonalities among E2 and I generalize across those commonalities. I notice certain biological structures are common to E2 and that when I manipulate those structures, I reliably and differentially get changes in the above-referenced behavior. Later, I observe additional entities (E3) that have similar structures; that's evidence that E3 also demonstrates subjective experience, even though E3 doesn't behave the way I do. Later, I build an artificial structure (E4) and I observe that there are certain properties (P1) of E2 which, when I reproduce them in E4 without reproducing other properties (P2), reproduce the behavior of E2. I conclude that P1 is an important part of that behavior, and P2 is not. I continue this process of observation and inference and continue to draw conclusions based on it.

And at some point someone asks "is X conscious?" for various Xes: If I interpret "conscious" as meaning having subjective experience, then for each X I observe it carefully and look for the kinds of attributes I've attributed to subjective experience... behaviors, anatomical structures, formal structures, etc... and compare it to my accumulated kno
Ok, well given that responses to pain/pleasure can equally be explained by more direct evolutionary reasons, I'm not sure that the inference from action to experience is very useful. Why would you ever connect these things with experience rather than other, more directly measurable things? But the point is definitely not that I have a magic bullet or easy solution: it's that I think there's a real and urgent question - are they conscious? - which I don't see how information about responses etc. can answer. Compare to the cases of containment, or heat, or life - all the urgent questions are already resolved before those issues are even raised.
As I say, the best way I know of to answer "is it conscious?" about X is to compare X to other systems about which I have confidence-levels about its consciousness and look for commonalities and distinctions. If there are alternative approaches that you think give us more reliable answers, I'd love to hear about them.
I have no reliable answers! And I have low meta-confidence levels (in that it seems clear to me that people and most other creatures are conscious, but I have no confidence in why I think this). If the Dennett position still sees this as a complete bafflement but thinks it will be resolved along with the so-called 'soft' problem, I have less of an issue than I thought I did. Though I'd still regard the view that the issue will become clear as one of hope rather than evidence.
I'm splitting up my response to this into several pieces because it got long. Some other stuff:

I expect so, sure. For example, I report having experiences; one explanation of that (though hardly the only possible one) starts with my actually having experiences and progresses forward in a causal fashion. Sure, there are many causal explanations of many phenomena, including but not limited to how bats use echolocation, that don't posit subjective experience as part of their causal chain. For example, humans do all kinds of things without the subjective experience of doing them.

Certainly. In the examples you give, yes, it is being ignored. So? Lots of things are being ignored in those examples... mass, electrical conductivity, street address, level of fluency in Russian, etc. If these things aren't necessary to explain the examples, there's nothing wrong with ignoring them. On the other hand, if we look at an example for which experience ought to be part of the causal chain (for example, as I note above, reporting having those experiences), subjective experience is not ignored. X happens; as a consequence of X a subjective experience Y arises; as a consequence of Y a report Z arises; and so forth. (Of course, for some reports we do have explanations that don't presume Y... e.g., confabulation, automatic writing, etc. But that needn't be true for all reports. Indeed, it would be surprising if it were.)

"But we don't know what Xes give rise to the Y of subjective experience, so we don't fully understand subjective experience!" Well, yes, that's true. We don't fully understand fluency in Russian, either. But we don't go around as a consequence positing some mysterious essence of Russian fluency that resists neurobiological explanation... though two centuries ago, we might have done so. Nor should we. Neither should we posit some mysterious essence of subjective experience. "But subjective experience is different! I can imagine what a mechanical explanation
I have a lot of sympathy for this. The most plausible position of reductive materialism is simply that at some future scientific point this will become clear. But this is inevitably a statement of faith, rather than an acknowledgement of current achievement. It's very hard to compare current apparent mysteries to solved mysteries - I do get that. Having said that, I can't even see what the steps on the way to explaining consciousness would be, and claiming there is no such thing seems not to be an option (unlike with 'life', 'free will', etc.), whereas in most other cases you rely on saying that you can't see how the full extent could be achieved: a machine might speak crap Russian in some circumstances, etc. Also, if a machine can speak Russian, you can check that. I don't know how we'd check that a machine was conscious. BTW, when I said 'it seems subjective experience is just being ignored', I meant ignored in your and Dennett's arguments, not in specific explanations. I have nothing against analysing things in ways that ignore consciousness, if they work.
I don't know what the mechanical explanation would look like, either. But I'm sufficiently aware of how ignorant my counterparts two centuries ago would have been of what a mechanical explanation for speaking Russian would look like that I don't place too much significance on my ignorance. I agree that testing whether a system is conscious or not is a tricky problem. (This doesn't just apply to artificial systems.)
Indeed: though artificial systems are more intuitively difficult, as we don't have as clear an intuitive expectation. You can take an outside view and say 'this will dissolve like the other mysteries'. I just genuinely find this implausible, if only because you can take steps towards the other mysteries (speaking bad Russian occasionally) and because you have a clear empirical standard (Russians). Whereas for consciousness I don't have any standard for identifying another's consciousness: I do it only by analogy with myself and by the implausibility of my having an apparently causal element that others who act similarly to me lack.
I agree that the "consciousness-detector" problem is a hard problem. I just can't think of a better answer than the generalizing-from-commonalities strategy I discussed previously, so that's the approach I go with. It seems capable of making progress for now. And I understand that you find it implausible. That said, I suspect that if we solve the "soft" problem of consciousness well enough that a typical human is inclined to treat an artificial system as though it were conscious, it will start to seem more plausible. Perhaps it will be plausible and incorrect, and we will happily go along treating computers as conscious when they are no such thing. Perhaps we're currently going along treating dogs and monkeys and 90% of humans as conscious when they are no such thing. Perhaps not. Either way, plausibility (or the absence of it) doesn't really tell us much.
Yes. This is what worries me: I can see more advances making everyone sure that computers are conscious, but my suspicion is that this will not be logical. Take the same processor, and I suspect the chances of its being seen as conscious will rise sharply if it's put in a moving machine, rise sharply again for a humanoid, again for face/voice, and again for a physically indistinguishable one. The problem with generalising from commonalities is that I have precisely one direct example of consciousness. Although having said that, I don't find epiphenomenal accounts convincing, so it's reasonable for me to think that, as my statements about qualia seem to follow causally from experiencing said qualia, other people don't have a totally separate framework for their statements about qualia. I wouldn't be that confident, though, and it gets harder with artificial consciousness.
Sure. By the same token, if you take me, remove my ability to communicate, and encase me in an opaque cylinder, nobody will recognize me as a being with subjective experience. Or, for that matter, as a being with the ability to construct English sentences. We are bounded intellects reasoning under uncertainty in a noisy environment. We will get stuff wrong. Sometimes it will be important stuff. I agree. And, as I said initially, I apply the same reasoning not only to the statements I make in English, but to all manner of behaviors that "seem to rise from my qualia," as you put it... all of it is evidence in favor of other organisms also having subjective experience, even organisms that don't speak English. How confident are you that I possess subjective experience? Would that confidence rise significantly if we met in person and you verified that I have a typical human body? Agreed.
Consciousness does seem different in that we can have a better and better understanding of all the various functional elements but that we're 1) left with a sort of argument from analogy for others having qualia 2) even if we can resolve(1), I can't see how we can start to know whether my green is your red etc. etc. I can't think of many comparable cases: certainly I don't think containership is comparable. You and I could end up looking at the AI in the moment before it destroys/idealises/both the world and say 'gosh, I wonder if it's conscious'. This is nothing like the casuistic 'but what about this container gives it its containerness'. I think we're on the same point here, though? I'm intuitively very confident you're conscious: and yes, seeing you were human would help (in that one of the easiest ways I can imagine you weren't conscious is that you're actually a computer designed to post about things on less wrong. This would also explain why you like Dennett - I've always suspected he's a qualia-less robot too! ;-)
Yes, I agree that we're much more confused about subjective experience than we are about containership. We're also more confused about subjective experience than we are about natural language, about solving math problems, about several other aspects of cognition. We're not _un_confused about those things, but we're less confused than we used to be. I expect us to grow still less confused over time. I disagree about the lack of comparable cases. I agree about containers; that's just an intuition pump. But the issues that concern you here arise for any theoretical construct for which we have only indirect evidence. The history of science is full of such things. Electrons. Black holes. Many worlds. Fibromyalgia. Phlogiston. Etc. What makes subjective experience different is not that we lack the ability to perceive it directly; that's pretty common. What makes it different is that we can perceive it directly in one case, as opposed to the other stuff where we perceive it directly in zero cases. Of course, it's also different from many of them in that it matters to our moral reasoning in many cases. I can't think of a moral decision that depends on whether phlogiston exists, but I can easily think of a moral decision that depends on whether cows have subjective experiences. OTOH, it still isn't unique; some people make moral decisions that depend on the actuality of theoretical constructs like many worlds and PTSD.
Fair enough. As an intuition pump, for me at least, it's unhelpful: it gave the impression that you thought that consciousness was merely a label being mistaken for a thing (like 'life' as something beyond its parts). Only having indirect evidence isn't the problem. For a black hole, I care about the observable functional parts. I wouldn't be being sucked towards it and being crushed while going 'but is it really a black hole?' A black hole is like a container here: what matter are the functional bits that make it up. For consciousness, I care if a robot can reason and can display conscious-type behaviour, but I also care if it can experience and feel. Many worlds could be comparable if there is evidence that means that there are 'many worlds' but people are vague about if these worlds actually exist. And you're right, this is also a potentially morally relevant point.
Insofar as people infer from the fact of subjective experience that there is some essence of subjective experience that is, as you say, "beyond its parts" (and their patterns of interaction), I do in fact think they are mistaking a label for a thing.
I dunno about essences. The point is that you can observe lots of interactions of neurons and behaviours and be left with an argument from analogy to say "they must be conscious because I am and they are really similar, and the idea that my consciousness is divorced from what I do is just wacky". You can observe all the externally observable, measurable things that a black hole or container can do, and then if someone argues about essences you wonder if they're actually referring to anything: it's a purely semantic debate. But you can observe all the things a fish, or tree, or advanced computer can do, predict it for all useful purposes, and still not know if it's conscious. This is bothersome. But it's not to do with essences, necessarily.
Insofar as people don't infer something else, beyond the parts of (for example) my body and their pattern of interactions, that accounts for (for example) my subjective experience, I don't think they are mistaking a label for a thing.
Well, until we know how to identify if something/someone is conscious, it's all a bit of a mystery: I couldn't rule out consciousness being some additional thing. I have an inclination to do so because it seems unparsimonious, but that's it.
I'm splitting up my response to this into several pieces because it got long. The key bit, IMHO: And I would agree with you. "No," replies A, "you miss the point completely. I don't ask whether a container can contain things; clearly it can, I observe it doing so. I ask how it contains things. What is the explanation for its demonstrated ability to contain things? Containership is not just a function," A insists, "though I understand you want to treat it as one. No, containership is a fundamental essence. You can't simply ignore the hard question of "is X a container?" in favor of thinking about simpler, merely functional questions like "can X contain Y?". And, while we're at it," A continues, "what makes you think that an artificial container, such as we build all the time, is actually containing anything rather than merely emulating containership? Sure, perhaps we can't tell the difference, but that doesn't mean there isn't a difference." I take it you don't find A's argument convincing, and neither do I, but it's not clear to me what either of us could say to A that A would find at all compelling.
Maybe we couldn't, but A is simply asserting that containership is a concept beyond its parts, whereas I'm appealing directly to experience: the relevance of this is that whether something has experience matters. Ultimately for any case, if others just express bewilderment in your concepts and apparently don't get what you're talking about, you can't prove it's an issue. But at any rate, most people seem to have subjective experience. Being conscious isn't a label I apply to certain conscious-type systems that I deem 'valuable' or 'true' in some way. Rather, I want to know what systems should be associated with the clearly relevant and important category of 'conscious'.
My thoughts about how I go about associating systems with the expectation of subjective experience are elsewhere and I have nothing new to add to it here. As regards you and A... I realize that you are appealing directly to experience, whereas A is merely appealing to containment, and I accept that it's obvious to you that experience is importantly different from containment in a way that makes your position importantly non-analogous to A's. I have no response to A that I expect A to find compelling... they simply don't believe that containership is fully explained by the permeability and topology of containers. And, you know, maybe they're right... maybe some day someone will come up with a superior explanation of containerhood that depends on some previously unsuspected property of containers and we'll all be amazed at the realization that containers aren't what we thought they were. I don't find it likely, though. I also have no response to you that I expect you to find compelling. And maybe someday someone will come up with a superior explanation of consciousness that depends on some previously unsuspected property of conscious systems, and I'll be amazed at the realization that such systems aren't what I thought they were, and that you were right all along.
Are you saying you don't experience qualia and find them a bit surprising (in a way you don't for containerness)? I find it really hard not to see arguments of this kind as a little disingenuous: is the issue genuinely not difficult for some people, or is this a rhetorical stance intended to provoke better arguments, or awareness of the weakness of current arguments?
I have subjective experiences. If that's the same thing as experiencing qualia, then I experience qualia. I'm not quite sure what you mean by "surprising" here... no, it does not surprise me that I have subjective experiences, I've become rather accustomed to it over the years. I frequently find the idea that my subjective experiences are a function of the formal processes my neurobiology implements a challenging idea... is that what you're asking? Then again, I frequently find the idea that my memories of my dead father are a function of the formal processes my neurobiology implements a challenging idea as well. What, on your view, am I entitled to infer from that?
Yes, I meant surprising in light of other discoveries/beliefs. On memory: is it the conscious experience that's challenging (in which case it's just a sub-set of the same issue) or do you find the functional aspects of memory challenging? Even though I know almost nothing about how memory works, I can see plausible models and how it could work, unlike consciousness.
Isn't our objection to A's position that it doesn't pay rent in anticipated experience? If one thinks there is a "hard problem of consciousness" such that different answers would cause one to behave differently, then one must take up the burden of identifying what the difference would look like, even if we can't create a measuring device to find it just now. If A means that we cannot determine the difference in principle, then there's nothing we should do differently. If A means that a measuring device does not currently exist, he needs to identify the range of possible outputs of the device.
This may be a situation where that's a messy question. After all, qualia are experience. I keep expecting experiences, and I keep having experiences. Do experiences have to be publicly verifiable?
If two theories both lead me to anticipate the same experience, the fact that I have that experience isn't grounds for choosing among them. So, sure, the fact that I keep having experiences is grounds for preferring a theory of subjective-experience-explaining-but-otherwise-mysterious qualia over a theory that predicts no subjective experience at all, but not necessarily grounds for preferring it to a theory of subjective-experience-explaining-neural-activity.
They don't necessarily once you start talking about uploads, or the afterlife for that matter.
Different answers to the HP would undoubtedly change our behaviour, because they would indicate that different classes of entity have feelings, which impacts morality. Indeed, it is pretty hard to think of anything more impactful. The measuring device for conscious experience is consciousness, which is the whole problem.
Sure. But in this sense, false believed answers to the HP are no different from true believed answers.... that is, they would both potentially change our behavior the way you describe. I suspect that's not what TimS meant.
That is the case for most any belief you hold (unless you mean "in the exact same way", not as "change behavior"). You may believe there's a burglar in your house, and that will impact your actions, whether it be false or true. Say you believe that it's more likely there is a burglar: you are correct in acting upon that belief even if it turns out to be incorrect. It's not AIXI's fault if it believes in the wrong thing for the right reasons. In that sense, you can choose an answer for example based on complexity considerations. In the burglar example, the answer you choose (based on data such as crime rate, cat population etc.) can potentially be further experimentally "verified" (the probability increased) as true or false, but even before such verification, your belief can still be strong enough to act upon. After all, you do act upon your belief that "I am not living in a simulation which will eventually judge and reward me only for the amount of cheerios I've eaten". It also doesn't lead to different expected experiences at the present time, yet you also choose to act as if it were true. Prior based on complexity considerations alone, yet strong enough to act upon. Same when thinking about whether the sun has qualia ("hot hot hot hot hot"). (Bit of a hybrid fusion answer also meant to refer to our neighboring discussion branch.) Cheerio!
Yes, I agree with all of this.
Well, in the case of "do landslides have qualia", Occam's Razor could be used to assign probabilities just the same as we assign probabilities in the "cheerio simulation" example. So we've got methodology, we've got impact, enough to adopt a stance on the "psychic unity of the cosmos", no?
I'm having trouble following you, to be honest. My best guess is that you're suggesting that, with respect to systems that do not manifest subjective experience in any way we recognize or understand, Occam's Razor provides grounds to be more confident that they have subjective experience than that they don't. If that's what you mean, I don't see why that should be. If that's not what you mean, can you rephrase the question?
I think it's conceivable if not likely that Occam's Razor would favor or disfavor qualia as a property of more systems than just those that seem to show or communicate them in terms we're used to. I'm not sure which, but it is a question worth pondering, with an impact on how we view the world, and accessible through established methodology, to a degree. I'm not advocating assigning a high probability to "landslides have raw experience", I'm advocating that it's an important question, the probability of which can be argued. I'm an advocate of the question, not the answer, so to speak. And as such opposed to "I really can't see why anyone should care one way or the other".
Ah, I see. So, I stand by my assertion that in the absence of evidence one way or the other, I really can't see why anyone should care. But I agree that to the extent that Occam's Razor type reasoning provides evidence, that's a reason to care. And if it provided strong evidence one way or another (which I don't think it does, and I'm not sure you do either) that would provide a strong reason to care.
I have evidence in the form of my personal experience of qualia. Granted, I have no way of showing you that evidence, but that doesn't mean I don't have it.
Agreed that the ability to share evidence with others is not a necessary condition of having evidence. And to the extent that I consider you a reliable evaluator of (and reporter of) evidence, your report is evidence, and to that extent I have a reason to care.
The point has been made that we should care because qualia have moral implications.
Moral implications of a proposition in the absence of evidence one way or another for that proposition are insufficient to justify caring. If I actually care about the experiences of minds capable of experiences, I do best to look for evidence for the presence or absence of such experiences. Failing such evidence, I do best to concentrate my attention elsewhere.
It's possible to have both a strong reason to care, and weak evidence, i.e. due to the moral hazard dependent on some doubtful proposition. People often adopt precautionary principles in such scenarios.
I don't think that's the situation here though. That sounds like a description of this situation: (imagine) we have weak evidence that 1) snakes are sapient, and we grant that 2) sapience is morally significant. Therefore (perhaps) we should avoid wanton harm to snakes. Part of why this argument might make sense is that (1) and (2) are independent. Our confidence in (2) is not contingent on the small probability that (1) is true: whether or not snakes are sapient, we're all agreed (let's say) that sapience is morally significant. On the other hand, the situation with qualia is one where we have weak evidence (suppose) that A) qualia are real, and we grant that B) qualia are morally significant. The difference here is that (B) is false if (A) is false. So the fact that we have weak evidence for (A) means that we can have no stronger (and likely, we must have yet weaker) evidence for (B).
Does the situation change significantly if "the situation with qualia" is instead framed as A) snakes have qualia and B) qualia are morally significant?
Yes, if the implication of (A) is that we're agreed on the reality of qualia but are now wondering whether or not snakes have them. No, if (A) is just a specific case of the general question 'are qualia real?'. My point was probably put in a confusing way: all I mean to say was that Juno seemed to be arguing as if it were possible to be very confident about the moral significance of qualia while being only marginally confident about their reality.
(nods) Makes sense.
What? Are you saying we have weak evidence for qualia even in ourselves?
What I think of the case for qualia is beside the point, I was just commenting on your 'moral hazard' argument. There you said that even if we assume that we have only weak evidence for the reality of qualia, we should take the possibility seriously, since we can be confident that qualia are morally significant. I was just pointing out that this argument is made problematic by the fact that our confidence in the moral significance of qualia can be no stronger than our confidence in their reality, and therefore by assumption must be weak.
But of course it can. I can be much more confident in (P -> Q) than I am in P. For instance, I can be highly confident that if I won the lottery, I could buy a yacht.
I am guessing that Juno_Watt means that strong evidence for our own perception of qualia makes them real enough to seriously consider their moral significance, whether or not they are "objectively real".
Yes, they often do. On your view, is there a threshold of doubtfulness of a proposition below which it is justifiable to not devote resources to avoiding the potential moral hazard of that proposition being true, regardless of the magnitude of that moral hazard?
I don't think it's likely my house will catch fire, but I take out fire insurance. OTOH, if I don't set a lower bound I will be susceptible to Pascal's muggings.
He may have meant something like "Qualiaphobia implies we would have no experiences at all". However, that all depends on what you mean by experience. I don't think the Expected Experience criterion is useful here (or anywhere else).
I realize that non-materialistic "intrinsic qualities" of qualia, which we perceive but which aren't causes of our behavior, are incoherent. What I don't fully understand is why have I any qualia at all. Please see my sibling comment.
Tentatively: If it's accepted that GREEN and RED are structurally identical, and that in virtue of this they are phenomenologically identical, why think that phenomenology involves *anything*, beyond structure, which needs explaining? I think this is the gist of Dennett's dissolution attempts. Once you've explained why your brain is in a seeing-red brain-state, why this causes a believing-that-there-is-red mental representation, onto a meta-reflection-about-believing-there-is-red functional process, etc., why think there's anything else?
Phenomenology doesn't involve anything beyond structure. But my experience seems to.
(nods) Yes, that's consistent with what I've heard others say. Like you, I don't understand the question and have no idea of what an answer to it might look like, which is why I say I'm not entirely clear what question you/they claim is being answered. Perhaps it would be more correct to say I'm not clear how it differs from the question you/they want answered. Mostly I suspect that the belief that there is a second question to be answered that hasn't been is a strong, pervasive, sincere, compelling confusion, akin to "where does the bread go?". But I can't prove it.

Relatedly: I remember, many years ago, attending a seminar where a philosophy student protested to Dennett that he didn't feel like the sort of process Dennett described. Dennett replied "How can you tell? Maybe this is exactly what the sort of process I'm describing feels like!" I recognize that the traditional reply to this is "No! The sort of process Dennett describes doesn't feel like anything at all! It has no qualia, it has no subjective experience!" To which my response is mostly "Why should I believe that?"

An acceptable alternative seems to be that subjective experience ("qualia", if you like) is simply a property of certain kinds of computation, just as the ability to predict the future location of a falling object ("prescience", if you like) is a property of certain kinds of computation. To which one is of course free to reply "but how could prescience -- er, I mean qualia -- possibly be an aspect of computation??? It just doesn't make any sense!!!" And I shrug.

Sure, if I say in English "prescience is an aspect of computation," that sounds like a really weird thing to say, because "prescience" and "computation" are highly charged words with opposite framings. But if I throw out the English words and think about computing the state of the world at some future time, it doesn't seem mysterious at all, and such computations have become s
Thanks for your reply and engagement. I agree. We already know what we feel like. Once we know empirically what kind of process we are, we can indeed conclude that "that's what that kind of process feels like". What I don't understand is why being some kind of process feels like anything at all. Why it seems to myself that I have qualia in the first place. I do understand why it makes sense for an evolved human to have such beliefs. I don't know if there is a further question beyond that. As I said, I don't know what an answer would even look like. Perhaps I should just accept this and move on. Maybe it's just the case that "being mystified about qualia" is what the kind of process that humans are is supposed to feel like!

As an analogy, humans have religious feelings with apparently dedicated neurological underpinnings. Some humans feel the numinous strongly, and they ask for an answer to the Mystery of God, which to them appears as obvious as any qualia.

However, an answer that would be more satisfactory (if possible) would be an exploration and an explanation of mind-space and its accompanying qualia. Perhaps if I understood the actual causal link from which kind of process I am, to which qualia I have, part of the apparent mystery would go away. Does being like some other kind of process "feel like" anything? Like what? Would it be meaningful for me to experience something else without becoming something else? Are the qualia of a cat separate from being a cat? Or would I have to have a cat-mind and forget all about being human and verbal and DanArmak to experience the qualia of a cat, at which point I'd be no different than any existing cat, and which I wouldn't remember on becoming human again?

I agree. To clarify, I believe all of these propositions:

* Full materialism
* Humans are physical systems that have self-awareness ("consciousness") and talk about it
* That isn't a separate fact that could be otherw
Shouldn't you anticipate being either clone with 100% probability, since both clones will make that claim and neither can be considered wrong?
What I meant is that some time after the cloning, the clones' lives would become distinguishable. One of them would experience X, while the other would experience ~X. Then I would anticipate experiencing X with 50% probability. If they live identical lives forever, then I can anticipate "being either clone" or as I would call it, "not being able to tell which clone I am".
My first instinctive response is "be wary of theories of personal identity where your future depends on a coin flip". You're essentially saying "one of the clones believes that it is your current 'I' experiencing 'X', and it has a 50% chance of being wrong". That seems off. I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability. The problem is that the way anticipation works with probability depends implicitly on there only being one future self that things can happen to.
No, I'm not saying that. I'm saying: first both clones believe "anticipate X with 50% probability". Then one clone experiences X, and the other ~X. After that they know what they experienced, so of course one updates to believe "I experienced X with ~1 probability" and the other "I experienced ~X with ~1 probability". I think we need to unpack "experiencing" here. I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability. If X takes nontrivial time, such that one can experience "X is going on now", then I anticipate ever experiencing that with 50% probability.
But that means there is always (100%) a future state of you that has experienced X, and a separate future state that has always (100%) experienced ~X. I think there's some similarity here to the problem of probability in a many-worlds universe, except in this case both versions can still interact. I'm not sure how that affects things myself.
You're right, there's a contradiction in what I said. Here's how to resolve it. At time T=1 there is one of me, and I go to sleep. While I sleep, a clone of me is made and placed in an identical room. At T=2 both clones wake up. At T=3 one clone experiences X. The other doesn't (and knows that he doesn't). So, what should my expected probability for experiencing X be? At T=3 I know for sure, so it goes to 1 for one clone and 0 for the other. At T=2, the clones have woken up, but each doesn't know which he is yet. Therefore each expects X with 50% probability. At T=1, before going to sleep, there isn't a single number that is the correct expectation. This isn't because probability breaks down, but because the concept of "my future experience" breaks down in the presence of clones. Neither 50% nor 100% is right. 50% is wrong for the reason you point out. 100% is also wrong, because X and ~X are symmetrical. Assigning 100% to X means 0% to ~X. So in the presence of expected future clones, we shouldn't speak of "what I expect to experience" but "what I expect a clone of mine to experience" - or "all clones", or "p proportion of clones".
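The ambiguity here can be made concrete with a toy model (this is just my illustration; the list-of-successors framing isn't anything proposed in the thread). It separates the two well-defined questions that "what do I expect to experience?" collapses together once there are two successors:

```python
# Toy model of the cloning scenario: after T=3 there are two successor
# states of "me", one of which experienced X and one of which didn't.
successors = [{"experienced_X": True}, {"experienced_X": False}]

# Question 1: what fraction of my successors experienced X?
# (the reading under which "50%" is the right answer)
frac = sum(s["experienced_X"] for s in successors) / len(successors)

# Question 2: does at least one successor experience X?
# (the reading under which "100%" is the right answer)
some = any(s["experienced_X"] for s in successors)

# Note that under reading 2, "X" and "~X" can BOTH be near-certain,
# which is why "expect X with p" stops behaving like ordinary probability.
all_x = all(s["experienced_X"] for s in successors)

print(frac)   # 0.5
print(some)   # True
print(all_x)  # False
```

On this framing, neither 50% nor 100% is wrong; they answer different questions, which is why speaking of "what proportion of clones" avoids the contradiction.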
Suppose I'm ~100% confident that, while we sleep tonight, someone will paint a blue dot on either my forehead or my husband's but not both. In that case, I am ~50% confident that I will see a blue dot, I am ~100% confident that one of us will see a blue dot, I am ~100% confident that one of us will not see a blue dot. If someone said that seeing a blue dot and not-seeing a blue dot are symmetrical, so assigning ~100% confidence to "one of us will see a blue dot" means assigning ~0% to "one of us will not see a blue dot", I would reply that they are deeply confused. The noun phrase "one of us" simply doesn't behave that way. In the scenario you describe, the noun phrase "I" doesn't behave that way either. I'm ~100% confident that I will experience X, and I'm ~100% confident that I will not experience X.
I really find that subscripts help here.
In your example, you anticipate your own experiences, but not your husband's experiences. I don't see how this is analogous to a case of cloning, where you equally anticipate both. I'm not saying that "[exactly] one of us will see a blue dot" and "[neither] one of us will not see a blue dot" are symmetrical; that would be wrong. What I was saying was that "I will see a blue dot" and "I will not see a blue dot" are symmetrical. All the terminologies that have been proposed here - by me, and you, and FeepingCreature - are just disagreeing over names, not real-world predictions. I think the quoted statement is at the very least misleading because it's semantically different from other grammatically similar constructions. Normally you can't say "I am ~1 confident that [Y] and also ~1 confident that [~Y]". So "I" isn't behaving like an ordinary object. That's why I think it's better to be explicit and not talk about "I expect" at all in the presence of clones. My comment about "symmetrical" was intended to mean the same thing: that when I read the statement "expect X with 100% probability", I normally parse it as equivalent to "expect ~X with 0% probability", which would be wrong here. And X and ~X are symmetrical by construction in the sense that every person, at every point in time, should expect X and ~X with the same probability (whether you call it "both 50%" like I do, or "both 100%" like FeepingCreature prefers), until of course a person actually observes either X or ~X.
In my example, my husband and I are two people, anticipating the experience of two people. In your example, I am one person, anticipating the experience of two people. It seems to me that what my husband and I anticipate in my example is analogous to what I anticipate in your example. But, regardless, I agree that we're just disagreeing about names, and if you prefer the approach of not talking about "I expect" in such cases, that's OK with me.
One thing you seem to know but keep forgetting is the distinction between your current state and recorded memories. Memories use extreme amounts of lossy and biased compression, and some of your confusion seems to come from looking at your current experience while explicitly thinking about this stuff, then generalizing it as something continuous over time and applicable to a wider range of mental states than it actually is.
Sure, that makes sense. As far as I know, current understanding of neuroanatomy hasn't identified the particular circuits responsible for that experience, let alone the mechanism whereby the latter cause the former. (Of course, the same could be said for speaking English.) But I can certainly see how having such an explanation handy might help if I were experiencing the kind of insistent sense of mysteriousness you describe (for subjective experience or for speaking English).
Hmm, to your knowledge, has the science of neuroanatomy ever discovered any circuits responsible for any experience?
Quick clarifying question: How small does something need to be for you to consider it a "circuit"?
It's more a matter of discreteness than smallness: I would say I need to be able to identify the loop.
Second clarifying question, then: Can you describe what 'identifying the loop' would look like?
Well, I'm not sure. I'm not confident there are any neural circuits, strictly speaking. But I suppose I don't have anything much more specific than 'loop' in mind: it would have to be something like a path that returns to an origin.
In the sense of the experience not happening if that circuit doesn't work, yes. In the sense of being able to give a soup-to-nuts story of how events in the world result in a subjective experience that has that specific character, no.
I guess I mean: has the science of neuroanatomy discovered any circuits whatsoever?
I am having trouble knowing how to answer your question, because I'm not sure what you're asking. We have identified neural structures that are implicated in various specific things that brains do. Does that answer your question?
I'm not very up to date on neurobiology, and so when I saw your comment that we had not found the specific circuits for some experience, I was surprised by the implication that we had found that there are neural circuits at all. To my knowledge, all we've got is fMRI captures showing changes in blood flow, which we assume to be correlated in some way with synaptic activity. I wondered if you were using 'circuit' literally, or if you intended a reference to the oft-used brain-computer metaphor. I'm quite interested to know how appropriate that metaphor is.
Ah! Thanks for the clarification. No, I'm using "circuit" entirely metaphorically.
I think it does. It really is a virtuoso work of philosophy, and Dennett helpfully front-loaded it by putting his most astonishing argument in the first chapter. Anecdotally, I was always suspicious of arguments against qualia until I read what Dennett had to say on the subject. He brings in plenty of examples from philosophy, from psychological and scientific experiments, and even from literature to make things nice and concrete, and he really seems to understand the exact ways in which his position is counter-intuitive and makes sure to address the average person's intuitive objections in a fair and understanding way.
I've read some of Dennett's essays on the subject (though not the book in question), and I found that, for me, his ideas did help to make consciousness a good deal less mysterious. What actually did it for me was doing some of my own reasoning about how a 'noisy quorum' model of conscious experience might be structured, and realizing that, when you get right down to it, the fact that I feel as though I have subjective experience isn't actually that surprising. It'd be hard to design a human-style system that didn't have a similar internal behavior that it could talk about.

What I have been calling nefarious rhetoric recurs in a rudimentary form also in impromptu discussions. Someone harbors a prejudice or an article of faith or a vested interest, and marshals ever more desperate and threadbare arguments in defense of his position rather than be swayed by reason or face the facts. Even more often, perhaps, the deterrent is just stubborn pride: reluctance to acknowledge error. Unscientific man is beset by a deplorable desire to have been right. The scientist is distinguished by a desire to be right.

— W. V. Quine, Quiddities: An Intermittently Philosophical Dictionary (a whimsical and fun read)

Usually I find myself deploying nefarious rhetoric when I believe something on good evidence but have temporarily forgotten the evidence (this is very embarrassing and happens to me a lot).

The recognition of confusion is itself a form of clarity.

T.K.V. Desikachar

Not having all the information you need is never a satisfactory excuse for not starting the analysis.

-Akin's Laws of Spacecraft Design

My sense of the proper way to determine what is ethical is to make a distinction between a smuggler of influence and a detective of influence. The smuggler knows these six principles and then counterfeits them, brings them into situations where they don’t naturally reside.

The opposite is the sleuth’s approach, the detective’s approach to influence. The detective also knows what the principles are, and goes into every situation aware of them looking for the natural presence of one or another of these principles.

  • Robert Cialdini at the blog Bakadesuyo explaining the difference between ethical persuasion and the dark arts

Graffiti on the wall of an Austrian public housing block:

White walls — high rent.

(German original: "Weiße Wände — hohe Mieten". I'm not actually sure it's true, but my understanding is that rent in public housing does vary somewhat with quality and it seems plausible that graffiti could enter into it. And to make the implicit explicit, the reason it seems worth posting here is how it challenges the tenants' — and my — preconceptions: You may think that from a purely selfish POV you should not want graffiti on your house, but it's quite possible that the benefits to you are higher than the costs.)

This makes sense as helping with a price discrimination scheme which is probably made very complicated legally (if the landlord is a monopolist, then both you and them might prefer that they have a crappy product to offer at low cost, but often it is hard to offer a crappier product for legal reasons) or as a costly signal of poverty (if you are poor you are willing to make your house dirtier in exchange for money---of course most of the costs can also be signaling, since having white walls is a costly signal of wealth). My guess would be these kinds of models are too expressive to have predictive power, but this at least seems like a clean case. Signaling explanations often seem to have this vaguely counter-intuitive form, e.g. you might think that from a selfish point of view you would want your classes to be more easily graded. But alas...
Well, this is public housing, so the landlord is the government and thus is likely to both have monopoly power and not be subject to the same laws as a private landlord.
If I guess correctly the reasons why a government would pass a law against renting excessively crappy houses, I don't think it would exempt itself from it.
Er... Why? The only reasons for that I can think of are aesthetics (but you can't ‘should’ that), economic value (but that only applies to landlords, not tenants) and signalling (but people who know what building I live in already know me well, so I can afford countersignalling to them).
Broken Windows? (If you live in an aesthetically unpleasing area, then people are more likely to trash the place.)
I often see really ingenious graffiti in Vienna. My favorite was somewhere in the 9th district: someone wrote "peace to the huts, war to the palaces" and then someone corrected it to "peace to the huts, and to the palaces". I found it amusing because it sounded like a graffiti battle between anarchists and Catholics.
Wow. I wonder if the graffiti artist is part of the housing community, or someone with a special interest in political art targeting rent-seekers. The deleted account that has posted below makes a concise and informative contribution, if anyone's interested in checking it out. I wonder why it's deleted...
A better house in a better neighbourhood costs more. How is this news?
Said Achmiz:
I believe the implication is "I am doing you a favor by spraying graffiti on your apartment building, because that will cause your rent to decrease." I don't know if this is actually true, but that's what I take to be the intent.
So, assuming they don't get caught and the time penalty isn't significant, it's utility-maximizing for renters to deface their property, or properties in their area, so that the property is less attractive to others -- provided they don't value the original aesthetics of the property, relative to its defaced state, more than the price differential?
That is a lot to squeeze from four words. FWIW, they struck me as a snarl of rage against people who have more money than the perpetrators. As a tenant in such an apartment building I would reply that nice white walls and a nice neighbourhood is the entire point of paying that rent, and that anyone who wants to live in a slum should go and find one, preferably a long way from me.
Said Achmiz:
It's not just the words you're squeezing, it's the medium — the fact that the words are written in graffiti. I agree that a nice neighborhood is the point of paying rent, and your comment about people who want to live in slums, etc. I'm not sure that graffiti by itself constitutes neighborhood-not-niceness, but of course it's correlated with lots of other things, and there's the broken windows theory, etc.
To a poor person, having walls at all is more important than having white walls.
To a poor person, having a car at all is more important than having one with no dents in the panels. I don't see that as justifying vandalising the cars at a second-hand dealer by night so as to pick one up cheap the next day. But we're working from just four words of graffiti here, from an unknown author, and the site where Google led me from the original German text is dead.
This happens in the context of gentrification. In a city like Berlin, rents in the cool neighborhoods rise, and some people have to leave their neighborhood because of the rising rents. Putting graffiti on walls is a way to counteract this trend. At least it is from the point of view of people who want to justify themselves as moral when they illegally spray graffiti on other people's houses.
[anonymous]:

It’s hard to tell the difference between "Nobody ever complains about this car because it’s reliable" and “Nobody complains about this car because nobody buys this car."

-- Shamus Young

Thanks for the link.

Here's another good quote:

But if your solution to a problem is “don’t make mistakes”, then it’s not a solution. If you’re worried about falling off a cliff, the solution isn’t to walk along the edge very carefully, it’s to get away from the edge.

It depends on why you were walking there in the first place.
I think you were downvoted too hastily. Seriously, imagine that instead of driving very carefully last week, I phoned my destination to say "Sorry I'm going to miss your wedding, but the only route to the venue is next to a cliff!" Would "Great solution!" be an expected or an accurate response?
I don't think a single downvote is that significant. I've seen so many inexplicable downvotes that I would suspect there's a bot that every time a new comment is posted generates a random number between 0 and 1 and downvotes the comment if the random number is less than 0.05 -- if I could see any reason why anyone would do that.
Now that you've suggested it...
It actually depends on how many total votes the post in question has accumulated, and how much karma the user in question has accumulated. A completely new user doesn't need to have to worry about anomalous downvotes, because they're too new to have a reputation. Likewise, a well-established user doesn't need to worry about anomalous downvotes, because they get lost in the underflow. I'd say somewhere around the 200 mark it can become problematic to one's reputation; and, of course, one or two consecutive anomalous downvotes that act as the first votes on a given post can easily set a trend to bury that post before anyone has a chance to usefully comment on it.
(If much more than 5% of the comments in a thread/by a user are downvoted, then it's probably not my hypothetical bot's fault.)
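The hypothetical bot is easy to simulate. A minimal sketch (the function name, sample size, and seed are arbitrary choices, not anything from the thread):

```python
# Toy version of the hypothetical bot: downvote each new comment with
# probability 0.05, then check what fraction ends up downvoted.
# The function name, sample size, and seed are arbitrary.
import random

def run_bot(n_comments, p=0.05, seed=0):
    rng = random.Random(seed)
    downvoted = sum(1 for _ in range(n_comments) if rng.random() < p)
    return downvoted / n_comments

frac = run_bot(100_000)
print(round(frac, 3))  # hovers near 0.05 for large samples
```

With a large enough sample, the observed fraction lands close to 0.05, which is why a user or thread with a much higher downvote rate couldn't blame this bot.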
(OTOH, depending on what kind of connection there is between you and the bride and groom and what kind of people they are, a half-joking “how the hell did you choose to get married here of all places” might have been in order.) :-)


If you're not making mistakes, you're not taking risks, and that means you're not going anywhere. The key is to make mistakes faster than the competition, so you have more chances to learn and win.

John W. Holt (previously quoted here, but not in a Rationality Quotes thread)

The hidden thought embedded in most discussions of conspiracy theories is this: The world is being controlled by evil people; so, if we can get rid of them, the world can revert to control by good people, and things will be great again. This thought is false. The world is not controlled by any group of people – evil or good – and it will not be. The world is a large, chaotic mess. Those groups which do exert some control are merely larger pieces in the global mix.

-- Paul Rosenberg

I don't know if there are short words for this, but seems to me that some people generally assume that "things, left alone, naturally improve" and some people assume that "things, left alone, naturally deteriorate".

The first option seems like optimism, and the second option seems like pessimism. But there is a catch! In real life, many things have good aspects and bad aspects. Now the person who is "optimistic about the future of things left alone" must find a reason why things are worse than expected. (And vice versa, the person who is "pessimistic about the future of things left alone" must find a reason why things are better.) In both cases, a typical explanation is human intervention. Which means that this kind of optimism is prone to conspiracy theories. (And this kind of pessimism is prone to overestimate the benefits of human actions.)

For example, in education: For a "pessimist about spontaneous future" things are easy -- people are born stupid, and schools do a decent job at making them smarter; of course, the process is not perfect. For an "optimist about spontaneous future", children should be left alone to become... (read more)

I don't think that accurately describes the position of someone like Alex Jones. You can care about people and still push the fat man off the bridge, but then try to keep the fact that you pushed him secret, because you live in a country where the prevailing Christian values dictate that it's a sin to push the fat man off the bridge. There are a bunch of conspiracy theories where there is an actual conflict of values, and present elites are simply evil according to the moral standards of the person who started the conspiracy theory. Take education. If you look at EU educational reform after the Bologna Process, there are powerful political forces who want to optimize education so that universities teach skills that are valuable to employers. On the other hand, you do have people on the left who think that universities should teach critical thinking and create a society of individuals who follow the ideals of the Enlightenment. There's a real conflict of values.
In this specific conflict, I would prefer having two kinds of school -- universities and polytechnics -- each optimized for one of the purposes, and let the students decide. Seems to me that conflicts of values are worse when a unified decision has to be made for everyone. (Imagine that people started insisting that only one subject could ever be taught at schools, and then we would have a conflict of values over whether the subject should be English or Math. But that would be just a consequence of a bad decision at the meta level.) But yeah, I can imagine a situation with a conflict of values that cannot be solved by letting everyone pick their choice. And then the powerful people can push their choice, without being open about it.
You do have this in a case like teaching the theory of evolution. You have plenty of people who are quite passionate about making a unified decision to teach everyone the theory of evolution, including the children of parents who don't believe in it. Germany has compulsory schooling. Some fundamentalist Christians don't want their children in public schools. If you discuss the issue with people who have political power, you find that those people don't want those children taught some strange fundamentalist worldview that includes things like young-earth creationism. They want the children to learn the basic paradigm that people in German society follow. On the other hand, I'm not sure whether you can get a motivation like that from reading the newspaper. Everyone who's involved in the newspaper believes that it's worthwhile to teach children the theory of evolution, so it's not worth writing a newspaper article about it. Is it a secret persecution of fundamentalist Christians? The fundamentalist Christians from whom the government takes away the children for "child abuse", because the children don't go to school, feel persecuted. On the other hand, the politicians in question don't really feel like they are persecuting fundamentalist Christians. The ironic thing about it is that compulsory schooling was introduced in Germany for the stated purpose of turning children into "good Christians". In a case like evolution, do you sincerely believe that the intellectual elite should use their power to push a Texan public school to teach evolution even if the parents of the children and the local board of education don't want it?
Yeah, when people in power create tools to help them maintain that power, if those tools are universal enough, they will be reused by the people who get the power later. The trade-offs need to be discussed rationally. The answer would probably be "yes", but there are some negative side effects. For example, you create a precedent for other elites to push their agenda. (Just like those Christians did with compulsory education.) Maybe a third option could be found. (Something like: Don't say what schools have to teach, but make the exams independent of schools. Make knowledge of evolution necessary to pass a biology exam. Make it public when students or schools or cities are "failing in biology".)
Why have governments control exams at all? Have different certifying authorities and employers are free to decide which authorities' diploma they accept.
That could work! On the other hand, it may set up a situation where a person who is only guilty of being raised in the wrong place may never get a decent job. Wonder what can be done to prevent that as much as possible?
And this differs from the status quo, how?
I was under the impression you wanted to improve things significantly. Hence why I mentioned that issue--and it IS an issue.
Maybe elites that push their agenda have a much better chance of keeping their power than elites that don't? I'm not sure how much setting precedents limits future elites. Basically you try to make the system more complicated to still get what you want while making people feel less manipulated. Complicated and opaque systems lead to conspiracy theories.
Pessimists can also believe that education started out decent and has deteriorated to the point where it's worse than nothing. In addition to Armok's alternatives, there's also those who believe the tendency is a reversion to the mean (the mean being the mean because it's a natural equilibrium, perhaps).
And what about those that tend to assume things stay the same/revert to only changing on geological timescales, or those that assume it keeps moving in a linear way?

Conspiracy theorists of the world, believers in the hidden hands of the Rothschilds and the Masons and the Illuminati, we skeptics owe you an apology. You were right. The players may be a little different, but your basic premise is correct: The world is a rigged game. We found this out in recent months, when a series of related corruption stories spilled out of the financial sector, suggesting the world's largest banks may be fixing the prices of, well, just about everything.

-- Matt Taibbi, opening paragraph of "Everything Is Rigged: The Biggest Price-Fixing Scandal Ever"

I smell a rat. Googling Matt Talibi (actually Taibbi) does not suggest that he was ever one of these "skeptics". It's a rhetorical flourish, nothing more.
Matt isn't a mainstream journalist. On the other hand he writes about stuff that you can easily document instead of writing about Rothschilds, Masons and the Illuminati. He isn't the kind of person who cares about symbolic issues such as whether the members of the Bohemian grove do mock human sacrifices. In the post I link to he makes his case by arguing facts.
He may even be right, and the Paul Rosenberg article is lightweight and appears on what looks like a kook web site. But it seems to me that there's no real difference between their respective conclusions.

Rosenberg writes:

So, when they say, “No one saw this crisis coming,” they may be telling the truth, at least as far as they know it. Neither they nor anyone in their circles would entertain such thoughts. Likewise, they may not see the next crisis until it hits them.

Plenty of big banks did make money by betting on the crisis. There were a lot of cases where banks sold their clients products that the banks knew would go south.

Realising that there are important political things that don't happen in the open is a meaningful conclusion. Matt isn't in a position where he can make claims for which he doesn't have to provide evidence.

In 2011 Julian Assange told the press that the US government has an API it can use to query whatever data it likes from Facebook. On the Skeptics StackExchange website there's a question asking whether there's evidence for Assange's claim or whether he made it up. It doesn't mention the possibility that Assange was just referring to nonpublic information. The orthodox skeptic simply rejects claims without public proof.

Two years later we now have decent evidence that the US has that capability via PRISM. In 2011 Julian had the knowledge that it happ... (read more)

On this site, it's probably worth clarifying that "evidence" here refers to legally admissible evidence, lest we go down an unnecessary rabbit hole.

My dad used to run a business and whenever they needed a temp, he'd always line up 5-10 interviewees, to check out how they looked.

And then hire the ugliest.

Aside from keeping my mother off his back, he reasoned that if the temp had kept good employment, and it wasn't for her looks, she must be ok.

From the comments on the article on jobs for the good-looking.

This is a nice calculation with a fairly simple causal diagram. The basic point is that if you think people are repeatedly hired either for their looks or for being a good worker, then among the pool of people who are repeatedly hired, looks and good work are negatively correlated.

[anonymous]:

That's called Berkson's paradox.
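The selection effect is easy to demonstrate with a quick Monte Carlo sketch. It assumes looks and work quality are independent and uniformly distributed, and that a candidate gets hired whenever either trait clears a threshold; the thresholds and sample size are arbitrary illustrations:

```python
# Looks and work quality are drawn independently; hiring selects on
# EITHER trait being high. Thresholds and sample size are arbitrary.
import random

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)
population = [(rng.random(), rng.random()) for _ in range(50_000)]

# Hired if good-looking OR a good worker.
hired = [(looks, work) for looks, work in population if looks > 0.7 or work > 0.7]

r_all = pearson(*zip(*population))   # ~0: independent in the population
r_hired = pearson(*zip(*hired))      # clearly negative among the hired
print(round(r_all, 2), round(r_hired, 2))
```

Conditioning on "looks OR work is high" carves out an L-shaped region of the population, so within the hired pool, knowing someone is good-looking makes it less likely they were hired for their work, which is exactly the father's reasoning.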

“Those who will not reason, are bigots, those who cannot, are fools, and those who dare not, are slaves.” --Lord Byron.

All too often those who are least rational in their best moments are the greatest supporters of using one's head, if only to avoid too early a demise. I wonder how many years Lord Byron gained from rational thought, and which of the risks he took did he take because he was good at betting...

the designers of a theoretical technology in any but the most predictable of areas should identify its assumptions and claims that have not already been tested in a laboratory. They should design not only the technology but also a map of the uncertainties and edge cases in the design and a series of such experiments and tests that would progressively reduce these uncertainties. A proposal that lacks this admission of uncertainties coupled with designs of experiments that will reduce such uncertainties should not be deemed credible for the purposes of any important decision. We might call this requirement a requirement for a falsifiable design.

--Nick Szabo, Falsifiable design: A methodology for evaluating theoretical technologies

If something is purely theoretical you can't test it in the lab. You need to move beyond theory to start testing how a technology really works. There are cases where a technology might be dangerous and you want to stay with the theoretical analysis of the problem for some time. In other cases you don't want falsifiable design but instead want to put the technology into reality as soon as possible.

Students are often quite capable of applying economic analysis to emotionally neutral products such as apples or video games, but then fail to apply the same reasoning to emotionally charged goods to which similar analyses would seem to apply. I make a special effort to introduce concepts with the neutral examples, but then to challenge students to wonder why emotionally charged goods should be treated differently.

-- R. Hanson

I'm under the impression that all EY / RH quotes are discouraged, as described in this comment tree, which suggests the following rule should be explicitly amended to be broader:

Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.

I will destroy my enemies by converting them to friends!

  • Maimonides
Source? It's pithy, yet not on the usual quote compilations that I checked.

Sounds like Takamachi Nanoha to me.

Eliezer Yudkowsky:
That's more along the lines of, "I will convert my enemies to friends by STARLIGHT BREAKER TO THE FACE". Offhand I can't think of a single well-recorded real-life historical instance where this has ever worked.

Substitute "friends" with "trading partners" and the outlook improves though.

Eliezer Yudkowsky:
Fair, the British were totally befriending their way through history for a while.
"Befriending" by force? Well, post-WWII Japan worked out pretty well for the United States. As for dealing with would-be enemies by actually befriending them, Alexander Nevsky sucked up to the Mongols and ended up getting a much better deal for Russia than many of the other places the Mongols invaded.
That's what her reputation turned out like, and what TSAB propaganda likes to claim. It's not what she actually did. Let me count the befriendings:

  • Alisa Bannings. The sole "Nanoha-style befriending": Nanoha punched her to make her stop bothering Suzuka, after which they somehow became friends. No starlight breaker, though.
  • Alicia. Mostly Alicia was the one beating up Nanoha. It's true that Nanoha eventually defeated her in a climactic battle, after first sort-of-befriending her along more normal lines; however, Nanoha's victory in that battle isn't what finally turned Alicia. That's down to the actions of her insane, brain-damaged mother.
  • Vita. Neither motivation nor loyalty ever wavered.
  • Reinforce. Decided to work with Nanoha after Hayate asked her to. Nanoha's starlight breaker was helpful for temporarily weakening the defence program, but was not instrumental in the actual motivation change.
  • Vivio. I really need to go there?

Her reputation for converting enemies is not undeserved, but she's not converting them by defeating them; she's converting and defeating them. Amusingly, the movies (which are officially TSAB propaganda) show marginal causation where there's only correlation.

Oh, and explicitly because people have asked me not to, you're hereby invited to the rizon/#nanoha irc channel. I'm relatively confident you won't show up, which is good - it has a tendency to distract authors when I do this. :P
Did you confuse Alicia with Fate?
No. I'm just opinionated on the subject.
Eliezer Yudkowsky:
MAHOU SHOUJO TRANSHUMANIST NANOHA

"Girl," whispered Precia. The little golden-haired girl's eyes were fluttering open, amid the crystal cables connecting the girl's head to the corpse within its stasis field. "Girl, do you remember me?"

It took the girl some time to speak, and when she did, her voice was weak. "Momma...?"

The memories were there. The brain pattern was there. Her daughter was there.

"Momma...?" repeated Alicia, her voice a little stronger. "Why are you crying, Momma? Did something happen? Where are we?"

Precia collapsed across her daughter, weeping, as some part of her began to believe that the long, long task was finally over.
So, in case anyone is still confused about the point of the Quantum Physics Sequence, it was to help future mad scientists love their reconstructed daughters properly :)

An Idiot Plot is any plot that goes away if the characters stop being idiots. A Muggle Plot is any plot which dissolves in the presence of transhumanism and polyamory. That exact form is surprisingly common; e.g. from what I've heard, canon!Twilight has two major sources of conflict, Edward's belief that turning Bella into a vampire will remove her soul, and Bella waffling between Edward and Jacob. I didn't realize it until Baughn pointed it out, but S1 Nanoha - not that I've watched it, but I've read fanfictions - counts as a Muggle Plot because the entire story goes away if Precia accepts the pattern theory of identity.

I would find it unhelpful to describe as a "Muggle Plot" any plot that depends on believing one side of an issue where there is serious, legitimate disagreement. (Of course, you may argue that there is no serious, legitimate disagreement on theories of identity, if you wish.) I also find it odd that polyamory counts but not, for instance, plots that fail when you assume other rare preferences of people. Why isn't a plot that assumes that the main characters are heterosexual considered a Muggle Plot just as much as one which assumes they are monogamous? What about a plot that fails if incest is permitted? (Star Wars could certainly have gone very differently.) If a plot assumes that the protagonist likes strawberry ice cream, and it turned out that the same percentage of the population hates strawberry ice cream as is polyamorous, would that now be a Muggle Plot too?

I also find it odd that polyamory counts but not, for instance, plots that fail when you assume other rare preferences of people. Why isn't a plot that assumes that the main characters are heterosexual considered a Muggle Plot just as much as one which assumes they are monogamous?

I think the idea is not so much "rare preference" as "constrained preference," where that constraint is not relevant / interesting to the reader. Looking at gay fiction, there's lots of works in settings where homosexuality is forbidden, and lots of works in settings where homosexuality is accepted. A plot that disappears if you tried to move it to a setting where homosexuality is accepted seems too local; I've actually mostly grown tired of reading those because I want them to move on and get to something interesting. I imagine that's how it feels for a polyamorist to read Bella's indecision.

To use the ice cream example, imagine trying to read twenty pages on someone in an ice cream shop, agonizing over whether to get chocolate or strawberry. "Just get two scoops already!"

Eliezer Yudkowsky · 10y · 7 points
Excellent reply. I'm pretty sure I'd feel the same way if I was reading a story where A wants to be with only B, B wants to be with only A, neither of them want to be with C, but it's just never occurred to them that monogamy is an option.
Paul Crowley · 10y · 3 points
Better to say "B wishes A would not sleep with others, A wishes B would not sleep with others, but...". Monogamy is the state of disallowing other partners, not just not having them.
I'll accept this definition, but would like a word to describe my marriage in that case. I'm quite confident that if we ever wanted to open the relationship up to romantic/sexual relationships with third parties, we would have that conversation and negotiate the terms of it, so I'm reluctant to describe us as disallowing other partners. But I currently describe us as monogamous, because, well, we are. Describing us as polyamorous when neither of us is interested in romantic/sexual relationships with third parties seems as ridiculous as describing a gay man as bisexual because he's not forbidden to have sex with women. So how ought I refer to relationships like ours, on your view?
Paul Crowley · 10y · 2 points
I'd describe that as monogamous. You're saying that you think you'd be able to negotiate a new rule if circumstances arose, but the current rule is monogamy.
Mm. OK, with that connotation of "disallowing", I would agree. It's not the connotation I would expect to ordinarily come to mind in conversation, and in particular your statements about "B wishes A would not sleep with others" emphasized a different understanding of "disallowing" in my mind.
Have you (implicitly or explicitly) promised each other not to have sex with anyone else for the time being (even though the promise is renegotiable)? For example, would it be OK with you if your husband went to (say) a conference abroad and had a one-night stand with someone there without telling you until afterwards? That'd sound like a stronger condition than "B wishes A would not sleep with others" -- I wish my grandma didn't smoke, but given that she's never promised me not to smoke...
If he had sex with someone without telling me until afterwards, I would be very surprised, and it would suggest that our relationship doesn't work the way I thought it did. I wouldn't be OK with that change/revelation, and would need to adjust until I was OK with it. If he bought a minivan without telling me, all of the above would be true as well. But it simply isn't true that I wish he wouldn't buy a minivan, nor is it true that I wish he wouldn't sleep with others. And if he came to me today and said "I want to sleep with so-and-so," that would be a completely different situation. (Whether I would be OK with it would depend a lot on so-and-so.) It's possible that, somewhere in the last 20 years, he promised me he wouldn't sleep with anyone else. Or, for that matter, buy a minivan. If so, I've forgotten (if it was an implicit promise, I might not even have noticed), and it doesn't matter to me very much either way.
If so, I wouldn't consider it much of a stretch to call it monogamous.
Nor would I, as I said initially []. What I considered a stretch was accepting ciphergoth's definition of monogamy [], given that my marriage is monogamous, because "We disallow other partners" didn't seem to accurately describe my monogamous marriage. (Similarly, "We disallow the purchase of minivans" seems equally inaccurate.) Then came ciphergoth's clarification [] that he simply meant by "disallow" that right this moment it isn't allowed, even though if we expressed interest in changing the rule the rule would change and at that time it would be allowed. That seems like a weird usage of "disallow" to me (consider a dialog like "You aren't allowed to do X." "Oh, OK. Can I do X?" "Yeah, sure.", which is permitted under that usage, for example), but I agreed that under that usage [] it's true that we're not allowed other partners. I hope that clears things up.
Paul Crowley · 10y · 0 points
Right, but those are the obvious circumstances where a couple who were not monogamous might become so.
(The more plausible reason being that C is just coercing them both.)
Explaining it as a complaint about a constrained preference does negate the heterosexual example, but I could easily tweak the example a bit: I could still ask why "Muggle Plots" doesn't include plots that assume a character isn't bisexual. And my incest example applies without even any tweaks--I'm not pointing out that Star Wars would be different if characters accepted incestuous relationships and no other kind, I'm pointing out that Star Wars would be different if characters accepted incestuous relationships in addition to the ones they do now--that is, if their preference was less constrained. So why is it that a plot that depends on the unacceptability of incest doesn't count as a Muggle Plot?
Having read the rest of the conversation... I'd say that yes, I have a mild "dammit, haven't condoms been invented in this universe long enough ago for these issues to have gone away?!" reaction to Star Wars, but only after reconsidering it in the light of Homestuck. Which, by the way, provides an excellent example in the alien Trolls, who consider both heterosexuality and incest-taboos in the kids to be trite annoyances. I'm going out on a limb here and saying that Muggle Plot is not a property of a plot, or even of a plot-reader pair, but rather an emotion that can be felt in response to a plot, and which is scalar, with a rough heuristic being that it's stronger the more salient the option that'd make the plot go away is in whatever communities you participate in.
Why? Remember: adaptation executors, not fitness maximizers []. And if condoms have been around for long enough for people to adapt to them, the first adaptation would be to no longer find condomed sex pleasurable or fulfilling.
I suspect the constraint against incest seems relevant to Eliezer. (The concept as I outlined it is subjective, and I suspect the association with "transhumanism + polyamory" is difficult to pin down without a reference to Eliezer or clusters he's strongly associated with.)
Because poly evangelism? It certainly seems like something people decide is a good idea rather than some sort of innate preference difference. But if that were true, I would have to admit that monogamy is probably a bad idea, and that would be sad :(

(shrug) My husband and I live in a largely poly-normative social environment, and are monogamous. We don't object, we simply aren't interested. It still makes "oh noes! which lover do I choose! I want them both!" plots seem stupid, though. ("if you want them both, date them both... what's the difficulty here?")

So, no, acknowledging that polyamory is something some people decide is a good idea doesn't force me to "admit" that monogamy is a bad idea.

Admittedly, I'm also not sure why it would be sad if it did.

Because social norms, of course. Actually, I was pretty tired when I wrote that, but that's what I think I meant. (I'll note that most monogamous people whose opinions I hear on this think polyamory is almost always a bad idea, although possibly OK for a rare minority. But if relationships are usually a good idea, and polyamory isn't usually actively bad, then polyamory=more relationships=good, goes the 1:00 AM logic.)
Re pattern identity theory: Scott Aaronson in The Ghost in the Quantum Turing Machine [].
The first paragraph of the quote is about pattern identity theory. Unfortunately the second paragraph is actually something of a muddling of pattern identity with the separate issue of basing moral/ethical/legal considerations only on the externalities experienced by the survivors. Specifically, making it about 'depriving the rest of society' distracts from the (hopefully) primary point that it is the pattern that matters more so than spooky stuff about an instance.
Nice one. Though one could perhaps recover most of the Nanoha storyline by giving Precia Capgras delusion [], unless by "transhumanism" you include the assumption that organic disorders would be trivially fixed (albeit I don't think Precia had anyone around to diagnose her?) I'm not sure if that would make it more or less tragic.
Right, that's my standard head-canon on the subject. Precia was very badly hurt by the accident, and had to leave society because - for some reason - resurrecting Alicia the way she did was severely illegal. As a result, there was no-one around to double-check her conclusions, or spot the brain damage.
My personal head-canon says that Precia, who ought to know better, was afflicted with a particular type of brain damage that prevented her from recognizing her own daughter. She was, effectively, insane. Given that the cause of both Alicia's first death and Precia's insanity were an inadvisable engineering experiment that she is explicitly stated to have been against, this makes Precia a tragic figure in her own right.
Does worrying about that sort of thing suggest that Edward actually has a soul?
BUFFY THE VAMPIRE SLAYER SPOILERS (up to season 4) Rira gubhtu Fcvxr jnf fbhyyrff ng gur gvzr ur sryy va ybir jvgu Ohssl, V qba'g guvax ur jbhyq jnag gb erzbir ure fbhy, fvapr gung jbhyq shaqnzragnyyl punatr ure. Bs pbhefr Va gur Ohssl-irefr, univat be abg univat n fbhy unf dhvgr pyrne rssrpgf (ynpxvat n fbhy zrnag lbh prnfr gb unir nal zbenyf, gubhtu lbh pna fgvyy srry ybir gbjneqf crbcyr lbh xabj), naq jr frr n pyrne qvssrerapr orgjrra crefba-jvgu-fbhy if gur fnzr crefba-jvgubhg-fbhy. V qbhog gung'f gur pnfr va gur Gjvyvtug-irefr...
[anonymous] · 10y · 0 points
To the only upvoter: I suspect nobody else got that.
After a bit of googling, I don't think it's a quote by Maimonides. The closest I could find is this passage of the Babylonian Talmud []:
That's because it's usually attributed to Abe [] Lincoln [], with an exception [].

That's kind of amusing, considering that Lincoln is also famous for destroying his enemies the other way.

He tried the nice way first...

This would seem to further weaken the quote in as much as it is evidence that the tactic doesn't work.

Just because your enemies will not always be your friends does not mean it is useless to TRY to convert them to be your friends. It is, like most things, a bet. One must know, beforehand, if it is WORTH it to try. I would say it's a useful quote because it provides an alternative to the usual "smash them as soon as they oppose you" deal going on.
Nevertheless, the statement to which I replied remains evidence against rather than evidence for. You are of course welcome to support the sentiment despite the anecdote in question---such things aren't typically considered to be strong evidence either way.
It may also be better than the even more common "deal with them as you can, but don't expect they'll ever be on your side".
I don't know in what context Lincoln said this (if he really said it), but the tactic worked very well for him at the convention in the summer of 1860. (In those days, the conventions would start without people knowing who would be nominated. But often you had an idea, and Lincoln was a long shot.) All of the other candidates then joined Lincoln's cabinet (his ‘Team of Rivals’).
Did not work in one notable case, to which the quote may or may not have originally been applied. Of course it doesn't apply all the time.
Found on the Forbes site a week or so ago. Then I've googled it further and found some more occurrences. Interestingly, the quote is usually attributed to Abraham Lincoln. But he was certainly not the first with this nifty idea.
Yaakov T · 10y · 2 points
does anyone know the original source in Maimonides writings?
I'm not sure where this is from, and the idea is good, but it doesn't sound like Maimonides. He was extremely willing to declare that those who disagreed with him were drunks, whoremongers and idolators. Rambam would rarely have talked about his own personal goals anyway. It really isn't his style. I'm skeptical that this is a genuine quote from him.

women rarely regret having a child, even one they thought they didn’t want. But as Katie Watson, a bioethicist at Northwestern University’s Feinberg School of Medicine, points out, we tell ourselves certain stories for a reason. “It’s psychologically in our interest to tell a positive story and move forward,” she says. “It’s wonderfully functional for women who have children to be glad they have them and for women who did not have children to enjoy the opportunities that afforded them.”

--Joshua Lang, New York Times, June 12, 2013, What Happens to Women Who Are Denied Abortions?

I was also under the impression that the process of giving birth to a child triggers hormonal changes of some kind (involving oxytocin?) in the mother that help induce maternal bonding.

“Reality provides us with facts so romantic that imagination itself could add nothing to them.” --Jules Verne.

The fellow had a brilliant grasp of how to make scientific discovery interesting, and I think people could learn a thing or two from reading his stuff, still.

The paucity of skepticism in the world of health science is staggering. Those who aren't insufferable skeptical douchebags are doing it wrong.

-Stabby the Raccoon

He [the Inner Game player] reasons that since by definition the commonplace is what is experienced most often, the talent to be able to appreciate it is extremely valuable.

--W. Timothy Gallwey, Inner Tennis: Playing the Game

When you have to shoot, shoot. Don't talk.

Tuco, The Good, The Bad, and the Ugly

[This comment is no longer endorsed by its author]
A great line, but it's a dupe [].
Ah! Humblest apologies, retracted.

I just watched Oz the Great and Powerful, the big-budget fanfic prequel film to The Wizard of Oz. Hardly a rationalist movie, but there was some nice boosting of science and technology where I didn't expect it. So here's the quotation:

I’m not that kind of wizard. Where I come from there aren’t any real wizards, except one, Thomas Edison. He could look into the future and make it real. […] That's the kind of wizard I want to be.

(There's more, but this is all that I could get out of the Internet and my memory.)

I haven't seen the movie, but that sounds awfully familiar. It doesn't sound consistent with the Oz books or any of the big-name fanfic out there (Wicked, etc.), but I wonder if it might have shown up in some similar context.

Another potential detour on the road to truth is the nature of statistical variation and people’s tendency to misjudge through overgeneralization. Often in the fitness world, someone who appears to have above-average physical characteristics or capabilities is assumed to be a legitimate authority. The problem with granting authority to appearance is that a large part of an individual’s expression of such above-average physical characteristics and capabilities could simply be the result of wild variations across the statistical landscape. For instance, if you look out over a canopy of trees, you will probably notice a lone tree or two rising up above the rest – and it’s completely within human nature to notice things that stand out in such a way. In much the same manner, we take notice of individuals who possess superior physical capabilities, and when we do, there is a strong tendency to identify these people as sources of authority.

To make matters worse, many people who happen to possess such abnormal physical capabilities frequently misidentify themselves as sources of authority, taking credit for something that nature has, in essence, randomly dropped in their laps. In other words, people are intellectually prepared to overlook the role of statistical variation in attributing authority.

-- Doug McDuff, M.D., and John Little, Body by Science, pp. x-xi
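
The tree-canopy point is just the statistics of extremes: in any large enough sample, somebody ends up far above the mean by chance alone. A minimal simulation sketch (not from the book; the normal trait distribution and the specific numbers are invented for illustration):

```python
import random
import statistics

# Toy model: "physical capability" as an i.i.d. normally distributed trait.
# In a large population, the best individual stands far above the mean purely
# by sampling variation -- no special knowledge or method required.
random.seed(0)
population = [random.gauss(100, 15) for _ in range(10_000)]

mean = statistics.mean(population)
stdev = statistics.stdev(population)
best = max(population)
z_best = (best - mean) / stdev

print(f"population mean: {mean:.1f}")
print(f"best individual: {best:.1f}")
print(f"z-score of best: {z_best:.2f}")
```

The standout here did nothing differently from anyone else; the model generates "lone trees above the canopy" from random variation alone, which is exactly why appearance is weak evidence of authority.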

Hindsight is blindsight. The very act of looking back on events once you know their outcome, or even try to imagine their outcome, makes it, by definition, impossible to view such events objectively.

— Mark Salter & Trevor H. Turner, Community Mental Health Care: A practical guide to outdoor psychiatry

Though you can still find subjects who don't know the outcome, ask them for their predictions, and compare those predictions with the estimates of subjects who are told the outcome, to measure the size of the hindsight bias.

Linguistic traditions force us to think of body and mind as separate and distinct entities. Everyday notions like free will and moral responsibility contain underlying contradictions. Language also uses definitions and forms of the verb to be in ways that force us to think of classes of things as clearly defined (Is a fetus a human being or not?), when in fact every classification scheme has fuzzy boundaries and continuous gradations.

--Thomas M Georges, Digital Soul, 2004, p. 14

I don't have a pithy parallel quote from Korzybski [] to put alongside this (pithiness was not his style), but the ideas here are exactly in accordance with Korzybski on "elementalism" (treating as separate and distinct entities things that are not, including body vs. mind), over/under defined terms (verbal definitions lacking extensionality), reification of categories, and the rejection of the is of identity.
I don't know that I'd recommend thinking of body and mind as identical (as in identity theory in phil mind). The proper relation is probably better thought of as instantiation of a mind by a brain, in a similar way to how transistors instantiate addition and subtraction. It matters because if you think mind=brain then you may come to some silly philosophical conclusions, like that a mind that does exactly what yours does (in terms of inputs and outputs to the rest of the body) but, say, runs on silicon, is "not the same mind" or "not a real mind."

With machine intelligence and other technologies such as advanced nanotechnology, space colonization should become economical. Such technology would enable us to construct “von Neumann probes” – machines with the capability of traveling to a planet, building a manufacturing base there, and launching multiple new probes to colonize other stars and planets. A space colonization race could ensue. Over time, the resources of the entire accessible universe might be turned into some kind of infrastructure, perhaps an optimal computing substrate (“computronium”)

If you haven't seen it I can recommend Stuart Armstrong's talk at Oxford on the Fermi paradox and Von Neumann probes []. Before I saw this I was thinking in a fuzzy way about "colonization waves" of probes going from star to star ...

If you turn on your television and tune it between stations, about 10 percent of that black-and-white speckled static you see is caused by photons left over from the birth of the universe. What grater proof of the reality of the Big Bang–you can watch it on TV.

Jim Holt

Would the static look any different if it was 0% though?

Yes: the CMB contribution is a 2.725 K blackbody spectrum whose intensity rises toward a peak near 160 GHz, so without it the static wouldn't have that component. Since television only goes up to about 1 GHz, well below the peak, this means more noise at higher channels after accounting for other sources.
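
As a numerical sanity check (standard physics, not taken from the thread): the CMB is blackbody radiation at 2.725 K, and scanning Planck's law over frequency locates its spectral peak, which lands well above the TV band:

```python
import math

# Physical constants (CODATA values) and the measured CMB temperature.
h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s
T = 2.725            # CMB blackbody temperature, K

def planck(nu):
    """Planck spectral radiance B_nu(T) in W / (m^2 * Hz * sr)."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

# Scan 1 GHz to ~1000 GHz in 0.1 GHz steps and find the peak frequency.
freqs = [1e9 + 1e8 * i for i in range(10_000)]
peak = max(freqs, key=planck)

print(f"CMB spectrum peaks near {peak / 1e9:.0f} GHz")
```

In the Rayleigh-Jeans region below the peak, radiance grows roughly as the square of frequency, which is why the CMB contributes more noise at higher TV channels.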

There would be less?
Can you actually do this experiment on a modern TV? I know how to change the channels on mine, but I have no idea how you would "tune" it.
1. Selecting a channel is tuning; each channel has a specific frequency and the TV knows what frequencies the channel numbers stand for. But what you can't do is tune to a frequency that isn't assigned to any channel, so you would have to select a channel on which no station in your area is broadcasting.
2. You would have to be using an analog TV tuner (which is now obsolete, if you're in the US); digital TV has a much less direct relationship between received radio photons and displayed light photons. On the upside, it's really easy to find a channel where no station is broadcasting, now :) (though actually, I don't know what the new allocation of the former analog TV bands is and whether there would be anything broadcasting on them).

(I've recently gotten an interest in radio technology; feel free to ask more questions even if you're just curious.)
This grater. []
S/he is making a pun of the typo: "what grater proof..." instead of "what greater proof...". (I don't find it a very funny pun myself.)

A little less conversation, a little more action!

Elvis Presley

One needs the right balance between conversation and action, and overall, it's probably too much of the latter and too little of the former in this world.
Or more precisely: "Most actors don't think enough, and most thinkers don't act enough." cf. Dunning-Kruger effect []. Extroverts and introverts typically line up with those two categories quite neatly, and in my observation tend to associate mainly with people of similar temperament (allowing them to avoid much of the pressure to be more balanced that they'd find in a less homogeneous social circle). I believe that this lack of balanced interaction is the real source of the problem. We need balanced pressure to both act and think competently, but the inherent discomfort makes most people unwilling to voluntarily seek it out (if they even become aware that doing so is beneficial).
I'm not sure I agree in the general case, and I think that among LessWrongers things are certainly unbalanced in the other direction.
[anonymous] · 10y · 8 points

Fundamental ideas play the most essential role in forming a physical theory. Books on physics are full of complicated mathematical formulae. But thought and ideas, not formulae, are the beginning of every physical theory. The ideas must later take the mathematical form of a quantitative theory, to make possible the comparison with experiment.

-- Albert Einstein

— Amartya Sen, On Economic Inequality, p. vii

From David Shields' Reality Hunger:

Once, after running deep into foul territory to make an extraordinary catch to preserve a victory, he was asked, “When did you know you were going to catch the ball?” Ichiro replied, “When I caught it."

The first duty of life is to assume a pose. What the second duty is, no one has yet discovered.

--Oscar Wilde on signalling.

- Thank you, thank you Lord, for preserving my virginity!

- You bloody idiot! Do you think God, to keep you a virgin, will drown the whole city of Florence?

(Architect Melandri to Noemi, the girl he is in love with, who thinks the flood of 1966 was sent as an answer to her prayers)

My Friends, Act II (Amici miei, Atto II) [roughly translated by me]

This is yet another reason why a God that answers prayers is far, far crueler than an indifferent Azathoth. Imagine the weight of guilt that must settle on a person if they prayed for the wrong thing and God answered!

On another note, that girl must not be very picky, if God has to destroy a whole city to keep her a virgin...(please don't blast me for this!)

[T]here can be no way of justifying the substantive assumption that all forms of altruism, solidarity and sacrifice really are ultra-subtle forms of self-interest, except by the trivializing gambit of arguing that people have concern for others because they want to avoid being distressed by their distress. And even this gambit […] is open to the objection that rational distress-minimizers could often use more efficient means than helping others.

Jon Elster

Even if altruism turns out to be a really subtle form of self-interest, what does it matter? An unwoven rainbow still has all its colors.

Rational distress-minimizers would behave differently from rational altruists. (Real people are somewhere in the middle and seem to tend toward greater altruism and less distress-minimization when taught 'rationality' by altruists.)

That could be because rationality decreases the effectiveness of distress minimisation techniques other than altruism.
...because it makes you try to see reality as it is? In me, it's also had the effect of reducing empathy. (Helps me not go crazy.)
Well, for me, believing myself to be a type of person I don't like causes me great cognitive dissonance. The more I know about how I might be fooling myself, the more I have to actually adjust to achieve that belief. For instance, it used to be enough for me that I treat my in-group well. But once I understood that that was what I was doing, I wasn't satisfied with it. I now follow a utilitarian ethics that's much more materially expensive.
Are they being taught 'rationality' by altruists or 'altruism' by rationalists? Or 'rational altruism' by rational altruists?
Shouldn't the methods of rationality be orthogonal to the goal you are trying to achieve?
Perhaps this training simply focuses attention on the distress to be alleviated by altruism. Learning that your efforts at altruism aren't very effective might be pretty distressing.
Eliezer Yudkowsky · 10y · 0 points
That seems to verge on the trivializing gambit, though.
I guess I don't see the problem with the trivializing gambit. If it explains altruism without needing to invent a new kind of motivation, why not use it?
Why would actual altruism be a "new kind" of motivation? What makes it a "newer kind" than self interest?
I meant that everyone I've discussed the subject with believes that self-interest exists as a motivating force, so maybe "additional" would have been a better descriptor than "new."
Hrm... But "self-interest" is itself a fairly broad category, including many subcategories like emotional state, survival, fulfillment of curiosity, self-determination, etc... Seems like it wouldn't be that hard a step, given the evolutionary pressures there have been toward cooperation and such, for it to be implemented via actually caring about the other person's well-being, instead of it secretly being just a concern for your own. It'd perhaps be simpler to implement that way. It might be partly implemented by the same emotional reinforcement system, but that's not the same thing as saying that the only thing you care about is your own reinforcement system.
Well, the trivializing gambit here would be to just say that "caring about another person" just means that your empathy circuitry causes you to feel pain when you observe someone in an unfortunate situation, and so your desire to help is triggered ultimately by the desire to remove this source of distress. I'm not sure how concern for another's well-being would actually be implemented in a system that only has a mechanism for caring solely about its own well-being (i.e. how the mechanism would evolve). The push for cooperation probably came about more because we developed the ability to model the internal states of critters like ourselves so that we could mount a better offense or defense. The simplest mechanism just being to use a facial expression or posture to cause us to feel a toned-down version of what we would normally feel when we had the same expression or posture (you're looking for information, not to literally feel the same thing at the same intensity - when the biggest member of your pack is aggressing at you, you probably want the desire to run away or submit to override the empathetic aggression). It's worth noting (for me) that this doesn't diminish the importance of empathy, and it doesn't mean that I don't really care about others. I think that caring for others is ultimately rooted in self-centeredness but, like depth perception, is probably a pre-installed circuit in our brains (a type I system) that we can't really remove totally without radically modifying the hardware. Caring about another person is as much a part of me as being able to recognize their face. The specific mechanism is only important when you're trying to do something specific with your caring circuits (or trying to figure out how to emulate them).
It may not matter pragmatically but it still matters scientifically. Just as you want to have a correct explanation of rainbows, regardless of whether this explanation has any effects on our aesthetic appreciation of them, so too you want to have a factually accurate account of apparently altruistic behavior, independently of whether this matters from a moral perspective.
Science is about predicting things, not about explaining them. If a theory has no additional predictive value, then it's not scientifically valuable. In this case I don't see the added predictive value.
There's the alternative "gambit" of describing it in terms of signaling. There's the alternative "gambit" of describing it in terms of wanting to live in the best possible universe. There's the alternative "gambit" of ascribing altruism to the emotional response it invokes in the altruistic individual. I find the quote false on its face, in addition to being an appeal to distaste.
Careful, there are some tricky conceptual waters here. Strictly, anything I want to do can be ascribed to my emotional response to it, because that's how nature made us pursue goals. "They did it because of the emotional response it invoked" is roughly analogous to "They did it because their brain made them do it." The cynical claim would be that if people could get the emotional high without the altruistic act (say, by taking a pill that made them think they did it), they'd just do that. I don't think most altruists would, though. There are cynical explanations for that fact, too ("signalling to yourself leads to better signalling to others") but they begin to lose their air of streetwise wisdom and sound like epicycles.
Are you suggesting emotions are necessary to goal-oriented behavior? There should be some evidence for that claim; we have people with diminished emotional capacity in wide range of forms. Do individuals with alexithymia demonstrate impaired goal-oriented behaviors? I think there's more to emotion as a motive system than the brain as a motive force. People can certainly choose to stop taking certain drugs which induce emotional highs. 10% of people who start taking heroin are able to keep their consumption levels "moderate" or lower, as compared to 90% for something like tobacco, according to one random and hardly authoritative internet site - the precise numbers aren't terribly important. Perhaps such altruists, like most people, deliberately avoid drugs like heroin for this reason?

Idealism increases in direct proportion to one's distance from the problem.

-- John Galsworthy

"Why do people worry about mad scientists? It's the mad engineers you have to watch out for." - Lochmon

Considering the "mad scientists" keep building stuff, perhaps the question is "Why do people keep calling mad engineers mad scientists?"

I want to use one of those phrases in conversation. Either grfgvat n znq ulcbgurfvf be znxvat znq bofreingvbaf (spoilers de-rot13ed []). Also, I found the creator's page for the comic [].
— Miles Vorkosigan, Komarr by Lois McMaster Bujold

As far as I'm concerned, insight, intuition, and recognition are all synonymous.

Herbert Simon