Related to: The Apologist and the Revolutionary, Dreams with Damaged Priors

Several years ago, I posted about V.S. Ramachandran's 1996 theory explaining anosognosia through an "apologist" and a "revolutionary".

Anosognosia, a condition in which extremely sick patients mysteriously deny their sickness, occurs during right-sided brain injury but not left-sided brain injury. It can be extraordinarily strange: for example, in one case, a woman whose left arm was paralyzed insisted she could move her left arm just fine, and when her doctor pointed out her immobile arm, she claimed that was her daughter's arm even though it was obviously attached to her own shoulder. Anosognosia can be temporarily alleviated by squirting cold water into the patient's left ear canal, after which the patient suddenly realizes her condition but later loses awareness again and reverts back to the bizarre excuses and confabulations.

Ramachandran suggested that the left brain is an "apologist", trying to justify existing theories, and the right brain is a "revolutionary" which changes existing theories when conditions warrant. If the right brain is damaged, patients are unable to change their beliefs; so when a patient's arm works fine until a right-brain stroke, the patient cannot discard the hypothesis that their arm is functional, and can only use the left brain to try to fit the facts to their belief.

In the almost twenty years since Ramachandran's theory was published, new research has kept some of the general outline while changing many of the specifics in the hopes of explaining a wider range of delusions in neurological and psychiatric patients. The newer model acknowledges the left-brain/right-brain divide, but adds some new twists based on the Mind Projection Fallacy and the brain as a Bayesian reasoner.


Strange as anosognosia is, it's only one of several types of delusions, which are broadly categorized into polythematic and monothematic. Patients with polythematic delusions have multiple unconnected odd ideas: for example, the famous schizophrenic game theorist John Nash believed that he was defending the Earth from alien attack, that he was the Emperor of Antarctica, and that he was the left foot of God. A patient with a monothematic delusion, on the other hand, usually only has one odd idea. Monothematic delusions vary less than polythematic ones: there are a few that are relatively common across multiple patients. For example:

In the Capgras delusion, the patient, usually a victim of brain injury but sometimes a schizophrenic, believes that one or more people close to her has been replaced by an identical imposter. For example, one male patient expressed the worry that his wife was actually someone else, who had somehow contrived to exactly copy his wife's appearance and mannerisms. This delusion sounds harmlessly hilarious, but it can get very ugly: in at least one case, a patient got so upset with the deceit that he murdered the hypothesized imposter - actually his wife.

The Fregoli delusion is the opposite: here the patient thinks that random strangers she meets are actually her friends and family members in disguise. Sometimes everyone may be the same person, who must be as masterful at quickly changing costumes as the famous Italian actor Fregoli (inspiring the condition's name).

In the Cotard delusion, the patient believes she is dead. Cotard patients will neglect personal hygiene, social relationships, and planning for the future - as the dead have no need to worry about such things. Occasionally they will be able to describe in detail the "decomposition" they believe they are undergoing.

Patients with all these types of delusions[1] - as well as anosognosiacs - share a common feature: they usually have damage to the right frontal lobe of the brain (including in schizophrenia, where the brain damage is of unknown origin and usually generalized, but where it is still possible to analyze which areas are the most abnormal). It would be nice if a theory of anosognosia also offered us a place to start explaining these other conditions, but this is something Ramachandran's idea fails to do. He posits a problem with belief shift: going from the originally correct but now obsolete "my arm is healthy" to the updated "my arm is paralyzed". But these other delusions cannot be explained by simple failure to update: delusions like "the person who appears to be my wife is an identical imposter" never made sense. We will have to look harder.


Coltheart, Langdon, and McKay posit what they call the "two-factor theory" of delusion. In the two-factor theory, one problem causes an abnormal perception, and a second problem causes the brain to come up with a bizarre instead of a reasonable explanation.

Abnormal perception has been best studied in the Capgras delusion. A series of experiments, including some by Ramachandran himself, demonstrate that Capgras patients lack a skin conductance response (usually used as a proxy of emotional reaction) to familiar faces. This meshes nicely with the brain damage pattern in Capgras, which seems to involve the connection between the face recognition areas in the temporal lobe and the emotional areas in the limbic system. So although the patient can recognize faces, and can feel emotions, the patient cannot feel emotions related to recognizing faces.

The older "one-factor" theories of delusion stopped here. The patient, they said, knows that his wife looks like his wife, but he doesn't feel any emotional reaction to her. If it was really his wife, he would feel something - love, irritation, whatever - but he feels only the same blankness that would accompany seeing a stranger. Therefore (the one-factor theory says) his brain gropes for an explanation and decides that she really is a stranger. Why does this stranger look like his wife? Well, she must be wearing a very good disguise.

One-factor theories also do a pretty good job of explaining many of the remaining monothematic delusions. A 1998 experiment shows that Cotard delusion sufferers have a globally decreased autonomic response: that is, nothing really makes them feel much of anything - a state consistent with being dead. And anosognosiacs have lost not only the nerve connections that would allow them to move their limbs, but the nerve connections that would send distress signals and even the connections that would send back "error messages" if the limb failed to move correctly - so the brain gets data that everything is fine.

The basic principle behind the first factor is "Assume that reality is such that my mental states are justified", a sort of Super Mind Projection Fallacy.

Although I have yet to find an official paper that says so, I think this same principle also explains many of the more typical schizophrenic delusions, of which two of the most common are delusions of grandeur and delusions of persecution. Delusions of grandeur are the belief that one is extremely important. In pop culture, they are typified by the psychiatric patient who believes he is Jesus or Napoleon - I've never met any Napoleons, but I know several Jesuses and recently worked with a man who thought he was Jesus and John Lennon at the same time. Here the first factor is probably an elevated mood (working through a miscalibrated sociometer). "Wow, I feel like I'm really awesome. In what case would I be justified in thinking so highly of myself? Only if I were Jesus and John Lennon at the same time!" A similar mechanism explains delusions of persecution, the classic "the CIA is after me" form of the disease. We apply the Super Mind Projection Fallacy to a garden-variety anxiety disorder: "In what case would I be justified in feeling this anxious? Only if people were constantly watching me and plotting to kill me. Who could do that? The CIA."

But despite the explanatory power of the Super Mind Projection Fallacy, the one-factor model isn't enough.


The one-factor model requires people to be really stupid. Many Capgras patients were normal intelligent people before their injuries. Surely they wouldn't leap straight from "I don't feel affection when I see my wife's face" to "And therefore this is a stranger who has managed to look exactly like my wife, sounds exactly like my wife, owns my wife's clothes and wedding ring and so on, and knows enough of my wife's secrets to answer any question I put to her exactly like my wife would." The lack of affection vaguely supports the stranger hypothesis, but the prior for the stranger hypothesis is so low that it should never even enter consideration (remember this phrasing: it will become important later.) Likewise, we've all felt really awesome at one point or another, but it's never occurred to most of us that maybe we are simultaneously Jesus and John Lennon.

Further, most psychiatric patients with the deficits involved don't develop delusions. People with damage to the ventromedial area suffer the same disconnection between face recognition and emotional processing as Capgras patients, but they don't draw any unreasonable conclusions from it. Most people who get paralyzed don't come down with anosognosia, and most people with mania or anxiety don't think they're Jesus or persecuted by the CIA. What's the difference between these people and the delusional patients?

The difference is the right dorsolateral prefrontal cortex, an area of the brain strongly associated with delusions. If the brain damage that broke your emotional reactions to faces (or paralyzed you, or whatever else) spared the RDPC, you are unlikely to develop delusions. If it also damaged this area, you are correspondingly more likely to come up with a weird explanation.

In his first papers on the subject, Coltheart vaguely refers to the RDPC as a "belief evaluation" center. Later, he gets more specific and talks about its role in Bayesian updating. In his chronology, a person damages the connection between face recognition and emotion, and "rationally" concludes the Capgras hypothesis. In his model, even if there's only a 1% prior of your spouse being an imposter, if there's a 1000 times greater likelihood of you not feeling anything toward an imposter than to your real spouse, you can "rationally" come to believe in the delusion. In normal people, this rational belief then gets worn away by updating based on evidence: the imposter seems to know your spouse's personal details, her secrets, her email passwords. In most patients, this is sufficient to have them update back to the idea that it is really their spouse. In Capgras patients, the damage to the RDPC prevents updating on "exogenous evidence" (for some reason, the endogenous evidence of the lack of emotion itself still gets through) and so they maintain their delusion.
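Coltheart's arithmetic is easy to check in odds form. Here is a minimal sketch using the illustrative figures above - the 1% prior and 1000:1 likelihood ratio are rhetorical numbers from the argument, not clinical estimates, and the 100:1 ratio for each piece of exogenous evidence is my own invented value:

```python
def posterior(prior, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# First factor: the absence of emotional response is assumed to be
# 1000x likelier under the imposter hypothesis than under "it's my spouse".
p = posterior(0.01, 1000)  # ~0.91: the imposter hypothesis briefly dominates

# Exogenous evidence: she knows the honeymoon details, the email passwords...
# Suppose each such observation is 100x likelier if she really is the spouse,
# i.e. a likelihood ratio of 1/100 for the imposter hypothesis.
for _ in range(3):
    p = posterior(p, 1 / 100)
# p has now collapsed to roughly 1e-5: a normal reasoner updates back out
# of the delusion. On Coltheart's account, RDPC damage blocks exactly these
# exogenous updates, freezing the belief near its initial 0.91.
```

The point of the sketch is that on Coltheart's model the delusion is "rational" at the moment of formation and only becomes pathological because the subsequent downward updates never happen.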

This theory has some trouble explaining why patients are still able to update about other situations, but Coltheart speculates that maybe the belief evaluation system is weakened but not totally broken, and can deal with anything except the ceaseless stream of contradictory endogenous information.


McKay makes an excellent critique of several questionable assumptions of this theory.

First, is the Capgras hypothesis ever plausible? Coltheart et al pretend that the prior is 1/100, but this implies that there is a base rate of your spouse being an imposter one out of every hundred times you see her (or perhaps that one out of every hundred people has a fake spouse), either of which is preposterous. No reasonable person could entertain the Capgras hypothesis even for a second, let alone for long enough that it becomes their working hypothesis and develops immunity to further updating from the broken RDPC.

Second, there's no evidence that the ventromedial patients - the ones who lose face-related emotions but don't develop the Capgras delusion - once had the Capgras delusion but then successfully updated their way out of it. They just never develop the delusion to begin with.

McKay keeps the Bayesian model, but for him the second factor is not a deficit in updating in general, but a deficit in the use of priors. He lists two important criteria for reasonable belief: "explanatory adequacy" (what standard Bayesians call the likelihood ratio; the new data must be more likely if the new belief is true than if it is false) and "doxastic conservativism" (what standard Bayesians call the prior; the new belief must be reasonably likely to begin with given everything else the patient knows about the world).

Delusional patients with damage to their RDPC lose their ability to work with priors and so abandon all doxastic conservativism, essentially falling into what we might term the Super Base Rate Fallacy. For them the only important criterion for a belief is explanatory adequacy. So when they notice their spouse's face no longer elicits any emotion, they decide that their spouse is not really their spouse at all. This does a great job of explaining the observed data - maybe the best job it's possible for an explanation to do. Its only problem is a stupendously low prior, and this doesn't matter because they are no longer able to take priors into account.
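McKay's two criteria can be sketched as two scoring rules over the same hypothesis set - the healthy reasoner weighs likelihood by prior, the RDPC-damaged reasoner consults likelihood alone. The numbers below are invented purely for illustration:

```python
# Hypotheses scored against the observation "spouse's face evokes no emotion".
# Priors and likelihoods are toy values, not clinical estimates.
hypotheses = {
    "it's really my spouse":        {"prior": 0.999, "likelihood": 0.001},
    "an imposter replaced my wife": {"prior": 1e-9,  "likelihood": 0.9},
}

def bayesian_choice(hs):
    # Healthy reasoning: explanatory adequacy (likelihood) weighted by
    # doxastic conservativism (prior).
    return max(hs, key=lambda h: hs[h]["prior"] * hs[h]["likelihood"])

def likelihood_only_choice(hs):
    # McKay's second factor: RDPC damage discards the prior entirely,
    # so only explanatory adequacy counts.
    return max(hs, key=lambda h: hs[h]["likelihood"])

bayesian_choice(hypotheses)         # "it's really my spouse"
likelihood_only_choice(hypotheses)  # "an imposter replaced my wife"
```

Note that on this model the ventromedial patients never pass through a delusional phase at all: with an intact prior, the imposter hypothesis loses the comparison immediately.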

This also explains why the delusional belief is impervious to new evidence. Suppose the patient's spouse recounts personal details of their honeymoon that no one else could possibly know. There are several possible explanations: the patient's spouse really is the patient's spouse, or (says the left-brain Apologist) the patient's spouse is an alien who was able to telepathically extract the relevant details from the patient's mind. The telepathic alien imposter hypothesis has great explanatory adequacy: it explains why the person looks like the spouse (the alien is a very good imposter), why the spouse produces no emotional response (it's not the spouse at all) and why the spouse knows the details of the honeymoon (the alien is telepathic). The "it's really your spouse" explanation only explains the first and the third observations. Of course, we as sane people know that the telepathic alien hypothesis has a very low base rate plausibility because of its high complexity and violation of Occam's Razor, but these are exactly the factors that the RDPC-damaged[2] patient can't take into account. Therefore, the seemingly convincing new evidence of the spouse's apparent memories only suffices to help the delusional patient infer that the imposter is telepathic.

The Super Base Rate Fallacy can explain the other delusional states as well. I recently met a patient who was, indeed, convinced the CIA were after her; of note she also had extreme anxiety to the point where her arms were constantly shaking and she was hiding under the covers of her bed. CIA pursuit is probably the best possible reason to be anxious; the only reason we don't use it more often is how few people are really pursued by the CIA (well, as far as we know). My mentor warned me not to try to argue with the patient or convince her that the CIA wasn't really after her, as (she said from long experience) it would just make her think I was in on the conspiracy. This makes sense. "The CIA is after you and your doctor is in on it" explains both anxiety and the doctor's denial of the CIA very well; "The CIA is not after you" explains only the doctor's denial of the CIA. For anyone with a pathological inability to handle Occam's Razor, the best solution to a challenge to your hypothesis is always to make your hypothesis more elaborate.
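The same likelihood-only scoring shows why challenging the delusion backfires: extra conspirators can only ever raise explanatory adequacy, and the vanishing prior that should punish the elaboration is never consulted. Again, the numbers are toy values made up for illustration:

```python
# Two observations: (patient's anxiety, doctor's denial of CIA involvement).
# Each hypothesis is scored by the product of its likelihoods for the two
# observations; toy values only.
likelihood = {
    "CIA after me":                     0.9 * 0.5,   # explains anxiety; denial only so-so
    "CIA after me, doctor is in on it": 0.9 * 0.99,  # denial now expected too
    "no CIA":                           0.05 * 0.99, # denial expected, anxiety unexplained
}
best = max(likelihood, key=likelihood.get)
# The more elaborate conspiracy wins: under likelihood-only scoring, every
# challenge to the hypothesis is best handled by growing the hypothesis.
```

This is just the Super Base Rate Fallacy run in reverse: where Occam's Razor charges a price for each added conspirator, the damaged reasoner gets the extra explanatory power for free.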


Although I think McKay's model is a serious improvement over its predecessors, there are a few loose ends that continue to bother me.

"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?

Likewise, how come delusions are so specific? It's impossible to convince someone who thinks he is Napoleon that he's really just a random non-famous mental patient, but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried). But him being Alexander the Great is also consistent with his observed data and his deranged inference abilities. Why decide it's the CIA who's after you, and not the KGB or Bavarian Illuminati?

Why is the failure so often limited to failed inference from mental states? That is, if a Capgras patient sees it is raining outside, the same process of base rate avoidance that made her fall for the Capgras delusion ought to make her think she's been transported to the rainforest or something. This happens in polythematic delusion patients, where anything at all can generate a new delusion, but not those with monothematic delusions like Capgras. There must be some fundamental difference between how one draws inferences from mental states versus everything else.

This work also raises the question of whether one can consciously use System II Bayesian reasoning to argue oneself out of a delusion. It seems improbable, but I recently heard about an n=1 personal experiment by a rationalist with schizophrenia who successfully used Bayes to convince themselves that a delusion (or possibly hallucination; the story was unclear) was false. I don't have their permission to post their story here, but I hope they'll appear in the comments.


[1]: I left out discussion of the Alien Hand Syndrome, even though it was in my sources, because I believe it's more complicated than a simple delusion. There's some evidence that the alien hand actually does move independently; for example it will sometimes attempt to thwart tasks that the patient performs voluntarily with their good hand. Some sort of "split brain" issues seem like a better explanation than simple Mind Projection.

[2]: The right dorsolateral prefrontal cortex also shows up in dream research, where it tends to be one of the parts of the brain shut down during dreaming. This provides a reasonable explanation of why we don't notice our dreams' implausibility while we're dreaming them - and Eliezer specifically mentions he can't use priors correctly in his dreams. It also highlights some interesting parallels between dreams and the monothematic delusions. For example, the typical "And then I saw my mother, but she was also somehow my fourth grade teacher at the same time" effect seems sort of like Capgras and Fregoli. Even more interestingly, the RDPC gets switched on during lucid dreaming, providing an explanation of why lucid dreamers are able to reason normally in dreams. Because lucid dreaming also involves a sudden "switching on" of "awareness", this makes the RDPC a good target area for consciousness research.


Reminded me of The Three Christs of Ypsilanti:

To study the basis for delusional belief systems, [psychologist] Rokeach brought together three men who each claimed to be Jesus Christ and confronted them with one another's conflicting claims, while encouraging them to interact personally as a support group. Rokeach also attempted to manipulate other aspects of their delusions by inventing messages from imaginary characters. He did not, as he had hoped, provoke any lessening of the patients' delusions, but did document a number of changes in their beliefs.

While initially the three patients quarreled over who was holier and reached the point of physical altercation, they eventually each explained away the other two as being mental patients in a hospital, or dead and being operated by machines.


Where are we on selectively/temporarily/safely de-activating brain regions? Magnetic field to the RDPC sounds like it'd be ~~fantastically fun at parties~~ extremely informative under the right circumstances.

Note to self: Do not attend any party organized by MBlume without making sure that all participants have signed an iron-clad NDA in advance.

Don't worry, what happens in la la land stays in la la land.

Note to self: Always sign NDAs associated to parties thrown by MBlume.


I had the exact same thought myself back in 2008, so I asked an experimental psych professor about this. At the time, he said that the TMS devices we had were somewhat wide-area and also induced considerable muscle activation. This doesn't matter very much when studying the occipital lobe, but for the prefrontal cortex you basically start scrunching up the person's face, which is fairly distracting. Maybe worth trying anyway.

I've wanted to get my hands on a TMS device for years. Building one at home does not seem particularly feasible, and the magnetism involved is probably dangerous for nearby metal/electronics...


Building one at home does not seem particularly feasible, and the magnetism involved is probably dangerous for nearby metal/electronics...

A few minutes on Google makes this seem very unlikely.

I'm scared as hell to induce currents in my brain without knowing the neurobiology of it, but I do understand the electrical engineering half, so if you want an electromagnet and driver, I'll help you build one.

I had a very similar thought while reading this post. I have the Shakti system, maybe this weekend I'll target my RDPC with various frequencies and see what happens.
Follow-up: I didn't experience anything outside of the typical Shakti effects for me (a feeling similar to a strong nicotine buzz); however, there are many variables to tweak before I declare it a wash. I'll continue to experiment and post the final results somewhere.
Why not here?
Scott Alexander: I don't know the technical differences between TMS and TDCS, but it looks promising.
TDCS isn't depolarizing neurons with magnetism, and it doesn't disable brain regions at all. Instead it runs a direct current across them. This appears to permanently increase or decrease their level of excitability. o_O
I think safely de-activating that part of your brain while you are still awake and able to act on your beliefs is a contradiction in terms. I'd want an experienced psychiatric nurse present, personally. And a million quid.
"Magnetic field to the RDPC sounds like it'd be..." ... fairly similar to high doses of psychedelics...?

Would a neurologist who has thus far been immersed daily with the fact that all brains can fail in all sorts of interesting ways be hit just as bad with these delusions if given brain damage as someone who might have operated all their life under a sort of naive realism that makes no difference between reality and their brain's picture of it? What about a philosopher with no neurological experience but with a well-seated obsession with the map not being the territory?

Had to make an account to answer this one, since I can give unique insight.

I'm an atypical case in that I had the Capgras Delusion (along with Reduplicative Paramnesia) in childhood, rather than as an adult. The delusions started sometime around 6-9 years of age. I hid it from others, partly because I halfway knew it was ridiculous, partly because I didn't want to let out that I was on to them...and it caused me quite a bit of anxiety, because I felt like I lost my loved ones and slipped into parallel universes every few days. I would try to keep my eyes on my loved ones, because as soon as I looked away and looked back the feeling that something was different would return.

Sometime around 12-14, I realized how implausible it was for any kind of impostor to conduct such a large-scale conspiracy, and how implausible it was that I was slipping into a parallel universe. I told my parents what I was experiencing and admitted it was irrational. I forced myself to ignore the feeling every time it came (though it still bothered me). Eventually around 17 the feeling stopped bothering me altogether, although little twinges still occurred from time to time.

I'm currently in what I would consider to...

This is yet again a different scenario, but very interesting, thanks! It does occur to me now that there might be adult trauma patients who can see through the delusion, and never get diagnosed with it, since they don't start raving about impostor family members but just go, whoa, brain seems messed up, better go see the stroke doctor.

This raises the obvious question: Could training in Bayesian reasoning effectively increase the insight of delusional patients?
I occasionally entertained ideas like that in the back of my mind. Truman Show, teachers are aliens, parents somehow know everything/everything about me and are just fucking with me in the way that Zeus would to test character, except over a much longer Santa Claus/Jesus-esque period of time, the mothman is watching me, there are invisible monsters/demons all around me and I need to be very sneaky not to be seen. I'm not sure I believed them, exactly. Maybe I did. Maybe I didn't. I still do the same stuff sometimes, with equally weird things. Whenever I start halfway believing in God, or that a track of thought in my brain giving arbitrary commands is the voice of God, I just start doing experiments against the rest of reality until the shadow of belief goes away, since they never line up with testable reality. I've never had actual hallucinations, though, as far as I know.
For me it was that I suspected I was the robot. Never told anyone though.
someonewrongonthenet: To be honest I half-believed those too... not that everyone was a robot, but that everyone was a philosophical zombie. It wasn't until high school that I figured out that for all intents and purposes, I'm a philosophical zombie too. But in my opinion, those really ARE normal childhood beliefs that are not the result of any neuropathology... beliefs that many philosophers still entertain in the form of solipsism.
Do those turn into these when they grow up?
Jill Bolte Taylor has provided a case study. She is a neuroscientist who had a stroke. Her experience is recounted in her TED talk and her book.

I have read the book (I recently received it from an elderly friend who hoarded books--I picked through about $20,000 worth of books and chose several hundred dollars worth), and it started off interesting, to hear of her personal experience of the stroke and its accompanying mind-states. She seems to have fought her way through various delusions, but not with any more success than other examples cited here. Yes, she is/was a neuroscientist. She also proudly proclaims that she tells her bowels "Good job! I am so thankful that you do exactly what you are meant to do!" every time she takes a dump, and concluded the book with some painfully New Age-y exhortations which gave me the same urge to roll around frothing at the mouth that I often experienced with clearly delusional Christian preachers in church.

The Amazon page for the book doesn't describe her getting any of the sort of very specific delusions described in the OP though, just general debilitation and paradoxical feelings of euphoria.
It's the closest we're likely to get, though, given the rarity of both neurologists and anosognosias.
Well, neurologists are rare, but I think we do know how to create targeted brain lesions that can cause pretty specific symptoms.
Any volunteers?
I might. Anybody got $20,000,000?

Well, if we're going there I'll do it for $10M.

All of the theories presented in this post seem to make the implausible assumption that somehow the brain acts like a hypothetical ideally rational individual and that impairment somehow breaks some aspect of this rationality.

However, there is a great deal of evidence the brain works nothing like this. In contrast, it has many specific modules that are responsible for certain kinds of thought or behavior. These modules are not weighed by some rational actor that sifts through them; they are the brain. When these modules come into conflict, e.g., in the standard word/color test where the word "yellow" is printed in red ink, fairly simple conflict resolution methods are brought into play. When things go wrong in the brain, either an impairment in conflict resolution mechanisms or in the underlying modules themselves, things will go wonky in specific (not general) ways.

Speaking from personal experience, being in a psychotic/paranoid state simply makes certain things seem super salient to you. You can be quite well aware of the rational arguments against the conclusion you are worrying about but it's just so salient that it 'wins.' In other words it also feels like there is just a failure in yo...

This is generally a good comment, but I think the views of the original post and your comment are actually pretty similar. For example, seeing the brain as a rational Bayesian agent is compatible with the modular view. One module might store beliefs, another might be responsible for forming new candidate beliefs on the basis of sensory input, another module may enforce consistency and weaken beliefs which don't fit in... The "rational actor that sifts through [the modules]" could easily be embodied by one or several of the modules themselves. Whether this is a good model is a more complicated question (it certainly isn't perfect since we know people diverge from the Bayesian ideal quite regularly), but it is not implausible.
However, even if there are modules that try to form accurate beliefs about some things or even most things (and there probably are), it's still true that taken in aggregate, your various brain modules push you to have beliefs that would be locally optimal in the evolutionary ancestral environment, not necessarily true. Many modules in our brain push us toward believing things that would be praised, avoiding things that would be condemned or ridiculed, etc. It's too costly to be a perfect deceiver, so evolution hacked together a system where if it's consistently beneficial to your fitness for others to believe you believe X, most of the time you just go ahead and believe X. In large realms of thought, especially far mode beliefs, political beliefs, and beliefs about the self, the net result of all your modules working together is that you're pushed toward status and social advantage, not truth. Maybe there aren't even any truth-seeking modules with respect to these classes of belief. Maybe we call it delusion when your near-mode, concrete anticipations start behaving like your far-mode, political beliefs.

It is embarrassing to admit, but I used to think I really had dog ears and a tail until I was about 16.

Well, at least older students found it completely adorable when I made noises...and the school authorities thought I was like smart or something and didn't really care either.

I don't really know the cause; I don't remember knowing about kemonomimi until a bit later. But my delusions involved not only seeing these body parts on myself but also feeling them. I thought I broke my tail once, for example.

[This comment is no longer endorsed by its author]

I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?

Off the top of my head: people believe what their brain tells them above any outside evidence, cf. religious conversion originating from what, to the outside view, is clearly a personal delusion - but, from the inside view, is incontrovertible evidence of God.

It takes very good and clear thinking for the lens to actually see its flaws even when you don't have brain damage to the bit that evaluates evidence. I'm somewhat surprised a rationalist with schizophrenia actually managed this. Though TheOtherDave has mentioned being able to work out that a weird perception was almost certainly due to the stroke he was recovering from, and Eliezer mentions someone else managing it as well.

John Nash claimed that he recovered from schizophrenia because "he decided to think rationally" - but this only happened after he took medications, so...

For what it's worth, in order to understand the syntax of this phrase, I had to start over about five times.
Commas added!

This provides a reasonable explanation of why we don't notice our dreams' implausibility while we're dreaming them - and Eliezer specifically mentions he can't use priors correctly in his dreams.

Have I ever mentioned my theory that it may be partially due to overloaded working memory?


"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?

Maybe it's really hard to really get that you are a brain on an intuitive level. Human intuitions seem to be pretty dualistic (well, at least mine do). So 'you have brain damage' doesn't sound very explanatory unless you've spent a lot of time convincing yourself that it should.

By the way, the last link is broken.

For example, one male patient expressed the worry that his wife was actually someone else, who had somehow contrived to exactly copy his wife's appearance and mannerisms. This delusion sounds harmlessly hilarious ...

It's harmless to claim that someone is observationally equivalent to his wife, but not his wife? When that kind of thing happens on a large scale, it's called "the debate about p-zombies".

Isn't claimed actual equivalence the problem with P-zombies? Someone being observationally equivalent but different is merely extremely unlikely (maybe she has an identical twin, maybe aliens, etc.). P-zombies are supposed to be indistinguishable in principle, which is impossible/requires souls that aren't subject to testing for distinguishability.
I don't think P-zombie debates are a great sign of rationality either, but I think the debate itself probably does nearly zero harm, if you don't count wasted time.
"If you don't count wasted time"? Okay, but likewise, if you don't count her husband getting shot, Mrs. Lincoln really enjoyed the play...
That's not likewise.
How so? A bunch of philosophers blowing valuable time on a worthless debate is a major harm, almost as if they were forcibly held in unemployment but drew the same resources from society.

For what it's worth, the "Super Base Rate Fallacy" seems to line up with my own experiences, except that there's sometimes an independent part of my mind that can go "Okay, I have 99.999% confidence that the floor will eat us. But what are the actual odds of that, and what evidence did I use to reach that confidence?". While I can't just dismiss the absurd confidence value as absurd, I can still (sometimes) do a meta-evaluation of the precise confidence.

It's sort of like how if a friend says that global warming is 99.99% likely to be tru...
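The meta-evaluation described in the comment above can be made concrete as a quick calculation: how many bits of evidence would be needed to honestly reach a given confidence from a sane prior? This is only an illustrative sketch - both numbers below are invented, and the helper function is not from any referenced source:

```python
import math

def bits_of_evidence(prior, posterior):
    """Bits of evidence needed to move from prior to posterior (log2 of the odds ratio)."""
    odds = lambda p: p / (1 - p)
    return math.log2(odds(posterior) / odds(prior))

# Felt confidence: 99.999% that "the floor will eat us".
# A generous sane prior for that hypothesis: one in a billion.
needed = bits_of_evidence(1e-9, 0.99999)
print(f"{needed:.1f} bits of evidence required")  # dozens of bits
# A single odd sensation carries nowhere near that much evidence,
# so the felt confidence cannot have come from legitimate updating.
```

Running the meta-check this way doesn't make the absurd confidence go away, but it does quantify the gap between the confidence felt and the evidence actually available.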

It seems improbable, but I recently heard about an n=1 personal experiment of a rationalist with schizophrenia who successfully used Bayes to convince themselves that a delusion (or possibly hallucination; the story was unclear) was false. I don't have their permission to post their story here, but I hope they'll appear in the comments.

I was under the impression that learning to recognize hallucinations was a standard component of schizophrenia therapy.

Scott Alexander:
Therapists can very very carefully try to talk patients out of their delusions, but I've always heard of it as a complicated long-term process and I've never before heard of Bayes being used directly.
You seem to be conflating the original schizophrenic state with the residual state after patients get antipsychotic medication: the latter may be readily amenable to reason; with the former, the therapist would breach rapport with the patient by challenging full-blown delusions. Medication is part of the standard treatment for schizophrenia - usually, the major part. Drawing conclusions about delusions from the residuals following treatment seems to shield you from what would be obvious had you observed unmedicated patients. Delusions aren't failures of Bayesian rationality: typically, they involve accepting a few self-evident priors, and these are driven by intense affect.

Yvain, it seems like some of this is potentially answered by how this interacts with other cognitive biases present.

Re: specific delusions, when you have an entire class of equally-explanatory hypotheses, how do you choose between them? The availability heuristic! These hypotheses do have to come from somewhere inside the neural network after all. You could argue that availability is a form of "priors", but these "priors" are formed on the level of neurons themselves and not a specific brain region: some connection strengths are stronge...

Scott Alexander:
Availability heuristic seems related, but still doesn't explain why delusions are so much more fixed than ordinary conclusions. I think dreams are also a good parallel for psychosis, but it's hard to tell how good without having been psychotic.
To continue with the bias theme, how about confirmation bias? They settled on the most available theory that fits all the facts, and then it becomes part of their identity, they begin to rally the soldiers. Is their delusion that they are Jesus really that much less sticky than someone's political party?
Seems unlikely. First, confirmation bias has its limits and is normally not capable of beating direct observational evidence. Second, people basing their identity on being Jesus sounds like a plausible idea, but an identity based on the fact that one's arm isn't paralysed, not so much. Third, it takes some time to associate one's identity and sense of status with an idea - one doesn't become a political partisan overnight - while the anosognosic delusions emerge immediately after the brain is damaged (well, I suppose this is so, but I can easily be mistaken here).
I dunno. During the period after my stroke where I was suffering from partial right-side paralysis, a lot of the emotional suffering I experienced could reasonably be described as caused by having my identity as a person whose arm wasn't paralyzed challenged. I would probably say "self-image" instead of "identity", granted, but I'm not sure the difference is crisp.
Interesting. Did thinking about the paralysis feel similar to (learning a good argument against your favourite political ideology / seeing your favourite sports team lose / listening to an offensive but true remark made by your enemy / any situation in which you fell victim to confirmation bias)?
It did not feel especially similar to any of the examples you list. The general case is harder to think about... I'm not sure.
I can give some personal anecdotes regarding salvia if you are interested. If I had to come up with a rationalist explanation to the experience, I would say that the affected consciousness accepts, without question, fantastically generated priors as absolute truth, and largely ignoring actual external sensory input, and even then modifying it to fit the delusion.
Do you think it's at all feasible for someone under the influence of salvia to record their thoughts as they occur? Would it help if they do so often? For example, I write a stream-of-consciousness monologue every day. Would I be competent enough to write down what I'm thinking while under the influence of the drug? If you lose all sense of self, would you still be able to understand the concept of another person? For example, I wonder if it would be possible for someone under the influence of salvia to answer questions about their mental state. Considering the description, I'd guess that even if you were physically capable of talking to someone or writing down your experiences, you probably wouldn't be inclined to, am I right? Or if you did speak, you wouldn't be aware of it. Sorry if these questions are intrusive; I'm very curious about this sort of thing.
Considering that I provoked the questions, I don't consider them intrusive.

First of all, due to the extreme distortion of the sense of time, the whole episode may occur in less time than it would take to have a useful conversation. However, I have very vivid memories of my stream of consciousness--maybe one of the main reasons it makes such an impression is that one remembers the whole thing, even if it is difficult to put into words. I'll recount a few such memories here; this is from quite a few years back, but many facets of it changed my mindset.

First, I began to feel a little bit dizzy and noticed a kind of echoing effect in the ambient sounds around me. Soon afterward, I got the impression that I was sweating profusely from my temples, and I reached up to see if it was only a feeling, or if it was actually sweat, but could not reliably analyze my hands, due to a sort of increasing pulsation feeling, like when you get up from sleep and straight into a brightly lit bathroom, but involving all of my senses. I began having difficulty moving around, due to the sensation that "down" was now where "north" used to be, so I had to sit down on the floor to avoid falling out the back door (I use the word sensation in an objective sense; at the time I truly believed that gravity had turned ninety degrees). Continuing to sit on the floor while feeling like I was pressed to it by centrifugal force, I became aware of whispering sounds all around the room. I discovered that the room, and all of waking life, was filled with ghosts, whispering to each other and observing the living.

Two things to clarify here: the sweating-temples feeling happened consistently. And in this and other descriptions, many times "to hear" something also implies "to see" something, and yet it was not photonically visible. The best I can describe it is like a subliminal HUD, or maybe what I imagine one might sense with echolocation: knowing the shape of something without seeing it.
Heh. This is a lot like how Erik Davis describes Jewish mystics viewing the Torah as a compressed encoding of all possible texts ever, and the Tetragrammaton, YHWH, as the source of all the words in the Torah.
Now we know what they were smoking!
Yeah, not exactly - Salvia divinorum is native to Mexico - but I've read scholars implying that the Middle Eastern mystics often used psychoactive mushrooms in addition to generic techniques like prayer and fasting.
Isn't it much more likely they were brain-damaged in a more permanent way? Religious people who use psychoactives tend to openly praise their drugs much like they praise their gods (think soma, peyote, ayahuasca) - Middle Eastern mystics didn't do that. And with malnutrition, rampant child abuse, and almost no health care, there's bound to have been enough brain damage around.
That is also implied in The Transmigration of Timothy Archer. Or, maybe some of the Nephites returned to Jerusalem with a stash... And of course, as Risto_Saarelma mentions in a comment further down, it may be possible to attain similar states through mental exercises without benefit of pharmaceutical remedies.
Thanks! That sounds fascinating, if scary. Did any of these experiences affect your beliefs and actions while sober? I've heard of people having life-changing revelations on LSD, for example, although I'd be skeptical of the accuracy of any beliefs suddenly revealed to people while tripping. I can easily imagine more subtle and potentially helpful behavioral changes, though.
I have had mild but long-lasting effects from revelations under the influence of MDMA and 2C-E. The revelations were personal, not about the nature of reality. I would say that they could generally be described as resulting from a reduced avoidance of thinking about things that I already had plenty of information on, and had basically positive results. Both took some time to integrate afterwards, and the 2C-E trip was at times a somewhat unpleasant look at myself. The MDMA trip was unambiguously pleasant at the time, even considering that I spent time thinking about some fairly unpleasant stuff.
That was something I failed to get across in my reply, I guess. I feel like I owe a part of my mental composition of today to those experiences, I mean, imagining infinity is not the same as experiencing infinity, and even though it was internally generated, the memories and impressions and rewired synapses are very real. I was fully aware when the effects wore off that it was not "revealed knowledge", but it exposed me to viewpoints and thoughts that I might not have otherwise had access to. My description of the events was my flow of thoughts during the events, not my "usual" philosophy. On a side note, as a child I had the unfortunate combination of truth-seeking and logic, and a strong neurological tendency toward magical thinking. Perhaps my familiarity with walking the line between Spock and Q allowed me the ability to interpret the otherworldly impressions with quiet detachment, while simultaneously benefiting from the sense of wonder they conveyed.
LSD is a source of metaphysical spectacle and entertainment, not of edification. It will give you a lot to think about, but it's not a source of answers, and I mildly recommend against it if you value intellectual achievement.
I've understood the claims of LSD therapy to be mostly about fixing psychological hang-ups, like the recent research claim that it helps with alcoholism. This is mostly a separate direction from both entertainment and intellectual achievement. Of course psychological well-being can indirectly lead to more intellectual achievement, and an altered psychological outlook can change the set of hypotheses you will entertain as the starting point for intellectual work. No idea whether the post-LSD hypothesis pool will necessarily be better than the pre-LSD one. If it's larger, then it might help discover some unlikely ideas that actually do pan out when you take the time to think through them off-LSD. Incidentally, there are some interesting anecdotes that deep meditative states achieved by long-term meditators resemble the states you end up on LSD. At least MCTB alludes to this.
My personal experience with salvia is limited (2 times, one much more intense than the other), but here are my thoughts.

I don't think I would want to try to record a salvia experience while it was occurring. While I found the experience interesting, valuable, and rewarding, it was also scary, intimidating, and awe-inspiring. It is not something I would want interrupted by things like conversation or writing. The time dilation might well be too profound for that to even work well. Also, I found noises, light, and rapid changes in sense input to be distracting. Having other people move about the room was... scary. I did not experience the extreme disconnect with reality some people describe, but it was a different mindspace in a way that all other substances I've tried were not. Doing anything other than experiencing it to the fullest would seem inappropriate. (It's possible many of these problems would fade with repeated use. I would consider such a result disappointing, and have no particular desire to attempt to produce it.)

In contrast, I would be happy to talk with anyone about the experience while on any of the other substances I've taken. Depending on mood, I might feel anxious or nervous about talking to someone who was sober, especially if they had no personal experience or were someone I did not know well. Some experiences I've had would make writing about them difficult, because of distractibility, visual distortion, a tendency to stop and stare at the beauty of the pencil eraser, etc. Others would be easier. DiPT might be easier to write about than talk about; auditory distortion makes conversation difficult / distracting.

Have you read any books on the subject? There are many good ones out there. I could recommend a couple if you'd be interested, though I haven't read much (or partaken of the substances) lately.
Thanks for the response! :) As you can probably tell, I'm trying to decide whether it's worth my while to dabble in psychoactive drugs. I'm not actually very curious about having the experience itself; it sounds scary and disorienting. I would, however, be willing to endure that scary experience if it's likely to teach me something interesting and important about myself, which is why I asked about recording it. I'm teaching myself lucid dreaming and meditation in the hopes that I'll be better aware of my own personal quirks/subconscious obsessions, for example. A sudden, massive shift in perception might help bring things to the forefront which I had avoided addressing before. In your experience, do drugs as a whole actually help with that, given that I'm not all that interested in the experience for its own sake?

Edit: Actually, real science books would be even better, thanks. I previously avoided drugs because Drugs Are Bad, then because Drugs Are Dangerous, and now I figure I ought to do an accurate cost-benefit analysis. And because I'm biased to think drugs are awful things which awful people partake in, I should explicitly seek out some empirically supported benefits.
You're welcome! I would say my experiences with salvia were somewhat scary and disorienting, but not problematically so. I'm not quite sure how to describe what I mean here, but "scary" should be a very minor part of the description. I certainly felt no need to do anything about it at the time, or surprise that I didn't need to after the fact. Think scary as in "I go rock climbing, and looking down makes me a bit nervous" - except without the adrenaline, and otherwise in a completely different emotional context. I hope that puts it in perspective.

Anyway, personally, I would not recommend starting with salvia, though I know a couple people that did exactly that and had good things to say about it. I would say that drugs can help with what you're asking about, but that it isn't guaranteed. Of course, I didn't go into it hoping for such results at all, so it's probably far more likely that you'll get what you're looking for than not, imho. Set and setting matter a lot. On a related note, if you go into your experience expecting it to be scary, well, you'll probably get what you wished for. Basically, I think you should do this because you're expecting to enjoy the experience, and I think that's an entirely reasonable expectation. I'd also add that my description of salvia as being slightly scary does not apply to any other substance I've taken.

For starters on reading, I would suggest Phenethylamines I Have Known And Loved (aka PiHKAL) by Alexander Shulgin, and its sequel TiHKAL (Tryptamines ...). Alexander Shulgin is a scientist and basically rational thinker, with a strong interest in the human mind. He's a synthetic organic chemist, and personally invented, synthesized, and took what might literally be a majority of the synthetic psychedelics known.
s/salvia/saliva/g for fun.

A similar mechanism explains delusions of persecution, the classic "the CIA is after me" form of disease. We apply the Super Mind Projection Fallacy to a garden-variety anxiety disorder: "In what case would I be justified in feeling this anxious? Only if people were constantly watching me and plotting to kill me. Who could do that? The CIA."

My mom (a psychiatrist) was listening to a continuing education program on schizophrenia, and the lecturer said that schizophrenia tends to develop slowly, and in stages; before a person ends up with delusions of persecution, they usually start out by feeling intense fear and anxiety that they can't come up with any explanations for.

Yes, it can develop slowly, but also fast as hell, depending on what pulled the trigger. It's pretty relative, and it varies from person to person. Also, schizophrenia is not "one single" disease or diagnosis; it's more like many diagnoses grouped under "schizophrenia". Very complicated and rare. And just because you are delusional doesn't immediately mean you're schizophrenic.
Not that rare. ~1%.

"Coltheart et al pretend that the prior is 1/100, but this implies that there is a base rate of your spouse being an imposter one out of every hundred times you see her (or perhaps one out of every hundred people has a fake spouse) either of which is preposterous."

What if their prior on not feeling anything upon seeing their wife is 0? What if most of the reason reasonable people's prior on this is much lower is that it's low status, instrumentally bad, etc., but their sincere rational prior is close to 50/50? I notice you call...

Similarly, I think Coltheart's criticism described here was flawed because it made the prior too specific. How often do you see a person at a distance or facing away and you "recognize" them as a loved one, but then the person comes closer or turns around and you realize you were wrong? It's not often, but it happens enough that we all know that feeling of sudden non-recognition. I often see it in children who come up to me expecting to find their father. The prior odds don't have to be between "my wife" and "an imposter"; they could be between "my wife" and "not my wife". If that is the case, then the brain-damaged person uses the imposter theory to explain the general "not my wife" endogenous evidence.
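The framing in the comment above can be sketched in odds form. Every number here is invented for illustration: the base rate for "not my wife" moments stands in for everyday misrecognition, and the likelihood ratio is an assumed stand-in for the missing affective response, which repeats on every sighting in a Capgras patient:

```python
def posterior_odds(prior_odds, likelihood_ratio, n_observations=1):
    """Bayes' rule in odds form; independent observations multiply the likelihood ratio."""
    return prior_odds * likelihood_ratio ** n_observations

# Evidence per sighting: the face matches, but there is no feeling of familiarity.
# Assumed (made-up) likelihood ratio: 20x likelier if the person is not his wife.
lr = 20.0

# Broad hypothesis "not my wife", with an everyday misrecognition base rate of
# roughly 1-in-100 sightings - not preposterous, unlike a 1-in-100 prior on "imposter".
prior = 1 / 99

for n in (1, 3, 5):
    print(f"after {n} sighting(s): odds for 'not my wife' = {posterior_odds(prior, lr, n):.2f}")
```

Under these toy numbers a single sighting doesn't flip the belief, but a few repetitions of the same anomalous evidence swamp the mundane prior, and "imposter" is then recruited to explain the winning "not my wife" hypothesis.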

I wonder if the same mechanisms could be involved in conspiracy theorists. Their way of thinking seems very similar. I also suspect a reinforcement mechanism: it becomes more and more difficult for the subject to deny his own beliefs, as it would require abandoning large parts of his present (and coherent) belief system, leaving him with almost nothing left.

This could explain why patients are reluctant to accept alternative explanations afterwards (such as "you have brain damage").

It seems to me that many people who believe extremely improbable conspiracy theories may well have undiagnosed brain damage. But you probably couldn't get most of them to agree to come in for a brain scan.

Prefrontal cortex damage can be really weird. I'd really like to see how these different syndromes manifest in an fMRI.

Contextual preface: my own brand of crazy tends to interfere with getting helped by professionals, so I've done a lot of amateur-level neurobiology research on my own, trying to pin it down. An "inability to update priors" does seem to be a component of it, but it seems primarily triggered by emotional intensity.

Anyone who would like to prod me with Science is extremely welcome to do so.

By what mechanism does it interfere with professional assistance?
Twofold:

1. I tend to display resistance to authority of all kinds (ESPECIALLY therapy), because as much as I try to behave as a rationalist, I appear to actually behave as if I believed that most human beings are strategizing explicitly to inflict maximum emotional harm on me, and that any human being who is "playing friendly" has a deeply sinister game that will inflict maximum harm on me by either playing on my trustfulness ("haha! you thought I was trying to help you!") or playing on my lack of trust ("haha! I tricked you into distrusting a genuine path to getting better!"). I appear to believe that the question of which human beings want to befriend me, and which ones only want to trick me and inflict harm, is only determined after I have chosen whom to trust. (Yes, I realize this is absurd.)

2. I tend to shut down whenever I attempt to motivate myself to help myself, because as much as I try to behave as a rationalist, I appear to actually behave as if I believed that every choice I make will ALWAYS turn out - retroactively - to have been the worst choice I could have made. (Yes, I realize this is absurd.)
You might look to structured social interactions to help fit your emotional reactions to your intellectual beliefs about social interactions. For example, board games have relatively limited variation in social interaction between people who rate you a 6 and those that rate you a 4 on a 10-point likeability scale. It's a chance to gain additional data at low risk. Look to places like (I'm not sure that's international). is a chance to see what you might like. Regarding therapy, keep in mind that good fit between therapist and patient is very important. If you haven't gotten good value from therapy but are still willing to try it, finding a new therapist might yield benefit.
Well, board games (and card games, and the like) run into a problem where I'm perceived as focused, smart, and competent, so everyone tends to team up to eliminate me quickly - so I tend to get a lot of people actually reinforcing the idea that groups conspire against me. Yeah, back when I had money for therapy, I shopped around a lot. Anymore, well... you get what you pay for.
I'd recommend finding a game where the players are working together against an automated hostile environment, such as Zombicide. If it seems like you have a workable plan, the other players will go along with it out of self-interest if nothing else. (D&D /can/ work like that, but there are a lot of other tricky factors when it's a GM rather than a program.)

As for emotional intensity... try to find some little ritual that relaxes you, like sitting still with your eyes closed and breathing slowly in and out ten times, and start doing it at semi-random times during the day. Once that becomes habitual, focus on remembering to go through the ritual whenever you start to get excited or upset. There is no plausible mechanism by which following these instructions as intended could cause kidney failure.

If self-improvement fails, what sorts of things do motivate you to act?

Absurdity is a tricky thing. Have you ever tried constructing an explicit formulation of your inferred emotional beliefs and (temporarily) acting as if it was an accepted part of your intellectual beliefs, with the goal of seeing it torn down?
I've done stuff like this; in some situations, that works reasonably well, but in others I wind up sending out flags that I'm too low-status to "deserve" being listened to, no matter how reasonable or workable my plans are. For a very long time, fear motivated me to act, but that wore out. After that, shame motivated me to act, but that's almost fully eroded. I don't know what I'll have once shame runs out. I have done exactly and explicitly this - I got the idea, weirdly enough, from Aleister Crowley via Robert Anton Wilson. Unfortunately, I'm VERY good at crafting mindsets / "reality tunnels" and following them - consciously embracing my inferred emotional beliefs tends to reinforce them, not tear them down. I can enter a sort of "1984" mode where holding onto my beliefs is explicitly more important than my own survival, and relish the self-destruction that the absurdity of my beliefs is inflicting upon me.
Aha! In that case, possibly what you need is a code of honor. Lay down some rules of constructive behavior (I'd recommend studying a variety of historical precedents first, particularly the ways in which they can go wrong... Bushido, Ms. Manners, etc.) and pretend to be the sort of person who thinks that following those rules is the Most Important Thing. Done correctly, you can stop worrying about the uncertainty of whether some other choice would have had a better outcome, since in any given situation there is only one honorable course of action. Simply calculate what the correct action is, and follow by rote. Under some circumstances honor may compel you to trust someone who most people would not, pass up opportunities for personal gain, dive into a frozen lake to rescue a complete stranger, openly defy the law, or otherwise engage in heroically self-destructive behavior, but it is entirely possible for the gains (from following a calculated strategy, and from other people learning to trust and rely on your consistent behavior) to predominate. This may be controversial, but I would recommend against keeping an explicit, external record of how honorable or dishonorable your behavior has been. A journal or blog can be useful in other ways, but the plan here is eternal striving toward an ideal, not 3% improvement over last month.
I actually have a code of honor, and operate explicitly as if those rules are the Most Important Thing.

Rule 0 is "Should does not imply can; should only implies must." - or, put another way, "Just because you cannot do something does not excuse you for not having done it."
Rule 1 is "Always fulfill other people's needs. If two people have mutually exclusive needs, failing to perfectly fulfill both is abject failure."
Rule 2 is "All successes are private, all failures are public."
Rule 3 is "Behave as if all negative criticisms of you were true; behave as if all compliments were empty flattery. Your worth is directly the lower of your adherence to these rules and your public image."

Past 3 the rule-sorting gets fuzzier, but somewhere around rule 5 or 6 is "always think the best of people", around rule 7 is "It's wrong to win a challenge", and somewhere around rule 10 is "losers suck".

Every rule I see there seems to be you shooting yourself in the foot. I was thinking of something which would produce exactly one correct course of action under most reasonable circumstances, whereas you seem to have quite rigorously worked out a system with fewer correct courses of action than that.

How comfortable are you with arbitrarily redefining your code, voluntarily but with external prompting? I mean, given the ambient levels of doom already involved.

Rule 0 is this one, and Rule 1 is a subcase of it, but rules 2 and (especially) 3 wouldn't work for me -- I seem to function better when my status and (especially) my self-esteem are high than when they're low. And I don't understand Rule 7.

The thing is, my rules have evolved to deal with the fact that I've ALWAYS been low-status. Most of my rules have evolved to ensure that my self-esteem stays low, because as a child and young adult, I was repeatedly punished whenever my self-esteem exceeded that of my high-status superiors. So, for me, destroying my own self-esteem and status are defensive mechanisms, designed to prevent the pack from tearing me apart (sometimes literally and physically).

Also, rule 0 ("Do the impossible") is great if you're some kind of high-status wunderkind like Eliezer, but when you're some scrawny little know-it-all that no one WANTS to succeed, it's just an invitation to get lynched, or sprayed in the face with battery acid, or beaten with a lead pipe, or sodomized with a baseball bat.

And once you're in the domain of the "impossible", you lose access to even those systems that have been put in place explicitly to protect people from being sodomized with a baseball bat or sprayed in the face with battery acid, because the bad people want it to happen, and the good people are incapable of acknowledging that "modern society" is still that capable of savagery.

I've mi...

That makes your situation make more sense. You might find Scott Sonnon's work useful-- he started out from a situation roughly as bad as yours (possibly fewer death threats, but with relentless bullying, learning disabilities, and a connective tissue disorder) and was able to put a good life together, including high achievement. He works specifically with lowering one's panic level.
The resources on this site seem to be mostly oriented toward raising somewhat above-average nerds up to truly exceptional levels. Sounds like you need a different set of resources, for a different sort of step up, possibly something like 'feral/marginal' up to 'serving and being protected by a worthy master.'

Sure, but the problem is that I still have all the status-seeking instincts of an above-average nerd. I'm no good serving a master, worthy or otherwise. When I was younger, my problem was that every master I served was demonstrably less intelligent than I was, so I spent a lot of time trying to grant the wishes they would have made if they were smart enough to wish right, rather than granting the wishes they did make.

In status-oriented situations, this is a HUGE FUCKING MISTAKE, and taught me to understand that I am a bad samurai.

In the past few years, I've been ronin for so long that my bushido has gone rusty - and anyways, in this corporate market, no one wants a ronin in the first place.

There are non-corporate jobs. Personally, I sort scrap metal. Perhaps we could come up with a pitch for an autobiography disguised as an anti-self-help book -- "How to completely cripple yourself in just six years, with no drugs, exercise, or gimmicks!" -- put it on Kickstarter, and see how much money people throw at you?
*laugh* It at least has the charm of complete truth in advertising.
I must disagree, based on the technicality that there was actually some strenuous physical exercise involved with the volunteer firefighting thing. On a more serious note, would you actually like to try doing this? Whatever else is wrong, you're self-evidently capable of expressing yourself coherently and concisely in text. Most of the other prerequisites of being an author can (at least in principle) be handled as arm's-length transactions, which minimize the need for any sort of personal trust.
I'm willing to entertain any idea; can you describe further? (note: private messages on this site have not been appearing reliably for me. Is there an easier process for identity exchange?)
How silly are you willing to be about the identity-exchange thing? I could, for example, give you my username on Nightstar's forums, compromise of which would cost me nothing. You create an account there, send me a PM through that forum, I reply with some piece of information which you then repeat in reply to this comment, and (a secure channel having been established) I could then send you my e-mail address through Nightstar's private messages.
Heh, that's a little more elaborate than necessary, I think. I'm bdill(at)asu(dot)edu; it shouldn't be too problematic to make that public.
OH GOD OH GOD OH GOD And I thought of myself as someone who used to be low-status... Anyway, "do the impossible" was intended to be a paraphrase of your Rule 0, which apparently I had misunderstood.
A certain kind of personal trap has been laid out and described, quite well. There is a set of ideas or "takes" on reality that have been accepted as real, but ideas and takes are never real. The error is widespread and normal, even encouraged, but when the content goes awry, the results can be devastating.

The key in the above statement is "this environment." There is no "this environment." As Buckaroo Banzai said, "Wherever you go, there you are." Any environment contains ample evidence to support almost any interpretation, and our ability as human beings to invent interpretations is vast, so everywhere we look, we can find what we have believed. We may imagine that the goal is to invent interpretations that are "true." But interpretations are neither true nor false. The problem with the value-laden interpretations being invented here is the effects they cause. There are useful interpretations, ones that empower us, and ones that don't.

There are two kinds of interpretations. The first, and fundamental, kind is predictive: it takes raw sensory data and predicts what is coming next. That's not the problematic kind, though if we get stuck in an inefficient predictive mode, believing our predictions are "true," confirmation bias can still strike. Still, this kind of interpretation can be readily tested. The problem is in the second kind of interpretation: the division into good and bad, sane and insane, and hosts of these higher-level interpretations. They are much further from reality than the first kind, and it is far more difficult to test them. How do we test whether the world ("this environment") is actually good or evil, friendly or hostile?

We are continually creating our world, but we imagine that we are only discovering it. So we are easily victims of "how it is." Yet we make up "how it is"! That's a judgment; it is actually a choice. We imagine that we are constrained in our choices by our identity, but the identity does not exist. That's anci
You could play games where this is not something people can really do. For example, Settlers of Catan would be a bad choice, but Apples to Apples would be a good one.
Is there a good way to make such games enjoyable?
Let's remember that the purpose of this activity is to give you a safe opportunity to have social interactions. Hopefully, this will help you be more comfortable with the idea that other people do not interact with you for the purpose of causing you distress. To that extent, beware trivial inconveniences. Still, losing is no fun -- you might not be able to force yourself to keep at something that only might be helpful but is not enjoyable.

Games have a variety of mechanics for preventing attack-the-leader dynamics based solely on player reputation. First, you can anonymize player input. That's what Apples to Apples does. But it is a light party game (not my cup of tea). Second, you can restrict the players' ability to target specific other players. Dominion works that way -- generally, attacks target everyone at the table equally. Third, you can pick games with much higher complexity. One of my favorite games, Brass, is at least an order of magnitude more complex than a simple game like Monopoly. You are unlikely to find that others target you simply because you are smart and analytical when that's almost a prerequisite to play. In fact, it might be worth some time looking at Boardgamegeek (warning: potential time-sink) to find interesting-looking games where your analytic nature is unlikely to make you a target.

I really do think that practice in safe social interactions will prove helpful to you, both because it provides data to adjust your social predictions and because improving social skills will make you more effective at avoiding unpleasant social interactions.
I've never tried forcing myself to like a game, but why do you think that you need to? There are very many games in which you win by doing better than other players and you can't really make specific other players do worse. Odds are you'll like some of them. There's Dominion or Race for the Galaxy. There's trivia games. In general, many games classified as "party games" are good, but not all: Mafia, for example, would be a terrible choice. There's cooperative games like Pandemic. There's also two-player games (like chess) in which you at least won't have a group teaming up against you, or team games (like spades) in which you'll have (at least) one person on your side.
Before I prod any further, what would your preferred outcome be?
In the most abstract? Some way to demonstrate to people (including myself) that I'm a sapient being that deserves respect, and not a worthless, lazy, broken, scary parasite. More concretely, some mechanistic description of why I've had trouble operating within existing social norms, and why I tend to operate under different base assumptions than others -- preferably a description that might suggest methods of interacting with the human world that allow me to maintain my dignity and self-respect, without having to immediately acknowledge my abject worthlessness and helplessness as a unilateral precondition for requesting assistance. It would be nice if someone could point at a bit of my brain, or a specific pattern of answers on behavioral tests, and say "you follow this descriptive pattern which we've labeled X, whereas most people follow this other descriptive pattern which we've labeled Y. There's a lot of research that shows that X does not interact well with Y", in a way that isn't an obvious attempt to reinforce their own social assumptions against a threatening Other.

In the hospital where I worked, there was a woman who was able to articulate that it was very unlikely that her neighbor could read her mind. But, she reasoned, there were a lot of people in the world, so surely someone could read minds. And she had the bad luck to live next door to that person.

So sometimes people are able to acknowledge that their beliefs are statistically unlikely but still believe them.
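Her reasoning fails quantitatively rather than qualitatively: even granting a nonzero chance that *some* mind-reader exists, the probability that it is one specific neighbor shrinks with the size of the candidate pool. A toy sketch of the base-rate problem (all numbers hypothetical):

```python
# Toy illustration of the patient's base-rate error. All numbers
# are hypothetical.

# Generously grant a 1% chance that at least one mind-reader exists.
p_some_mind_reader = 0.01

# With no evidence singling anyone out, that probability is spread
# across the whole candidate pool (~8 billion people).
world_population = 8e9
p_neighbor_reads_minds = p_some_mind_reader / world_population

print(p_neighbor_reads_minds)  # on the order of 1e-12
```

The step her reasoning skips is the division in the last line: conceding "surely someone could read minds" says almost nothing about any particular neighbor.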

Feedback: I thought that this post was interesting and at times quite amusing. However, I didn't upvote (or downvote) because I felt that the concerns you discussed under the open questions section were serious enough that this post could basically be summed up as "here are some theories which feel like they might be on the right track, but basically we're still clueless".

I want to see more posts that explain the current state of knowledge of interesting rationality-related fields, and that explicitly state what questions are still troubling. Thus I upvoted the post.

[I am unsure whether it makes sense to write a comment on this post after such a long time, but I think my experience could be helpful regarding the open questions. I am not trained in this subject, so my use of terms is probably off and confounded with personal interpretations.]

My personal experience with arriving at and holding abstruse beliefs can actually be well described by the ideas described in this post, if complemented by something like the Multiagent models of Minds:

For describing my experience, I will regard the mind as consisting loosely of su... (read more)

Related Research:

Harvard did a study on LLI (low latent inhibition: you don't filter out as much stimulus, which can mean having a lot more ideas to sort through) and discovered that people with high LLI and high IQs tend to be more creative, whereas people with low IQs and high LLI are more likely to be schizophrenic. This may be because people with higher IQs are able to evaluate a larger number of ideas, whereas those with lower IQs may find themselves overwhelmed trying to do so.

This suggests that schizophrenic people could benefit from assistanc... (read more)

A Related Experiment:

I once read about an experimental mental hospital for people with schizophrenic symptoms in California called Soteria House.

At Soteria House, the philosophy was to let the mental patients do whatever they wanted, with the exception of hurting people. They got to run around naked if they wanted to, and there was a room for them to break things in (stocked with breakable objects).

The staff was trained on a method to help the schizophrenics sort out reality from delusion. They were assisted by being told which things others couldn't see and we... (read more)

Do you have any evidence of brain damage in schizophrenia that isn't explainable by drug use (especially antipsychotics) and is fairly common among schizophrenics?

Regarding arguing oneself out of delusion, cognitive therapy for schizophrenia has a decent track record. More info on request, after my wife gets home (she's a psychologist).

Scott Alexander:
See for example on structural brain damage. For functional brain damage, read the above-linked paper by McKay where he starts talking about change in patterns of prediction error signal activation in the right prefrontal cortex.
Here's a better source (PDF), link-chained from yours. On brain changes due to drug use: So the answer to my question appears to be that drugs may or may not be doing some brain damage, but not nearly as much as the whole change seen in schizophrenia.

(Well-written post. There are more interesting subjects in the general 'schizophrenic reasoning' space though. If anyone ends up writing more on the subject I'd like if they sent me a draft; I know quite a bit, both theoretically and experientially.)

but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried).

At the very least (pretending that there are no ethical concerns), it seems that you ought to be able to exaggerate a patient's delusions. "We ran some tests, and it turns out that you're Jesus, John Lennon, and George Washington!".

To this same question, I can't help but notice that the brain damage being discussed is right-side brain aka "revolutionary" brain damage. So if it turns out that it isn't possible ... (read more)

The patient who believes he is Jesus and John Lennon will pretty much agree he is any famous figure you mention to him, but he never seems to make a big deal of it, whereas those two are the ones he's always going on about.

Are random people allowed to visit harmless psych patients with those patients' consent? This sounds fascinating.

Hehe. I'm a psych patient and I'm allowed to visit LessWrong.
Do you have fascinating delusions you would like to let us try to do Bayes to?
A better phrasing might be to contextualize it from someone else's viewpoint. The person having the "delusions" might not perceive them as such, and might not find them particularly fascinating at all.
I think it was a fair response in context. I did write it tongue-in-cheek.
Can you think of a way to do this that would not feel like a freak show? Psych hospitals are full of staff who actually need to talk to the patients, plus students and interns and the patients' friends and family who visit. Almost all the patients get tired of being asked how they're doing, since they have to explain how they're doing so many times a day to a lot of near-strangers. Introducing tourists seems like a bad plan.
I think the idea was to find psych patients willing to speak with one or more Bayesians about whatever interesting beliefs that got them there in the first place, and let them furiously jot down notes and do all kinds of arcane math in the process.


"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it do... (read more)

If you don't have the right understanding of how the brain works, I'm not sure this thesis adequately explains things. By comparison, the expected observation from "Your car has engine damage" is a car that doesn't drive at all, not one that turns right but not left.

Once I understood the theory, my first question was: has this been explained to any delusional patient with a good grasp of probability theory? I know this sort of thing generally doesn't work, but the n=1 experiment you mention is intriguing. I suppose what is more often interesting to me is what sorts of things people come up with to dismiss conflicting evidence, since it is in a strange place between completely random and clever lie. If you have a dragon in your garage about something, you tend to give the most plausible excuses because you know, deep dow... (read more)

Is it possible that what specific delusions a patient develops after their brain damage correlates with their experiences before the brain damage? Maybe paranoid schizophrenics in the US tend to think the CIA is after them, but those in Soviet Russia used to think the KGB was? How would these delusions have manifested in the past, before any such organizations existed? Perhaps some of them convinced themselves that God's wrath was being brought down upon them, or that Satan was haunting them.

Also, does Capgras delusion apply to everyone the patient has an... (read more)

If there's been any research into this I haven't been able to find it; but a few people outside academia seem to have associated the "Paul is dead" meme from the Beatles era with the Capgras delusion. There's also a number of conspiracy theories that seem to fit the general pattern, including David Icke's reptilian humanoid theory.
Interesting; thanks. Also, do you know if Capgras delusion only wipes away all previous emotions you associated with faces, or if it also makes it impossible to form new emotions related to other faces? What if, for some reason, the spouse decided to go along with the charade that they were a different person, and managed to convince the Capgras patient to stay married to them anyway? Would the patient eventually form an emotional connection the way normal people do when they meet, date, and marry someone? Or if a Capgras patient had a child after the brain damage, would they associate their child's face with emotions while still considering their spouse and parents to be imposters?
There is alleged to have been a Capgras patient who wasn't very happy with her marriage beforehand, but decided she liked the "imposter" much better. No cite, I think it was in a TED talk.
It seems almost certain. At the least, one should know about the CIA's existence to have that sort of delusion. This is a great question to test the emotional-reaction hypothesis. I would add: what about their enemies? A negative emotional response is still an emotional response (well, maybe; I wouldn't be so surprised if negative and positive emotions were each associated with a different part of the brain).

I suspect that, especially in dreams, and to a lesser degree in déjà vu, the outputs of place cells can be combined in novel ways that might normally be rejected when fully conscious. I am not aware that anything similar has been discovered regarding familiar people, but if so, that would work in a surprisingly similar way ("Don't I know you from somewhere?"), and would accommodate the typical example. What the unconscious mind composes as a shorthand template for my mother is later detailed, but still contains the "my mother"... (read more)

"You have brain damage" is also a theory with perfect explanatory adequacy.... Why not?

This led me to think of two alternate hypotheses:

One is that the same problem underlying the second factor ("abnormal belief evaluation") is at fault, that self-evaluation for abnormal beliefs involves the same sort of self-modelling needed for a theory like "I have brain damage" to seem explanatory (or even coherent). The other is that there are separate systems for self-evaluation and belief-probability-evaluation that are both damaged... (read more)

"Brain damage makes my brain stop working properly. If I have brain damage, I wouldn't be able to reason like this, therefore I cannot have brain damage. The CIA just told my doctor to say that I do."
There's a good check for this. I have, every 2 years or so since 2002, taken a series of IQ tests and averaged the results together. (Side note: in 1997, an in-person IQ test rated me at 155. This isn't calibrated to the other tests, of course, but it's an interesting anecdote.) In 2002, my IQ according to this process was 148. In 2004, it was 150. In 2006, it was 145. In 2009, it was 135. In 2011, it was 120. Today, it was 115. I keep asking myself "now what", but I'm not even sure I'm qualified to answer that question anymore. (This will sound hilariously clichéd, but... I don't FEEL any dumber. It's just become more and more frustrating to think about deep problems. I feel like my domain expertise is just as good as it ever was - but how the hell could I TELL, if the very instrument which measures my expertise is the instrument which is failing?)
All the same test? Those are troubling results indeed, since the 2pt change from 2002-2004 looks like a practice effect, but a 35pt fall is surely not a practice effect. Presumably you'd measure your domain expertise by your domain results. That's how most experts get by: lots of domain knowledge, not so much need for fluid intelligence.
The problem is that, in many situations, I was so poor at playing political games that I wound up accepting other people's political measurements of my domain expertise, instead of accurate, objective measurements. I've eventually developed a sort of neurotic "learned helplessness" that makes it nigh-impossible to accept accurate, objective measurements of any of my capacities, if they would have a positive connotation.
Well, there you just said that you don't have the patience for those type of problems, which (unless your area of expertise is identifying patterns of lines) doesn't necessarily mean that you are not extremely well-suited to the work that you do. If you are worried about specific cognitive deficits, test for those--an IQ test is not going to help identify that.

There must be some fundamental difference between how one draws inferences from mental states versus everything else.

Talking about "drawing inferences from mental states" strikes me as a case of the homunculus fallacy, i.e., thinking that there's some kind of homunculus sitting inside our brains looking at the mental states and drawing inferences. Whereas in reality mental states are inferences.

Scott Alexander:
Really? I don't see that at all. The same mental state can be both an inference and a premise for the next inference. For example, "I feel really tired lately -> Maybe I'm sick" seems pretty straightforward, as does "I am a guy and feel really attracted to other guys -> maybe I'm gay".
You're thinking of the inference as "I don't feel affection when I see her face -> She's not my wife". Whereas, another way to think about it is "Her face looks like [insert description of wife's face here] -> She's not my wife".
This objection points largely in the right direction, but I don't think it's fair to accuse the view of adopting the homunculus fallacy. After all, the very suggestion is that our brains have circuitry that (in effect) performs Bayesian updating, and that neurological damage and psychiatric conditions can cause this circuitry to misbehave. This is a way the brain could have worked. If the view adopted the homunculus fallacy, then the Bayesian updating machinery couldn't, itself, be broken. It could only receive bad input.

However, as I delineate in my comment, we have every reason to believe the brain doesn't have anything like a Bayesian updating module exercising control over all the other brain modules. Instead, the empirical evidence suggests a much simpler structure in which different brain regions vie to control our actions without any arbitration by some master Bayesian updating module. Otherwise, one couldn't explain our inclination to answer wrongly on tests that pit one part of the brain against another, e.g., our mistakes in identifying the color of text spelling the name of another color (the Stroop effect).

Also, to be pedantic, the mental states aren't inferences. The mental states merely determine behavior patterns that we can (sometimes) usefully describe as making certain inferences.
You can have a module in a certain state and another module which draws an inference from that. No homunculus needed.
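To make the disagreement concrete, the two-factor account being debated here can be sketched as a toy Bayesian update over "imposter vs. spouse," where damaged belief evaluation shows up as a prior that no longer penalizes the bizarre hypothesis. This is a deliberately crude illustration under made-up numbers, not a claim about the actual neural implementation:

```python
def p_imposter(prior, p_flat_affect_if_imposter, p_flat_affect_if_spouse):
    """Posterior P(imposter) after seeing a familiar face that evokes
    no emotional response, by Bayes' rule (toy model, made-up numbers)."""
    num = p_flat_affect_if_imposter * prior
    den = num + p_flat_affect_if_spouse * (1 - prior)
    return num / den

# Intact belief evaluation: the imposter hypothesis starts absurdly
# improbable, so even striking evidence leaves it negligible.
healthy = p_imposter(prior=1e-6,
                     p_flat_affect_if_imposter=0.9,
                     p_flat_affect_if_spouse=0.1)

# Damaged belief evaluation (hypothetical): the prior is no longer
# penalized for bizarreness, so the same evidence now dominates.
damaged = p_imposter(prior=0.5,
                     p_flat_affect_if_imposter=0.9,
                     p_flat_affect_if_spouse=0.1)

print(healthy)  # remains tiny (~9e-6)
print(damaged)  # 0.9
```

On this sketch, the dispute above is about whether any single module actually computes something like `p_imposter`, or whether the same behavior emerges from competing modules with no central updater.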

"…modern man no longer communicates with the madman […] There is no common language: or rather, it no longer exists; the constitution of madness as mental illness, at the end of the eighteenth century, bears witness to a rupture in a dialogue, gives the separation as already enacted, and expels from the memory all those imperfect words, of no fixed syntax, spoken falteringly, in which the exchange between madness and reason was carried out. The language of psychiatry, which is a monologue by reason about madness, could only have come into existence in such a silence."

—Foucault, Preface to the 1961 edition