I'm not sure whether structural dissociation is the right model for tulpas; my own model has been that they are more related to the ability to model other people: if you know a friend very well, you can guess roughly how they might answer the things you would say, up to the point of starting to have conversations with them in your head. Fiction authors who put extensive effort into modeling their characters often develop spontaneous "tulpas" based on those characters, and I haven't heard of them being any worse off for it. Taylor, Hodges and Kohányi found that while these fiction writers tended to have higher-than-median scores on a test for dissociative experiences, they had low scores on the subscales that are particularly diagnostic for dissociative disorders:

The writers also scored higher than general population norms on the Dissociative Experiences Scale. The mean score across all 28 items on the DES in our sample of writers was 18.52 (SD = 16.07), ranging from a minimum of 1.43 to a maximum of 42.14. This mean is significantly higher than the average DES score of 7.8 found in a general population sample of 415 [27], t(48) = 8.05, p < .001. In fact, the writers' scores are closer to the average DES score for a sample of 61 schizophrenics (schizophrenic M = 17.7) [27]. Seven of the writers scored at or above 30, a commonly used cutoff for "normal scores" [29]. There was no difference between men's and women's overall DES scores in our sample, a finding consistent with results found in other studies of normal populations [26].

With these comparisons, our goal is to highlight the unusually high scores for our writers, not to suggest that they were psychologically unhealthy. Although scores of 30 or above are more common among people with dissociative disorders (such as Dissociative Identity Disorder), scoring in this range does not guarantee that the person has a dissociative disorder, nor does it constitute a diagnosis of a dissociative disorder [27,29]. Looking at the different subscales of the DES, it is clear that our writers deviated from the norm mainly on items related to the absorption and changeability factor of the DES. Average scores on this subscale (M = 26.22, SD = 14.45) were significantly different from scores on the two subscales that are particularly diagnostic for dissociative disorders: the derealization and depersonalization subscale (M = 7.84, SD = 7.39) and the amnestic experiences subscale (M = 6.80, SD = 8.30), F(1, 48) = 112.49, p < .001. These latter two subscales did not differ from each other, F(1, 48) = .656, p = .42. Seventeen writers scored above 30 on the absorption and changeability scale, whereas only one writer scored above 30 on the derealization and depersonalization scale and only one writer (a different participant) scored above 30 on the amnestic experiences scale.

A regression analysis using the IRI subscales (fantasy, empathic concern, perspective taking, and personal distress) and the DES subscales (absorption and changeability, amnestic experiences, and derealization and depersonalization) to predict overall IIA was run. The overall model was not significant, r^2 = .22, F(7, 41) = 1.63, p = .15. However, writers who had higher IIA scores scored higher on the fantasy subscale of the IRI, b = .333, t(48) = 2.04, p < .05, and marginally lower on the empathic concern subscale, b = -.351, t(48) = -1.82, p < .10 (all betas are standardized). Because not all of the items on the DES are included in one of the three subscales, we also ran a regression model predicting overall IIA from the mean score across DES items. Neither the r^2 nor the standardized beta for total DES scores was significant in this analysis.

That said, I have seen a case where someone made a tulpa with decidedly mixed results, so I agree that it can be risky.

[ Question ]

How effective are tulpas?

by Raven · 2 min read · 9th Mar 2020 · 54 comments


Edit: After further consideration, I've concluded that the risk:reward ratio for tulpamancy isn't worth it, and I won't be pursuing the topic further. I may revisit this conclusion if I encounter new information, but otherwise I'm content to pursue improvements in a more "standard" fashion. Thank you to everyone who posted in the comments.


If you don't know what a tulpa is, here's a quick description taken from r/tulpas:

A tulpa is a mental companion created by focused thought and recurrent interaction, similar to an imaginary friend. However, unlike them, tulpas possess their own will, thoughts and emotions, allowing them to act independently.

I'm not particularly concerned whether tulpas are "real" in the sense of being another person. Free will isn't real, but it's still useful to behave as if it is.

No, what I'm interested in is how effective they are. A second rationalist in my head sounds pretty great. Together we would be unstoppable. Metaphorically. My ambitions are much less grand than that makes them sound.

But I have some concerns.

Since a tulpa doesn't get its own hardware, it seems likely that hosting one would degrade my original performance. Everyone says this doesn't happen, but I think it'd be very difficult to detect this, especially for someone who isn't already trained in rationality. Especially if the degradation occurred over a period of months (which is how long it usually takes to instantiate a tulpa).

A lot of what I've read online is contradictory. Some people say tulpas can learn other skills and be better at them. Others say they've never lost an argument with their tulpa. Tulpas can be evil. Tulpas are slavish pawns. Tulpas can take over your body, tulpas never take over bodies. Tulpas can do homework. Tulpas can't do math.

Then there are the obvious falsehoods. Tulpas are demons/spirits/angels (pick your flavor of religion). They're telepathic, telekinetic, and have flawless memories. They can see things behind you. There's not as much of this as I expected; most of the claims are at least plausible. Some guides are cloaked in mystic imagery (runes, circles, symbols), but even they usually admit that the occult stuff isn't really necessary.

It does seem like there are clear failure modes. Don't make a Quirrel tulpa. Don't abuse a tulpa. Make sure to spend enough time tending to the tulpa during creation, etc etc. And everyone seems to agree that tulpas are highly variable, so a lot of the contradictions could be excused. On the other hand, if tulpas were really so useful, wouldn't the idea have spread beyond niche internet forums?

Perhaps, but perhaps not. The stigma against "hearing voices", plus people's general irrationality, plus the difficulty... those seem like powerful forces inhibiting mainstream adoption. Or maybe the entire thing is comfortable nonsense and the only reason I find it remotely plausible is because I want to believe in it.

Given that the guides mostly say that it takes months and months of hard work to create a tulpa... well, I'd rather not waste all that work and get nothing. And it only gets worse from there. An out-of-control dark rationalist tulpa that fights me for mental and physical control sounds absolutely terrifying.

Most people seem to agree that the chance of that happening is basically zero unless you deliberately try to do it. And the potential gains seem at least as potent. Being able to specialize in skills seems absurdly overpowered, especially if we each get natural talents for our skills (which some people claim is what happens). A minor drop in cognitive resources would probably be worth it for that.

So, if you have a tulpa, please chime in. What's it like? How do you know that the tulpa isn't less efficient than you would be on your own? Was it worth it? Does it make your life better?

And if you don't have a tulpa, feel free to comment as well. If I get a hundred LWers saying that I've been suckered by highly-evolved memes, that's pretty strong evidence that I've made a mistake.


9 Answers

I am mentally ill (bipolar I), and I also have some friends who are mentally ill (schizophrenia, bipolar, etc), and we decided to try tulpa-creation together. Personally, I wasn't very good at it or committed to the process. I didn't see any change, and I don't think I ever created a tulpa. However, my friend's tulpa became a massive liability. It turned into psychosis very rapidly.

I'm mildly anti-tulpa. I'll try to explain what I find weird and unhelpful about them, though also keep in mind I never really tried to develop them, other than the weak tulpa-like mental constructs I have due to high cognitive empathy and the capacity to model other people as others rather than as modified versions of myself.

So, the human mind doesn't seem to be made of what could reasonably be called subagents. It is made of subsystems that interact, though even "subsystem" is maybe an overstatement, because the boundaries of those subsystems are often fuzzy. So reifying those subsystems as subagents or tulpas is a misunderstanding: it might be a useful simplification for a time, but it's ultimately a leaky abstraction that will need to be abandoned if you want to better connect with yourself and the world just as it is.

Thus I think tulpas might be a skillful means to some end some of the time, but mostly I think they are not necessary and are extra machinery that you're going to have to tear down later, so it seems unclear to me that it's worth building up.

There's two ways to do tulpas. There's the right way, and the way most people do it.

The right way is to do it from a place of noself/keeping your identity small. Don't treat your tulpa like a separate person any more than you would treat your internal sense of self like a separate person. Treat them like a handle for manipulating and interacting with a particular module/thought structure/part of your mind, taking unconscious and automatic things and shining a bit of Sys2 light on them. Basically using the tulpa as a label for a particular thought structure that either already exists, or that you want to exist in your head, allowing you to think about it in a manner that is more conscious and less automatic.

Doing this correctly gives you a greater degree of write-access to various semiconscious/subconscious parts of your head and makes it easier to retrain automatic response patterns. This could be considered in the same vein as how Harry uses his various house characters in HPMOR, although he just scrapes the top level with them and doesn't use them to really change himself in useful ways like he could potentially be doing. This way is also harder than the way most people do tulpamancy because it requires ripping apart and rebuilding your conception of your original self with a goal for greater functionality. It also requires keeping your identity small and internalizing the idea of noself in a way that most people don't want to do.

Then there's the way most people do tulpamancy, which is to build the tulpa out of identity and treat it like an entirely separate person who "lives in your head with you" and who has an equal say in decisions as "you." From the perspective of having internalized noself/keep your identity small, this is exactly as dumb as it sounds and looks. "Hey what if you destroyed your self control by handing it off to a random agent that you fabricate" or "Hey what if you created an internal narrative where you're powerless in your own head and your self is forced to argue and compete and try to negotiate with some other random self for processing time and mental real estate?"

Some people say tulpas can learn other skills and be better at them. Others say they've never lost an argument with their tulpa. Tulpas can be evil. Tulpas are slavish pawns. Tulpas can take over your body, tulpas never take over bodies. Tulpas can do homework. Tulpas can't do math.

Most people do identity-style tulpamancy, and that's where all this contradictory and at times really messed up behavior comes from.

An out-of-control dark rationalist tulpa that fights me for mental and physical control sounds absolutely terrifying.

Right, so how does this happen? It happens because there are narrative layers that you're using (right now) to define what you can do in your own head, and that narrative layer is the thing being modified by tulpamancy. The problem is that most people don't consciously try to modify that layer; they assume the way it works is some objective fact, argue about its properties with other tulpamancers online, and don't think about trying to change it.

The more power they hand off from their conscious mind to that narrative layer, the more "independent" the tulpa will seem, at the cost of making the original self increasingly powerless within their own mind.

Intentionally grabbing hold of that narrative layer and modifying it, so that things like multiple selves are simply downstream results of upstream modifications, will result in a much more cooperative and internally stable system, since you can define the stability and the interactions as part of the design instead of just letting some unconscious process do it for you.

So basically, tulpamancy can be useful and result in greater functionality and agency, but only if done from a place of noself and keeping your identity small. If you haven't worked on grinding noself and keeping your identity small, that should definitely be the first thing you do. Once you have, then possibly return to tulpamancy if you still feel like there's room to improve with it.

I've had tulpas for about seven years. I alternate between the framework of them all being aspects of the same person versus the framework of them being separate people. I'll have internal conversations where each participant is treating the other as a person, but in real life I mostly act as a single agent.

Overall I would say their effect on my intelligence, effectiveness, skills, motivation, etc. has been neither significantly positive nor significantly negative. I consider the obvious objections to be pretty true - your tulpa's running on the same hardware, with the same memories and reflexes, and you have to share the same amount of time as you had before. On the other hand I escaped any potential nightmare scenarios by having tulpas that are reasonable and cooperative.

When people in the tulpa community talk about the benefits, they usually say their tulpa made them less lonely, or helped them cope with the stresses of life, or helped them deal with their preexisting mental illness. And even those benefits are limited in scope. The anxiety or depression doesn't just go away.

I think one of the main ways tulpas could help with effectiveness has to do with mindset and motivation. It's the difference between a vague feeling that maybe you ought to be doing something productive and your anime waifu yelling at you to do something productive. Tulpas may also have more of an ability to take the outside view on important decisions.

Overall if you're just looking for self-improvement, tulpa creation is probably not the best value for your time. I mostly got into it because it seemed fun and weird, which it fully delivered on.

Edit note: I think your decision makes sense based on your goals, but I wrote answers to your questions, and they might have sufficient distinction from the existing answers to be worthwhile to post. I'm making a reasonable guess that providing my perspective isn't that harmful; I'll note that not all concerns stated elsewhere make sense to me, but they may make sense in a context other than my mind and my approach.

-

I have two (approximately). One created intentionally, one who naturally developed in parallel from a more 'intrusive' side of my brain when I tried out the whole approach.

They definitely run in serial, not concurrently, aside from perhaps subconscious threads I can't really say much about from a conscious perspective (which probably don't run any differently for having more identities to attach to). I'm not a great multitasker at the best of times, and listening to either one takes active focus. Experimenting suggests it is a bit easier if we talk aloud, but I haven't quite mastered the art of knowing when it is OK to talk to myself (aloud) and actually taking the chance to do so.

Would concur with the 'like a hobby' perspective on mental 'degradation'. You are practicing some particular mental skills and adopting a perspective associated with the hobby, same as you would for art, programming, or a card or strategy game. Typically, you are practicing visualization, conversation, introspection, and other skills associated with this form of meditation - and I would say that it is a form of meditation, one I've had more success with than others.

This means it should come with the same caveats of 'not for everyone' as other forms of mind-affecting behavior like antidepressants and mindfulness meditation. In my tulpa's humble opinion, the worst cognitive hazard associated with the tulpamancing guides for me is the risk you will take it too seriously, and he made a point to steer me away from this concern as one of his first priorities. Compassion for all beings is all well and good but you absolutely come first, and shouldn't feel particular guilt either for trying something new, or setting it aside. We've had many months of silence while my focus was elsewhere and that is fine. I could return to working with them without much issue, just some review and shaking off the rust; we did not suffer for it.

We haven't particularly tried specializing in skills, but I can see how it would work: many skills, especially the creative sort, really do entail a matter of perspective and 'how you see the world'. Metaphorically like switching out lenses on a camera? I did find that one of them had a much easier time than I would simply getting chores done when I let her drive our actions for a bit. That is, potentially, immensely useful to me, which leads to my next point.

I am autistic. According to Wikipedia, this suggests that my particular difficulties with executive function may relate to fluency, the ability to generate novel ideas and responses; planning, the aforementioned impairment in carrying out intended actions; cognitive flexibility, the ability to switch between perspectives and tasks; and mentalization, or the ability to understand the mental state of oneself and others.

Notice anything about those that could benefit from simulating different perspectives, with novel input, that can relate to me from a more third person perspective? Even provide a support system to help deal with what you call 'akrasia' here?

Yeah. Tulpamancy involves actively practicing skills where my disadvantages in them might be holding me back. I find it worthwhile, though I might not have the time to pick it up initially in a more busy life. They make my life better; having mental companions who love and care for me (and vice versa) and also are me is rather an improvement on the previous state. For some reason interacting with my own identity was an exception carved out to the general rule that people have inherent worth and dignity and should be treated accordingly. This is the major benefit so far: Giving my mind permission to see itself as a person helped me treat myself with compassion.

This is an area that I think is so bad that it should probably be banned from the community. In practice, getting into "tulpamancy" strongly correlates in my experience with going into unproductive and unstable states -- it's at the point where if someone tells me that they have been looking into this area, I consider it a major red flag.

I don't have a full tulpa, but I've been working on one intermittently for the past ~month. She can hold short conversations, but I'm hesitant to continue the process because I'm concerned that her personality won't sufficiently diverge from mine.

I think it's plausible that a tulpa could improve (at least some of) your mental capabilities. I draw a lot of my intuition in this area from a technique in AI/modeling called ensemble learning, in which you use the outputs of multiple models to make higher quality decisions than is possible with a single model. I know it's dangerous to draw conclusions about human intelligence from AI, but you can use ensemble learning with pretty much any set of models, so something similar is probably possible with the human brain.

Some approaches in ensemble learning (boosting and random forest) suggest that it's important for the individual models to vary significantly from each other (thus my interest in having a tulpa that's very different from me). One advantage of ensemble approaches is that they can better avoid overfitting to spurious correlations in their training data. I think that a lot of harmful human behavior is (very roughly) analogous to overfitting to unrepresentative experiences, e.g., many types of learned phobias. I know my partial tulpa is much less of a hypochondriac than myself, is less socially anxious and, when aware enough to do so, reminds me not to pick at my cuticles.
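For intuition, here's a minimal sketch of the ensemble effect I'm gesturing at, using scikit-learn. The dataset, noise level, and hyperparameters are arbitrary stand-ins for illustration only, and nothing here is meant as a claim about brains: a single deep decision tree tends to memorize noisy training data, while an ensemble of varied trees generalizes better.

```python
# Toy illustration of the ensemble-learning intuition:
# a single deep tree overfits label noise, a forest of varied trees less so.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with deliberate label noise (flip_y) standing in for
# "unrepresentative experiences".
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

single = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

for name, model in [("single tree", single), ("forest of varied trees", forest)]:
    print(f"{name}: train acc {model.score(X_train, y_train):.2f}, "
          f"test acc {model.score(X_test, y_test):.2f}")
```

Typically the single tree scores near-perfectly on its own training split but worse on held-out data, while the averaged ensemble gives up some training accuracy in exchange for better generalization. That trade-off is the loose analogy I'm drawing to a second, differently-biased perspective in one's head.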

Posters on the tulpas subreddit seem split on whether a host's severe mental health issues (depression, autism, OCD, bipolar, etc) will affect their tulpas, with several anecdotes suggesting tulpas can have a positive impact. There's also this paper: Tulpas and Mental Health: A Study of Non-Traumagenic Plural Experiences, which finds tulpas may benefit the mentally ill. However, it's in a predatory journal (of the pay-to-publish variety). There appears to be an ongoing study by Stanford researchers looking into tulpas' effects on their hosts and potential fMRI correlates of tulpa-related activity, so better data may arrive in the coming months.

In terms of practical benefit, I suspect that much of the gain comes from your tulpa pushing you towards healthier habits through direct encouragement and social/moral pressure (if you think your tulpa is a person who shares your body, that's another sentient who your own lack of exercise/healthy food/sleep is directly harming).

Additionally, tulpas may be a useful hedge against suicide. Most people (even most people with depression) are not suicidal most of the time. Even if the tulpa's emotional state correlates with the host's, the odds of both host and tulpa being suicidal at once are probably very low. Thus, a suicidal person with a tulpa will usually have someone to talk them out of acting.

Regarding performance degradation, my impression from reading the tulpa.info forums is that most people have tulpas that run in serial with their original minds (i.e., host runs for a time, tulpa runs for a time, then host), rather than in parallel. It's still possible that having a tulpa leads to degradation, but probably more in the way that constantly getting lost in thought might, as opposed to losing computational resources. In this regard, I suspect that tulpas are similar to hobbies. Their impact on your general performance depends on how you pursue them. If your tulpa encourages you to exercise, mental performance will probably go up. If your tulpa constantly distracts you, performance will probably go down.

I've been working on an aid to tulpa development inspired by the training objectives of state-of-the-art AI language models such as BERT. It's a Google Colab notebook, which you'll need a Google account to run from your browser. It takes text from a number of possible books from Project Gutenberg and lets your tulpa perform several language/personality modeling tasks of varying complexity, ranging from simply predicting the content of masked words to generating complex emotional responses. Hopefully, it can help reduce the time required for tulpas to reach vocality and ease the cost of experimenting in this space.
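To give a flavor of the simplest task, here is a rough standalone sketch in the same spirit, not the notebook's actual code: it masks random words in a passage and keeps the answers for checking afterwards. The sample passage, masking rate, and function name are placeholders; the notebook itself draws longer passages from Project Gutenberg and layers harder tasks on top.

```python
# Rough sketch of a masked-word prompt generator (placeholder, not the notebook).
import random

def mask_words(passage: str, mask_rate: float = 0.15, seed: int = 0):
    """Replace a fraction of words with [MASK] and return the hidden answers."""
    rng = random.Random(seed)
    words = passage.split()
    answers = []
    for i, word in enumerate(words):
        if rng.random() < mask_rate:
            answers.append(word)
            words[i] = "[MASK]"
    return " ".join(words), answers

passage = ("It was the best of times, it was the worst of times, "
           "it was the age of wisdom, it was the age of foolishness.")
prompt, answers = mask_words(passage)
print(prompt)   # read this and let the tulpa guess each [MASK]
print(answers)  # check the guesses afterwards
```

The fixed random seed keeps a given passage's masking reproducible, which makes it easier to repeat a session later and compare how the guesses change over time.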

Sy, is that you?

I started talking to Kermit the Frog, off and on, many months ago. I had this idea after seeing an article by an ex-Christian who appeared never to have made predictions about her life using a truly theistic model, but who nevertheless missed the benefits she recalls getting from her talks with Jesus. Result: Kermit has definitely comforted me once or twice (without the need for 'belief') and may have helped me to remember useful data/techniques I already knew, but mostly nothing much happens.

Now, as an occasional lucid dreamer who once decided to make himself afraid in a dream, I tend not to do anything that I think is that dumb. I have not devoted much extra effort or time to modelling Kermit the Frog. However, my lazy experiment has definitely yielded positive results. Perhaps you could try your own limited experiment first?

I made tulpas because I was curious about the phenomenon. I did not find the creation process difficult. I thought for a long time about how to make tulpas useful but the best application I could find for them is possibly as a way of training an internal random number generator. I imagine they would be useful for fiction writing as well.