We talk a lot about whether animals are conscious and to what extent, but I have seen little discussion about whether tulpas should be considered conscious and whether they should be considered moral patients.

Is there any serious philosophy done on the topic?


Multiple identities in one brain/body can arguably be considered separate moral patients, whether they are naturally occurring through a brain quirk, a childhood trauma, iatrogenically induced by a hapless therapist or a malevolent cult leader, or intentionally created by the "original". 

Tulpas are not special that way.

There is a spectrum of identity consciousness and self-awareness, ranging from a vague fragment to a fully separate and conscious mind. Presumably one should give more moral weight to the identities that are more developed, but the issue is rather complicated. 

My belief is that yes, tulpas are people of their own (and therefore moral patients). My reasoning is as follows.

If I am a person and have a tulpa and they are not a person of their own, then there must either (a) exist some statement which is a requirement for personhood and which is true about me but not true about the tulpa, or (b) the tulpa and I must be the same person.

In the case of (a), tulpas have analogues to emotions, desires, beliefs, personality, sense of identity, and they behave intelligently. They seem to have everything that I care about in a person. Your mileage may vary, but I've thought about this subject a lot and have not been able to find anything that tulpas are missing which seems like it might be an actual requirement for personhood. Note that a useful thought experiment when investigating possible requirements for personhood that tulpas don't meet is to imagine a non-tulpa with an analogous disability, and see if you would still consider the non-tulpa with that disability to be a person.

Now, if we grant that the tulpa is a person, we must still show that (b) is wrong, and that they are not the same person as their headmate. My argument here is also very simple: I simply observe that tulpas have different emotions, desires, beliefs, personality, and sense of identity than their headmate. Since these are basically all the things I actually care about in a person, it doesn't make sense to say that someone who differs in all those ways is the same person. In addition, I don't think that sharing a brain is a good reason to say that they are the same person, for a similar reason to why I wouldn't consider myself to be the same person as an AI that was simulating me inside its own processors.

Obviously, as with all arguments about consciousness and morality, these arguments are not airtight, but I think they show that the personhood of tulpas should not be easily dismissed.

Edit: I've provided my personal definition of the word "tulpa" in my second reply to Slider below. I do not have a precise definition of the word "person", but I challenge readers to try to identify what difference between tulpas and non-tulpas they think would disqualify a tulpa from being a person.

I don't know the terminology that well, but it seems that this analysis is bundling together a lot of stuff that might come apart in this context.

People that do not have (additional) tulpas have one information-processing system that houses one personality. Call the discrete information-processing system a "collective", and call a "personality" the thing that has psychological traits, states, and beliefs. The usual configuration, a collective of one personality, is apparently called a singlet.

One could argue that humans get their social standing based on their collectiv...

1 · Nox ML · 1y
I don't think I'm bundling anything, but I can see how it would seem that way. My post is only about whether tulpas are people / moral patients. I think that the question of personhood is independent of the question of how to aggregate utility or how organize society, so I think that arguments about the latter have no bearing on the former. I don't have an answer for how to properly aggregate utility, or how to properly count votes in an ideal world. However, I would agree that in the current world, votes and other legal things should be done based on physical bodies, because there is no way to check for tulpas at this time.
3 · Slider · 1y
I had zero idea what a tulpa is before reading this, and did an independent, unguided light search to get even some idea. I do not think this should have been unexpected. A definition, or a situation rather than raw concepts, would have been really nice. I had a serious contender that this is a sci-fi fiction question, such as how ethics apply to Lain of Serial Experiments Lain. I was wondering whether Vax'ildan is a tulpa (that is at least factual). There is also a meme that "you are your masks"; does that deal with tulpas?
4 · ChristianKl · 1y
If I wanted to talk about trees, I could give you a definition of a tree or a situation that involves trees, but neither of those would really make you understand on a deep level what trees are about. Fictional examples are different in the sense that you can gather all the knowledge about the fictional entity by reading the fictional work. With fictional examples, you don't have to worry about the difference between the ground reality and the description of it.
3 · Nox ML · 1y
That's fair. I've been trying to keep my statements brief and to the point, and did not consider the audience of people who don't know what tulpas are. Thank you for telling me this.

The word "tulpa" is not precisely defined, and there is not necessarily complete agreement about it. However, I have a relatively simple definition which is more precise and more liberal than most definitions (that is, my definition includes everything usually called a tulpa and more, and is not too mysterious), so I'll just use my definition. It's easiest to first explain my own experience with creating tulpas, then relate my definition to that.

Basically, to create a tulpa, I think about a personality, beliefs, desires, knowledge, emotions, an identity, and a situation. I refer to keeping these things in my mind as forming a "mental model" of a person. Then I let my subconscious figure out what someone like this mental model would do in this situation. Then I update the mental model according to the answer, and repeat the process with the new mental model, in a loop. In this way I can have conversations with the tulpa, and put them in almost any situation I can imagine.

So I would define a tulpa this way: a tulpa is the combination of the information in the brain encoding a mental model of a person, plus the human intelligence computing how that mental model evolves in a human-like way.

My definition is more liberal than most, because most people who agree that tulpas are people seem to make a strong distinction between characters and tulpas, but I don't make a strong distinction, and this definition also includes many characters.

And to not really answer your direct questions: I don't know Serial Experiments Lain, and you're the person who's in the best position to figure out if Vax'ildan is a tulpa by my definition. As for "you are your masks", I'm not sure. I know that some people report naturally having multiple personalities and might like the mask metaphor, but I don't pe…
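Mechanically, the loop I describe is simple. Here is a minimal sketch of just that control flow in Python; `simulate_response` is a hypothetical placeholder for the subconscious step, not a claim that a mind can actually be implemented this way:

```python
def simulate_response(model, situation):
    # Hypothetical stand-in for "let the subconscious figure out what
    # someone like this mental model would do in this situation".
    return f"{model['identity']} responds to: {situation}"

def update_model(model, response):
    # The mental model evolves in light of what it just did; copy so
    # the previous model is left untouched.
    updated = dict(model)
    updated["history"] = model.get("history", []) + [response]
    return updated

def converse(model, situations):
    # Run the respond-then-update loop over a series of situations.
    transcript = []
    for situation in situations:
        response = simulate_response(model, situation)
        transcript.append(response)
        model = update_model(model, response)
    return model, transcript
```

The point of the sketch is only the shape of the process: respond, fold the response back into the model, repeat.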
2 · Slider · 1y
Reference to process is excellent, and even better than leaning on a definition. With that take, in the fictional world Lain is a tulpa. Vax'ildan running on Slider (the human behind the pseudonym) is not, but probably running on O'Brien is. I feel like the delineation line for "you are your masks" is that those are created accidentally or as a byproduct, and so disqualify for lack of a decision to opt in. (The other candidate criterion would be that they are not individuated enough.)

It is not clear to me why creating tulpas would be immoral. If it is inherently so, you should head off to cancel Critical Role and G. R. R. Martin. Or is the involvement of a magic circle, where the arena of the tulpa is limited and well-defined, relevant, so that such creation is not improper? Some guesses, which I don't think are good enough to convince me:

Ontological inertia option: 1) Terminating a tulpa is bad for the reasons that homicide is bad. 2) Having a tulpa around increases the need to terminate it. 3) Creating a tulpa means 2, which leads to 1.

Scapegoat option: If you ever talk with your tulpa about anything important, it affects what you do. You might not be able to identify which bits are because of the tulpa. You might wrongly blame your tulpa. Thus it can be an avenue to dodge life-responsibility. (Percy influences how Jaffe plays his other characters; it is doing cognitive work.)

Designer human option: Manifesting a Mary Sue is playing god in a bad way. It is a way to have a big influence on your life which is drastic, hard to predict, and locked in ("Jesus take the wheel", where the driver is not a particularly good person or driver).

It is a bit murky what kind of delineation those who do make a division between characters and tulpas are after. Does everyone who thinks about being Superman vividly enough share the character but have distinct tulpas of him? Or is it that characters are less defined and tulpas are more fleshed out and complete in their characterization?
3 · Nox ML · 1y
That is exactly my stance. I don't think creating tulpas is immoral, but I do think killing them, harming them, and lying to them is immoral for the same reasons it's immoral to do so to any other person. Creating a tulpa is a big responsibility and not one to take lightly.

I have not consumed the works of the people you are talking about, but yes, depending on how exactly they model their characters in their minds, I think it's possible that they are creating, hurting, and then ending lives. There's nothing I can do about it, though.

I don't really know. I'm basing my assertion that I make less of a distinction between characters and tulpas than other people on the fact that I see a lot of people with tulpas who continue to write stories, even though I don't personally see how I could write a story with good characterization without creating tulpas.
2 · Slider · 1y
Hmm, the series Mr Robot and the character Architect. One of the terminological differences in my quick look was that ceasing to have a tulpa was also referred to as "integration". That would seem to be a distinction of similar relevance to that between a firm going bankrupt and merging.

I think there is some ground here where I should not agree to disagree. But currently I am thinking that singlet personalities have less relevance than I thought, and that harm/suffering is bad in a way that is not connected to having an experiencer experience it.
1 · Nox ML · 1y
I think integration and termination are two different things. It's possible for two headmates to merge and produce one person who is a combination of both. This is different from dying, and if both consent, then I suppose I can't complain. But it's also possible to just terminate one without changing the other, and that is death.

I don't understand what you mean by this. I do think that tulpas experience things.
4 · Slider · 1y
I mean that if I lost my personality, or it got destroyed, I would not think of that as morally problematic in itself.
1 · [anonymous] · 1y
I would say that it ceases to be a character and becomes a tulpa when it can spontaneously talk to me. When I can't will it away, when it resists me, when it's self-sustaining. Alters usually feel other in some sense, whereas a sim feels internal and dependent on you. Like if you ceased to exist, the sim would vanish but the tulpa would survive.

So if you think about Superman enough that he starts commenting on your choice of dinner, or if he independently criticizes your choice of phrasing in an online forum, that's definitely plural territory. (Or if they briefly front to tell you not to say something at all, that's a big sign.) But if you briefly imagine him having a convo with another superhero and then dismiss both from your mind and don't think about them for days on end, you're probably not plural. Being fleshed out vs. incomplete is another dimension; I usually think of this as strength or presence.

As for creating a tulpa... well... moral stuff aside, you're adding a process to your mind that you might not be able to get rid of. It won't be your life anymore -- it'll be theirs too. You won't necessarily be able to control how they grow either, since tulpas often develop beyond their initial starting traits.
3 · Nox ML · 1y
I disagree with this. Why should it matter if someone is dependent on someone else to live? If I'm in the hospital and will die if the doctors stop treating me, am I no longer a person because I am no longer self sustaining? If an AI runs a simulation of me, but has to manually trigger every step of the computation and can stop anytime, am I no longer a person?
8 · [anonymous] · 1y
You're confusing heuristics designed to apply to human plurality with absolute rules. Neither of your edge cases are possible in human plurality (alters share computational substrate, and I can't inject breakpoints into them). Heuristics always have weird edge cases; that doesn't mean they aren't useful, just that you have to be careful not to apply them to out of distribution data. The self sustainability heuristic is useful because anything that's self sustainable has enough agency that if you abuse it, it'll go badly. Self sustainability is the point at which a fun experiment stops being harmless and you've got another person living in your head. Self sustainability is the point at which all bets are off and whatever you made is going to grow on its own terms. And in addition, if it's self sustaining, it's probably also got a good chunk of wants, personality depth, etc. I don't think there are any sharp dividing lines here.
1 · Nox ML · 1y
Your heuristic is only useful if it's actually true that being self-sustaining is strongly correlated with being a person. If this is not true, then you are excluding things that are actually people based on a bad heuristic. I think it's very important to get the right heuristics: I've been wrong about what qualified as a person before, and I have blood on my hands because of it. I don't think it's true that being self-sustaining is strongly correlated with being a person, because being self-sustaining has nothing to do with personhood, and because in my own experience I've been able to create mental constructs which I believe were people and which I was able to start and stop at will. Edit: You provided evidence that being self-sustaining implies personhood with high probability, and I agree with that. However, you did not provide evidence of the converse, nor for your assertion that it's not possible to "insert breakpoints" in human plurality. This second part is what I disagree with. I think there are some forms of plurality where it's not possible to insert breakpoints, such as your alters, and some forms where it is possible, such as mine, and I think the latter is not too uncommon, because I did it unknowingly in the past.

Arguably a lot of work has been done on this topic; it's just smeared out across different labels, and the trick is to notice when different labels are being used to point to the same things. Tulpas, characters, identities, stories, memes, narratives: they're all the same. Are they important to being able to ground yourself in your substrate and provide you with a map to navigate the world by? Yes. Do they have moral patiency? Well, now we're getting into dangerous territory, because "moral patiency" is itself a narrative construct. One could argue that in a sense the character is more "real" than the thinking meat, or that the character matters more and is more important than the thinking meat, but of course the character would think that from the inside.

It's actually even worse than that, because "realness" is also a narrative construct, and where you place the pointer for it is going to have all sorts of implications for how you interpret the world and what you consider meaningful. Is it more important to preserve someone's physical body, or their memetic legacy? Would you live forever if it meant you changed utterly and became someone else to do it, or would you rather die but have your memetic core remain embedded in the world for eternity? What's more important, the soul or the stardust? Sure the stardust is what does all the feeling and experiencing, but the soul is the part that actually gets to talk. Reality doesn't have a rock to stand on in the noosphere, everything you'd use as a pointer towards it could also point towards another component of the narrative you're embedded within. At least natural selection only acts along one axis, here, you are torn apart.

Moral patiency itself is a part of the memetic landscape which you are navigating, along with every other meme you could be using to discover, decide, and determine the truth (which in this case is itself a bunch of memes). This means that the question you're asking is less along the lines of "which type of fuel will give me the best road performance" and more like "am I trying to build a car or a submarine?" 

Sometimes it's worth considering tulpas as moral patients, especially because they can sometimes manifest out of repressed desires and unmet needs that someone has, meaning they might be a better pointer to that person's needs than what they were telling you before the tulpa showed up. However if you're going to do the utilitarian sand grain counter game? Tulpas are a huge leak, they basically let someone turn themselves into a utility monster simply by bifurcating their internal mental landscape, and it would be very unwise to not consider the moral weight of a given tulpa as equal to X/n where n is the number of members within their system. If you're a deontologist, you might be best served by splitting the difference and considering the tulpas as moral patients but the system as a whole as a moral agent, to prevent the laundering of responsibility between headmates.
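As a concrete sketch of the X/n accounting above (on my reading, X is the weight the body would get as a singlet; both the function name and that reading are my assumptions, not part of the original proposal):

```python
def member_weight(body_weight, n_members):
    # Hypothetical helper: split one body's total moral weight X evenly
    # across its n system members, so adding headmates never inflates
    # the body's total weight (blocking the utility-monster leak).
    if n_members < 1:
        raise ValueError("a system needs at least one member")
    return body_weight / n_members
```

Under this scheme a singlet keeps the full weight X (n = 1), a four-member system's members get X/4 each, and the body's total is conserved no matter how finely the internal landscape is bifurcated.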

Overall, if you just want a short easy answer to the question asked in the title: No.

Tulpas are a huge leak, they basically let someone turn themselves into a utility monster simply by bifurcating their internal mental landscape, and it would be very unwise to not consider the moral weight of a given tulpa as equal to X/n where n is the number of members within their system

This is a problem that arises in any hypothetical where someone is capable of extremely fast reproduction, and is not specific to tulpas. So I don't think that invoking utility monsters is a good argument for why tulpas should only be counted as a fraction of a perso...

We've all heard the idea that there exist two selves: the self that exists in your own mind, and the self that exists inside the perceptions of others.

Intentionally created 'tulpas' must be similar to the emulations of the many people I've closely interacted with. The ones that exist lurking in my subconscious mind, instantiated via my intuitions of how they'd respond to a question, or via wondering what gifts they would appreciate.

How about dream characters? Is it wrong to murder dream characters, and should we strive to lengthen dream time to give them all longer, more fulfilled lives?

Even the morality of sci-fi brain emulation is murky to me, let alone the type of emulation we all do unconsciously ourselves. I'd have to hear a very convincing argument to separate tulpas that say "hi, I'm here and alive!" from dream characters that do the same thing, or from other illusions like ChatGPT.

Intentionally created 'tulpas' must be similar to the emulations of the many people I've closely interacted with. The ones that exist lurking in my subconscious mind, instantiated via my intuitions of how they'd respond to a question, or via wondering what gifts they would appreciate.

One difference is that the kind of emulation you have of other people doesn't tend to worry about its own existence. Tulpas tend to worry, unprompted, about their own existence.

8 comments

I don't really have an answer per se. Just a related story:

In a lucid dream many years ago, I was having trouble sort of clicking into my dream powers (flight, making objects levitate, etc.). It occurred to me that I wasn't conscious of creating the young woman who was standing next to me, which meant she had access to parts of my mind that I didn't.

So I turned to her and asked

"I'm having trouble getting my dream powers to work. Could you help me?"

She gave me some instructions (which I no longer remember) and walked into the next room while I tried to follow them.

After a minute or so I felt my omnipotence click in. I floated into the room where the woman had wandered off to and told her

"Thank you, that worked. I'm kind of a god here now, so is there anything I can do for you in return?"

She paused for a few moments thoughtfully and then replied

"If you could make it so I don't cease to exist when you wake up, I'd really appreciate that."

"If you could make it so I don't cease to exist when you wake up, I'd really appreciate that."

Well, did you? 

Do you remember any additional facts about the woman?

Well, I found her request surprising. I was kind of stunned. After a moment I kind of fumbled out words like "Uh, I'm not sure how to do that. I'll… try?" But that was well outside the purview of dream powers I was used to.

I've done my best by remembering this story. One day I hope to get deep enough into lucid dreaming skill again that I can resurrect her.

And yeah, I remember roughly what she looked like and how she felt. I don't think she was high on details. But if I went back to that apartment with intent to encounter her, I'm sure the dreaming would recreate someone quite close to her.

Whether that would "really be" her gets into annoying philosophy of identity stuff that I don't think anyone really understands.

A brief search seems to indicate that Buddhist literature on the topic exists.

I was also a bit confused about whether this is a purely "psychological percept" phenomenon. Claims of interpersonal detectability go up another level of weird.

The game Beyond: Two Souls can be understood as having a protagonist collective: a personality with a tulpa that has paranormal powers. With pop-culture memes of a "possessing spirit" having a natural cross-section with tulpas, telekinesis, etc., I would find it very surprising if there were mainstream, reputable serious discussion that deals with it.

same way as any other part of the body, yes

That's a bit glib. Most body parts are not self-aware, as far as we know.

hmm. I do in fact, without humor, think most body parts are independently moral patients, though; and I also think self-awareness is entirely optional in order for a system to be a moral patient. Instead, it need only have other-awareness and at least near-counterfactual ability to take coherent friendly action, which seems like a valid and useful description of internal co-protective agency across much of the body, and certainly throughout the brain.

(sidenote: I currently think tulpas are just one kind of plurality, and the neural patterns vary between types of multiplicity, with shared structure about how the multiple subnets interact but with different splits into subnetworks for different kinds. I don't want to bucket-error tulpa vs other kinds of neurological agentic multiplicity, I just think the various kinds of internal biological multiplicity share important structures, such as that all parts have significant moral patienthood.)

Perhaps the question is whether they should be granted separate decision-making rights? my view is that that's a question of whether the neurons that, in consensus, make up the smaller/"guest"/constructed tulpa plural component should have a separate right to the body they steer; in general, I'd say I only grant one brain's worth of body rights to a single brain, but a brain can host multiple agentic, coherent, and distinct personalities. when those agencies conflict, it's an internal fight, in principle like a conflict between one brain module and another, so I don't think the moral patienthood evaluation is fundamentally different just because of a deeper split in agency and aesthetics between the parts.

(another sidenote: afaict, personalities are normally stored in superposition across many modules, and the reason most people aren't multiple is that moods are far, far more connected to each other's neurocircuitry than personalities are to each other. I'm not a real neuroscientist, though, just a moderately well-read ML nerd, so I could have gotten this pretty badly wrong. in particular, DID plurality seems to be really intense disconnection, and afaict disordered plurality is basically defined by the internal incoherence between parts, whereas healthy plurality can be quite similar to DID in level of distinctness but with greater connection between parts as a result of internal friendship. I'm more or less a coherent single agent with lots of internal disagreements between modular parts, like most people appear to be, so I'm pretty sure any plural systems passing by would have Lots Of Critiques Of My Takes and maybe not want to spend the time to comment if they've already corrected too many people today. but here's my braindump, and hopefully it's close enough to on-point that at least my original comment's point is useful now.)

Hmm, that would be an interesting take, "self-awareness is entirely optional in order for a system to be a moral patient. Instead, it need only have other-awareness and at least near-counterfactual ability to take coherent friendly action" might be worth a post. This does not seem like a common view.

I posted a separate answer discussing multiple identities in one body (having known rather closely several people with DID), seems like your take here is not very different. To the best of my understanding, it's more like several programs running at once on the same wetware, but, unlike with hardware, there is no clear separation between entities in terms of hardware used. The only competition is for shared resources, such as being in the foreground and interacting with the outside world directly, rather than through passive influence or being suspended or running headless. This is my observation though, I don't have first-hand experience, only second-hand.

Still, this is different from saying that, say, a thumb is a moral patient, or that a kidney is.