by [anonymous]
26th Feb 2012


Several moral systems spend a great deal of effort trying to resolve conflicts between the self and others. Indeed, one of the criticisms leveled against consequentialism is that it lacks accommodation for partiality (people's inherent tendency to give special moral weight to themselves, their family and friends, etc.).

However, on what basis is the issue of partiality supported? If we try to define what "others" are, we inevitably have to give an account of what "others" are made up of, which ends up being individual selves: ultimately the moral agents that make moral decisions and the moral recipients that can benefit or be harmed by consequences. So let's look at the self.

Take me, for example. I am a self, and I have to answer the question: how much of my moral concern should be assigned to myself and my interests versus the interests of others? I had better have some idea of what I am first. It seems, however, that the more one examines what one is, the more the self gets constrained by the very strict logical necessity of identity, a = a. I shall explain.

Assume a physical world that dispenses with a soul or any sort of supernatural entity. What I am at any given time (tP, the smallest time measurement) is a particular arrangement of particles: a collection of fundamental particles that ultimately make up our brains, but which are indistinguishable from the fundamental particles that make up everything else except for their general location and interactions with other particles. It seems natural and intuitive (not that these are good reasons) for us to just delineate those particles in space and call the brain-shaped arrangement above our shoulders "myself". So for now let "M" be an exact description of all the particles inside that delineation. Let us also remember that "M" contains all our memories, concepts, reasoning powers, personality, and tastes. Every single thing that uniquely distinguishes you is by definition contained in "M".

Here's the problem: let there be a time period ∆t = 50 years. What will "M" look like then? Different; that's a good enough answer here. M(initial) != M(final). And if we let ∆t approach 0, there will be some minimum ∆t for which M(initial) != M(final) still holds. I of course have absolutely no clue what that time period would be, but it exists. Perhaps it's a nanosecond, or less, or more; for the purposes of this article the exact number isn't relevant.

If we use these definitions, then we are literally not the same self from one moment to the next. What appears to be happening is that at every minimal ∆t a very high fidelity copy, extremely similar to the previous self, comes into existence; it has nearly all the attributes of the previous self but not quite all of them, and as ∆t increases those differences accumulate and the gap between selves is magnified. Your memory, personality, mode of thinking, tastes, and ideas change over time; this much also seems obvious.

At this point you might just reject the above definition of a self, but as oversimplified as it might seem, the constraints appear inescapable. How would you define a self without a delineation of particles in space? How would that self exist over time and process information without changing? These appear to be necessities for a physical, thinking being like us to exist over time.

Now let's go back to the question: how much of my moral concern should be assigned to myself and my interests versus the interests of others? The answer should be completely obvious now. We should place 100% of our moral concern in others, because we quite literally, strictly speaking, can only care about others.

When looked at in this way, any preferential treatment we give to future copies of our present selves, by virtue of them being future copies of ourselves, is reduced to simple unjustified discrimination. For example, a 20 year old who spends all his energies on being retired rich at 60 might quite literally just be giving special consideration to a series of people, none of whom are him. This is done under the false assumption that somehow the 20 year old persists and is still in existence when the 60 year old retires. Consider now that a person might change so much in 40 years that by the time they retire at 60 there is some other person in the world who is more similar in attributes to the 20 year old self than that person now is at 60. How then does it make sense for the 20 year old to work solely to benefit that 60 year old, who is less similar to him than someone else alive at that time might be?
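To make that similarity claim a little more concrete, here is a minimal toy sketch, mine rather than anything from the original argument, in which a "self" is just a list of attribute scores that drifts a little each year. The attribute vectors, drift rate, and similarity measure are all made-up assumptions purely for illustration; with these particular numbers, a stranger who starts out broadly similar to the 20 year old can end up closer to him than his own 60 year old iteration is.

    # Toy model: a "self" is a list of attribute scores (memories, tastes,
    # personality, ...). Each year every score drifts a little at random.
    # All numbers here are invented purely for illustration.
    import random

    random.seed(0)

    def drift(attrs, rate):
        # One year of change: nudge every attribute by a small random amount.
        return [a + random.gauss(0, rate) for a in attrs]

    def similarity(a, b):
        # Crude similarity score in (0, 1]: 1 when identical, smaller as the
        # average attribute difference grows.
        mean_diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
        return 1 / (1 + mean_diff)

    me_at_20 = [random.random() for _ in range(10)]           # the 20 year old
    stranger = [a + random.gauss(0, 0.3) for a in me_at_20]   # a broadly similar stranger

    me_at_60 = me_at_20
    for _ in range(40):                                       # 40 years of accumulated change
        me_at_60 = drift(me_at_60, rate=0.1)

    print("20-year-old vs his 60-year-old iteration:", round(similarity(me_at_20, me_at_60), 3))
    print("20-year-old vs the stranger:             ", round(similarity(me_at_20, stranger), 3))

Whether the stranger actually wins depends entirely on the invented drift rate and how different the stranger starts out; the sketch only illustrates that nothing in the setup guarantees the 60 year old stays the closest match.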

So I'd like to call this what it appears to be: discrimination without good reason. The word "selfism" seems easy enough to adopt here, under this definition:

Selfism: The preferential treatment of certain people for the reason that they are later iterations of one person.

 

Preempted Criticisms:

1 ) But I am still historically me, there’s an unbroken history between myself at this moment and all the previous moments in my life. Doesn’t that make me uniquely me?

This is true, but then the question remains: why does the historical line of yourself matter?

However similar you are to your previous version, that still doesn't make you the same person. Is a special history enough to justify special regard for someone? I wouldn't think so. I think what should matter in justifying how we treat others at any time is solely the consequences of treating them in a certain way. Which consequences are important? That we can discuss later.

2 ) Are you saying then that we should treat everyone equally?

No, not necessarily; we might still be justified in having a basis for preferential treatment of some people over others, for example 100 cancer researchers vs. 100 unskilled, uneducated people. The point made here, though, is that the basis by which we discriminate among people should be something better than "well, that person will be a similar copy of what I am now, so they get more moral status".

3 ) What you're saying is too counter-intuitive to be taken seriously. Why should I care, then, if I walk in front of a bus? By the time the bus hits me it won't be me?

(I've actually heard this; I'm not building a straw man here.)

Of course we still have moral responsibility towards others; you should do the most you can for the benefit of others, and it's very dubious in most cases that stepping in front of a bus has an overall benefit to anybody. If your morals disappear because a persistent self disappears, then you need to reexamine your moral system.

4 ) This sounds like crap to me. It has no practical use in the world; even if we accept it all as true, we cannot live this way. What's the point?

I readily accept that we might just not be able to live rationally according to the premise that we are not persistent selves. I think, however, that reflection on and mass acceptance of this idea could have the effect of changing at least long-term thinking in people, shifting focus away from individualism and self-interested pursuits. This, I think, would be a good thing. For example, a college student picking a career path might take into consideration the idea that it's foolish of him to become an investment banker to benefit a particular old man (his future copy) who will exist in the future and who will definitely not be him. The college student might instead pick a career that will benefit others the most.

 

Bonus point:

This view of the self seems to "fix" the problem of teleportation, the worry some people have that you can never be teleported because wouldn't that just be creating a copy of myself and destroying me? Well, no: teleportation would, in this view, consist of nothing more than creating a parallel iteration of yourself that is displaced in space, and as long as you trust that the copy made somewhere else is sufficiently high fidelity, it is just as good as your next iteration would be if you were simply standing somewhere not being teleported. It would just be a psychological hurdle to overcome to allow one copy of yourself to be vaporized, trusting that the high fidelity copy has been created.

Comments (13)

How much of my moral concern should be assigned to myself and my interests versus the interests of others? I better have some idea of what I am first.

Some idea, yes, but it's often an error to go in search of a precise definition of some concept in lieu of actually making use of it, for it's often easier and more enlightening to use it than to debate its definitions.

The answer should be completely obvious now. We should place 100% of our moral concern in others because we quite literally, strictly speaking, can only care [about] others.

Fallacy of gray. That there is a difference both between yourself-now and yourself-later, and between yourself-now and Joss Whedon, doesn't mean that these differences are equivalent, of similar extent.

[anonymous] · 12y

Some idea, yes, but it's often an error to go in search of a precise definition of some concept in lieu of actually making use of it, for it's often easier and more enlightening to use it than to debate its definitions.

What do you mean here? My definition was too precise? I don't see how this is the case.

Fallacy of gray. That there is a difference both between yourself-now and yourself-later, and between yourself-now and Joss Whedon, doesn't mean that these differences are equivalent, of similar extent.

I am not claiming that the differences are identical; if I am, I retract that, of course. Chances are every one of your copies between now and your death will be more similar to you as you are now than anyone who has ever lived or will live is to you now.


[anonymous] · 12y

You're spending a lot of the post arguing that I can't care about my future self and this is frustrating to me. There is a top-down description of what "future me" is that we can all agree on except for some marginal cases like teleportation, but those are mostly irrelevant to day-to-day life. I don't want to get into nitpicking about the identity of atoms or whatnot when the bottom line is that my future self is a thing that exists, and I can care about it.

A minority of your post is devoted to arguing that I shouldn't care about my future self. This is the only part of your argument that's worth addressing, but you haven't made your case very clear beyond calling it "discrimination". I don't think a productive discussion is possible unless you elaborate.

[anonymous] · 12y

What I mean by "you can't care about your future self" is that it isn't you! I am addressing someone who cares about their future self under the belief that it's still them somehow, and in this sense you can't care about your future self. You can care about someone in the future who is similar to you now.

You're right that I didn't go much into why you shouldn't care about your future self, but I wasn't trying to. I just made the point that you shouldn't care about your future self JUST because it is YOUR future self, which just means a high fidelity copy in the short term and less so in the long term.

[anonymous] · 12y

You're arguing against a strawman. Nobody actually believes that their present self and future self are identical.

At the same time, there is more than similarity that ties me to my future. The changes that happen from here to there aren't purposeless, random changes. They are caused by experiences that have the power to make me voluntarily change my mind. In a way, my future self is an improved version of my present self.

In general, arguments that tell someone what to care about are tricky. You might be dealing with a fundamental value, over which it is useless to argue: you might as well try to convert Clippy to using staples. Even if the value you are questioning is a derived one, you have to correctly identify what it derives from. If Clippy believes in cooperation with humans, your task is first to understand why: does it think that humans are sufficiently interested in paperclips that helping humans is currently the best way to create more paperclips? Only then can you argue that Clippy should care about something else: maybe if squirrels ruled the earth, they would care about paperclips more.

I think you're (trivially) right that my future self is different from me. I think you're wrong that I care about my future self primarily because we're the same.

[anonymous] · 12y

Thanks for your reply

You're arguing against a strawman. Nobody actually believes that their present self and future self are identical.

They don't, but then what do they mean by "I am still the same person"? I think the problem is that most people are confused about this, and I attempted to clear it up.

At the same time, there is more than similarity that ties me to my future. The changes that happen from here to there aren't purposeless, random changes. They are caused by experiences that have the power to make me voluntarily change my mind. In a way, my future self is an improved version of my present self.

Sure, your future self could be an improved version of your present self, or it could be a worse version. The point, however, is: on what basis do you invest your efforts now, in the present, towards benefiting a particular group of people, namely these new versions of yourself? My claim is that it's not a good reason to prefer those people simply because they are future versions of yourself; as far as I am concerned they are just other people. Yes, they are similar to you as you are now, but why is that relevant?

In general, arguments that tell someone what to care about are tricky. You might be dealing with a fundamental value, over which it is useless to argue: you might as well try to convert Clippy to using staples. Even if the value you are questioning is a derived one, you have to correctly identify what it derives from. If Clippy believes in cooperation with humans, your task is first to understand why: does it think that humans are sufficiently interested in paperclips that helping humans is currently the best way to create more paperclips? Only then can you argue that Clippy should care about something else: maybe if squirrels ruled the earth, they would care about paperclips more.

I didn't try to argue in favor of a particular fundamental value someone might hold. Whatever fundamental value you might have, what I wanted to point out is: how does that fundamental value legitimize giving preferential concern to a certain group? Someone might claim that their fundamental value is a concern for their future copies in time, and there's nothing I can say against that, except to ask why. Why would someone care about a particular group of people simply because they are a particular group of people, as opposed, for example, to just caring about well-being in general or some other fundamental value?

I think you're (trivially) right that my future self is different from me. I think you're wrong that I care about my future self primarily because we're the same.

This may be. Do you accept, though, that because your future self is different, it isn't you as you are now? And if so, how do you justify caring about those people? I honestly would like to know.

[anonymous] · 12y

Draw a circle around your entire series of selves, and call that "me". I think this fits the human notion of identity a lot better than trying to draw a circle about the infinitesimally-existing "present self". Treating this "me" as a single entity, it's suddenly very clear how it could act to benefit itself (again, unlike the "present self", which can't do anything to change its state). I think this is a much better first approximation of selfish motivations.

Now you can ask whether this entity should pursue those selfish motivations, or help other such entities instead. All of a sudden we are asking a real question. This is a good sign! Altruism is complicated. If we make a definition that makes us go "What is this thing you call love" in a deep alien voice, maybe that's not the best approach to take. Because sure, our perspective can be flawed. But I can't imagine getting an answer to a problem from a perspective that can't even understand the problem.

It would be just a psychological hurdle to overcome to allow one copy of yourself to be vaporized, trusting the high fidelity copy has been created.

But why vaporize yourself at all then? Wouldn't it be better to create a high fidelity copy and let the original live in peace?

[anonymous] · 12y

But why vaporize yourself at all then? Wouldn't it be better to create a high fidelity copy and let the original live in peace?

Perhaps, yes. I don't have a problem with that, as long as resources can accommodate what would amount to lots of cloning every time people teleport.

Another take, based on the replies: for most of the post you seem to be arguing that the difference between your future selves and other people is not magical. We already know that. This particular difference is nonmagical, but so are all the others, the ones that matter. If, say, there were no difference in principle, we'd have to discount the crazy idea of distinguishing that which cannot be distinguished, but that is not the case here.

Why care more about yourself than other people? More about people than properly sorted piles of pebbles? All these things are made out of atoms, after all, the only difference is in the way those atoms are arranged. But being unable to give a precise response doesn't argue one way or the other, doesn't invalidate whatever hold on the answer you've got with the best presently available tools. Much value in a human ethical theory is in being able to add up to normality where naturalism upended traditionally magical explanations of ethical notions. So that's what we have to do for now, before better tools become available.

[anonymous] · 12y

My point, which seems to have been lost on almost everyone, is more nuanced than you might think. I am not arguing the rightness or wrongness of fundamental values. What I am merely trying to point out is that the notion that there is a me, a self that lives through time, is incoherent. Really, all that exist are people, and preferring one group of people over another requires justification, which will be based on whatever morality you ultimately have. As for the person who is prepared to step forward and say "well, giving preference to a particular group of people who are similar to the person I am at present is one of my fundamental values, so I am sticking to it": I think that's quite ridiculous, and people should reexamine what they value. Does anyone actually value similarity in this way? I don't think so.

What's the worth of telling people they shouldn't care about their future selves? The FEELING of being the same person as your future self is not going to go away, and emotion is the entire basis for morality. I don't care if by whatever definition future me is not me. I still love me and want me to be happy, and emotionally I still view future me as me.

Your justification for caring for others is non-existent. You argue that you shouldn't care more about your future self because they don't ACTUALLY matter more than anyone else, but that also applies to everyone else. There's no reason to care about anyone beyond the fact that you DO, or that they impact the things you DO care about.