Eliezer wonders about the thread of conscious experience: "I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground."

Instead of wondering whether we should be selfish towards our future selves, let's reverse the question. Let's define our future selves as agents that we can strongly influence, and that we strongly care about. There are other aspects that round out our intuitive idea of future selves (such as having the same name and possessions, and a thread of conscious experience), but this seems the most fundamental one.
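(Purely as an illustrative sketch, and not anything claimed in the post itself: one could treat "future self" as a graded relation between agent-moments, scored by the two criteria above, influence and caring. Every name, number, and the min-based scoring rule below is an assumption invented for the example.)

```python
# Illustrative sketch only: "future self" as a graded relation between
# agent-moments, based on the two criteria in the post (influence + caring).
# All names, weights, and scores are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AgentMoment:
    name: str
    influence: dict = field(default_factory=dict)  # how much this agent-moment can influence others, in [0, 1]
    caring: dict = field(default_factory=dict)     # how much it cares about others, in [0, 1]


def future_self_degree(now: AgentMoment, later: str) -> float:
    """Degree to which `later` counts as a future self of `now`:
    high only when both influence and caring are high."""
    return min(now.influence.get(later, 0.0), now.caring.get(later, 0.0))


# Mr. Jones today, contemplating two upcoming copies (Alpha and Beta).
jones = AgentMoment(
    name="Jones",
    influence={"Jones-Alpha": 0.9, "Jones-Beta": 0.9},
    caring={"Jones-Alpha": 0.9, "Jones-Beta": 0.9},
)

# Jones-Alpha after the copying: little control over Beta, and cares only a bit.
alpha = AgentMoment(
    name="Jones-Alpha",
    influence={"Jones-Beta": 0.1},
    caring={"Jones-Beta": 0.2},
)

print(future_self_degree(jones, "Jones-Alpha"))  # 0.9: clearly a future self
print(future_self_degree(alpha, "Jones-Beta"))   # 0.1: barely a future self
```

The min is just one way of requiring both criteria at once; any aggregation that punishes a low score on either criterion would illustrate the same point.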

In the future, this may help clarify issues of personal identity once copying becomes widespread:

These two future copies, Mr. Jones, are they both 'you'? "Well yes, I care about both, and can influence them both."

Mr. Jones Alpha, do you feel that Mr. Jones Beta, the other current copy, is 'you'? "Well no, I only care a bit about him, and have little control over his actions."

Mr. Evolutionary-Jones Alpha, do you feel that Mr. Evolutionary-Jones Beta, the other current copy, is 'you'? "To some extent; I care strongly about him, but I only control his actions in an updateless way."

Mr. Instant-Hedonist-Jones, how long have you lived? "Well, I don't care about myself in the past or in the future, beyond my current single conscious experience. So I'd say I've lived a few seconds, a minute at most. The other Mr. Instant-Hedonist-Joneses are strangers to me; do with them what you will. Though I can still influence them strongly, I suppose; tell you what, I'll sell my future self into slavery for a nice ice-cream. Delivered right now."

24 comments

Eliezer wonders about the thread of conscious experience: "I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground."

Consider: people drink and get hangovers.

...and may pre-buy hangover cures the night before. And the hungover you may feel that the night before was worth it, at least to some extent. Being somewhat altruistic towards your future self has to be balanced against your current enjoyment.

And people aren't rational or consistent, and all the usual caveats apply.

I think the idea is more that discounting and selfishness can be separated, not that people don't discount future events.

If the definition of "future selves" will not be used to determine who to care about and how much, then it has no consequences for one's decisions, so you might as well say that there is no such thing as future selves, that there is only how much we care about various person-moments, which is essentially arbitrary.

This position seems analogous to position 4 in What Are Probabilities, Anyway?, which says there is no such thing as "reality fluid" or "measure" in an objective sense, that there's only how much we care about various universes.

But what if there is such a thing as future selves, or reality fluid? It seems to me in that case we probably want to care more about our future selves, and about universes that have more reality fluid. Shouldn't we keep these questions open until we have better arguments one way or another?

I don't see what reality fluid or similar ideas have to do with it. If you don't care about your future selves, I see no reality-fluid or measure-based argument that would convince you otherwise.

I just note that "caring about them" is a strong characteristic of our current concept of future selves, and should probably be part of any definition.

It seems to me that I care about my future selves because they are my future selves (whatever that means), not the other way around. If I took a drug that made me care less, I would still have future selves.

It occurs to me that this definition isn't symmetric. I have influence over my future self, but they have little influence over me. So they are me, but I am not they.

[anonymous] · 13y · 30

We're defining "future self". So Oscar2012 is Oscar2011's future self, but not vice versa. Makes sense to me.

This assumes your future self will not invent or obtain a method of time travel.

Edit: Let me rephrase - your 'future self' is someone you have power over. So no, current!You is presumably not future!You's future self.

Editedit: What Misha said.

If I had a slave whom I happened to care about, then by this definition they would be me, which isn't true.

There are other aspects to the current idea of identity, as I said - I wasn't claiming this was the total solution to the problem.

But if I had a willing slave that I cared deeply about, I'd say that it would be fair to consider them as an extension of myself. Especially if we communicated a lot.

Let's define our future selves as agents that we can strongly influence, and that we strongly care about.

This predicts our children are to some degree our future selves. I'm not sure if that's a plus or a minus for this theory.

I don't think there's any metaphysical meaning to "X is the same person as Y", but our mental programs take it as a background assumption that we're the same as our future selves for the obvious evolutionary reasons. Identity with our future selves is on the bottom level of human values, like preferring pleasure to pain, and there's no need to justify it further.

I don't know if this is the same as your theory or not.

Let's define our future selves as agents that we can strongly influence, and that we strongly care about.

This predicts our children are to some degree our future selves.

Predictions seem to be a different kind of thing from definitions. The definition is terrible, but it, well, by definition doesn't make a prediction.

Should I have used a different word? Probably! But I will now proceed to a complex justification of my word choice anyway!

A lot of philosophy seems to consist of coming up with explicit definitions that fit our implicit mental categories - see Luke's post on conceptual analysis (which I might be misunderstanding). Part of this project is the hope that our implicit mental categories are genuinely based on, or correspond to, an explicit algorithmizable definition. For example, one facet of utilitarianism is the hope that the principle of utility is a legitimate algorithmization of our fuzzy mental concept of "moral".

This kind of philosophy usually ends up in a give-and-take, where for example Plato defines Man as a featherless biped, and Diogenes says that a plucked chicken meets the definition. Part of what Diogenes is doing is saying that if Plato's definition were identical to our implicit mental category, we would implicitly, common-sensically identify a plucked chicken as human. But we implicitly, common-sensically recognize that a chicken is not human; therefore our minds cannot be working off the definition "featherless biped".

This is the link between defining and predicting. Plato has proposed a theory: that when the mind evaluates humanity, it uses a featherless-biped detector. Diogenes is pointing out that Plato's theory makes a false prediction: that people will implicitly recognize plucked chickens as humans. This disproves Plato's theory, and so the definition is wrong.

I suppose this must be my mental concept of what we're doing when defining a term like "self", which is what impels me to use "define" and "predict" in similar ways.

Was the irony intentional? If not, that is just priceless!

Humans being what they are, the definitions they adopt will inevitably tend to influence the predictions they make. Where a boundedly rational agent prescribed a terrible definition would merely be less efficient, a human will also end up with biased predictions when reasoning from that definition. Also, as you note, declaring a definition can sometimes imply a prediction: that the definition matches the mental concept while also carving reality effectively at its joints.

The above being the case, definitions can and should be dismissed as wrong. This is definitely related to the predictions that accompany them. This is approximately a representation of the non-verbal reasoning that flashed through my mind, prompting my rejection of the 'self as future folks you care about and can influence' definition. It is also why I must reject any definition of 'define' and 'predict' that doesn't keep the two words distinct. Just because 'human' is closely related to 'featherless biped' doesn't mean they are the same thing!

I suppose this must be my mental concept of what we're doing when defining a term like "self", which is what impels me to use "define" and "predict" in similar ways.

Just so long as you don't mind if you mislabel a whole lot of plucked chickens.

Understanding the various relationships between definitions and predictions is critical for anyone trying to engage in useful philosophy. But it isn't helpful just to mush the two concepts together. Instead, we can let our understanding of the predictions involved govern how we go about proposing and using definitions.

I don't agree that the definition is terrible. I agree it's incomplete. My point boils down to this: we should include "caring about" in our intuitive definition of future selves, rather than using some other definition and wondering whether we can deduce caring from it. Humans do generally care about their future selves, so if we omit that from the definition, we're talking about something else.

Let's define our future selves as agents that we can strongly influence, and that we strongly care about.

Technical implication: My worst enemy is an instance of my self.

Actual implication: Relationships that don't include a massive power differential or a complete lack of emotional connection are entirely masturbatory.

It is critical to consider that thing which is "future agents that we strongly care about and can influence", but calling those things our 'future selves' makes little sense unless they are, well, actually our future selves.

Technical implication: My worst enemy is an instance of my self.

This explains so much.

Actual implication: Relationships that don't include a massive power differential [...] are entirely masturbatory.

The other way around, surely? If your 'future self' is defined as something you have power over, how could a relationship of equals be masturbatory?

The other way around, surely? If your 'future self' is defined as something you have power over, how could a relationship of equals be masturbatory?

Equal or greater power implies influence, and therefore they are yourself. If you have much, much less power than them, then perhaps you have no influence, and so they may not be yourself.

Is there a clear border we can draw around "strongly care about," or is it just a fuzzy thing that comes out of utility functions counting what happens to somebody?
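(A minimal sketch of the second reading, with invented people, weights, and threshold: caring is just a weight in a weighted-sum utility function, and "strongly care about" is a fuzzy cutoff on those weights.)

```python
# Illustrative only: "strongly care about" as a cutoff on the weights
# a utility function assigns to what happens to various people.
care_weights = {          # weight each person's welfare gets in my utility
    "me-tomorrow": 0.95,
    "my-copy":     0.60,
    "stranger":    0.01,
}


def my_utility(welfare: dict) -> float:
    """Weighted sum of everyone's welfare; no notion of 'self' appears anywhere."""
    return sum(care_weights.get(person, 0.0) * w for person, w in welfare.items())


print(my_utility({"me-tomorrow": 1.0, "stranger": 1.0}))  # 0.96

# "Strongly care about" is then just a (fuzzy, somewhat arbitrary) threshold:
STRONG_CARE_THRESHOLD = 0.5
print([p for p, w in care_weights.items() if w >= STRONG_CARE_THRESHOLD])
# ['me-tomorrow', 'my-copy']
```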

Owen · 13y · 10

Upvoted because I like to see this kind of brainstorming, although I feel like the "strongly care about" criterion is a bit ad hoc and maybe unnecessary. To me it sounds more correct to say that Mr. IHJ doesn't care about his future selves, not that he doesn't have any.

Currently, I'd agree with you. But when copying, especially imperfect copying, becomes available, then "strongly care about" may become a better guide to what a future self is.

[anonymous] · 13y · -10

Hello there, Mr. Heidegger. I didn't realize the zombie plague had started. Any chance you saw where Erdos went? I have a paper for him to cosign...

[This comment is no longer endorsed by its author]

but I only control his actions in an updatless way."

*updateless