Consider the case of a reclusive mad scientist who uplifts his dog in the hope of getting a decent game of chess. He is likely to be disappointed as his pet uses his new intelligence to build a still and drink himself to death with homemade vodka. If you just graft intelligence on top of a short-term reward system, the intelligence will game it, leading to wireheading and death.
There is no easy solution to this problem. The original cognitive architecture implements self-preservation as a list of instinctive aversions. Can one augment that list with additional aversions preventing the various slow-burn disasters that intelligence is likely to create? That seems an unpromising approach, because intelligence is open-ended: the list would grow and grow. To phrase it differently, an unintelligent process will ultimately be outwitted by an intelligent process. What is needed is to recruit intelligence, making it part of the solution as well as part of the problem.
The intelligence of the creature can extrapolate forward in time, keeping track of which body is which by historical continuity and anticipating the pleasures and pains of future creatures. The key to making the uplift functional is to add an instinct that gives current emotional weight to the anticipated pleasures and pains of a particular future body, defined by historical continuity with the current one.
Soon our reclusive mad scientist is able to chat to his uplifted dog, getting answers to questions such as "why have you cut back on your drinking?" and "why did you decide to have puppies?". The answers are along the lines of "I need to look after my liver." or "I'm looking forward to taking my puppies to the park and throwing sticks for them." What is most interesting here probably slips by unnoticed. Somehow the dog has acquired a self.
Once you have instincts that lead the mind to extrapolate down the world line of the physical body, and which activate the reward system now according to those anticipated future consequences, it becomes natural to talk in terms of a 4-dimensional, temporally extended self, leaving behind the 3-dimensional, permanent now of organisms with less advanced cognitive architectures. The self is the verbal behaviour that results from certain instincts necessary to the functioning of a cognitive architecture with intelligence layered on top of a short-term reward system. The self is nature's bridle for the mind, and our words merely expressions of instinct. We can notice how slightly different instincts give rise to slightly different senses of self, and we can ask engineers' questions about which instincts, and hence which sense-of-self, give the better functioning cognitive architecture. But these are questions of better or worse, not true or false.
To see how this plays out in the case of teletransportation, picture two scenarios. In both worlds the technology involves making a copy at the destination, then destroying the original. In both worlds there are copy-people who use the teletransportation machines freely, and ur-people who refuse to do so.
In scenario one, there is something wrong with the technology. The copy-people accumulate genetic defects and go extinct. (Other stories are available: the copy-people are in such a social whirl, travelling and adventuring, that few find the time to settle down and start a family.) The ur-people inherit the Earth. Nobody uses teletransportation any more, because everyone agrees that it kills you.
In scenario two, teletransportation becomes embedded in the human social fabric. Ur-people are left behind, left out of the dating game and marriage, and go extinct. (Other stories are available: World War Three was brutal, and only copy-people, hopping from bunker to bunker by teletransportation, survived.) It never occurs to anyone to doubt that the copy at the destination is really them.
There is no actual answer to the basic question, because the self is an evolved instinct, and the future belongs to whichever beliefs about the self are reproductively successful. In the two and three planet scenarios, the situation is complicated by the introduction of a second kind of reproduction, copy-cloning, in addition to the usual biological process. I find it hard to imagine the Darwinian selective pressures at work in a future with two kinds of reproduction.
I think that the questions probe the issue of whether the person choosing whether to buy the lottery ticket is loyal to a particular copy, or to all of them. One copy gets to win the lottery. The other copies are down by the price of the ticket. If one is loyal to only one copy, one will choose to buy if and only if one is loyal to the winner.
But I conjecture that a balanced regard for all copies will be most reproductively successful. The eventual future will be populated by people who take note of the size of the lottery prize and calculate the expected value, summing the probability-weighted outcomes over all of their copies.
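The balanced-regard calculation can be made concrete. Here is a minimal sketch, on one reading of the setup: the ticket is bought before copying, so every copy is down by the ticket price, and with some probability one copy collects the prize. The function names and all numbers are invented for illustration.

```python
def expected_total(prize, price, n_copies, win_prob):
    """Balanced regard: sum expected outcomes across all copies.

    Every copy pays the ticket price; with probability win_prob,
    exactly one copy collects the prize.
    """
    return win_prob * prize - n_copies * price

def should_buy(prize, price, n_copies, win_prob):
    # Buy if the copy-summed expected value is positive.
    return expected_total(prize, price, n_copies, win_prob) > 0
```

Loyalty to a single copy would instead evaluate only that copy's own outcome, so the same ticket looks like a sure loss to every copy but the winner.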