Suppose that you are slowly walking into a literal physical tunnel. Almost all of your head is in the tunnel. If the part of your head that is not yet in the tunnel were destroyed, you would survive, but your personality would be different because of the brain damage.
Now consider an uploaded mind being copied. The simulation process is paused, the data is copied byte for byte, and then two separate simulation processes start on separate computers.
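For concreteness, here is a minimal Python sketch of that procedure, with the transfer between computers abstracted into a single snapshot call; the `EmulationProcess` class and `fork_em` function are invented purely for illustration and don't correspond to any real emulation software.

```python
# Hypothetical sketch of the pause-copy-resume procedure described above.
# EmulationProcess and its methods are invented names, not a real API.

class EmulationProcess:
    """Toy stand-in for a whole-brain-emulation runtime."""

    def __init__(self, state: bytes):
        self.state = state      # opaque snapshot of the emulated brain
        self.running = False

    def pause(self) -> None:
        self.running = False

    def snapshot(self) -> bytes:
        # Byte-for-byte copy of the stored data; nothing is simulated here.
        assert not self.running, "only copy a paused emulation"
        return bytes(self.state)

    def resume(self) -> None:
        self.running = True


def fork_em(original: EmulationProcess) -> EmulationProcess:
    """Pause, copy byte for byte, then run two separate instances."""
    original.pause()
    data = original.snapshot()
    clone = EmulationProcess(data)
    original.resume()
    clone.resume()              # only at this point does a second mind run
    return clone
```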
If you cut the cable halfway through and only look at what is on the second hard drive, then you get a partial, brain-damaged mind. But at no point is that mind actually run. You are saying that if you ignore part of a mind, you see a brain-damaged mind; in the case of an em being copied, that ignored part just happens to sit on a different hard drive.
Of course, there are good moral reasons to make sure that the data cable isn't unplugged and the half-formed mind run.
I would say that I care about the simulation, not the data as such. In other words, you can encrypt the data and decrypt it again all you want, and you can duplicate the data and then delete one copy, so long as you don't simulate the copy before deletion. You might disagree with this point of view, but it is a consistent position.
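Here is a toy sketch of the bookkeeping that position implies, assuming (purely for illustration) an opaque byte string for the stored mind and a trivial XOR stand-in for real encryption; the point is just that none of these operations ever invokes anything like a simulate() step.

```python
# Toy illustration of "transformations of the data are fine; only simulation
# matters". All names here are invented for illustration.

def xor_scramble(data: bytes, key: int) -> bytes:
    """Reversible toy 'encryption' (a real system would use real crypto)."""
    return bytes(b ^ key for b in data)

def simulate(data: bytes) -> None:
    """The one operation this view treats as morally significant."""
    raise NotImplementedError("running an emulation is out of scope here")

em_state = b"opaque brain snapshot"

# Encrypt and decrypt as much as you like: the mind is never run.
scrambled = xor_scramble(em_state, key=0x5A)
restored = xor_scramble(scrambled, key=0x5A)
assert restored == em_state

# Duplicate the data and then delete one copy, again without simulating it.
duplicate = bytes(restored)
del duplicate   # no simulate() call ever touched the copy
```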
Thanks for the reply. It sounds like maybe my mistake was assuming that unsimulated brain data was functionally and morally equivalent to an unconscious brain. From what you are saying, it sounds like the data would need to be simulated even for there to be an unconscious mind at all.
Copying trait by trait doesn't seem like a likely copying method.
One hemisphere, then the other, almost does though.
Suppose that, as you were waking up, different parts of your brain 'came online' at different times. In theory, that could be the same situation (with the 'incomplete' parts even running).
When I was thinking about the concept of human brain emulation recently, a disturbing idea occurred to me. I have never seen anyone address it, so I suspect it is probably caused by my being deeply confused about either human neurobiology or computer science. I thought I'd ask about it in the hopes someone more informed would be able to explain it and the idea could stop bothering me:
Imagine that a brain emulation is in the process of being encoded into a storage medium. I don't think it is relevant whether the copy is being made from an organic brain or an existing emulation. Presumably it takes some amount of time to finish copying all the information onto the storage medium. If the information is about a person's values or personality, and it is only halfway copied, does that mean that for a brief moment, before the copying process is complete, the partial copy has a very different personality or values from the original? Are the partially copied personality/values a different, simpler set of personality/values?
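To make the worry concrete at the level of bytes, here is a toy sketch of what a half-finished copy looks like; the placeholder data and variable names are invented, and the snapshot is treated as an opaque byte string rather than anything actually brain-like.

```python
# Toy sketch of an interrupted byte-for-byte copy; all names are invented.

source = b"values+personality+memories"   # stand-in for a full brain snapshot
destination = bytearray()

for i, byte in enumerate(source):
    destination.append(byte)
    if i == len(source) // 2:
        # Halfway through, the storage medium holds only a prefix of the data.
        partial = bytes(destination)
        print(f"partial copy so far: {partial!r}")
        # The question above: does this inert prefix, which is never
        # simulated, briefly constitute a different, simpler person?

assert bytes(destination) == source       # the copy eventually completes
```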
Presumably the copy is not conscious during the copying process, but I don't think that affects the question. When people are unconscious they still have a personality and values stored in their brain somewhere, they are just not active at the moment.
I find this idea disturbing because it implies that emulating any brain (and possibly copying de novo AI as well) would inevitably result in creating and destroying multiple different personality/value sets that might count as separate people in some way. No one has ever brought this up as an ethical issue about uploads as far as I know (although I have never read "Age of Em" by Robin Hanson), and my background is not in tech or neuroscience, so there is probably something I am missing.
Some of my theories of things I am missing include:
I'd appreciate it if someone with more knowledge about this issue, or about programming/neuroscience, would be willing to explain where my thinking is going wrong. I am interested in explanations that are conditional on brain emulation working; obviously, if brain emulation doesn't work at all, this issue won't arise. Thank you in advance; it is an issue that I continue to find disturbing.