A Problem With Patternism

by Bob Jacobs · 1 min read · 19th May 2020 · 52 comments


Identity · World Modeling · AI

Patternism is the belief that the thing that makes 'you' 'you' can be described as a simple pattern: when there is a second being with the same mental pattern as you, that being is also you. This comes up mostly in the debate around mind-uploading, where, if you make a digital copy of yourself, that copy is just as much you as the current you.

Question: How do we quantify this pattern?

And I don't mean: how do we encode epigenetic information or a human mind into a string of binary numbers (even though that is a whole issue in and of itself)? I will assume in this post that all of that is perfectly doable. I mean: how do we quantify when something is no longer part of the same general pattern?

I mean, when we have a string of binary numbers with a couple of ones switched to zeros, we can just tally up the errors and determine what percentage differs from the copy. But what if there is material missing or added? What if a big block of code is duplicated in the copy? Should we count that as one error or as dozens? What if big chunks of code appear in both copies but in different places? Or in a different order? Or interspersed into the rest of the code?
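To make the problem concrete, here is a minimal Python sketch (my illustration, not from the post). Hamming distance is the "tally up the flipped bits" approach; Levenshtein edit distance handles insertions and deletions, but then a single duplicated block gets billed as one error per inserted bit, which is exactly the ambiguity described above:

```python
def hamming(a, b):
    # Only defined for equal-length strings: count positions that differ.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a, b):
    # Classic dynamic-programming edit distance: the minimum number of
    # single-character insertions, deletions, and substitutions
    # needed to turn a into b.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

original   = "10110100"
flipped    = "10010110"          # two bits flipped
duplicated = "1011010010110100"  # the whole block copied once

print(hamming(original, flipped))         # 2: easy to tally
print(levenshtein(original, duplicated))  # 8: every inserted bit counts
```

Neither metric answers the question; they just make different arbitrary choices about what "one error" means, which is the point.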

But even the first example has problems: not all ones and zeros are created equal. We care a lot about whether certain features of our being are switched 'on' or 'off', but not so much about others. Do we have to compare someone's personal desires? How do we quantify that? Or should we quantify what most people would deem an important change? Why? How? I fear there is no real way to do this objectively, and since there will always be small mutations/errors in copying, you can never know which of the other "you's" is the most like you. Which I think is a pretty heavy blow for patternism.
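The "not all bits are created equal" worry can also be sketched (again my illustration, with made-up weights): a weighted distance needs an importance weight per feature, and different observers with different weights can disagree about which copy is closest to the original:

```python
def weighted_distance(a, b, weights):
    # Weighted Hamming distance: each disagreeing position counts
    # according to how much we (subjectively) care about that feature.
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

original = "1101"
copy_a   = "1100"  # differs only in the last feature
copy_b   = "0101"  # differs only in the first feature

# Two hypothetical observers who disagree about which features matter:
cares_about_first = [10, 1, 1, 1]
cares_about_last  = [1, 1, 1, 10]

# Under one weighting copy_a is the closer "you"; under the other, copy_b is.
print(weighted_distance(original, copy_a, cares_about_first))  # 1
print(weighted_distance(original, copy_b, cares_about_first))  # 10
print(weighted_distance(original, copy_a, cares_about_last))   # 10
print(weighted_distance(original, copy_b, cares_about_last))   # 1
```

The ranking flips with the weight vector, and nothing in the pattern itself tells us which weights are the right ones.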

EDIT: Apparently that last sentence caused some confusion, so let me clarify. I'm not saying it's a blow to the truth of the theory, since that is just a matter of definition (and I'm not interested in disputing definitions). I'm saying it's a blow to the usefulness of the theory, since it doesn't help us generate new insights by making new and accurate predictions.