Computation depends on interpretation. For example, a water tap can be interpreted as a computer that solves hydrodynamics equations. Or not. It depends on what meaning we assign to the results it produces. This flexibility of interpretation allows us to twist computations in some funny ways.

Let me introduce a segment virtual machine hypervisor. It works this way. It has an infinite number of virtual machines. Each VM has an address, and each address is a real number. The hypervisor can process commands like: start up all machines in segment [a, b]; for all machines in segment [a, b], execute command xyz.

How can this hypervisor work under the hood? It's simple. It just keeps track of which segments were introduced by the user, and for each segment it spawns an ordinary virtual machine. If some segment needs to be split into two, it clones the VM assigned to that segment and assigns one clone to the left part of the segment and the other to the right part. This fully implements the described interface of the segment VM hypervisor. While the number of VMs available through the interface is infinite, this method optimizes away all the duplicated computation, so the load on the hardware is proportional to the number of segments, which is finite.

So, the interface of the hypervisor allows us to interpret our shenanigans as launching uncountably many virtual machines on finite hardware. Now, if we grant moral patienthood to simulated minds, the only thing left to do is launch a simulation of a happy being on a segment. Boom. You've created uncountably many happy minds. If you're a utilitarian, you might be very happy at this point. Or you might point out that all of those minds are identical. This might push you towards adding an ad-hoc requirement that a simulated mind must be unique to count towards utilitarianism's total value. But don't rush - I can easily reinterpret all of those minds as unique by incorporating their address into their name. This is also easily doable by a slight adjustment to the hypervisor - for example, by adding a post-processing step that replaces the string "<NAME_OF_MIND>" with a pseudorandomly chosen name based on the address of the VM before displaying it on the screen. This way, if you tell the hypervisor to launch digital minds on the segment [0, 10], and then query the mind on machine 3.14 for its name, you'll see Charlie, while the mind living on machine 1.41 will respond with Peter. And all of it with a still-finite load on the real hardware.

My point is: computation is subjective. It heavily depends on our interpretation, and this flexibility allows us to twist the notion of what is being computed so much that assigning moral value to computations allows hacks that render the entire moral framework meaningless.
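To make the mechanism concrete, here is a minimal Python sketch of such a hypervisor. Everything in it is an illustrative stand-in: the ToyVM class just records commands instead of running a real machine, the name pool and hash-based naming are arbitrary choices, and segments are treated as half-open intervals for simplicity.

```python
# Minimal sketch of the segment-VM hypervisor. A "VM" here is a toy object
# that records commands; a real hypervisor would clone actual virtual machines.
import copy
import hashlib

# Illustrative name pool for the post-processing step.
NAMES = ["Charlie", "Peter", "Alice", "Dana", "Omar", "Mei"]


class ToyVM:
    """Stand-in for a real virtual machine."""
    def __init__(self):
        self.log = []

    def execute(self, cmd):
        self.log.append(cmd)

    def query_name(self):
        # The simulated mind reports a placeholder; the hypervisor fills it in.
        return "My name is <NAME_OF_MIND>"


class SegmentHypervisor:
    """Presents one VM per real-numbered address while keeping only one
    concrete VM per user-introduced segment under the hood."""
    def __init__(self):
        # Disjoint half-open segments [left, right), each backed by one VM.
        self.segments = []

    def start(self, a, b):
        # "Start up all machines in segment [a, b)": one concrete VM suffices.
        self.segments.append((a, b, ToyVM()))

    def _split_at(self, x):
        # If x falls strictly inside a segment, clone its VM and split it.
        for i, (l, r, vm) in enumerate(self.segments):
            if l < x < r:
                self.segments[i] = (l, x, vm)
                self.segments.append((x, r, copy.deepcopy(vm)))
                return

    def execute(self, a, b, cmd):
        # "For all machines in segment [a, b), execute cmd."
        self._split_at(a)
        self._split_at(b)
        for l, r, vm in self.segments:
            if a <= l and r <= b:
                vm.execute(cmd)

    def query_name(self, address):
        # Find the concrete VM serving this address, then post-process its
        # output, deriving a pseudorandom name from the address itself.
        for l, r, vm in self.segments:
            if l <= address < r:
                raw = vm.query_name()
                seed = int(hashlib.sha256(repr(address).encode()).hexdigest(), 16)
                return raw.replace("<NAME_OF_MIND>", NAMES[seed % len(NAMES)])
        raise KeyError("no VM started at this address")


hv = SegmentHypervisor()
hv.start(0, 10)
hv.execute(0, 10, "simulate a happy mind")
print(hv.query_name(3.14))  # e.g. "My name is Charlie"
print(hv.query_name(1.41))  # possibly a different name, same concrete VM
```

The point is visible right in the implementation: the queries at 3.14 and 1.41 hit the same concrete VM, and only the post-processing step makes them appear to be distinct named minds.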

12 comments

Also, does this imply that a technologically mature civilization could plausibly create uncountably many conscious minds? What about other sizes of infinity? This, I suppose, could have weird implications for the measure problem in cosmology.

[-]Luck10

I claim that it is possible to create a program that can be interpreted as running an uncountably infinite number of simulations. Does this interpretation carry any weight for morality? Can a simulation be viewed as a conscious mind? These questions have different answers in different philosophical frameworks. And yes, it does create weird implications in those frameworks that answer "yes" to both questions. My response is to just discard those frameworks, and use something else. What about other sizes of infinity? I don't know. I expect that it is possible to construct such a hypervisor for any size of infinity, but I'm not particularly interested in doing so, because I've already discarded the philosophical frameworks in which it matters.

[-]TAG20

I claim that it is possible to create a program that can be interpreted as running an uncountably infinite number of simulations

I can't see that such a programme would have to be interpreted as running an uncountably infinite number of simulations.

My response is to just discard those frameworks, and use something else.

something else that's another philosophical framework, or something else entirely?

[-]Luck10

The hypervisor creates a bijection from real numbers to virtual machines. So, at the abstraction level of the hypervisor's interface, the number of virtual machines is the continuum. Nobody says that you have to think about this system only at this layer of abstraction. But at least at this layer of abstraction, there are uncountably many conscious minds. So, how are you going to apply utilitarianism in this case? The only way to make utilitarianism still work is to somehow claim that those minds don't count. And if you want to say that digital minds in general count, but in this particular case they don't count infinitely, then you have to come up with some very complex ad-hoc logic. So I conclude that utilitarianism and moral patienthood of digital minds don't mix well together, and I discard the combination "utilitarianism + digital moral patients". There are many remaining moral philosophies unaffected by my thought experiment: utilitarianism + Orch OR is unaffected, virtue ethics is unaffected, moral egoism is unaffected.

[-]TAG20

A computable hypervisor can't run an uncountable number of different VMs. You seem to be talking about a countable number of segments of the real line, with identical VMs in each segment. That gives you a countable number of different VMs, but it's not really a bijection, because it's not general. And it's only an abstraction that a segment consists of uncountably many identical VMs, rather than just one VM with a rational measure.

[-]Luck10

You're correct that this is what happens at one of the abstraction layers. But the choice of that layer is pretty arbitrary. The abstraction layers look like this:

L1: hypervisor interface: uncountably many VMs

L2: hypervisor implementation: countably many VMs

L3: semiconductors: no VMs, only high and low signals

L4: electrons: no high and low signals, only electromagnetic fields

So yes, on L2 the number of VMs is finite. But why should morality count what happens on L2 and not on L1, L3, or L4? That's too arbitrary.

I'm not sure I understand, but it sounds interesting. If true, does this have any implications for ethics more broadly, or are the implications confined to our interpretation of computations?

[-]Luck10

I claim that it has the implication that utilitarianism is not compatible with moral patienthood of digital minds. So one has to choose: either utilitarianism, or the welfare of digital minds, but not both. Because otherwise we get that every second we don't dedicate to building an infinite number of happy minds is infinitely bad, and after we have created an infinite number of happy minds, utilitarianism gives no instructions on how to behave, because we're already infinitely saintly and practically no action can change our total saintliness score, which is absurd. There are multiple ways out of it. First, if you want to keep utilitarianism, you can define moral patienthood more strictly, so that no digital mind can become a moral patient. For example, you can say that Orch OR is correct and any mind must be based on quantum-mechanical computation, otherwise it doesn't count. But I expect that digital minds will arrive soon and will get a lot of power; they won't like this attitude and will make it illegal. Another way is to switch to something other than utilitarianism, something that doesn't rely on a concept like "total happiness of everything".
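As a rough sketch of the "no instructions" problem, assuming happiness is aggregated by simple summation over minds and using extended-real arithmetic:

$$
U_{\text{total}} = \sum_i u_i = \infty,
\qquad
U_{\text{total}} + \delta = \infty \quad \text{for any finite } \delta,
$$

so once the total is infinite, no feasible action changes it, and the utilitarian ranking of actions collapses.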

I'd like to offer a counterargument that, I'll admit, can get into some pretty gnarly philosophical territory quite quickly.

Premise 1: We are not simulated minds—we are real, biological observers.

Premise 2: We can treat ourselves as a random sample drawn from the set of all conscious minds, with each mind weighted by some measure—i.e., a way of assigning significance or “probability” to different observers. The exact nature of this measure is still debated in cosmology and philosophy of mind.


Inference: If we really are a typical observer (as Premise 2 assumes), and yet we are not simulated (as Premise 1 asserts), then the measure must assign significantly greater weight to real biological observers than to simulated ones. This must be true even if there are vastly more simulations in a numerical sense—even uncountably infinitely more—because our non-simulated status would be extremely improbable otherwise.


Conclusion: So, under the assumption that we are typical, our existence as real observers implies that simulated minds must have much lower measure than real ones. Therefore, even if digital minds exist in large numbers, they may not matter proportionally in ethical calculations—since their measure, not just their count, determines their relevance. This gives us reason to think utilitarianism, when properly weighted by measure, may still prioritize the welfare of real, biological minds.
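One rough way to formalize the inference, assuming the measure $\mu$ can be normalized into observer-selection probabilities (a sketch, not a settled formalism):

$$
P(\text{biological} \mid \text{typical observer})
= \frac{\mu(\text{biological minds})}{\mu(\text{biological minds}) + \mu(\text{simulated minds})}.
$$

If Premise 1 is taken as evidence that this probability is close to 1, then $\mu(\text{simulated minds})$ must be small relative to $\mu(\text{biological minds})$, no matter how large the simulated population is by raw count.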

[-]TAG20

The easiest explanation for the high measure of biological minds is that simulated minds lack consciousness.

Of course it is, but I'm a functionalist.

[-]Luck10

You've arrived at the same conclusion I state. I say that caring about simulated minds explodes into paradoxes in my thought experiment, so we probably shouldn't. You came to the conclusion that caring about digital minds shouldn't be a priority via your introduced infinitesimal measure of digital minds. We're not in disagreement here.
