Davey

Comments
Sleeping Beauty and the Forever Muffin
Davey · 2mo · 10

I have thought a lot about anthropics.

In an infinite universe, there are infinitely many identical observers. You cannot define a uniform probability distribution over a countably infinite sample space, and appealing to infinite cardinalities does not help. You cannot ask for the probability that it is Monday or Tuesday upon flipping tails, because there are infinitely many observers in both cases.
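For concreteness, the standard argument that no uniform probability distribution exists on a countably infinite set takes two lines. Suppose $P$ were uniform over observers $o_1, o_2, \dots$, so that $P(o_i) = p$ for every $i$. Countable additivity then forces

$$1 = \sum_{i=1}^{\infty} P(o_i) = \sum_{i=1}^{\infty} p,$$

and the right-hand side is $0$ if $p = 0$ and diverges if $p > 0$, so no choice of $p$ works.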

Do you agree that anthropic questions like these are meaningless if we live in an infinite universe? 

On The Formal Definition of Alignment
Davey · 2mo · 30

This is not the same as CEV. CEV involves the AI extrapolating a user’s idealized future values and acting to implement them, even overriding current preferences if needed, whereas my model forbids that. In my framework, the AI never drives or predicts value change; it simply provides accurate world models and optimal plans based on the user’s current values, which only the user can update. 

CEV also assumes that everyone's extrapolated values converge; my model protects normative autonomy and allows value diversity to persist.
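To make the structural difference concrete, here is a minimal sketch of the division of labor I have in mind. Everything here is a hypothetical illustration, not an implementation; the names (Values, Assistant, plan, revise_values) are invented for this example.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Values:
        # The user's *current* evaluation of outcomes. Nothing below
        # predicts, extrapolates, or edits this.
        utility: Callable[[str], float]

    class Assistant:
        """Ranks candidate plans by the user's current values; it has
        no operation that modifies or forecasts those values."""
        def plan(self, candidates: list[str], values: Values) -> str:
            # Pure optimization against values-as-given.
            return max(candidates, key=values.utility)

    class User:
        def __init__(self, values: Values):
            self.values = values

        def revise_values(self, new_values: Values) -> None:
            # Value change lives exclusively on the user's side.
            self.values = new_values

The point of the sketch is what is absent: the Assistant has no extrapolation step, and that missing step is exactly where CEV differs.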

Moral patienthood of simulated minds allows uncountable infinity of value on finite hardware
Davey · 4mo · 10

Of course it is, but I'm a functionalist.

Moral patienthood of simulated minds allows uncountable infinity of value on finite hardware
Davey · 4mo · 10

I'd like to offer a counterargument that, I'll admit, can get into some pretty gnarly philosophical territory quite quickly.

Premise 1: We are not simulated minds—we are real, biological observers.

Premise 2: We can treat ourselves as a random sample drawn from the set of all conscious minds, with each mind weighted by some measure—i.e., a way of assigning significance or “probability” to different observers. The exact nature of this measure is still debated in cosmology and philosophy of mind.

 

Inference: If we really are a typical observer (as Premise 2 assumes), and yet we are not simulated (as Premise 1 asserts), then the measure must assign significantly greater weight to real biological observers than to simulated ones. This must be true even if there are vastly more simulations in a numerical sense—even uncountably infinitely more—because our non-simulated status would be extremely improbable otherwise.

 

Conclusion: So, under the assumption that we are typical, our existence as real observers implies that simulated minds must have much lower measure than real ones. Therefore, even if digital minds exist in large numbers, they may not matter proportionally in ethical calculations—since their measure, not just their count, determines their relevance. This gives us reason to think utilitarianism, when properly weighted by measure, may still prioritize the welfare of real, biological minds.
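To see how strong this inference is, here is a toy calculation. The numbers are invented purely for illustration.

    # Invented numbers: simulated minds outnumber biological ones a
    # million to one, and w is the measure of one simulated mind
    # relative to one biological mind (w = 1 means measure tracks count).
    n_sim, n_bio = 1_000_000, 1

    def p_biological(w: float) -> float:
        """Chance that a measure-weighted 'typical' observer is biological."""
        return n_bio / (n_bio + w * n_sim)

    for w in (1.0, 1e-3, 1e-6, 1e-9):
        print(f"w = {w:g}: P(we are biological) = {p_biological(w):.6f}")

With w = 1, finding ourselves biological is a one-in-a-million surprise; only when w falls far below 1 does Premise 1 stop being improbable. That is the inference above: typicality plus non-simulation forces a low w.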

The Potential Impossibility of Subjective Death
Davey · 4mo · 10

This sounds like another of those conclusions the logic says is right but probably isn't, and I don't know why.

Moral patienthood of simulated minds allows uncountable infinity of value on finite hardware
Davey · 4mo · 10

Also, does this imply that a technologically mature civilization could plausibly create uncountably many conscious minds? What about other sizes of infinity? This, I suppose, could have weird implications for the measure problem in cosmology.

Moral patienthood of simulated minds allows uncountable infinity of value on finite hardware
Davey · 4mo · 10

I'm not sure I understand, but it sounds interesting. If true, does this have any implications for ethics more broadly, or are the implications confined to our interpretation of computations?

What's Wrong With the Simulation Argument?
Davey · 7mo · 10

Maybe. But what do you mean by "you can narrow nothing down other than pure logic"?

I interpret the first part—"you can narrow nothing down"—to mean that the simulation argument doesn't help us make sense of reality. But I don't understand the second part: "other than pure logic." Can you please clarify this statement?

What's Wrong With the Simulation Argument?
Davey · 7mo · 10

Thank you, I feel inclined to accept that for now.

But I'm still not sure, and I'll have to think more about this response at some point.

Edit: I'm still on board with what you're generally saying, but I feel skeptical of one claim:

It seems to me the main ones produce us via base physics, and then because there was an instance in base physics, we also get produced in neighboring civilizations' simulations of what other things base physics might have done in nearby galaxies so as to predict what kind of superintelligent aliens they might be negotiating with before they meet each other.

My intuition tells me there will probably be superior methods of gathering information about superintelligent aliens. To me, it seems like the most obvious reason to create sims would be to respect the past for some bizarre ethical reason, or for some weird kind of entertainment, or even to allow future aliens to temporarily live in a more primitive body. Or perhaps for a reason we have yet to understand.

I don't think any of these scenarios would really change the crux of your argument, but still, can you please justify your claim for my curiosity?

What's Wrong With the Simulation Argument?
Davey · 7mo · 10

I think I understand your point. I agree with you: the simulation argument relies on the assumption that physics and logic are the same inside and outside the simulation. In my eyes, that means we may either accept the argument's conclusion or discard that assumption. I'm open to either. You seem to be, too, at least at first. Yet you immediately avoid discarding the assumption for practical reasons:

If we have no grasp on anything outside our virtualized reality, all is lost.

I agree with this statement, and that's my fear. However, you don't seem to be bothered by this fact. Why not? The strangest thing is that I think you agree with my claim: "The simulation argument should increase our credence that our entire understanding of everything is flawed." Yet somehow, that doesn't frighten you. What do you see that I don't? Practical concerns don't change the territory outside our false world.

Second:

It seems to me the main reason is because we're near a point of high influence in original reality and they want to know what happened - the simulations then are effectively extremely high resolution memories.

That's surely possible, but I can imagine hundreds of other stories. In most of those stories, altruism from within the simulation has no effect on those outside it. Even worse, there are some stories in which inflicting pain within a simulation is rewarded outside of it. Here's a possible hypothetical:

Imagine humans in base reality create friendly AI. To respect their past, the humans ask the AI to create tons of sims living in different eras. Since some historical information was lost, the sims are slightly different from base reality. Therefore, in each sim, there's a chance AI never becomes aligned. Accounting for this possibility, base-reality humans decide to end sims in which AI becomes misaligned and replace them with paradise sims where everyone is happy.

In the above scenario, both total and average utilitarianism would recommend intentionally creating misaligned AI so that paradise ensues.
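A toy version of the arithmetic, with payoffs invented just to make the perverse incentive explicit:

    # Invented payoffs for the hypothetical above: an ordinary sim era is
    # worth 100 utility; a terminated sim replaced by paradise is worth
    # 10_000. p is the chance a given sim produces misaligned AI.
    ORDINARY, PARADISE = 100, 10_000

    def expected_utility(p: float) -> float:
        # Population size is held fixed, so total and average
        # utilitarianism rank the policies identically here.
        return p * PARADISE + (1 - p) * ORDINARY

    print(expected_utility(0.01))  # near-certain alignment: ~199
    print(expected_utility(0.99))  # engineered misalignment: ~9901

Under these (invented) payoffs, both views prefer the policy that maximizes the chance of misalignment.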

I'm sure you can craft even more plausible stories. 

My point is, even if our understanding of physics and logic is correct, I don't see why we ought to privilege the hypothesis that simulations are memories. I also don't see why we ought to privilege the idea that it's in our interest to increase utility within the simulation. Can you please clarify why you're so confident about these notions?

Thank you

Posts

1 · How should Canada Negotiate with Trump on Tariffs? (Question) · 2mo · 2
4 · On The Formal Definition of Alignment · 2mo · 3
6 · What's Wrong With the Simulation Argument? (Question) · 7mo · 49