I would like to introduce this question post with some background on the increasing capital and human resources being directed toward AI.

. . .

There is an open debate about whether or not we are in a simulation. This question post is based on the assumption that there is a high probability that we are living in one. I believe our simulation will end when AGI is invented, and that the simulation's goal is information gathering.
What do you think will happen to our self-awareness when our simulation ends?

My argument is: since there is no universal mercy in our reality, there will probably be none for our minds either. So would it be an instant shutdown?


3 Answers

If by "consciousness" you mean your memories and feelings of self-identity, I find it hard to believe that it continues in any way after death. Whether that's the end of the simulation or just in-simulation death, the coherence of your history, beliefs, and experiencing-process goes away.

It's possible that you'll be archived, but unless they decide to extend that sim, rather than re-running it or modifying it and running a new sim, it won't contain you.  

I disagree with the premise that a simulation containing conscious beings would be created for mere reasons of information gathering. It wouldn't be ethical.

People don't seem to react naturally to ethical transgressions that are "abstract" enough. I would place harming simulated people in that bucket. It's something I definitely could see people doing in the future given the right cultural attitude.

I respectfully disagree. The future I imagine contains humans who themselves are uploads or live in artificial/virtual worlds. They would understand very well that artificial/simulated beings are real.

On this premise, the "Creator" of our simulation does not seem to share our ethical values.

This can be supported by the premises that:
A) A superintelligence can (easily) create simulations.
B) It is (really) hard to align a superintelligence with our ethical values.
C) There is suffering in our reality.

All of which seem highly probable.
