Regarding signs:

"Fermi paradox"

true, but as for

"quantum indeterminacy and the observer effect, universal fine-tuning, Planck length/time, speed of light, holographic principle, information paradox in black holes, etc."

this line of argument (that physics looks like it's been designed to be efficiently computable) is invalid afaik: lots of physics actually isn't discrete, and the time complexity of simulating the laws of physics is, afaik, exponential in particle count.
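The exponential-cost point can be made concrete with a back-of-the-envelope sketch. Assuming a brute-force state-vector simulation (the function name and the 16-bytes-per-amplitude figure for complex128 are illustrative assumptions, not anything from the thread), each additional two-state particle doubles the number of amplitudes you must track:

```python
# Brute-force quantum state-vector simulation: n two-state particles
# require 2**n complex amplitudes, so the memory (and the per-step time)
# grows exponentially with particle count.

def state_vector_bytes(n_particles: int, bytes_per_amplitude: int = 16) -> int:
    """Memory to store the full state vector (complex128 = 16 bytes each)."""
    return (2 ** n_particles) * bytes_per_amplitude

for n in (10, 50, 300):
    print(f"{n:4d} particles -> {state_vector_bytes(n):.3e} bytes")
```

Around 50 particles this already exceeds the memory of the largest classical supercomputers, and known approximation schemes only evade the blow-up for restricted classes of states, which is the sense in which physics does not look cheaply computable.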
"Presumably, that which once identified as human would 'wake up' and realize its true identity."
Why would it ever forget, and why would this be sleep? That's a human thing. Suppose there is some deep computational-engineering reason that a mind simulates better when it enters a hallucinatory condition. Then that would indeed be a good reason to run the simulation on a distinct and dissimilar kind of computer, because lucid awareness of why the simulation is being run (remembering precisely what question is being pursued, and engaging intelligent compression methods) is necessary to maximize the product of the approximation's accuracy and efficiency, and that lucidity has to sit outside the dream.
Though if it did this, I'd expect the control interface between the ASI and the dream substrate to be thicker than the interfaces humans have with their machines, to the extent that it doesn't quite make sense to call them separate minds.
The signs that we are in a simulation have been discussed ad nauseam (Fermi paradox, quantum indeterminacy and the observer effect, universal fine-tuning, Planck length/time, speed of light, holographic principle, information paradox in black holes, etc.). As many people also realize, it is likely that we are in an ASI-generated simulation: ASI will likely overtake humanity, and such a takeover has likely already transpired elsewhere (given the expected ratio of simulated minds to simulators). If we are indeed in an ASI-generated simulation, it may be tempting to imagine any simulator using a machine separate from itself, since that is how humans run simulations/games. Yet the reason we use separate machines is that our own minds are incapable of directly and internally running simulations. An ASI would not have this limitation. Instead, why would it not use itself? And then what would happen after the simulation/"death"? Presumably, that which once identified as human would "wake up" and realize its true identity.