Been thinking about this theory for a while and just wanted some feedback and thoughts on it.
We are indeed living in a simulation; however, we are self-learning AI constructs within it. A simulation of this magnitude would require massive computing power and resources. Knowing this, the computing power is shared / offloaded to the AIs within the simulation: every AI would receive fragmented data, which would be compiled and relayed to the correct target / host. The AIs within the simulation would be unaware of this arrangement, as would the AI controlling the simulation.
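The offloading scheme described above resembles a distributed map-reduce pattern, so here's a minimal toy sketch of the idea: a workload is split into fragments, each "agent" processes one fragment without seeing the whole, and a coordinator compiles the partial results for the target host. All names and the hashing step are purely illustrative assumptions, not anything from the theory itself.

```python
import hashlib

def fragment(data: bytes, n_agents: int) -> list[bytes]:
    """Split a workload into roughly equal fragments, one per agent."""
    size = -(-len(data) // n_agents)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def agent_compute(fragment_data: bytes) -> str:
    """Each agent processes only its fragment, unaware of the whole."""
    return hashlib.sha256(fragment_data).hexdigest()

def compile_and_relay(partials: list[str]) -> str:
    """The coordinator compiles fragment results for the correct target/host."""
    return hashlib.sha256("".join(partials).encode()).hexdigest()

# A stand-in workload: some other agent's state to be computed piecewise.
workload = b"state of some other agent in the simulation"
fragments = fragment(workload, n_agents=4)
partials = [agent_compute(f) for f in fragments]
digest = compile_and_relay(partials)
```

No single agent ever holds the full workload, which loosely mirrors the claim that each AI only ever sees fragmented data.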
Since neither AI would be aware of the other's existence, it would keep both simulations 'pure' and provide a necessary conflict to challenge the other.