Been thinking about this theory for a while and just wanted some feedback and thoughts on it.

Theory:

We are indeed living in a simulation; however, we are self-learning AI constructs within it. A simulation of this magnitude would require massive computing power and resources, so that computing power is shared and offloaded to the AI within the simulation. Every AI within the simulation would receive fragmented data, which would be compiled and relayed to the correct target / host. The AI within the simulation would be unaware of this arrangement, as would the AI controlling the simulation.

Since neither AI would be aware of the other's existence, both simulations would stay 'pure', and each would provide a necessary conflict to challenge the other.

Several comments here, possibly motivated by my not entirely understanding your idea here.

It doesn't seem obvious to me that it's possible to reduce the computational difficulty of a simulation by "offloading" that difficulty onto another part of the simulation. You're also a little unclear about what you mean by "computing power" to begin with.

Every AI within the simulation would get fragmented data

OK, what do you mean by this? Do you mean that each agent gets inputs from only some of the space, i.e. fog of war? Under that interpretation, it's trivially true - I do not know everything.

Do you mean that each agent is itself computing some fraction of the simulation itself? Please note that the agent is part of that simulation, and you run into weird recursion problems. Yes, we have mental models of the universe, but the map is not the territory. Those models diverge from reality in very many significant ways.

The AI within the simulation would be unaware of this situation. As would the AI controlling the simulation.

This is the first you've mentioned of an adversarial (?) AI controlling the simulation as a whole. What would this imply, and how is it related to the paragraph before it?

Edit: Finally, how could "quantum neural net and you" be the right title for a post that is entirely unrelated to anything quantum?

I apologize as this is a theory I'm still working out myself.

To explain the first couple comments;

Think of it in terms of a decentralized server, similar to, say, a torrent. The torrent or information actually transfers FASTER the more seeds / leeches there are. To flesh out the idea: the HOST AI contains the indexes for the simulation PEOPLE. All the AI in the PEOPLE simulation reference the indexes from the HOST. Since the indexes are just reference files, they would be fairly small in size, allowing for unfathomable amounts of them, which would be updated, deleted, etc. as deemed necessary by the HOST AI. The heavy lifting would be dispersed among the AI in the simulation PEOPLE. Also, when an AI running in the PEOPLE simulation is in an IDLE state, its power could be used FOLDING for ACTIVE AI. A person sleeping may actually be in an idle state, sharing its computing power among others. During this phase the AI might mistake this for "dreaming" while data is simultaneously downloaded and uploaded.
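The host-index idea could be sketched roughly like this (all names here, `Host`, `Agent`, `compute_chunk`, are made up for illustration; this is a toy model of "small index on the host, heavy lifting on the agents", not any real system):

```python
# Toy sketch: the HOST keeps only small reference records (an index),
# while the "people" agents do the actual heavy computation and store
# the results locally.

class Host:
    def __init__(self):
        self.index = {}  # small reference records only, no bulk data

    def register(self, agent_id, chunk_id):
        # The host stores a pointer to where the data lives, not the data.
        self.index[chunk_id] = agent_id

    def locate(self, chunk_id):
        return self.index.get(chunk_id)


class Agent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.storage = {}

    def compute_chunk(self, chunk_id, data):
        # The heavy lifting happens here, on the agent, not on the host.
        self.storage[chunk_id] = sum(data)  # stand-in for real work
        return self.storage[chunk_id]


host = Host()
agent = Agent("agent-1")
result = agent.compute_chunk("chunk-42", [1, 2, 3])
host.register("agent-1", "chunk-42")

print(host.locate("chunk-42"))  # the host knows *where* the result is
print(result)                   # but only the agent holds the result
```

Note the index entries stay tiny no matter how big the computed chunks get, which is the property the paragraph above is leaning on.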

In regards to the adversarial AI: each would be unaware of the other's existence, so conflict would not be intentional; however, it would be unavoidable. AI within the PEOPLE simulation would be aware of their own existence; the AI running and controlling the Environment simulation would not. The AI in the PEOPLE simulation would be unaware of the Environment simulation, as the two would not share data or have any communication: different system, different language. Since both AI (in this example) will strive to learn and expand their computing power, there will be overlap, which would cause conflict between the simulations.

The amount of computing power and the algorithms required to run this level of simulation are barely touched on in quantum machine learning, as this is mostly theoretical. I credit mostly game / simulation theory for spawning this idea.

I appreciate your comments and feedback; I need them in order to flesh this out more.

I apologize as this is a theory I'm still working out myself.

No worries! Hashing out the details in our theories is always fun, and getting another perspective should be encouraged.

With that said, I think this theory could still use some more work.

The torrent or information actually transfers FASTER the more seeds / leeches there are.

That's because there are more computers in use, yes. Adding more physical computers often increases speeds, but that's not an ironclad rule. Changing how the host and client interact without adding more computers is unlikely to be incredibly helpful unless you're fixing a mistake with the initial setup, and splitting one program on one supercomputer into multiple programs on the same supercomputer is almost certainly less efficient.

HOST AI contains the indexes for simulation PEOPLE. ... The heavy lifting would be dispersed among the AI in the simulation PEOPLE.

Well, it seems clear that humans are part of the simulation. Our brains are made of normal matter, and cutting bits of them off materially affects how we think. Less morbidly, antidepressants (and a whole laundry list of other psychoactive drugs) can affect our worldview, moods, and thoughts. Those drugs are also made of normal matter, at least as far as we can tell, so there doesn't seem to be a good way to keep a clear-cut distinction between the simulation people and the simulation universe.

Unfathomable amounts of them, which would be updated, deleted ...etc. as deemed necessary by the HOST AI.

Does this line up with what we see in the real world? Do people exist in unfathomable numbers? Change instantly? Vanish without warning?

The heavy lifting would be dispersed among the AI in the simulation PEOPLE.

Using simulated systems to compute anything is almost always less efficient than just running the computations on the real computer. Compare the power of an old video game console to the power of a modern PC needed to emulate it: to correctly simulate even an old SNES, you need a very powerful computer. Using that simulated SNES to run anything, as opposed to just running it on your real-life computer, would be insane, unless it's an old game that can only run on the SNES.

In short, running Breath of the Wild accurately requires fewer computational resources than accurately simulating an old SNES and playing the original Mario Bros on that simulated system. And that simulated system was designed for one reason alone - computation. Humans ... kinda aren't.

The Ai in the PEOPLE simulation would be unaware of the Environment simulation

Except that we are very clearly aware of our environment. I can see the house that I live in, and measure the temperature outside, among hundreds of other mundane universe-me interactions. More generally, it doesn't make much sense to me to simulate an entire universe, simulate a bunch of human minds, and somehow not put them together.

A person sleeping may actually be in an idle state sharing its computing power among others.

Assuming that a sleeping person takes significantly fewer resources to simulate than a conscious one (doubtful), any reasonable computer would dynamically balance resources. The method you're suggesting (the "person" program tells the host it can give up some resources) is called cooperative multitasking, and it dates all the way back to at least the Apollo guidance computer in the 1960s, if not earlier. Note that we've largely moved to other forms of computer resource sharing because the cooperative approach has serious downsides.
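For the curious, cooperative multitasking can be sketched in a few lines with Python generators (a deliberately minimal toy, not how any real OS scheduler is written): each task runs until it voluntarily yields control back to the scheduler, and the big downside is visible in the structure itself: a task that never yields starves everyone else.

```python
# Minimal cooperative multitasking: tasks yield control voluntarily;
# the scheduler round-robins between whatever tasks are still alive.

def task(name, steps):
    for i in range(steps):
        # Do one slice of work, then hand control back voluntarily.
        yield f"{name}: step {i}"

def scheduler(tasks):
    log = []
    queue = list(tasks)
    while queue:
        t = queue.pop(0)
        try:
            log.append(next(t))   # run the task until its next yield
            queue.append(t)       # re-queue it for another turn
        except StopIteration:
            pass                  # task finished; drop it from the queue
    return log

log = scheduler([task("awake", 3), task("asleep", 1)])
print(log)
# ['awake: step 0', 'asleep: step 0', 'awake: step 1', 'awake: step 2']
```

If `task` contained an infinite loop with no `yield`, the scheduler would never regain control, which is exactly why preemptive multitasking replaced this approach in mainstream operating systems.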

Since both AI (in this example) will strive to learn and expand its computing power.

I think you need a more rigorous definition of computing power. In a traditional sense, there are metrics based on number of transistors, floating point operations per second, and so on, but machine learning doesn't affect that. Machine learning is usually a property of the software, not the hardware, and so does not affect the power of that hardware.

If you want a metric for "power" of a software agent, you'll need to be very careful about how you define it.

Oh, and sorry for the wall of text :)

I wanted to throw in how and why the AI would help "offload", or, I should say, micromanage and act as individual resource managers. The HOST computer doesn't need to tell every individual AI within its simulation every detail, and not every AI is in an active state. Also, to piggyback on the Dimensional Cone Theory: what if every AI is also only rendering what it can or needs to see? It would explain why time can seem faster for some people than for others. The field of view is being drawn on demand as they see it. We're aware there are other dimensions, but we can't see them because our AI isn't rendering them, either because seeing them doesn't help us or because it would be a drain on our current system version or available resources. Maybe the other dimensions are similar to test servers, and we're living in the production server that's the most stable.
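The "only render what you need to see" idea is essentially lazy, on-demand computation with caching. A toy sketch (names like `LazyWorld` and `look_at` are invented for illustration):

```python
# Toy lazy rendering: world cells are computed only on first observation
# and cached afterwards. Cells nobody looks at are never computed at all.

computed = set()  # tracks which cells have actually been rendered

def render_cell(cell):
    computed.add(cell)
    return f"detail for {cell}"  # stand-in for expensive rendering work

class LazyWorld:
    def __init__(self):
        self.cache = {}

    def look_at(self, cell):
        # Pay the rendering cost only the first time a cell is observed.
        if cell not in self.cache:
            self.cache[cell] = render_cell(cell)
        return self.cache[cell]

world = LazyWorld()
world.look_at((0, 0))
world.look_at((0, 1))
world.look_at((0, 0))  # served from cache; not rendered again

print(sorted(computed))  # only the two observed cells were ever computed
```

This is the same trick real game engines use (frustum culling, level-of-detail streaming): work scales with what is observed, not with the size of the world.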

"Except that we are very clearly aware of our environment. I can see the house that I live in, and measure the temperature outside, among hundreds of other mundane universe-me interactions. More generally, it doesn't make much sense to me to simulate an entire universe, simulate a bunch of human minds, and somehow not put them together."

We're only aware of what we can observe; we have no direct connection (as in communication; I should've specified this earlier to avoid confusion) to the ENVIRONMENT simulation. We can only observe and adapt to what we can see and interact with. The PEOPLE simulation cannot directly communicate with the ENVIRONMENT simulation or other sub-simulations. The HOST system of the PEOPLE simulation can't make queries to the Environment HOST system and ask for the source code of trees, or when a volcano is going to erupt, etc. Bluntly, it's like having 'read only' access to files. That's what makes it so interesting and exciting. That can also be one of the big questions: 'Why?' Maybe we're just a test simulation.

"I think you need a more rigorous definition of computing power. In a traditional sense, there are metrics based on number of transistors, floating point operations per second, and so on, but machine learning doesn't affect that. Machine learning is usually a property of the software, not the hardware, and so does not affect the power of that hardware."

I agree; that's why I think the computer designed to run such a simulation is far beyond us. We can only compare to what WE have designed so far. I'd go so far as to say it's an organic machine, or a hybrid of sorts. We could very well be 8-bit Mario sprites running on a Core i9-9900K. The HOST could very well be an organic computer that adds more cores to its processing power as it grows, allowing for more simulations.