Monk Treehouse: some problems defining simulation
When does one program simulate another? When does one program approximately simulate another? In some contexts, these questions have concrete answers. In other contexts, they feel much like the problem of functionalist triviality. When simulation comes up in contexts relevant to agent foundations, it tends to feel like...
I'm interested in what happens if the individual agents A, B, and C merely cooperate with some probability when their threshold is satisfied. So, consider the following assumptions.
The last assumption is simply that w is low enough. Given these assumptions, we have ⊢ □_w E via the same proof as in the post.
So, for example, if x, y, z are all greater than two thirds, there can be some nonzero w such that the agents will cooperate with probability w. In a sense this is not a great outcome, since the viable w might be quite small; but it is surprising to get any cooperation at all in this circumstance.
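One way to see why thresholds above two thirds leave room for a nonzero w is a union bound: if each agent cooperates with probability at least x, y, z respectively, then all three cooperate with probability at least x + y + z − 2, which is positive whenever x + y + z > 2, and in particular when each exceeds two thirds. The sketch below is only my own illustration of that inequality, not the proof from the post; the coupling used, the helper names, and the choice x = y = z = 0.7 are assumptions made for the example.

```python
import random

def coop_lower_bound(x, y, z):
    """Union-bound floor on P(all three cooperate), given that each agent
    cooperates with probability at least x, y, z respectively."""
    return max(0.0, x + y + z - 2)

def simulate_worst_case(x, y, z, trials=200_000):
    """Monte Carlo check of the floor under a deliberately adversarial
    coupling: one uniform draw makes the three failure events disjoint,
    so P(everyone cooperates) sits right at the union-bound floor."""
    fail_a, fail_b, fail_c = 1 - x, 1 - y, 1 - z
    assert fail_a + fail_b + fail_c <= 1, "coupling assumes failure probs sum to at most 1"
    coop = 0
    for _ in range(trials):
        u = random.random()
        a = not (u < fail_a)                                  # A fails on [0, 1-x)
        b = not (fail_a <= u < fail_a + fail_b)               # B fails on the next interval
        c = not (fail_a + fail_b <= u < fail_a + fail_b + fail_c)
        coop += a and b and c
    return coop / trials

if __name__ == "__main__":
    x = y = z = 0.7  # each just above two thirds
    print(f"guaranteed floor: {coop_lower_bound(x, y, z):.3f}, "
          f"simulated worst case: {simulate_worst_case(x, y, z):.3f}")
```

With x = y = z = 0.7 the floor is only 0.1, which matches the observation above that the viable w may be quite small even when cooperation is possible at all.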