Yes, I think it's reasonable to describe this as the creatures acausally communicating. (Though I would have described this differently; I think that all the physics stuff you said is not necessary for the core idea you want to talk about.)
Hello Buck, thanks for answering. Do you have an opinion on which creatures are communicating? How would you describe the scenario without involving (something like) physics?
Acausal deal between universes can be used to resurrect the dead via random mind generators. We generate their mind and they generate ours, so any mind is recreated somewhere.
Seems to me that's not "between universes" because no second universe need be involved: it's sampling randomly out of mind-space, where the resulting mind almost certainly is not otherwise instantiated.
Let's call the more complex agents Agent1 and Agent2, and their purported simulations Simulation1 and Simulation2. So in one sub-universe it's Agent1 interacting with Simulation2, and in the other sub-universe, it's Agent2 interacting with Simulation1.
Another issue here is, what about the general sensory experience of the Agents? Agent1, while talking with Simulation2, may be subjected to the equivalent of street noise, or interruptions by colleagues, or - you get the idea. Presumably, if Simulation1 doesn't experience these same interruptions, it is no longer actually a simulation of Agent1.
One might reasonably conclude that the whole concept of acausal interaction is dubious. Agent1 is not in any sense communicating with Agent2; they are communicating with an entirely independent agent, Simulation2, which has some limited similarity to Agent2. One could even go further and say that, even if Simulation2 exactly tracked Agent2's experience, Agent1 is still not communicating with Agent2 in any sense; they are only communicating with Simulation2.
I feel that, to save the idea of acausal interaction, you'd really have to start with a Tegmark-like omniverse in which all possibilities are real. Then, when you talk to a simulation, you can say that you're also acausally interacting with all those agents elsewhere in the omniverse for whom your simulation happens to be 100% accurate. But how do we define these equivalence classes?
Another way to try to save the idea, is to focus on the idea of acausal coordination among identical agents. It is true by definition that all the perfect simulations of an agent, throughout the omniverse, will behave the same as the original. So maybe you can construct some scenario in which you are speaking to a simulation of an entity that is powerful within its own simulation, and elsewhere in the omniverse there will be perfect copies of that interaction in which the roles are reversed, and this can be the basis for acausal trade or acausal blackmail.
Yet another way to try to save the idea of acausal interaction, is to think of it as an interaction between classes of entities which are not identical, but simply share some trait. For example, there have been attempts to argue that voting in an election, even though your individual vote is extremely unlikely to matter, can be justified as acausal cooperation among voters. I suppose this would be a version of superrationality as defined by Hofstadter.
There seems to be ample room for a thoroughly skeptical assessment of whether acausal communication, etc, makes any sense (especially given the arguments that you can get some of the wins of timeless decision theories from causal decision theories plus uncertainty as to which copy you are). But as far as I know, there's no essay-length discussion of this nature.
Thanks for your detailed response.
"Another issue here is, what about the general sensory experience of the Agents? Agent1, while talking with Simulation2, may be subjected to the equivalent of street noise, or interruptions by colleagues, or - you get the idea. Presumably, if Simulation1 doesn't experience these same interruptions, it is no longer actually a simulation of Agent1. "
I didn't include this in the post, but I was imagining that perhaps the individual universes were the same universe at some point in the past, which then bifurcated. Everything in the two universes would therefore be exactly symmetric, apart from the indistinguishable internal conscious states of the agents (e.g. the difference between spinors in Agent2 and vectors in Simulation2). Technically this means the universes aren't causally isolated, but they could no longer communicate causally, as the only causal connection between them 'goes backwards in time', perhaps through some almost identical evolutionary process to a point long before the creatures actually existed (it would help to assume here that physics is completely deterministic). There could be some reason why one half of the original universe contained the matter/ancestor creatures which eventually evolved into Agent1 and Simulation2, while the situation was reversed in the other half of the original universe.
"One could even go further and say, even if Simulation2 exactly tracked Agent2's experience, Agent1 is still not communicating with Agent2 in any sense, they are only communicating with Simulation2."
Certainly a reasonable perspective from within causal decision theory.
"I feel that, to save the idea of acausal interaction, you'd really have to start with a Tegmark-like omniverse in which all possibilities are real. Then, when you talk to a simulation, you can say that you're also acausally interacting with all those agents elsewhere in the omniverse for whom your simulation happens to be 100% accurate. But how do we define these equivalence classes? "
I will admit that I take the idea of such a mathematical universe reasonably seriously, and I expect that defining those equivalence classes is hard, but that they probably exist. At the risk of saying something wrong, although I have not learnt Quantum Field Theory, I have heard that there is a 'path integral' formulation in which particles take an infinitude of paths through the universe/space of configurations, and that to determine the probability of any particular chain of events occurring, an integral needs to be taken over all of these paths. I have also heard that defining this mathematically is extremely challenging, but that this doesn't prevent physicists from approximating these integrals to predict things, even in cases where 'infinities cancel out'. This seems reminiscent of the way I would expect things like utility functions to be calculated in the 'mathematical universe': maybe it's very difficult to precisely define the process by which they are calculated (which might involve an equivalence class like the one you refer to), and possibly there are even cases of apparently infinite utilities (maybe in something like Pascal's mugging, but worse), but this doesn't prevent the 'right answer' from existing.
On the other hand, I don't think that the 'mathematical universe hypothesis' is a necessary prerequisite to taking these acausal ideas seriously, even if it ends up being all but entailed by the ideas of the timeless decision theories which are necessary to understand situations like Newcomb's problem. In the case of my thought experiment, the two sub-universes definitely exist in a physical sense, and the only way to obtain good outcomes from interacting with the Simulation of the other agent might be to behave as though there were an acausal connection to their counterpart. This would effectively mean adopting something like Functional or Timeless Decision Theory. (Maybe this is just what you mean?)
"So maybe you can construct some scenario in which you are speaking to a simulation of an entity that is powerful within its own simulation, and elsewhere in the omniverse there will be perfect copies of that interaction in which the roles are reversed, and this can be the basis for acausal trade or acausal blackmail."
This seems like it would work, but runs into the problem that you mentioned at the beginning of your post, which is that the rest of the other universe would also need to be identical.
For this reason I prefer what you suggest below:
"Yet another way to try to save the idea of acausal interaction, is to think of it as an interaction between classes of entities which are not identical, but simply share some trait. For example, there have been attempts to argue that voting in an election, even though your individual vote is extremely unlikely to matter, can be justified as acausal cooperation among voters. I suppose this would be a version of superrationality as defined by Hofstadter. "
When I encountered this argument for voting, it was the first thing to convince me that voting is somewhat likely to work. If you view yourself as the algorithm running on the equivalence class when you make the decision to vote a particular way, then you can think in a way which is much the same as a specific instance of an agent using causal decision theory.
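This algorithm-level view can be made concrete with a toy model. The sketch below (hypothetical payoff numbers, standard one-shot Prisoner's Dilemma; `cdt_value` and `algorithm_value` are names I've made up for illustration) contrasts treating the other party's move as an independent fact with treating your choice as the output of an algorithm shared by everyone in the equivalence class:

```python
# Row player's payoffs in a standard one-shot Prisoner's Dilemma
# (illustrative numbers: T=5 > R=3 > P=1 > S=0).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cdt_value(my_move, p_other_cooperates):
    """Causal view: the other player's move is an independent fixed fact,
    so we just take the expectation over what they might do."""
    return (p_other_cooperates * PAYOFF[(my_move, "C")]
            + (1 - p_other_cooperates) * PAYOFF[(my_move, "D")])

def algorithm_value(my_move):
    """Equivalence-class view: every instance of the shared algorithm
    outputs the same move, so choosing C means both players play C."""
    return PAYOFF[(my_move, my_move)]

# Causally, defection dominates for every belief about the other player;
# viewed as the algorithm, cooperation comes out ahead (3 vs 1).
```

Under the causal reading defection dominates no matter what probability you assign to the other's cooperation, while the equivalence-class reading recovers Hofstadter-style superrational cooperation; the voting argument has the same shape, with "vote" in place of "cooperate".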
"There seems to be ample room for a thoroughly skeptical assessment of whether acausal communication, etc, makes any sense (especially given the arguments that you can get some of the wins of timeless decision theories from causal decision theories plus uncertainty as to which copy you are)"
That would be my interpretation of what timeless decision theories are, except that I think you need to allow yourself also to be uncertain about which level of abstraction in the mathematical universe you live at. But if you think about things in this way, even if you call it 'extended causal decision theory', the notion of acausal interaction comes back.
I am confused about what the proposed "communication" is here. Does this post say more than "complicated minds might exist in multiple universes and they might simulate other universes"? On my understanding of these words, acausal communication is a contradiction in terms.
The post suggests that an acausal analogue of communication is possible by simulating one's conversation partner (not their entire universe), or at least asks whether it is. If causality is built into your definition of communication, then it would be a contradiction in terms. However, the same could be said of things like acausal trade; the idea is that acausal communication : causal communication :: acausal trade : causal trade. My definition of communication would be some more general kind of information transfer, and I am curious in what direction information can be considered to move in scenarios like the one described here (if it can at all).
No, I understand this part. My understanding of acausal trade is that it might work precisely because it does not require communication - I can imagine a sort of bargain I might have wanted to make with beings I cannot interact with causally, imagine the sorts of commitments they would have required of me, argue that they could have had enough foresight to imagine the same sorts of commitments, and thus act according to commitments that I know I would have made with such beings and that I know they would have made with me. The main point is this: by merely imagining that I want to make a trade, I have narrowed the class of entities I can trade with from "all plausible entities" to "entities who would accept the trade I want to make". (There are some other nitpicks - I think acausal trade becomes nonsense if the causal isolation is two-way - but that doesn't matter for this argument.)
If by communication you really mean information transfer, I think it's fairly obvious that this isn't possible. Say there is some proposition I'm uncertain about even upon strong reflection. How can spinning up another mind help? That I'm uncertain means I can imagine worlds containing minds resembling mine in which the proposition is either true or false. Can spinning up a mind from one of those worlds help me determine which of those types of worlds I'm in? Of course it cannot: either I sample minds according to my present understanding of the distribution of such minds and gain nothing, or I sample minds according to another distribution and am predictably misled. If I spin up a mind to talk to, there is no constraint whatever on the sort of mind I will spin up, and so it's impossible to predictably get information out of this mechanism.
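The "sampling from your own distribution gains nothing" point is just conservation of expected evidence, and can be checked numerically. A minimal sketch, assuming the spun-up mind truthfully reports the proposition's truth value in the world it was sampled from (the function name and numbers are mine, for illustration):

```python
import random

def expected_posterior(prior, n_trials=100_000, seed=0):
    """'Spin up' minds by sampling their home worlds from our own prior,
    update on each mind's report, and average the resulting posteriors.
    The average lands back on the prior: on net, nothing is learned."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        world_true = rng.random() < prior   # world type drawn from our own prior
        report = world_true                 # the mind truthfully reports its world
        # The report pins down the *sampled* world completely, so each
        # individual update is maximally dramatic (posterior 1 or 0)...
        total += 1.0 if report else 0.0
    return total / n_trials                 # ...yet the average is just the prior

print(expected_posterior(0.6))  # ≈ 0.6
```

Each individual consultation moves your credence violently, but the moves cancel in expectation, which is exactly why no information is predictably gained.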
What would it take to predictably spin up minds which can resolve a present state of uncertainty? Precisely a further constraint on which types of minds experience that uncertainty. That is, in principle, acausal communication can only predictably tell you things you already know, and must mislead you as often as it leads you right. A discerning being could design tests to filter the applicable ideas from the non-applicable, but only weakly slower than a similarly discerning being could without acausal communication, because we cannot get information from an acausal mechanism (indeed, quantum-mechanically I think this is the definition of acausal!). This just isn't what communication means to me, and I don't think this is what it means to acausal trade theorists either.
There are sorts of things I can learn from simulating other minds which might know them. I don't know the millionth digit of pi, so maybe I can spin up a mind which I know will know it. Doing this is obviously as hard as computing it directly so I don't see why I would do this. Maybe there are lots of things I don't know and so I want to spin up a mind which will know all of them at once. Doing this is called creating artificial intelligence and I don't see how it's meaningful to think of it as acausal communication with an entity from a different possible universe rather than as causal communication with an entity I've created in this universe. Can you describe a situation in which I might do acausal communication that is not actually causal communication, or where this framing could in principle be useful? I still feel that I might be missing something.
Thanks for your engagement and in-depth reply.
You say: "The main point is this: by merely imagining that I want to make a trade, I have narrowed the class of entities I can trade with from 'all plausible entities' to 'entities who would accept the trade I want to make'." I honestly don't see how this is relevant to my question.
My question was not 'is it possible to answer questions by spinning up minds who know the answers to those questions?', which seems to be how you've interpreted it. Nonetheless, that question is certainly an interesting one, and I'm not completely sure I agree with your answer to it, because of computational irreducibility. Just because you're uncertain of something, this does not mean that you cannot increase your certainty about it without 'causal communication with things around you' (i.e. observation), because sometimes simply thinking in more depth about it can help you to resolve logical uncertainty. Perhaps you could do this by 'spinning up' a mind. (After I read further I realized you already pointed this out. Sorry about that; I was writing my reply while reading yours.) Whether this counts as acausal communication is a subtle question though, because unlike in my thought experiment, the mind you 'spin up' is informed by you, rather than by background properties of the world (which for the sake of argument we can take to be known a priori; alternatively, see my reply to Mitchell Porter for a contrived way you could end up in possession of this information).
Maybe you could spin up a mind which you have theoretical reasons to think would arise in a universe you're interested in understanding, in which case you might want to simulate part of that universe as well. But this seems to suggest that there is no information transfer from the simulated universe to you. However, what if the simulation is simpler than the universe of which it is a simulation, in a way which can be shown not to have any effect on its outputs? Now the situation is closer to what I described in my post. I think it's reasonable to talk about communication occurring here because you gain knowledge you didn't have before about the other universe, by interacting with something which isn't that universe itself. Data about that universe was already within yours, in that it would have been possible for Laplace's demon to observe you in your universe and predict what you were going to do, and what you would observe when you simulated the other universe, but data and information are not exactly the same thing, at least in the way I'm using them here. You gain information about the other universe in this case, because you are not Laplace's demon.
I don't know whether I've given you what you were looking for here, but hopefully it clarified the disagreement. I would repeat that I think you're certainly correct if your definition of communication includes causality. Although, another important point that comes to mind here is that it can be difficult to define things like causality and information transfer other than in terms of the start and end points of processes and the correlations between them, which are present in this scenario.
(Edit: It looks like the downvotes have been reversed, and the post may even be somewhat over-voted now. Thanks to whoever reversed the downvotes. I'm still curious why the karma fluctuated so violently.)
I'd appreciate knowing why this post has been downvoted. If you've downvoted this question post, I would be grateful if you could explain why. Please don't downvote it to below 0 unless you have an explanation. I say this partly because it is a question, so it's important to me that it is actually seen! Further, I struggle to understand how a question like this could be objectionable.
While I didn't downvote it, I have a potential explanation. I think that the ability to acausally communicate with other universes is either absent[1] or contradicts most humans' intuitions. As far as I understand acausal trade (e.g. coordination in The True One-Shot Prisoner's Dilemma)[2], it is based on the assumption that the other participant will think like us once it actually encounters the dilemma.
Additionally, the line about "theorems which say that the more complex minds will always output the same information as the simpler ones, all else (including their inputs, which is to say their sense-data) being equal" reminds me of Yudkowsky's case against Universally Compelling Arguments.
However, @Wei Dai's updateless DT could end up prescribing various hard-to-endorse acausal deals. See, e.g. his case for the possibility of superastronomical waste.
Unlike this one-shot dilemma, the iterated dilemma is likely to provide agents with the ability to coordinate by evolution alone with no intrinsic reasoning. I prepared a draft on the issue.
Hello again Stanislav, thanks for your comment. "I think that the ability to acausally communicate with other universes is either absent ..." On this point, that's exactly why I made this a question post; I was hoping people would explain why they agreed/disagreed with the notion that acausal communication is possible. I have the same understanding as you of acausal trade. Can you say more about the hypothetical theorems? Why does this remind you of No Universally Compelling Arguments? I have a guess, but I would prefer to know exactly what you mean. (Comment edited for brevity.)
It's possible to imagine two separate sub-universes, causally isolated from one another, each containing a complex, conscious, intelligent creature whose mind consists of interacting spinor fields and potentials, as well as another, computationally simpler creature interacting with it.
For the purpose of this thought experiment, it's helpful to assume that physics is fundamentally continuous within these universes, and that, given the continuity of their physical substrate, these creatures operate as analogue computers.
Suppose that there exist theorems which say that the more complex minds will always output the same information as the simpler ones, all else (including their inputs, which is to say their sense-data) being equal. This is because it turns out that the more complex creatures' minds are mathematically redundant in some way, in that their states are particular members of infinite, continuous equivalence classes which correspond to the states of the simpler creatures' minds. Perhaps there are gauge transformations which would modify the conscious experience of one of the complex creatures when applied to the potential inside its mind, but its output is invariant with respect to these transformations, as it only depends on the result of somehow differentiating this potential. Or maybe its mind consists of spinors, mathematical objects like vectors whose signs flip when they undergo a complete rotation, but the relationship between its sensory inputs and its externally visible behaviours is not contingent on their overall sign, even though it can feel this sign. [1]
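The spinor case can be sketched numerically. In this toy model (an arbitrary two-component spinor and a rotation about a single axis; the function names are mine), a full 2π rotation flips the spinor's overall sign, while anything built from squared amplitudes, standing in for the creature's externally visible behaviour, is unchanged:

```python
import cmath

def rotate_z(spinor, theta):
    """Rotate a two-component spinor by angle theta about the z-axis:
    the SU(2) rotation acts through half-angles, exp(-i*theta/2)."""
    up, down = spinor
    return (cmath.exp(-1j * theta / 2) * up,
            cmath.exp(+1j * theta / 2) * down)

def observable(spinor):
    """A stand-in for externally visible output: it depends only on
    squared amplitudes, so an overall sign cannot affect it."""
    up, down = spinor
    return (abs(up) ** 2, abs(down) ** 2)

psi = (0.6 + 0j, 0.8 + 0j)                 # arbitrary normalised spinor
psi_rotated = rotate_z(psi, 2 * cmath.pi)

# psi_rotated is approximately (-0.6, -0.8): a full rotation flipped the
# sign, yet observable(psi) and observable(psi_rotated) agree.
```

This is the redundancy in question: the sign is a genuine degree of freedom of the internal state, but it lies in the kernel of the map from internal state to output.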
We assume that each simpler creature exists in the sub-universe which does not contain its counterpart, but does contain the other more complex creature.
Given that the complex creatures directly exchange information with one another's counterparts within each sub-universe, is it reasonable to assert that each more complex creature acausally communicates with the other? Or are the simpler creatures communicating through the more complex ones? I would interpret this as communication, but am unsure as to which of these two things happens.
If you have read and agree with the post about Homomorphically encrypted consciousness and its implications, then, if I understand it correctly, you might be inclined to think that this is not possible. However, it seems likely that arbitrarily complex phenomena could play out within the degrees of freedom contained in the redundancy between different mathematical objects of this kind, which, to me, suggests that these phenomena could themselves be conscious, which would necessarily make the conscious experiences of these minds differ.