Followup to: The Fabric of Real Things, Stuff That Makes Stuff Happen

Previous meditation: "Does your rule forbid epiphenomenalist theories of consciousness in which consciousness is caused by neurons, but doesn't affect those neurons in turn? The classic argument for epiphenomenal consciousness is that we can imagine a universe where people behave exactly the same way, but there's nobody home - no awareness, no consciousness, inside the brain. For all the atoms in this universe to be in the same place - for there to be no detectable difference internally, not just externally - 'consciousness' would have to be something created by the atoms in the brain, but which didn't affect those atoms in turn. It would be an effect of atoms, but not a cause of atoms. Now, I'm not so much interested in whether you think epiphenomenal theories of consciousness are true or false - rather, I want to know if you think they're impossible or meaningless a priori based on your rules."

Is it coherent to imagine a universe in which a real entity can be an effect but not a cause?

Well... there's a couple of senses in which it seems imaginable. It's important to remember that imagining things yields info primarily about what human brains can imagine. It only provides info about reality to the extent that we think imagination and reality are systematically correlated for some reason.

That said, I can certainly write a computer program in which there's a tier of objects affecting each other, and a second tier - a lower tier - of epiphenomenal objects which are affected by them, but don't affect them. For example, I could write a program to simulate some balls that bounce off each other, and then some little shadows that follow the balls around.
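Here's a minimal sketch of such a program (toy one-dimensional 'physics', purely illustrative): the balls update each other, each shadow just copies its ball's position, and nothing ever reads the shadows back.

    # Toy two-tier universe (illustrative only): balls affect balls and shadows;
    # shadows affect nothing. One-dimensional positions, crude "bouncing".
    def step(balls, shadows):
        # Upper tier: balls bounce off each other (reverse velocities when they get close).
        for i in range(len(balls)):
            for j in range(i + 1, len(balls)):
                if abs(balls[i]["x"] - balls[j]["x"]) < 1.0:
                    balls[i]["v"], balls[j]["v"] = -balls[i]["v"], -balls[j]["v"]
        for b in balls:
            b["x"] += b["v"]
        # Lower tier: each shadow trails its ball; no ball update ever reads a shadow.
        for s, b in zip(shadows, balls):
            s["x"] = b["x"] - 0.5
        return balls, shadows

    balls = [{"x": 0.0, "v": 0.2}, {"x": 3.0, "v": -0.2}]
    shadows = [{"x": 0.0}, {"x": 0.0}]
    for _ in range(20):
        balls, shadows = step(balls, shadows)
    print(balls, shadows)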

But then I only know about the shadows because I'm outside that whole universe, looking in. So my mind is being affected by both the balls and shadows - to observe something is to be affected by it. I know where the shadow is, because the shadow makes pixels be drawn on screen, which make my eye see pixels. If your universe has two tiers of causality - a tier with things that affect each other, and another tier of things that are affected by the first tier without affecting them - then could you know that fact from inside that universe?

Again, this seems easy to imagine as long as objects in the second tier can affect each other. You'd just have to be living in the second tier! We can imagine, for example - this wasn't the way things worked out in our universe, but it might've seemed plausible to the ancient Greeks - that the stars in heaven (and the Sun as a special case) could affect each other and affect Earthly forces, but no Earthly force could affect them:

(Here the X'd-arrow stands for 'cannot affect'.)

The Sun's light would illuminate Earth, so it would cause plant growth. And sometimes you would see two stars crash into each other and explode, so you'd see they could affect each other. (And affect your brain, which was seeing them.) But the stars and Sun would be made out of a different substance, the 'heavenly material', and throwing any Earthly material at it would not cause it to change state in the slightest. The Earthly material might be burned up, but the Sun would occupy exactly the same position as before. It would affect us, but not be affected by us.

(To clarify an important point raised in the comments: In standard causal diagrams and in standard physics, no two individual events ever affect each other; there's a causal arrow from the PAST to FUTURE but never an arrow from FUTURE to PAST. What we're talking about here is the sun and stars over time, and the generalization over causal arrows that point from Star-in-Past to Sun-in-Present and Sun-in-Present back to Star-in-Future. The standard formalism dealing with this would be Dynamic Bayesian Networks (DBNs), in which there are repeating nodes and repeating arrows for each successive timeframe: X_1, X_2, X_3, and causal laws F_i relating each X_i to X_(i+1). If the laws of physics did not repeat over time, it would be rather hard to learn about the universe! The Sun repeatedly sends out photons, and they obey the same laws each time they fall on Earth; rather than the F_i being new transition tables each time, we see a constant F_physics over and over. By saying that we live in a single-tier universe, we're observing that whenever there are F-arrows, causal-link-types, which (over repeating time) descend from variables-of-type-X to variables-of-type-Y (like present photons affecting future electrons), there are also arrows going back from Ys to Xs (like present electrons affecting future photons). If we weren't generalizing over time, it couldn't possibly make sense to speak of thingies that "affect each other" - causal diagrams don't allow directed cycles!)
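As a toy illustration of that last point (the link types below are made up, purely for illustration), we can write the repeating causal laws as a set of typed arrows pointing from time t to time t+1 and check whether every X-to-Y arrow type is matched by some Y-to-X arrow type:

    # Repeating causal arrows of a Dynamic Bayes Net, written as (source_type, target_type)
    # pairs pointing from time t to time t+1. The link types here are made up.
    F_physics = {
        ("photon", "electron"),    # present photons affect future electrons
        ("electron", "photon"),    # present electrons affect future photons
        ("photon", "photon"),
        ("electron", "electron"),
    }

    def single_tier(arrows):
        # True if every arrow type X -> Y is matched somewhere by an arrow type Y -> X.
        return all((y, x) in arrows for (x, y) in arrows)

    print(single_tier(F_physics))                          # True: one causal tier
    print(single_tier(F_physics | {("star", "earth")}))    # False: 'star' affects but is never affected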

A two-tier causal universe seems easy to imagine, even easy to specify as a computer program. If you were arranging a Dynamic Bayes Net at random, would it randomly have everything in a single tier? If you were designing a causal universe at random, wouldn't there randomly be some things that appeared to us as causes but not effects? And yet our own physicists haven't discovered any upper-tier particles which can move us without being movable by us. There might be a hint here at what sort of thingies tend to be real in the first place - that, for whatever reasons, the Real Rules somehow mandate or suggest that all the causal forces in a universe be on the same level, capable of both affecting and being affected by each other.

Still, we don't actually know the Real Rules are like that; and so it seems premature to assign a priori zero probability to hypotheses with multi-tiered causal universes. Discovering a class of upper-tier affect-only particles seems imaginable[1] - we can imagine which experiences would convince us that they existed. If we're in the Matrix, we can see how to program a Matrix like that. If there's some deeper reason why that's impossible in any base-level reality, we don't know it yet. So we probably want to call that a meaningful hypothesis for now.

But what about lower-tier particles which can be affected by us, and yet never affect us?

Perhaps there are whole sentient Shadow Civilizations living on my nose hairs which can never affect those nose hairs, but find my nose hairs solid beneath their feet. (The solid Earth affecting them but not being affected, like the Sun's light affecting us in the 'heavenly material' hypothesis.) Perhaps I wreck their world every time I sneeze. It certainly seems imaginable - you could write a computer program simulating physics like that, given sufficient perverseness and computing power...

And yet the fundamental question of rationality - "What do you think you know, and how do you think you know it?" - raises the question:

How could you possibly know about the lower tier, even if it existed?

To observe something is to be affected by it - to have your brain and beliefs take on different states, depending on that thing's state. How can you know about something that doesn't affect your brain?

In fact there's an even deeper question, "How could you possibly talk about that lower tier of causality even if it existed?"

Let's say you're a Lord of the Matrix. You write a computer program which first computes the physical universe as we know it (or a discrete approximation), and then you add a couple of lower-tier effects as follows:

First, every time I sneeze, the binary variable YES_SNEEZE will be set to the second of its two possible values.

Second, every time I sneeze, the binary variable NO_SNEEZE will be set to the first of its two possible values.
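A sketch of what that program might look like (the variable names YES_SNEEZE and NO_SNEEZE are from above; the physics step, the sneeze detector, and the choice of 0 and 1 as the "first" and "second" possible values are all placeholder choices - and which value counts as which is exactly the arbitrary choice at issue below):

    import random

    # Illustrative sketch only. The physics step and the sneeze detector are placeholders.
    YES_SNEEZE = 0
    NO_SNEEZE = 1

    def step_physics(state):
        return state + 1               # placeholder for computing the physical universe

    def sneezed(state):
        return random.random() < 0.01  # placeholder sneeze detector

    def tick(state):
        global YES_SNEEZE, NO_SNEEZE
        state = step_physics(state)
        if sneezed(state):
            YES_SNEEZE = 1             # set to the second of its two possible values
            NO_SNEEZE = 0              # set to the first of its two possible values
        # step_physics never reads YES_SNEEZE or NO_SNEEZE, so neither shadow
        # variable can affect anything inside the simulated universe.
        return state

    state = 0
    for _ in range(1000):
        state = tick(state)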

Now let's say that - somehow - even though I've never caught any hint of the Matrix - I just magically think to myself one day, "What if there's a variable that watches when I sneeze, and gets set to 1?"

It will be all too easy for me to imagine that this belief is meaningful and could be true or false:

And yet in reality - as you know from outside the Matrix - there are two shadow variables that get set when I sneeze. How can I talk about one of them, rather than the other? Why should my thought about '1' refer to their second possible value rather than their first possible value, inside the Matrix computer program? If we tried to establish a truth-value in this situation - to compare my thought to the reality inside the computer program - why compare my thought about SNEEZE_VAR to the variable YES_SNEEZE instead of NO_SNEEZE, or compare my thought '1' to the first possible value instead of the second possible value?

Under more epistemically healthy circumstances, when you talk about things that are not directly sensory experiences, you will reference a causal model of the universe that you inducted to explain your sensory experiences. Let's say you repeatedly go outside at various times of day, and your eyes and skin directly experience BRIGHT-WARM, BRIGHT-WARM, BRIGHT-WARM, DARK-COOL, DARK-COOL, etc. To explain the patterns in your sensory experiences, you hypothesize a latent variable we'll call 'Sun', with some kind of state which can change between 1, which causes BRIGHTness and WARMness, and 0, which causes DARKness and COOLness. You believe that the state of the 'Sun' variable changes over time, but usually changes less frequently than you go outside.

p(BRIGHT | Sun=1) = 0.9
p(¬BRIGHT | Sun=1) = 0.1
p(BRIGHT | Sun=0) = 0.1
p(¬BRIGHT | Sun=0) = 0.9
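For concreteness, here is the update this model licenses, assuming (purely for illustration) a 50/50 prior on the 'Sun' variable:

    # Posterior on the latent 'Sun' variable after observing BRIGHT, using the
    # likelihoods above and an assumed 0.5/0.5 prior (illustrative choice).
    p_bright_given_sun = {1: 0.9, 0: 0.1}
    prior = {1: 0.5, 0: 0.5}

    joint = {s: prior[s] * p_bright_given_sun[s] for s in (0, 1)}
    posterior = {s: joint[s] / sum(joint.values()) for s in (0, 1)}
    print(posterior)   # approximately {1: 0.9, 0: 0.1}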

Standing here outside the Matrix, we might be tempted to compare your beliefs about "Sun = 1" to the real universe's state regarding the visibility of the sun in the sky (or rather, the Earth's rotational position).

But even if we compress the sun's visibility down to a binary categorization, how are we to know that your thought "Sun = 1" is meant to correspond to the sun being visible in the sky, rather than the sun being occluded by the Earth? Why the first state of the variable, rather than the second state?

How indeed are we to know that this thought "Sun = 1" is meant to compare to the sun at all, rather than an anteater in Venezuela?

Well, because that 'Sun' thingy is supposed to be the cause of BRIGHT and WARM feelings, and if you trace back the cause of those sensory experiences in reality you'll arrive at the sun that the 'Sun' thought allegedly corresponds to. And to distinguish between whether the sun being visible in the sky is meant to correspond to 'Sun'=1 or 'Sun'=0, you check the conditional probabilities for that 'Sun'-state giving rise to BRIGHT - if the actual sun being visible has a 95% chance of causing the BRIGHT sensory feeling, then that true state of the sun is intended to correspond to the hypothetical 'Sun'=1, not 'Sun'=0.

Or to put it more generally, in cases where we have...

...then the correspondence between map and territory can at least in principle be point-wise evaluated by tracing causal links back from sensory experiences to reality, and tracing hypothetical causal links from sensory experiences back to hypothetical reality. We can't directly evaluate that truth-condition inside our own thoughts; but we can perform experiments and be corrected by them.

Being able to imagine that your thoughts are meaningful and that a correspondence between map and territory is being maintained, is no guarantee that your thoughts are true. On the other hand, if you can't even imagine within your own model how a piece of your map could have a traceable correspondence to the territory, that is a very bad sign for the belief being meaningful, let alone true. Checking to see whether you can imagine a belief being meaningful is a test which will occasionally throw out bad beliefs, though it is no guarantee of a belief being good.


Okay, but what about the idea that it should be meaningful to talk about whether or not a spaceship continues to exist after it travels over the cosmological horizon? Doesn't this theory of meaningfulness seem to claim that you can only sensibly imagine something that makes a difference to your sensory experiences?

No. It says that you can only talk about events that your sensory experiences pin down within the causal graph. If you observe enough protons, electrons, neutrons, and so on, you can pin down the physical generalization which says, "Mass-energy is neither created nor destroyed; and in particular, particles don't vanish into nothingness without a trace." It is then an effect of that rule, combined with our previous observation of the ship itself, which tells us that there's a ship that went over the cosmological horizon and now we can't see it any more.

To navigate referentially to the fact that the ship continues to exist over the cosmological horizon, we navigate from our sensory experience up to the laws of physics, by talking about the cause of electrons not blinking out of existence; we also navigate up to the ship's existence by tracing back the cause of our observation of the ship being built. We can't see the future ship over the horizon - but the causal links down from the ship's construction, and from the laws of physics saying it doesn't disappear, are both pinned down by observation - there's no difficulty in figuring out which causes we're talking about, or what effects they have.[2]


All righty-ighty, let's revisit that meditation:

"Does your rule forbid epiphenomenalist theories of consciousness in which consciousness is caused by neurons, but doesn't affect those neurons in turn? The classic argument for epiphenomenal consciousness is that we can imagine a universe where people behave exactly the same way, but there's nobody home - no awareness, no consciousness, inside the brain. For all the atoms in this universe to be in the same place - for there to be no detectable difference internally, not just externally - 'consciousness' would have to be something created by the atoms in the brain, but which didn't affect those atoms in turn. It would be an effect of atoms, but not a cause of atoms. Now, I'm not so much interested in whether you think epiphenomenal theories of consciousness are true or false - rather, I want to know if you think they're impossible or meaningless a priori based on your rules."

The closest theory to this which definitely does seem coherent - i.e., it's imaginable that it has a pinpointed meaning - would be if there was another little brain living inside my brain, made of shadow particles which could affect each other and be affected by my brain, but not affect my brain in turn. This brain would correctly hypothesize the reasons for its sensory experiences - that there was, from its perspective, an upper tier of particles interacting with each other that it couldn't affect. Upper-tier particles are observable, i.e., can affect lower-tier senses, so it would be possible to correctly induct a simplest explanation for them. And this inner brain would think, "I can imagine a Zombie Universe in which I am missing, but all the upper-tier particles go on interacting with each other as before." If we imagine that the upper-tier brain is just a robotic sort of agent, or a kitten, then the inner brain might justifiably imagine that the Zombie Universe would contain nobody to listen - no lower-tier brains to watch and be aware of events.

We could write that computer program, given significantly more knowledge and vastly more computing power and zero ethics.

But this inner brain composed of lower-tier shadow particles cannot write upper-tier philosophy papers about the Zombie universe. If the inner brain thinks, "I am aware of my own awareness", the upper-tier lips cannot move and say aloud, "I am aware of my own awareness" a few seconds later. That would require causal links from lower particles to upper particles.

If we try to suppose that the lower tier isn't a complicated brain with an independent reasoning process that can imagine its own hypotheses, but just some shadowy pure experiences that don't affect anything in the upper tier, then clearly the upper-tier brain must be thinking meaningless gibberish when the upper-tier lips say, "I have a lower tier of shadowy pure experiences which did not affect in any way how I said these words." The deliberating upper brain that invents hypotheses for sense data, can only use sense data that affects the upper neurons carrying out the search for hypotheses that can be reported by the lips. Any shadowy pure experiences couldn't be inputs into the hypothesis-inventing cognitive process. So the upper brain would be talking nonsense.

There's a version of this theory in which the part of our brain that we can report out loud - the part which invents hypotheses to explain sense data and manifests physically visible papers about Zombie universes - has for no explained reason invented a meaningless theory of shadow experiences, which is experienced by the shadow part as a meaningful and correct theory. So that if we look at the "merely physical" slice of our universe, philosophy papers about consciousness are meaningless, and the physical part of the philosopher is saying things their physical brain couldn't possibly know even if they were true. And yet our inner experience of those philosophy papers is meaningful and true. In a way that couldn't possibly have caused me to physically write the previous sentence, mind you. And yet your experience of that sentence is also true even though, in the upper tier of the universe where that sentence was actually written, it is not only false but meaningless.

I'm honestly not sure what to say when a conversation gets to that point. Mostly you just want to yell, "Oh, for the love of Belldandy, will you just give up already?" or something about the importance of saying oops.

(Oh, plus the unexplained correlation violates the Markov condition for causal models.)

Maybe my reply would be something along the lines of, "Okay... look... I've given my account of a single-tier universe in which agents can invent meaningful explanations for sense data, and when they build accurate maps of reality there's a known reason for the correspondence... if you want to claim that a different kind of meaningfulness can hold within a different kind of agent divided into upper and lower tiers, it's up to you to explain what parts of the agent are doing which kinds of hypothesizing and how those hypotheses end up being meaningful and what causally explains their miraculous accuracy so that this all makes sense."

But frankly, I think people would be wiser to just give up trying to write sensible philosophy papers about lower causal tiers of the universe that don't affect the philosophy papers in any way.


Meditation: If we can only meaningfully talk about parts of the universe that can be pinned down inside the causal graph, where do we find the fact that 2 + 2 = 4? Or did I just make a meaningless noise, there? Or if you claim that "2 + 2 = 4" isn't meaningful or true, then what alternate property does the sentence "2 + 2 = 4" have which makes it so much more useful than the sentence "2 + 2 = 3"?


Mainstream status.


 [1] Well, it seems imaginable so long as you toss most of quantum physics out the window and put us back in a classical universe. For particles to not be affected by us, they'd need their own configuration space such that "which configurations are identical" was determined by looking only at those particles, and not looking at any lower-tier particles entangled with them. If you don't want to toss QM out the window, it's actually pretty hard to imagine what an upper-tier particle would look like.

 [2] This diagram treats the laws of physics as being just another node, which is a convenient shorthand, but probably not a good way to draw the graph. The laws of physics really correspond to the causal arrows F_i, not the causal nodes X_i. If you had the laws themselves - the function from past to future - be an X_i of variable state, then you'd need meta-physics to describe the F_physics arrows for how the physics-stuff X_physics could affect us, followed promptly by a need for meta-meta-physics et cetera. If the laws of physics were a kind of causal stuff, they'd be an upper tier of causality - we can't appear to affect the laws of physics, but if you call them causes, they can affect us. In Matrix terms, this would correspond to our universe running on a computer that stored the laws of physics in one area of RAM and the state of the universe in another area of RAM; the first area would be an upper causal tier and the second area would be a lower causal tier. But the infinite regress from treating the laws of determination as causal stuff makes me suspicious that it might be an error to treat the laws of physics as "stuff that makes stuff happen and happens because of other stuff". When we trust that the ship doesn't disappear when it goes over the horizon, we may not be navigating to a physics-node in the graph, so much as we're navigating to a single F_physics that appears in many different places inside the graph, and whose previously unknown function we have inferred. But this is an unimportant technical quibble on Tuesdays, Thursdays, Saturdays, and Sundays. It is only an incredibly deep question about the nature of reality on Mondays, Wednesdays, and Fridays, i.e., less than half the time.

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Proofs, Implications, and Models"

Previous post: "Stuff That Makes Stuff Happen"

242 comments

Epiphenomenal theories of consciousness are kind of silly, but here's another situation I can wonder about... some cellular automata rules, including the Turing-complete Conway's Game of Life, can have different "pasts" that can lead to the same present. From the point of view of a being living in such a universe (one in which information can be destroyed), is there a fact of the matter as to which "past" actually happened?

I had always thought that our physical universe had this property as well, i.e. the Everett multiverse branches into the past as well as into the future.

If you take a single branch and run it backward, you'll find that it diverges into a multiverse of its own. If you take all the branches and run them backward, their branches will cohere instead of decohering, cancel out in most places, and miraculously produce only the larger, more coherent blobs of amplitude they started from. Sort of like watching an egg unscramble itself.

If you take all the branches and run them backward, their branches will cohere instead of decohering, cancel out in most places, and miraculously produce only the larger, more coherent blobs of amplitude they started from.

And the beings in them will only have memories of further-cohered (further "pastward") events, just as if you didn't run anything backwards.

And at the beginning of the universe we have a set of states which just point time-backwards at each other, which is why we cannot meaningfully go any further backwards in time.

Something like:
A1 goes with probability 1% to B1, 1% to C1, and 98% to A2.
B1 goes with probability 1% to A1, 1% to C1, and 98% to B2.
C1 goes with probability 1% to A1, 1% to B1, and 98% to C2.

So if you ask about the past of A2, you get A1, which is the part that makes intuitive sense for us. But trying to go deeper in the past just gives us that the past of A1 is B1 or C1, and the past of B1 is A1 or C1, etc. Except that the change does not clearly happen in one moment (A2 has a rather well-defined past, A1 does not), but more gradually.
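For concreteness, a sketch of this toy model (the transition probabilities above, plus a uniform prior over A1, B1, C1 assumed only for illustration), computing the "past" of a state by Bayes:

    # Forward transitions from the toy model above; the uniform prior over
    # A1, B1, C1 is an assumption made only for illustration.
    T = {
        "A1": {"B1": 0.01, "C1": 0.01, "A2": 0.98},
        "B1": {"A1": 0.01, "C1": 0.01, "B2": 0.98},
        "C1": {"A1": 0.01, "B1": 0.01, "C2": 0.98},
    }
    prior = {s: 1 / 3 for s in T}

    def past_of(present):
        joint = {s: prior[s] * T[s].get(present, 0.0) for s in T}
        total = sum(joint.values())
        return {s: p / total for s, p in joint.items() if p}

    print(past_of("A2"))   # {'A1': 1.0}: a well-defined past
    print(past_of("A1"))   # {'B1': 0.5, 'C1': 0.5}: an ambiguous past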

As I understand it, this is not how standard physics models the beginning of time.

I don't think anyone takes seriously the way standard physics models the beginning of time (temperature and density of the universe approaching infinity as its age approaches zero), anyway, as it's most likely incorrect due to quantum gravity effects.

I don't think anyone takes seriously the way standard physics models the beginning of time (temperature and density of the universe approaching infinity

This is a correct usage of terminology but the irony still made me smile.

I think wedrifid is pointing to the irony in saying that the 'standard' model is (on some issue) standardly rejected.

Oh. I tried to find something, but the only thing that partially pattern-matches it was the Hartle–Hawking state. If we mix it with the "universe as a Markov chain over particle configurations" model, it could lead to something like this. Or could not.

Interesting question! I'd say that you could refer to the possibilities as possibilities, e.g. in a debate over whether a particular past would in fact have led to the present, but to speak of the 'actual past' might make no sense because you couldn't get there from there... no, actually, I take that back, you might be able to get there via simplicity. I.e. if there's only one past that would have evolved from a simply-tiled start state for the automaton.

But does it really matter? If both states are possible, why not just say "my past contains ambiguity?"

With quantum mechanics, even though the "future" itself (as a unified wavefunction) evolves forward as a whole, the bit-that-makes-up-this-pseudofactor-of-me has multiple possible outcomes. We live with future ambiguity just fine, and quantum mechanics forces us to say "both experienced futures must be dealt with probabilistically". Even though the mechanism is different, what's wrong with treating the "past" as containing the same level of branching as the future?

EDIT: From a purely global, causal perspective, I understand the desire to be able to say, "both X and Y can directly cause Z, but in point of fact, this time it was Y." But you're inside, so you don't get to operate as a thing that can distinguish between X and Y, and this isn't necessarily an "orbital teapot" level of implausibility. If configuration Y is 10^4 more likely as a 'starting' configuration than configuration X according to your understanding of how starting configurations are chosen, then sure - go ahead and assert that it was (or may-as-well-have-been) configuration Y that was your "actual" past - but if the configuration probabilities are more like 70%/30%, or if your confidence that you understand how starting configurations are chosen is low enough, then it may be better to just swallow the ambiguity.

EDIT2: Coming from a completely different angle, why assert that one or the other "happened", rather than looking at it as a kind of path-integral? It's a cellular automaton, instead of a quantum wave-function, which means that you're summing discrete paths instead of integrating infinitesimals, but it seems (at first glance) that the reasoning is equally applicable.

If both states are possible, why not just say "my past contains ambiguity?"

Ambiguity it is, but we usually want to know the probabilities. If I tell you that whether or not you win a lottery tomorrow is "ambiguous", you would not be satisfied with such an answer, and you would ask how likely it is that you win. And this question somehow makes sense even if the lottery is decided by a quantum event, so you know that each future happens in some Everett branch.

Similarly, in addition to knowing that the past is ambiguous, we should ask how likely the individual pasts are. In our universe you would want to know how likely the pasts P1 and P2 are to become NOW. Conway's Game of Life does not branch time-forward, so if you have two valid pasts, their probabilities of becoming NOW are 100% each.

But that is only a part of the equation. The other part is the prior probabilities of P1 and P2. Even if both P1 and P2 deterministically evolve to NOW, their prior probabilities influence how likely it is that NOW really evolved from each of them.

I am not sure what the equivalent of Solomonoff induction for Conway's Game of Life would be. Starting with a finite number of "on" cells, where each additional "on" cell decreases the prior probability of the configuration? Starting with an infinite plane where each cell has a 50% probability of being "on"? Or an infinite plane with each cell having probability p of being "on", where p has the property that after one step in such a plane, the average ratio of "on" cells remains the same (p being a kind of eigenvalue of the rules)?

But the general idea is that if P1 is somehow "generally more likely to happen" than P2, we should consider P1 more likely than P2 to be the past of NOW, even if both P1 and P2 deterministically evolve to NOW.

In the Game of Life, a single live cell with no neighbours will become a dead cell in the next step. Therefore, any possible present state that has at least one past state has an infinite number of one-step-back states (which differ from the one state merely in having one or more neighbourless cells at random locations, far enough from anything else to have no effect).

Some of these one-step-back states may end up having evolved from simpler starting tilesets than the one with no vanishing cells.
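A minimal sketch of the underlying point (standard Life rules on a small fixed grid, purely illustrative): the all-dead grid and a grid containing one isolated live cell both step to the same all-dead present.

    def life_step(grid):
        # One step of Conway's Game of Life on a fixed grid; cells off the edge count as dead.
        rows, cols = len(grid), len(grid[0])
        def live_neighbours(r, c):
            return sum(grid[r + dr][c + dc]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0)
                       and 0 <= r + dr < rows and 0 <= c + dc < cols)
        return [[1 if (grid[r][c] and live_neighbours(r, c) in (2, 3))
                 or (not grid[r][c] and live_neighbours(r, c) == 3) else 0
                 for c in range(cols)] for r in range(rows)]

    empty = [[0] * 5 for _ in range(5)]
    lonely = [[0] * 5 for _ in range(5)]
    lonely[2][2] = 1   # a single live cell with no neighbours

    print(life_step(empty) == life_step(lonely))   # True: two different pasts, one present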

no, actually, I take that back, you might be able to get there via simplicity. I.e. if there's only one past that would have evolved from a simply-tiled start state for the automaton.

The simplest start state might actually be a program that simulates the evolution of every possible starting state in parallel. If time and space are unbounded and an entity is more complex than the shortest such program then it is more likely that the entity is the result of the program and not the result of evolving from another random state.

I am unable to see the appeal of a view in which there is no fact of the matter. It seems to me that there is a fact of the matter concerning the past, even if it is impossible for us to know. This is not similar to the case where sneezing alters two shadow variables, and it is impossible for us to meaningfully refer to variable 1 as opposed to variable 2; the past has a structure, so assertions will typically have definite referents.

The Standard Model of particle physics with MWI is time-symmetric (to be precise: CPT symmetric) and conserves information. If you define the precise state at one point in time, you can calculate the unique past which led to that state and the unique future which will evolve from that state. Note that for general states, "past" and "future" are arbitrary definitions.

(Which is why I specified a different set of laws of physics.)

Or, a Boltzmann brain that flickered into existence with memories of a past that never happened.

In that particular case, "never happened" has some weird ontological baggage. If a simulated consciousness is still conscious, then isn't its simulated past still a past?

Perhaps "didn't happen" in the sense that its future reality will not conform to its memory-informed expectations, but it seems like, if those memories form a coherent 'past', then in a simulationist sense that past did happen, even if it wasn't simulated with perfect fidelity.

This is actually one of the reasons I have to doubt Cryonics. You can talk about nano-tech being able to "reverse" the damage, but it's possible (and I think likely), that it's very hard to go from damaged states to the specific non-damaged state that actually constitutes your consciousness/memory.

Assuming that "you" are a point in consciousness phase-space, and not a "smear". If "you-ness" is a locus of similar-but-slightly-different potential states, then "mostly right" is going to be good enough.

And, given that every morning when you wake up, you're different-but-still-you, I'd say that there's strong evidence that "you-ness" is a locus of similar-but-slightly-different potential states, rather than a singular point.

This means, incidentally, that it may be possible to resurrect people without a physical copy of their brains at all, if enough people remember them well enough when the technology becomes available.

Of course, since it's a smear, the question becomes "where do you want to draw the line between Bob and not-Bob?" - since whatever you create will believe it's Bob, and will act the way everyone alive remembers that Bob acted, and the "original" isn't around to argue (assuming you believe in concepts like "original" to begin with, but if you do, you have some weirder paradoxes to deal with).

Which is why it's better for there to be more people signed up, but not actually being frozen yet. The more money they get, and the later you get frozen, the better the odds. If immortality is something you want, this still seems like the best gamble.

Just for the Least Convenient World, what if the zombies build a supercomputer and simulate random universes, and find that in 98% of simulated universes life forms like theirs do have shadow brains, and that the programs for the remaining 2% are usually significantly longer?

How can the version without shadow brains be significantly longer? Even in the worst possible world, it seems like the 2% of non-shadow-brain programs could be encoded by copying their corresponding shadow-brain programs and adding a few lines telling the computer how to garbage-collect shadows using a straightforward pruning algorithm on the causal graph.
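As a sketch of what such a pruning pass could look like (illustrative only): repeatedly delete any node with no outgoing causal arrows, i.e. anything that affects nothing.

    def prune_epiphenomena(arrows):
        # Repeatedly drop nodes with no outgoing causal arrows (things that affect
        # nothing), returning the arrows that survive. Illustrative sketch only.
        arrows = set(arrows)
        while True:
            nodes = {n for edge in arrows for n in edge}
            sinks = {n for n in nodes if not any(src == n for src, _ in arrows)}
            if not sinks:
                return arrows
            arrows = {(s, t) for (s, t) in arrows if t not in sinks}

    # Balls affect each other and cast shadows; shadows affect nothing.
    world = {("ball1", "ball2"), ("ball2", "ball1"), ("ball1", "shadow1"), ("ball2", "shadow2")}
    print(prune_epiphenomena(world))   # only the ball-to-ball arrows remain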

By the programs being short enough in the first place that those few lines still double the length? By the universe-like part not being straightforwardly encoded, so that to distinguish anything about it you first need a long AI-like interpreter just to get there?

That would strongly indicate that something caused the zombies to write a program for generating simulations that was likely to create simulated shadow brains in most of the simulations. (For example, the compiler's built-in prover for things like type checking was inefficient and left behind a lot of baggage that produced second-tier shadow brains in all but 2% of simulations.) It might cause the zombies to conclude that they probably had shadow brains and start talking about the possibility of shadow brains, but it should be equally likely to do that whether the shadow brains were real or not. (Which means any zombie with a sound epistemology would give no more credence to the existence of shadow brains after the simulation caused other zombies to start talking about shadow brains than it would if the source of the discussion of shadow brains had come from a random number generator producing a very large number, and that large number being interpreted as a string in some normal encoding for the zombies, producing a paper that discussed shadow brains. Shadow brains in that world should be an idea analogous to Russell's teapot, astrology, or the invisible pink unicorn in our world.)

Now, if there were some outside universe capable of looking at all of the universes and seeing some universes with shadow brains and some without, and in the universes with shadow brains zombies were significantly more likely to produce simulations that created shadow brains than in the universes without -- that is, if the universes in which zombies had shadow brains were the ones where they were much more likely to create simulations that predicted shadow brains similar to their actual shadow brains -- then we would be back to seeing exactly what we see when philosophers talk about shadow brains directly: namely, the shadow brains are causing the zombies to imagine shadow brains, which means that the shadow brains aren't really shadow brains, because they are affecting the world (with probability 1).

Either the result of the simulations points to gross inefficiency somewhere (their simulations predicted something that their simulations shouldn't have been able to predict), or to the shadow brains not really being shadow brains, because they are causally impacting the world. (This is slightly more plausible than philosophers postulating shadow brains correctly for no reason, only because we don't necessarily know that there is anything driving the zombies to produce simulations efficiently; whereas we know in our world that we can assume that brains typically produce non-gibberish because enormous selective pressures have caused brains to create non-gibberish.)

I was talking about the logical counter-factual, where it genuinely is true and knowably so through rationality.

It might be easier to think about it like this: there is a large number of civilizations in T4, each of which can observe that almost all the others have shadow brains, but none of which can see if they have them themselves.

Some thoughts about "epiphenomena" in general, though not related to consciousness.

Suppose there are only finitely many events in the entire history of the universe (or multiverse), so that the universe can be represented by a finite causal graph. If it is an acyclic graph (no causal cycles), then there must be some nodes which are effects but not causes - that is, they are epiphenomena. But then why not posit a smaller graph with the epiphenomenal nodes removed, since they don't do anything? And then that reduced graph is also finite, and also has epiphenomenal nodes... so why not remove those?

So, is the conclusion that the best model of the universe is a strictly infinite graph, with no epiphenomenal nodes that can be removed e.g. no future big crunches or other singularities? This seems like a dubious piece of armchair cosmology.

Or are there cases where the larger finite graph (with the epiphenomenal nodes) is strictly simpler as a theory than the reduced graph (with the epiphenomena removed), so that Occam's razor tells us to believe in the larger graph? But then Occam's razor is justifying a belief in epiphenomena, which sounds rather odd when put like that!

The last nodes are never observed by anyone, but they descend from the same physics, the same F_physics, that has previously been pinned down, or so I assume. You can thus meaningfully talk about them for the same reason you can meaningfully talk about a spaceship going over the cosmological horizon. What we're trying to avoid is SNEEZE_VARs or lower qualia where there's no way that the hypothesis-making agent could ever have observed, inducted, and pinned down the causal mechanism - where there's no way a correspondence between map and territory could possibly be maintained.

Following this reasoning, if there is a finite causal state machine, then your pruning operation would eventually remove me, you, the human race, the planet Earth.

Now, from inside the universe, I cannot tell whether your hypothesis of a finite state graph universe is true or not - but I do have a certain self-interest in not being removed from existence. I find, therefore, that I am scrambling for justifications for why the finite-state-model universe nodes containing myself are somehow special, that they should not be removed (to be fair, I extend the same justifications to all sentient life).

Is it coherent to imagine a universe in which a real entity can be an effect but not a cause?

Your favorite example of event horizons, cosmological or otherwise, is like that. GR suggests that there can be a ring singularity inside an eternal spinning black hole (but not one spun up from rest), near/around which you can go forever without being crushed. (It also suggests that there could be closed timelike curves around it, but I'll ignore this for now.) So maybe there are particles/objects/entities living there.

Stuff thrown into such a black hole can certainly affect the hypothetical entities living inside. Like a meteor shower from the outside. But the outside is not affected by anything happening inside, the horizon prevents it.

Fair, but quantum mechanics gives us Hawking radiation, which may or may not provide information (in principle) about what went into the black hole.

Also, there are causal arrows from the black hole to everything it pulls on, and those are ultimately the sum of causal arrows from each particle in the black hole even if from the outside we can't discern the individual particles.

Stuff thrown into such a black hole can certainly affect the hypothetical entities living inside. Like a meteor shower from the outside. But the outside is not affected by anything happening inside, the horizon prevents it.

That is not entirely true: stuff thrown into the black hole increases the horizon area and possibly modifies its geometry, and in return the horizon affects the spatial infinity (the area around the horizon). The debate is about how much information the horizon deletes in the process. The same is for the cosmological horizon, which is effectively just another kind of singularity.

stuff thrown into the black hole increases the horizon area and possibly modifies its geometry, and in return the horizon affects the spatial infinity (the area around the horizon).

That's outside affecting outside, not inside affecting outside.

The same is for the cosmological horizon, which is effectively just another kind of singularity.

Horizon is not a singularity.

That's outside affecting outside, not inside affecting outside.

Hmm... let's taboo "outside" and "inside". The properties of stuff within the horizon affect the properties of the horizon, which in turn affect the properties of space-matter at spatial infinity. Is this formulation more acceptable?

Horizon is not a singularity.

Right, I'll rephrase: the same goes for the cosmological horizon, which effectively 'surrounds' just another kind of singularity.

The properties of stuff within the horizon affect the properties of the horizon

Wrong. There could be tons of different things going on inside, absolutely indistinguishable from outside, which only sees mass, electric charge and angular momentum. There is no causal connection from inside to outside whatsoever, barring FTL communication.

Right, I'll rephrase: the same goes for the cosmological horizon, which effectively 'surrounds' just another kind of singularity.

Wrong again. There is no singularity of any kind behind the cosmological horizon (which is not a closed surface to begin with, so it cannot "surround" anything). Well, there might be black holes and stuff, or there might not be, but there is certainly not a requirement of anything singular being there. Consider googling the definition of singularity in general relativity.

Wrong. There could be tons of different things going on inside, absolutely indistinguishable from outside, which only sees mass, electric charge and angular momentum.

Also entropy. Anyway, those are determined by the mass, electrical charge and angular momentum of the matter that fell inside. We may not want to call it a causal connection, but it's certainly a case of properties within determining properties outside.

There is no causal connection from inside to outside whatsoever, barring FTL communication.

There is no direct causal connection, meaning a worldline from the inside to the outside of the black hole. But even if the horizon screens almost all of the infalling matter properties, it doesn't screen everything (and probably, but this is a matter of quantum gravity, doesn't screen nothing).

Wrong again. There is no singularity of any kind behind the cosmological horizon (which is not a closed surface to begin with, so it cannot "surround" anything). Well, there might be black holes and stuff, or there might not be, but there is certainly not a requirement of anything singular being there. Consider googling the definition of singularity in general relativity.

I'll admit to not having much knowledge about this specific theme, and I'll educate myself more properly, but in my earlier sentence I used "singularity" as a mathematical term, referring to a region of spacetime in which the GR equations acquire a singular value, not specifically to a gravitational singularity like a black hole or a domain wall. In the case of most commonplace cosmological horizons, this region is simply space-like infinity.

Wrong. There could be tons of different things going on inside, absolutely indistinguishable from outside, which only sees mass, electric charge and angular momentum. There is no causal connection from inside to outside whatsoever, barring FTL communication.

Unless the "inside" was spontaneously materialized into existence while simultaneously a different chunk of the singularity's mass blinked out of existence in manners which defy nearly all the physics I know, then there still remains a causal connection from the "disappearance" of this stuff that's "inside" from the world "outside" at some point in outside time frames, AFAICT. This disappearance of specific pieces of matter and energy seems to more than qualify as a causal effect, when compared to counterfactual futures where they do not disappear.

Also, the causal connection [Inside -> Mass -> Outside] pretty much looks like a causal connection from inside to outside to me. There's this nasty step in the middle that blurs all the information such that under most conceivable circumstances there's no way to tell which of all possible insides is the "true" one, but combined with the above about matter disappearance can still let you concentrate your probability mass, compared to meaningless epiphenomena that cover the entire infinite hypothesis space (minus one single-dimensional line representing its interaction with anything that interacts with our reality in any way) with equal probability because there's no way it could even in principle affect us even in CTCs, FTL, timeless or n-dimensional spaces, etc.

(Note: I'm not an expert on mind-bending hypothetical edge cases of theoretical physics, so I'm partially testing my own understanding of the subject here.)

I'm partially testing my own understanding of the subject here.

Most of what you said is either wrong or meaningless, so I don't know where to begin unraveling it, sorry. Feel free to ask simple questions of limited scope if you want to learn more about black holes, horizons, singularities and related matters. The subject is quite non-trivial and often counter-intuitive.

Hmm, alright.

In more vague, amateur terms, isn't the whole horizon thing always the same case, i.e. it's causally linked to the rest of the universe by observations in the past and inferences using presumed laws of physics, even if the actual state of things beyond the horizon (or inside it or whatever) doesn't change what we can observe?

The event horizon in an asymptotically flat spacetime (which is not quite the universe we live in, but a decent first step) is defined as the causal past of the infinite causal future. This definition guarantees that we see no effects whatsoever from the part of the universe that is behind the event horizon. The problem with this definition is that we have to wait forever to draw the horizon accurately. Thus there are several alternative horizons which are more instrumentally useful for theorem proving and/or numerical simulations, but are not in general identical to the event horizon. The cosmological event horizon is a totally different beast (it is similar to the Rindler horizon, used to derive the Unruh effect), though it does share a number of properties with the black hole event horizon. There are further exciting complications once you get deeper into the subject.