Epiphenomenal theories of consciousness are kind of silly, but here's another situation I can wonder about... some cellular automata rules, including the Turing-complete Conway's Game of Life, can have different "pasts" that can lead to the same present. From the point of view of a being living in such a universe (one in which information can be destroyed), is there a fact of the matter as to which "past" actually happened?
I had always thought that our physical universe had this property as well, i.e. the Everett multiverse branches into the past as well as into the future.
If you take a single branch and run it backward, you'll find that it diverges into a multiverse of its own. If you take all the branches and run them backward, their branches will cohere instead of decohering, cancel out in most places, and miraculously produce only the larger, more coherent blobs of amplitude they started from. Sort of like watching an egg unscramble itself.
Just for the Least Convenient World, what if the zombies build a supercomputer and simulate random universes, and find that in 98% of simulated universes life forms like theirs do have shadow brains, and that the programs for the remaining 2% are usually significantly longer?
Is it coherent to imagine a universe in which a real entity can be an effect but not a cause?
Your favorite example of event horizons, cosmological or otherwise, is like that. GR suggests that there can be a ring singularity inside an eternal spinning black hole (but not one spun up from rest), near/around which you can go forever without being crushed. (It also suggests that there could be closed timelike curves around it, but I'll ignore this for now.) So maybe there are particles/objects/entities living there.
Stuff thrown into such a black hole can certainly affect the hypothetical entities living inside. Like a meteor shower from the outside. But the outside is not affected by anything happening inside, the horizon prevents it.
Some thoughts about "epiphenomena" in general, though not related to consciousness.
Suppose there are only finitely many events in the entire history of the universe (or multiverse), so that the universe can be represented by a finite causal graph. If it is an acyclic graph (no causal cycles), then there must be some nodes which are effects but not causes, that is, they are epiphenomena. But then why not posit a smaller graph with the epiphenomenal nodes removed, since they don't do anything? And then that reduced graph is also finite, and also has epiphenomenal nodes... so why not remove those?
So, is the conclusion that the best model of the universe is a strictly infinite graph, with no epiphenomenal nodes that can be removed e.g. no future big crunches or other singularities? This seems like a dubious piece of armchair cosmology.
Or are there cases where the larger finite graph (with the epiphenomenal nodes) is strictly simpler as a theory than the reduced graph (with the epiphenomena removed), so that Occam's razor tells us to believe in the larger graph? But then Occam's razor is justifying a belief in epiphenomena, which sounds rather odd when put like that!
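A minimal sketch of the pruning argument above, assuming the finite universe is given as a directed acyclic graph (node names and representation are my own): repeatedly deleting the nodes that are effects but not causes (the sinks) eventually deletes every node, since a finite DAG always has at least one sink.

```python
# Small illustration (representation mine): nodes map to the set of nodes
# they causally affect. "Epiphenomenal" nodes are those with no outgoing
# arrows, i.e. effects that are not causes of anything further.

def prune_epiphenomena(effects):
    effects = {node: set(children) for node, children in effects.items()}
    while True:
        sinks = {node for node, children in effects.items() if not children}
        if not sinks:
            return effects
        for node in sinks:                 # remove the nodes that "don't do anything"
            del effects[node]
        for children in effects.values():  # and the arrows that pointed at them
            children -= sinks

graph = {"big_bang": {"galaxy"}, "galaxy": {"heat_death"}, "heat_death": set()}
print(prune_epiphenomena(graph))  # {} -- every node is removed in turn
```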
The last nodes are never observed by anyone, but they descend from the same physics, the same F(physics), that has previously been pinned down, or so I assume. You can thus meaningfully talk about them for the same reason you can meaningfully talk about a spaceship going over the cosmological horizon. What we're trying to avoid is SNEEZE_VARs or lower qualia where there's no way that the hypothesis-making agent could ever have observed, inducted, and pinned down the causal mechanism - where there's no way a correspondence between map and territory could possibly be maintained.
That said, I can certainly write a computer program in which there's a tier of objects affecting each other, and a second tier - a lower tier - of epiphenomenal objects which are affected by them, but don't affect them.
I would like to point out that any space-like surface (technically 3-fold) divides our universe into two such tiers.
Okay, I can see that I need to spell out in more detail one of the ideas here - namely that you're trying to generalize over a repeating type of causal link and that reference is pinned down by such generalization. The Sun repeatedly sends out light in individual Sun-events, electrons repeatedly go on traveling through space instead of vanishing; in a universe like ours, rather than the F(i) being whole new transition tables randomly generated each time, you see the same F(physics) over and over. This is what you can pin down and refer to. Any causal graph is acyclic and can be divided as you say; the surprising thing is that there are no F-types, no causal-link-types, which (over repeating time) descend from one kind of variable to another, without (over time) there being arrows also going back from that kind to the other. Yes, we're generalizing and inducting over time, otherwise it would make no sense to speak of thingies that "affect each other". No two individual events ever affect each other!
Can anyone explain why epiphenomenalist theories of consciousness are interesting? There have been an awful lot of words on them here, but I can't find a reason to care.
It seems that you get similar questions as a natural outgrowth of simple computational models of thought. E.g. if one performs Solomonoff induction on the stream of camera inputs to a robot, what kind of short programs will dominate the probability distribution over the next input? Not just programs that simulate the physics of our universe: one would also need additional code to "read off" the part of the simulated universe that corresponded to the camera inputs. That additional code looks like epiphenomenal mind-stuff. Using this framework you can pose questions like "if the camera is expected to be rebuilt using different but functionally equivalent materials, will this change the inputs Solomonoff induction predicts?" or "if the camera is about to be duplicated, which copy's inputs will be predicted by Solomonoff induction?"
If we go beyond Solomonoff induction to allow actions, then you get questions that map pretty well to debates about "free will."
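Here is a toy sketch (my own, not actual Solomonoff induction; all names hypothetical) of the decomposition described above: the predictor program factors into a world simulator plus extra "read-off" code that locates the camera inside the simulated world, and it is only that extra code that changes when you ask which copy's inputs get predicted.

```python
# Toy illustration only: a "physics" step plus a separate bridging rule that
# extracts one observer's camera input from the simulated world state.

def simulate_world(state):
    # Stand-in for a simulation of our universe's physics.
    return [(x + v, v) for (x, v) in state]

def read_off_camera(state, which_camera):
    # The extra, quasi-epiphenomenal code: it reads the world but the world
    # never reads it. Duplicating the camera means choosing which_camera.
    x, _ = state[which_camera]
    return int(x) % 256  # pretend this is an 8-bit pixel reading

state = [(0.0, 1.0), (5.0, 2.0)]   # two "cameras" drifting through space
for _ in range(3):
    state = simulate_world(state)
    print(read_off_camera(state, which_camera=0))
```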
Pretty much the same reason religion needs to be talked about. If no one had invented it, it wouldn't be useful to dispute notions of a god creating us for a divine purpose; but because many people think this indeed happened, you have to talk about it. It's especially important for reasonable discussions of AI.
Because epiphenomenalist theories are common but incorrect, and the goal of LessWrong is at least partially what its name implies.
I think the question "does consciousness affect neurons?" is as meaningful as "does the process of computation in a computer affect bits?".
FWIW, my old post 'Zombie Rationality' explores what I think the epiphenomenalist should say about the worry that "the upper-tier brain must be thinking meaningless gibberish when the upper-tier lips [talk about consciousness]"
One point to flag is that from an epiphenomenalist's perspective, mere brains never really mean anything, any more than squiggles of ink do; any meaning we attribute to them is purely derivative from the meaning of appropriately-related thoughts (which, on this view, essentially involve qualia).
Another thing to flag is that...
I can build an agent that tracks how many sheep are in the pasture using an internal mental bucket, and keeps looking for sheep until they're all returned. From an outside standpoint, this agent's mental bucket is meaningful because there's a causal process that correlates it to the sheep, and this correlation is made use of to steer the world into futures where all sheep are retrieved. And then the mysterious sensation of about-ness is just what it feels like from the inside to be that agent, with a side order of explicitly modeling both yourself and the world so that you can imagine that your map corresponds to the territory, with a side-side order of your brain making the simplifying assumption that (your map of) the map has a primitive intrinsic correspondence to (your map of) the territory.
In actuality this correspondence is not the primitive and local quality it feels like; it's maintained by the meeting of hypotheses and reality in sense data. A third party or reflecting agent would be able to see the globally maintained correspondence by simultaneously tracing back actual causes of sense data and hypothesized causes of sense data, but this is a chain property involving r...
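A minimal code sketch of the sheep-and-bucket agent described above (class and method names are my own invention): the internal count stays correlated with the pasture because the same events update both the world and the bucket, and the count is what drives the agent to keep searching.

```python
# Illustrative only: the "mental bucket" is an integer kept correlated with
# the world by the causal process of observation, then used to steer behaviour.

class SheepTracker:
    def __init__(self):
        self.bucket = 0                   # internal count of sheep still outside

    def observe_sheep_leaving(self):
        self.bucket += 1                  # world event -> belief update

    def observe_sheep_returning(self):
        self.bucket -= 1

    def should_keep_searching(self):
        return self.bucket > 0            # belief -> behaviour -> world

agent = SheepTracker()
for _ in range(3):
    agent.observe_sheep_leaving()
agent.observe_sheep_returning()
print(agent.should_keep_searching())      # True: two sheep unaccounted for
```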
The "Mainstream status" link points to /Eliezer_Yudkowsky-drafts/ and returns "Forbidden: You aren't allowed to do that."
I haven't yet happened to run across a philosophical position which says that meaningful correspondences between hypotheses and reality can only be pinned down by following Pearl-style causal links inferred as the simplest explanation of observed experiences, and that only this can allow an agent to consistently believe that its beliefs are meaningful.
In fact, I haven't seen anything at all about referential meaningfulness requiring cause-and-effect links with the phenomenon, just like I haven't seen anything about a universe being a conn...
As bryjnar points out, all the stuff you say here (subtracting out the Pearl stuff) is entailed by the causal theory of reference. The reason quick summaries of that view will seem unfamiliar is that most of the early work on the causal theory was primarily motivated by a different concern -- accounting for how our words acquire their meaning. Thus the focus on causal chains from "original acts of naming" and whatnot. However, your arguments against epiphenomenalism all hold in the causal theory.
It is true that nobody (that I know of) has developed an explicitly Pearlian causal theory of reference, but this is really accounted for by division of labor in philosophy. People working on reference will develop a causal theory of reference and use words like "cause" without specifying what they mean by it. If you ask them what they mean, they will say "Whatever the best theory of causation is. Go ask the people working on causation about that." And among the people working on causation, there are indeed philosophers who have built on Pearlian ideas. Christopher Hitchcock and James Woodward, for instance.
The issue is broached by Chalmers himself in The Conscious Mind (p. 201). He says:
... it is sometimes said that reference to an entity requires a causal connection to that entity; this is known as the causal theory of reference. If so, then it would be impossible to refer to causally irrelevant experiences.
He goes on to reject the causal theory of reference.
Here is a relevant excerpt from the SEP article on zombies:
...But, arguably, it is a priori true that phenomenal consciousness, whether actual or possible, involves being able to refer to and know about one's qualia. If that is right, any zombie-friendly account faces a problem. According to the causal theory of reference, accepted by many philosophers, reference and knowledge require us to be causally affected by what is known or referred to (Kripke 1972/80); and it seems reasonable to suppose that this too is true a priori if true at all. On that basis, in those epiphenomenalistic worlds whose conceivability seems to follow from the conceivability of zombies (worlds where qualia are inert), our counterparts cannot know about or refer to their qualia. That contradicts the assumption that phenomenal consc
EDIT: After thinking things through, I concluded that Eliezer was right, and that epiphenomalism was indeed confused and incoherent. Leaving this comment here as a record of how I came to agree with that conclusion.
...The closest theory to this which definitely does seem coherent - i.e., it's imaginable that it has a pinpointed meaning - would be if there was another little brain living inside my brain, made of shadow particles which could affect each other and be affected by my brain, but not affect my brain in turn. This brain would correctly hypothesize
After pondering both Eliezer's post and your comments for a while, I concluded that you were right, and that my previous belief in epiphenomenalism was incoherent and confused. I have now renounced it, for which I thank you both.
Hmm. I tried to write a response, but then I noticed that I was confused. Let me think about that for a while.
Well we do have one-way causal arrows. You just need to draw them through the (dun dun dun) Fourth Dimensionnnnn.
I'm not convinced I'm keeping my levels of reference straight, but if I can knowingly consistently accurately talk about epiphenomena, doesn't the structure or contents of the uncausing stuff cause me to think in this way rather than that way? I'm not sure how to formalize this intuition to tell if it's useful or trivial.
Try reading this charitably as expressing confusion about how we can (knowingly, consistently) talk about epiphenomena, since they (obviously, duh) don't cause us to think in this way rather than that way.
If we can only meaningfully talk about parts of the universe that can be pinned down inside the causal graph, where do we find the fact that 2 + 2 = 4? Or did I just make a meaningless noise, there? Or if you claim that "2 + 2 = 4" isn't meaningful or true, then what alternate property does the sentence "2 + 2 = 4" have which makes it so much more useful than the sentence "2 + 2 = 3"?
PA proves "2 + 2 = 4" using the associative property. PA does not prove "2 + 2 = 3". "2 + 2 = 4" is actually shorthand for "((1+1) + (1+1)) = (((1+1)+1)+1)". Moving stuff next to other stuff in our universe happens to follow the associative property; this is why the belief is useful.
I have myself usually seen Peano arithmetic described with 0 and the successor operation (such as in the context of actually implementing it in a computer). In this case,
S(S(0)) + S(S(0))
= S(S(S(0))) + S(0)
= S(S(S(S(0)))) + 0
= S(S(S(S(0))))
where the two theorems needed are that x + S(y) = S(x) + y and that x + 0 = x. I find this to have less incidental complexity (given that we are interested in working up from axioms, not down from conventional arithmetic) perhaps because the tree of the final expression has no branches. The first theorem can be looked at as expressing that “moving stuff results in the same stuff”, i.e. a conservation law; note that the expression has precisely the same number of nodes.
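For concreteness, here is a minimal executable version of the derivation above (the tuple encoding is my own choice): numerals are nested S(...) terms, and addition uses exactly the two theorems quoted, x + S(y) = S(x) + y and x + 0 = x.

```python
# Numerals as nested successors: 0 is (), and S(x) wraps x in a one-element tuple.
ZERO = ()
def S(x):
    return (x,)

def add(x, y):
    while y != ZERO:      # x + S(y') = S(x) + y'  (move one successor across)
        x, y = S(x), y[0]
    return x              # x + 0 = x

TWO = S(S(ZERO))
THREE = S(TWO)
FOUR = S(THREE)
print(add(TWO, TWO) == FOUR)   # True
print(add(TWO, TWO) == THREE)  # False
```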
(I like that! The idea that it follows just from the associative property and no other features of PA is quite elegant.)
Followup to: The Fabric of Real Things, Stuff That Makes Stuff Happen
Previous meditation: "Does your rule forbid epiphenomenalist theories of consciousness in which consciousness is caused by neurons, but doesn't affect those neurons in turn? The classic argument for epiphenomenal consciousness is that we can imagine a universe where people behave exactly the same way, but there's nobody home - no awareness, no consciousness, inside the brain. For all the atoms in this universe to be in the same place - for there to be no detectable difference internally, not just externally - 'consciousness' would have to be something created by the atoms in the brain, but which didn't affect those atoms in turn. It would be an effect of atoms, but not a cause of atoms. Now, I'm not so much interested in whether you think epiphenomenal theories of consciousness are true or false - rather, I want to know if you think they're impossible or meaningless a priori based on your rules."
Is it coherent to imagine a universe in which a real entity can be an effect but not a cause?
Well... there's a couple of senses in which it seems imaginable. It's important to remember that imagining things yields info primarily about what human brains can imagine. It only provides info about reality to the extent that we think imagination and reality are systematically correlated for some reason.
That said, I can certainly write a computer program in which there's a tier of objects affecting each other, and a second tier - a lower tier - of epiphenomenal objects which are affected by them, but don't affect them. For example, I could write a program to simulate some balls that bounce off each other, and then some little shadows that follow the balls around.
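A toy version of such a program might look like this (a sketch under my own simplifications; collisions between balls are omitted): the balls' update never consults the shadows, while the shadows' update reads the balls.

```python
# Two-tier toy universe: upper-tier balls bounce around a 1-D box; lower-tier
# shadows trail their balls. Note the one-way information flow: step_balls
# never looks at the shadows.

def step_balls(balls):
    new_balls = []
    for x, v in balls:
        x += v
        if x <= 0 or x >= 100:
            v = -v                         # bounce off the walls
        new_balls.append((x, v))
    return new_balls

def step_shadows(shadows, balls):
    # Each shadow drifts halfway toward its ball's current position.
    return [0.5 * (s + x) for s, (x, _) in zip(shadows, balls)]

balls, shadows = [(10, 3), (90, -5)], [0.0, 0.0]
for _ in range(5):
    balls = step_balls(balls)
    shadows = step_shadows(shadows, balls)
print(balls, shadows)
```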
But then I only know about the shadows because I'm outside that whole universe, looking in. So my mind is being affected by both the balls and shadows - to observe something is to be affected by it. I know where the shadow is, because the shadow makes pixels be drawn on screen, which make my eye see pixels. If your universe has two tiers of causality - a tier with things that affect each other, and another tier of things that are affected by the first tier without affecting them - then could you know that fact from inside that universe?
Again, this seems easy to imagine as long as objects in the second tier can affect each other. You'd just have to be living in the second tier! We can imagine, for example - this wasn't the way things worked out in our universe, but it might've seemed plausible to the ancient Greeks - that the stars in heaven (and the Sun as a special case) could affect each other and affect Earthly forces, but no Earthly force could affect them:
(Here the X'd-arrow stands for 'cannot affect'.)
The Sun's light would illuminate Earth, so it would cause plant growth. And sometimes you would see two stars crash into each other and explode, so you'd see they could affect each other. (And affect your brain, which was seeing them.) But the stars and Sun would be made out of a different substance, the 'heavenly material', and throwing any Earthly material at it would not cause it to change state in the slightest. The Earthly material might be burned up, but the Sun would occupy exactly the same position as before. It would affect us, but not be affected by us.
(To clarify an important point raised in the comments: In standard causal diagrams and in standard physics, no two individual events ever affect each other; there's a causal arrow from the PAST to FUTURE but never an arrow from FUTURE to PAST. What we're talking about here is the sun and stars over time, and the generalization over causal arrows that point from Star-in-Past to Sun-in-Present and Sun-in-Present back to Star-in-Future. The standard formalism dealing with this would be Dynamic Bayesian Networks (DBNs) in which there are repeating nodes and repeating arrows for each successive timeframe: X_1, X_2, X_3, and causal laws F relating X_i to X_(i+1). If the laws of physics did not repeat over time, it would be rather hard to learn about the universe! The Sun repeatedly sends out photons, and they obey the same laws each time they fall on Earth; rather than the F_i being new transition tables each time, we see a constant F_physics over and over. By saying that we live in a single-tier universe, we're observing that whenever there are F-arrows, causal-link-types, which (over repeating time) descend from variables-of-type-X to variables-of-type-Y (like present photons affecting future electrons), there are also arrows going back from Ys to Xs (like present electrons affecting future photons). If we weren't generalizing over time, it couldn't possibly make sense to speak of thingies that "affect each other" - causal diagrams don't allow directed cycles!)
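As a small illustration (the link types are invented for the example), the "single tier" observation is a property of the repeating arrow types of such a DBN: every causal-link-type from one kind of variable to another is matched by some link-type going back.

```python
# Type-level arrows of a hypothetical F_physics: (X, Y) means "present Xs
# affect future Ys" somewhere in the repeating structure of the DBN.
arrow_types = {
    ("photon", "electron"),
    ("electron", "photon"),
    ("star", "star"),
}

def single_tier(arrows):
    # True iff every link-type X -> Y has a matching link-type Y -> X.
    return all((y, x) in arrows for (x, y) in arrows)

print(single_tier(arrow_types))                          # True
print(single_tier(arrow_types | {("heaven", "earth")}))  # False: a two-tier world
```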
A two-tier causal universe seems easy to imagine, even easy to specify as a computer program. If you were arranging a Dynamic Bayes Net at random, would it randomly have everything in a single tier? If you were designing a causal universe at random, wouldn't there randomly be some things that appeared to us as causes but not effects? And yet our own physicists haven't discovered any upper-tier particles which can move us without being movable by us. There might be a hint here at what sort of thingies tend to be real in the first place - that, for whatever reasons, the Real Rules somehow mandate or suggest that all the causal forces in a universe be on the same level, capable of both affecting and being affected by each other.
Still, we don't actually know the Real Rules are like that; and so it seems premature to assign a priori zero probability to hypotheses with multi-tiered causal universes. Discovering a class of upper-tier affect-only particles seems imaginable[1] - we can imagine which experiences would convince us that they existed. If we're in the Matrix, we can see how to program a Matrix like that. If there's some deeper reason why that's impossible in any base-level reality, we don't know it yet. So we probably want to call that a meaningful hypothesis for now.
But what about lower-tier particles which can be affected by us, and yet never affect us?
Perhaps there are whole sentient Shadow Civilizations living on my nose hairs which can never affect those nose hairs, but find my nose hairs solid beneath their feet. (The solid Earth affecting them but not being affected, like the Sun's light affecting us in the 'heavenly material' hypothesis.) Perhaps I wreck their world every time I sneeze. It certainly seems imaginable - you could write a computer program simulating physics like that, given sufficient perverseness and computing power...
And yet the fundamental question of rationality - "What do you think you know, and how do you think you know it?" - raises the question:
How could you possibly know about the lower tier, even if it existed?
To observe something is to be affected by it - to have your brain and beliefs take on different states, depending on that thing's state. How can you know about something that doesn't affect your brain?
In fact there's an even deeper question, "How could you possibly talk about that lower tier of causality even if it existed?"
Let's say you're a Lord of the Matrix. You write a computer program which first computes the physical universe as we know it (or a discrete approximation), and then you add a couple of lower-tier effects as follows:
First, every time I sneeze, the binary variable YES_SNEEZE will be set to the second of its two possible values.
Second, every time I sneeze, the binary variable NO_SNEEZE will be set to the first of its two possible values.
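A minimal sketch of that Matrix program (the 0/1 encoding of "first" and "second" possible values is my own arbitrary choice, which is exactly the point made below): the shadow variables are written on every sneeze and never read by anything in the simulated physics.

```python
# Illustrative only: a stripped-down "Matrix" loop with two write-only
# shadow variables. Nothing downstream ever reads them.

def run_universe(events):
    # Shadow variables (initial values are my own choice; the post doesn't say).
    YES_SNEEZE = 0
    NO_SNEEZE = 1
    for event in events:
        if event == "sneeze":
            YES_SNEEZE = 1   # "the second of its two possible values"
            NO_SNEEZE = 0    # "the first of its two possible values"
        # ...the rest of the simulated physics never consults either variable...
    return YES_SNEEZE, NO_SNEEZE

print(run_universe(["wake", "sneeze", "sleep"]))  # (1, 0)
```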
Now let's say that - somehow - even though I've never caught any hint of the Matrix - I just magically think to myself one day, "What if there's a variable that watches when I sneeze, and gets set to 1?"
It will be all too easy for me to imagine that this belief is meaningful and could be true or false:
And yet in reality - as you know from outside the matrix - there are two shadow variables that get set when I sneeze. How can I talk about one of them, rather than the other? Why should my thought about '1' refer to their second possible value rather than their first possible value, inside the Matrix computer program? If we tried to establish a truth-value in this situation, to compare my thought to the reality inside the computer program - why compare my thought about SNEEZE_VAR to the variable YES_SNEEZE instead of NO_SNEEZE, or compare my thought '1' to the first possible value instead of the second possible value?
Under more epistemically healthy circumstances, when you talk about things that are not directly sensory experiences, you will reference a causal model of the universe that you inducted to explain your sensory experiences. Let's say you repeatedly go outside at various times of day, and your eyes and skin directly experience BRIGHT-WARM, BRIGHT-WARM, BRIGHT-WARM, DARK-COOL, DARK-COOL, etc. To explain the patterns in your sensory experiences, you hypothesize a latent variable we'll call 'Sun', with some kind of state which can change between 1, which causes BRIGHTness and WARMness, and 0, which causes DARKness and COOLness. You believe that the state of the 'Sun' variable changes over time, but usually changes less frequently than you go outside.
Standing here outside the Matrix, we might be tempted to compare your beliefs about "Sun = 1", to the real universe's state regarding the visibility of the sun in the sky (or rather, the Earth's rotational position).
But even if we compress the sun's visibility down to a binary categorization, how are we to know that your thought "Sun = 1" is meant to correspond to the sun being visible in the sky, rather than the sun being occluded by the Earth? Why the first state of the variable, rather than the second state?
How indeed are we to know that this thought "Sun = 1" is meant to compare to the sun at all, rather than an anteater in Venezuela?
Well, because that 'Sun' thingy is supposed to be the cause of BRIGHT and WARM feelings, and if you trace back the cause of those sensory experiences in reality you'll arrive at the sun that the 'Sun' thought allegedly corresponds to. And to distinguish between whether the sun being visible in the sky is meant to correspond to 'Sun'=1 or 'Sun'=0, you check the conditional probabilities for that 'Sun'-state giving rise to BRIGHT - if the actual sun being visible has a 95% chance of causing the BRIGHT sensory feeling, then that true state of the sun is intended to correspond to the hypothetical 'Sun'=1, not 'Sun'=0.
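A toy version of that correspondence check (all probabilities invented for illustration): pair each hypothesized 'Sun' state with whichever real state has the closest conditional probability of producing the BRIGHT experience.

```python
# Hypothesized P(BRIGHT | 'Sun' = s) versus actual P(BRIGHT | real sun state).
hypothesis = {1: 0.95, 0: 0.05}
reality = {"sun visible": 0.95, "sun occluded": 0.05}

def correspondence(hypothesis, reality):
    # Match each map-state to the territory-state whose effect on the
    # BRIGHT sensory experience it most closely predicts.
    return {
        s: min(reality, key=lambda r: abs(reality[r] - p))
        for s, p in hypothesis.items()
    }

print(correspondence(hypothesis, reality))
# {1: 'sun visible', 0: 'sun occluded'}
```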
Or to put it more generally, in cases where we have...
...then the correspondence between map and territory can at least in principle be point-wise evaluated by tracing causal links back from sensory experiences to reality, and tracing hypothetical causal links from sensory experiences back to hypothetical reality. We can't directly evaluate that truth-condition inside our own thoughts; but we can perform experiments and be corrected by them.
Being able to imagine that your thoughts are meaningful and that a correspondence between map and territory is being maintained, is no guarantee that your thoughts are true. On the other hand, if you can't even imagine within your own model how a piece of your map could have a traceable correspondence to the territory, that is a very bad sign for the belief being meaningful, let alone true. Checking to see whether you can imagine a belief being meaningful is a test which will occasionally throw out bad beliefs, though it is no guarantee of a belief being good.
Okay, but what about the idea that it should be meaningful to talk about whether or not a spaceship continues to exist after it travels over the cosmological horizon? Doesn't this theory of meaningfulness seem to claim that you can only sensibly imagine something that makes a difference to your sensory experiences?
No. It says that you can only talk about events that your sensory experiences pin down within the causal graph. If you observe enough protons, electrons, neutrons, and so on, you can pin down the physical generalization which says, "Mass-energy is neither created nor destroyed; and in particular, particles don't vanish into nothingness without a trace." It is then an effect of that rule, combined with our previous observation of the ship itself, which tells us that there's a ship that went over the cosmological horizon and now we can't see it any more.
To navigate referentially to the fact that the ship continues to exist over the cosmological horizon, we navigate from our sensory experience up to the laws of physics, by talking about the cause of electrons not blinking out of existence; we also navigate up to the ship's existence by tracing back the cause of our observation of the ship being built. We can't see the future ship over the horizon - but the causal links down from the ship's construction, and from the laws of physics saying it doesn't disappear, are both pinned down by observation - there's no difficulty in figuring out which causes we're talking about, or what effects they have.[2]
All righty-ighty, let's revisit that meditation:
"Does your rule forbid epiphenomenalist theories of consciousness in which consciousness is caused by neurons, but doesn't affect those neurons in turn? The classic argument for epiphenomenal consciousness is that we can imagine a universe where people behave exactly the same way, but there's nobody home - no awareness, no consciousness, inside the brain. For all the atoms in this universe to be in the same place - for there to be no detectable difference internally, not just externally - 'consciousness' would have to be something created by the atoms in the brain, but which didn't affect those atoms in turn. It would be an effect of atoms, but not a cause of atoms. Now, I'm not so much interested in whether you think epiphenomenal theories of consciousness are true or false - rather, I want to know if you think they're impossible or meaningless a priori based on your rules."
The closest theory to this which definitely does seem coherent - i.e., it's imaginable that it has a pinpointed meaning - would be if there was another little brain living inside my brain, made of shadow particles which could affect each other and be affected by my brain, but not affect my brain in turn. This brain would correctly hypothesize the reasons for its sensory experiences - that there was, from its perspective, an upper tier of particles interacting with each other that it couldn't affect. Upper-tier particles are observable, i.e., can affect lower-tier senses, so it would be possible to correctly induct a simplest explanation for them. And this inner brain would think, "I can imagine a Zombie Universe in which I am missing, but all the upper-tier particles go on interacting with each other as before." If we imagine that the upper-tier brain is just a robotic sort of agent, or a kitten, then the inner brain might justifiably imagine that the Zombie Universe would contain nobody to listen - no lower-tier brains to watch and be aware of events.
We could write that computer program, given significantly more knowledge and vastly more computing power and zero ethics.
But this inner brain composed of lower-tier shadow particles cannot write upper-tier philosophy papers about the Zombie universe. If the inner brain thinks, "I am aware of my own awareness", the upper-tier lips cannot move and say aloud, "I am aware of my own awareness" a few seconds later. That would require causal links from lower particles to upper particles.
If we try to suppose that the lower tier isn't a complicated brain with an independent reasoning process that can imagine its own hypotheses, but just some shadowy pure experiences that don't affect anything in the upper tier, then clearly the upper-tier brain must be thinking meaningless gibberish when the upper-tier lips say, "I have a lower tier of shadowy pure experiences which did not affect in any way how I said these words." The deliberating upper brain that invents hypotheses for sense data, can only use sense data that affects the upper neurons carrying out the search for hypotheses that can be reported by the lips. Any shadowy pure experiences couldn't be inputs into the hypothesis-inventing cognitive process. So the upper brain would be talking nonsense.
There's a version of this theory in which the part of our brain that we can report out loud, which invents hypotheses to explain sense data out loud and manifests physically visible papers about Zombie universes, has for no explained reason invented a meaningless theory of shadow experiences which is experienced by the shadow part as a meaningful and correct theory. So that if we look at the "merely physical" slice of our universe, philosophy papers about consciousness are meaningless and the physical part of the philosopher is saying things their physical brain couldn't possibly know even if they were true. And yet our inner experience of those philosophy papers is meaningful and true. In a way that couldn't possibly have caused me to physically write the previous sentence, mind you. And yet your experience of that sentence is also true even though, in the upper tier of the universe where that sentence was actually written, it is not only false but meaningless.
I'm honestly not sure what to say when a conversation gets to that point. Mostly you just want to yell, "Oh, for the love of Belldandy, will you just give up already?" or something about the importance of saying oops.
(Oh, plus the unexplained correlation violates the Markov condition for causal models.)
Maybe my reply would be something along the lines of, "Okay... look... I've given my account of a single-tier universe in which agents can invent meaningful explanations for sense data, and when they build accurate maps of reality there's a known reason for the correspondence... if you want to claim that a different kind of meaningfulness can hold within a different kind of agent divided into upper and lower tiers, it's up to you to explain what parts of the agent are doing which kinds of hypothesizing and how those hypotheses end up being meaningful and what causally explains their miraculous accuracy so that this all makes sense."
But frankly, I think people would be wiser to just give up trying to write sensible philosophy papers about lower causal tiers of the universe that don't affect the philosophy papers in any way.
Meditation: If we can only meaningfully talk about parts of the universe that can be pinned down inside the causal graph, where do we find the fact that 2 + 2 = 4? Or did I just make a meaningless noise, there? Or if you claim that "2 + 2 = 4" isn't meaningful or true, then what alternate property does the sentence "2 + 2 = 4" have which makes it so much more useful than the sentence "2 + 2 = 3"?
Mainstream status.
[1] Well, it seems imaginable so long as you toss most of quantum physics out the window and put us back in a classical universe. For particles to not be affected by us, they'd need their own configuration space such that "which configurations are identical" was determined by looking only at those particles, and not looking at any lower-tier particles entangled with them. If you don't want to toss QM out the window, it's actually pretty hard to imagine what an upper-tier particle would look like.
[2] This diagram treats the laws of physics as being just another node, which is a convenient shorthand, but probably not a good way to draw the graph. The laws of physics really correspond to the causal arrows F_i, not the causal nodes X_i. If you had the laws themselves - the function from past to future - be an X_i of variable state, then you'd need meta-physics to describe the F_physics arrows for how the physics-stuff X_physics could affect us, followed promptly by a need for meta-meta-physics et cetera. If the laws of physics were a kind of causal stuff, they'd be an upper tier of causality - we can't appear to affect the laws of physics, but if you call them causes, they can affect us. In Matrix terms, this would correspond to our universe running on a computer that stored the laws of physics in one area of RAM and the state of the universe in another area of RAM; the first area would be an upper causal tier and the second area would be a lower causal tier. But the infinite regress from treating the laws of determination as causal stuff makes me suspicious that it might be an error to treat the laws of physics as "stuff that makes stuff happen and happens because of other stuff". When we trust that the ship doesn't disappear when it goes over the horizon, we may not be navigating to a physics-node in the graph, so much as we're navigating to a single F_physics that appears in many different places inside the graph, and whose previously unknown function we have inferred. But this is an unimportant technical quibble on Tuesdays, Thursdays, Saturdays, and Sundays. It is only an incredibly deep question about the nature of reality on Mondays, Wednesdays, and Fridays, i.e., less than half the time.
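A very rough sketch of the footnote's distinction (entirely my own framing): laws-as-arrows is a fixed transition function applied at every step, while laws-as-node puts the laws into the state itself, which immediately requires a meta-law to say how the pair (laws, state) evolves.

```python
# Laws as arrows: one fixed F_physics, reused at every timestep.
def f_physics(state):
    return state + 1

# Laws as a node: the laws sit in "RAM" alongside the state, so applying them
# requires this meta-law -- and treating *it* as a node would need a meta-meta-law.
def meta_step(laws, state):
    return laws, laws(state)

state = 0
for _ in range(3):
    state = f_physics(state)
print(state)  # 3

laws, state2 = f_physics, 0
for _ in range(3):
    laws, state2 = meta_step(laws, state2)
print(state2)  # 3 -- same physics, one extra tier of machinery
```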
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: "Proofs, Implications, and Models"
Previous post: "Stuff That Makes Stuff Happen"