Illusionism makes the (unintuitive) claim that we have no first-person experience of the world or ourselves. Anthropic reasoning makes the (also unintuitive) claim that we learn something new about the world just by knowing that we exist. How do these two claims interact with each other? Is there work on this that I can read? For example, is there an illusionist account of the Sleeping Beauty Problem?


4 Answers

dadadarren

Oct 06, 2022


The two are incompatible. Anthropic reasoning makes explicit use of first-person experience in its question formulation. E.g., in the Sleeping Beauty Problem: "What is the probability that now is the first awakening?" or "What is the probability that today is Monday?" The meaning of "now" and "today" is taken to be apparent; it rests on their immediacy to the subjective experience, just as which person "I" am is inherently obvious from a first-person experience. Denying first-person experience would make anthropic problems undefined.

Another example is the Doomsday Argument, which says my birth rank, or the current generation's birth rank, is evidence for doom-soon. Without a first-person experience, it would be unclear who "me" or "the current generation" refers to.
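For concreteness, the update the Doomsday Argument relies on can be written out. A minimal sketch of the standard Self-Sampling version (the hypotheses, priors, and symbols here are illustrative, not from the answer above): two hypotheses about the total number of humans ever born, $N_s$ (doom soon) and $N_l \gg N_s$ (doom late), with equal priors, where my birth rank $r \le N_s$ is treated as a uniform draw from whichever total obtains.

```latex
% Standard SSA doomsday update (illustrative setup, not from the thread):
P(\text{soon} \mid r)
  = \frac{P(r \mid \text{soon})\,P(\text{soon})}
         {P(r \mid \text{soon})\,P(\text{soon}) + P(r \mid \text{late})\,P(\text{late})}
  = \frac{\frac{1}{N_s}\cdot\frac{1}{2}}
         {\frac{1}{N_s}\cdot\frac{1}{2} + \frac{1}{N_l}\cdot\frac{1}{2}}
  = \frac{N_l}{N_l + N_s}
  \;\approx\; 1 \quad \text{when } N_l \gg N_s .
```

The whole probability shift comes from the step that treats "my" rank as a uniform random sample, which is exactly the indexical move the answer says requires a first-person perspective.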

they're perfectly compatible; they don't even say anything about each other [edit: invalidated]. anthropics is just a question of what systems are likely. illusionism is a claim about whether systems have an ethereal self that they expose themselves to by acting; I am viciously agnostic about anything epiphenomenal like that. I would instead assert that all epiphenomenal confusions seem to me to be the confusion "why does [universe-aka-self] exist", and then there's a separate additional question of the surprise any highly efficient chemical processing sys…

dadadarren · 2y
Try this for practice: reasoning purely objectively and physically, can you recreate the anthropic paradoxes such as the Sleeping Beauty Problem? That means without resorting to any particular first-person perspective, without using words such as "I", "now", or "here", and without putting them in a unique logical position.
Shiroe · 2y
That sounds like a plausible theory. But if we reject that there is a separate first-person perspective, doesn't that entail that we should be Halfers in the SBP? Not saying it's wrong. But it does seem to me like illusionism/eliminativism has anthropic consequences.
the gears to ascension · 2y
hmm. it seems to me that the sleeping mechanism problem is missing a perspective - there are more types of question you could ask the sleeping mechanism that are of interest. I'd say the measure increased by waking is not able to make predictions about what universe it is; but that, given waking, the mechanism should estimate the average of the two universes' wake counts, and assume the mechanism has 1.5 wakings of causal impact on the environment around the awoken mechanism.

In other words, it seems to me that the decision-relevant anthropic question is how many places a symmetric process exists. inferring the properties of the universe around you, it is invalid to update about likely causal processes based on the fact that you exist; but on finding out you exist, you can update about where your actions are likely to impact - a different measure that does not allow making inferences about, eg, universal constants.

if, for example, the sleeping beauty problem is run ten times, and each time the being wakes, it is written to a log, then after the experiment there will be on average 1.5x as many logs as there are samples. but the agent should still predict 50%, because the predictive accuracy score is a question of whether the bet the agent makes can be beaten by other knowledge. when the mechanism wakes, it should know it has more action weight in one world than the other, but that doesn't allow it to update about what bet most accurately predicts the most recent sample. two thirds of the mechanism's actions occur in one world, one third in the other, but the mechanism can't use that knowledge to infer about the past.

I get the sense that I might be missing something here. the thirder position makes intuitive sense on some level, but my intuition is that it's conflating things. I've encountered the sleeping beauty problem before and something about it unsettles me - it feels like a confused question, and I might be wrong about this attempted deconfusion. but this expla…
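The log-counting claims in this comment are easy to check numerically. A minimal sketch, not from the comment itself (the function name and parameters are my own): heads yields one awakening, tails yields two, and every awakening appends an entry to a log.

```python
import random

def run_experiments(n_runs: int, seed: int = 0) -> None:
    """Simulate n_runs Sleeping Beauty experiments.

    Heads -> one awakening; tails -> two awakenings.
    Each awakening is appended to a log, as in the comment above.
    """
    random.seed(seed)
    logs = []  # one entry per awakening: the coin outcome at that awakening
    for _ in range(n_runs):
        coin = random.choice(["heads", "tails"])
        awakenings = 1 if coin == "heads" else 2
        logs.extend([coin] * awakenings)

    print(f"runs: {n_runs}, log entries: {len(logs)} "
          f"(~{len(logs) / n_runs:.2f} per run)")            # ~1.5
    tails_frac = logs.count("tails") / len(logs)
    print(f"fraction of log entries from tails-runs: {tails_frac:.3f}")  # ~2/3

run_experiments(100_000)
```

The two measures come apart just as the comment says: per run, tails happens 50% of the time; per log entry, about two thirds were written in a tails-run.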

The two are unrelated. Illusionism is specifically about consciousness (or rather its absence), while anthropics is about particular types of conditional probabilities and does not require any reference to consciousness or its absence. Denying first-person experience does not make anthropic problems any more undefined than they already are.

dadadarren · 2y
One way to understand the anthropic debate is to see the competing positions as different ways of interpreting the indexicals (such as "I", "now", "today", "our generation", etc.) in probability calculations, and those interpretations are based on the first-person perspective. Furthermore, there is the looming question of "what should be considered observers?", which lacks any logical indicator unless we bring in the concept of consciousness.

We can easily make the Sleeping Beauty Problem more undefined, for example by asking "Is the day Monday?". Before attempting to answer it, one would have to ask: "which day exactly are we talking about?". Compare that question to "Is today Monday?"; the latter is obviously more defined. Even though "now" and "today" pick out no physical feature, we inherently think the latter question is clear because we can imagine being in Beauty's perspective as she wakes up during the experiment: "today" is the day most closely connected to the first-person experience.
Shiroe · 2y
So you'd say that it's coherent to be an illusionist who rejects the Halfer position in the SBP?
JBlack · 2y
Sure. Also coherent to be an illusionist who accepts the Halfer position in the SBP. It's an underdetermined problem.
Shiroe · 2y
If I program a simulation of the SBP and run it under illusionist principles, aren't the simulated Halfers going to inevitably win on average? After all, it's a fair coin.
JBlack · 2y
It depends upon how you score it, which is why both the original problem and various decision-problem variants are underdetermined.
Shiroe · 2y
Can you explain what you mean by "underdetermined" in this context? How is there any ambiguity in resolving the payouts if the game is run as a third-person simulation?

lc

Oct 07, 2022


A computer with no first-person experience can still do anthropic reasoning. The two claims don't really interact with each other.

I can see how a computer could simulate any anthropic reasoner's thought process. But if you ran the Sleeping Beauty Problem as a computer simulation (i.e., implemented the illusionist paradigm), aren't the Halfers going to be winning on average?

Imagine the problem as a genetic algorithm with one parameter, the credence. Wouldn't the whole population converge to 0.5?
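Whether that population converges to 0.5 turns out to depend on how fitness is paid out, which is the crux of the reply below. A hedged sketch of the setup (my own construction: Brier-style fitness, truncation selection, and arbitrary population and mutation parameters):

```python
import random

def expected_fitness(p: float, per_awakening: bool) -> float:
    """Exact expected Brier score (negative squared error; higher is better)
    for credence-in-heads p. Tails yields two awakenings, so under
    per-awakening payouts the tails-branch bet is counted twice."""
    tails_weight = 2.0 if per_awakening else 1.0
    return 0.5 * (-((p - 1.0) ** 2)) + 0.5 * tails_weight * (-(p ** 2))

def evolve(per_awakening: bool, pop_size: int = 40, generations: int = 300) -> float:
    """Toy genetic algorithm over a single parameter: the credence."""
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: expected_fitness(p, per_awakening), reverse=True)
        parents = pop[: pop_size // 2]                     # truncation selection
        pop = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0.0, 0.01)))
               for _ in range(pop_size)]                   # mutation
    return sum(pop) / pop_size

random.seed(0)
print("paid once per experiment:", round(evolve(per_awakening=False), 2))  # ~0.50
print("paid once per awakening :", round(evolve(per_awakening=True), 2))   # ~0.33
```

With per-experiment payouts the population settles near 0.5; with per-awakening payouts it settles near 1/3. The simulation doesn't adjudicate between Halfers and Thirders; it shows the answer is a property of the scoring rule.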

Viliam · 2y
I think the solution to the Sleeping Beauty Problem depends on how exactly the bets are evaluated. The entire idea is that in one branch you make a bet once, but in the other branch you make a bet twice. Does it mean that if you make a correct guess in the latter branch, you win twice as much money? Or, despite making (the same) bet twice, do you only get the money once? Depending on the answer, the optimal bet probability is either 1/2 or 1/3.
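Spelled out with a proper scoring rule (my choice of the Brier penalty; any proper scoring rule gives the same optima), the two cases become, matching the simulation above:

```latex
% Paid per awakening: the tails branch is scored twice.
\mathbb{E}[\text{penalty}] = \tfrac12 (1-p)^2 + \tfrac12 \cdot 2\, p^2,
\qquad \frac{d}{dp} = -(1-p) + 2p = 0 \;\Rightarrow\; p = \tfrac13 .

% Paid once per experiment: each branch is scored once.
\mathbb{E}[\text{penalty}] = \tfrac12 (1-p)^2 + \tfrac12\, p^2,
\qquad \frac{d}{dp} = -(1-p) + p = 0 \;\Rightarrow\; p = \tfrac12 .
```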
Shiroe · 2y
You're right. I'm updating towards illusionism being orthogonal to anthropics in terms of betting behavior, though the upshot is still obscure to me.

avturchin

Oct 06, 2022


I think that anthropics beats illusionism. If there are many universes, then in some of them consciousness (= qualia) is real, and because of anthropics I will find myself only in such universes.

I didn't already know what illusionism argues, so I tried to understand it by skimming two related wiki articles that may be the ones you meant.

https://en.wikipedia.org/wiki/Illusionism_(philosophy) - this one doesn't seem like what you were talking about; it's relevant anyway, and I think the answer is undefined.

https://en.wikipedia.org/wiki/Eliminative_materialism#Illusionism - this seems like what you're talking about. The issue I always hear is: an illusion to whom? And the answer I give is effectively EC Theory: "consciousness to whom" is a confused question; "to whom" is answered by access consciousness, ie the question of when information becomes locally available to a physical process. the hard problem of consciousness boils down to "wat, the universe exists?", which is something that all matter is surprised by.

As for anthropics: I think anthropics must be rephrased into the third person to make any sense at all anyhow. you update off your own existence the same way you do on anything else: huh, the parts of me seem to have informed each other that they are a complex system; that is a surprising amount of complexity! and because we neurons have informed each other of a complex world, and therefore have access consciousness of it, to the degree that our dance of representations is able to point to shapes we will experience in the future - such that the neuron weights will light up and match them when the thing they point to occurs, and our physical implementation of approximately bayesian low-level learning can find a model for the environment -

well, that model should probably be independent of where it's applied to physics; no matter what a network senses, the universe has the same mechanisms to implement the network, and that network must figure out what those invariants are in order to work most reliably. whether that network is a cell, a bio neural net, a social net, or a computer network, the task of building quorum representation involves a patch of universe building a model of what is around it. no self is needed for that.

So, okay, I've said too many words into my speech recognition and should have used more punctuation. My point about anthropics boils down to the claim that the best way to learn about anthropics is by example. Most or all math and physics works by making larger-scale systems with different rules, by arbitrarily choosing to virtualize those rules; so a system can only learn about other things that could have been in its place by learning what things can be, and then inferring how likely that patch of stuff is and where it sits in the possibility-space of things that can be.

This is a lot of words to say: you can do anthropic reasoning in an entirely materialist-first worldview where you don't even believe mathematical objects are distinctly real, separate from physics. you don't need self-identity, because any network of interacting physical systems can reason about its own likelihood.

Alright, I said way the hell too many words in order to say the same thing enough ways that I have any chance in hell of saying what I intend. Let me know if this made any sense.