MCMC is a simpler example to ensure that we’re on the same page about the general topic of how randomness can be involved in algorithms.
Thanks for clarifying :)
Are we 100% on the same page about the role of randomness in MCMC? Is everything I said about MCMC super duper obvious from your perspective?
Yes.
If I run MCMC with a PRNG given random seed 1, it outputs 7.98 ± 0.03. If I use a random seed of 2, then the MCMC spits out a final answer of 8.01 ± 0.03. My question is: does the random seed entering MCMC “have a causal effect on the execution of the algorithm”, in whatever sense you mean by the phrase “have a causal effect on the execution of the algorithm”?
Yes, the seed has a causal effect on the execution of the algorithm by my definition. As was talked about in the comments of the original post, causal closure comes in degrees, and in this case the MCMC algorithm is somewhat causally closed from the seed. An abstract description of the MCMC system that excludes the value of the seed is still a useful abstract description of that system - you can reason about what the algorithm is doing, predict the output within the error bars, etc.
In contrast, the algorithm is not very causally closed from, say, some function f() that is called a bunch of times on each iteration of the MCMC. If we leave f() out of our abstract description of the MCMC system, we don't have a very good description of that system; we can't work out much about what the output would be given an input.
If the 'mental software' I talk about is as causally closed from some biophysics as the MCMC is causally closed from the seed, then my argument in that post is weak. If, however, it's only as causally closed from the biophysics as our program is from f(), then it's not very causally closed, and my argument in that post is stronger.
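To make the degrees-of-closure point concrete, here's a minimal sketch in Python (a toy Metropolis sampler of my own, not the actual code under discussion; the Gaussian target, step size, and seeds are illustrative assumptions):

```python
import math
import random

def log_density(x, mean=8.0, sd=1.0):
    # Stand-in for the f() above: a function the sampler calls on every iteration.
    return -((x - mean) ** 2) / (2 * sd ** 2)

def mcmc_mean(seed, logp=log_density, n_samples=50_000, burn_in=1_000):
    """Toy Metropolis sampler; returns an estimate of the target's mean."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for i in range(n_samples + burn_in):
        proposal = x + rng.uniform(-1.0, 1.0)
        # Accept with probability min(1, p(proposal)/p(x)).
        if rng.random() < math.exp(min(0.0, logp(proposal) - logp(x))):
            x = proposal
        if i >= burn_in:
            samples.append(x)
    return sum(samples) / len(samples)

print(mcmc_mean(seed=1))  # e.g. ~7.99: changing the seed moves the answer...
print(mcmc_mean(seed=2))  # e.g. ~8.01: ...but only within the error bars
# Leave f() (the log-density) out of your description, though, and you can say
# almost nothing about the output; swap it and the answer changes completely:
print(mcmc_mean(seed=1, logp=lambda x: -abs(x + 3.0)))  # ~-3.0, nowhere near 8
```

The seed only nudges which particular trajectory gets sampled, so a description that omits it loses almost nothing; a description that omits f() can barely say anything about the output.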
My MCMC code uses a PRNG that returns random floats between 0 and 1. If I replace that PRNG with return 0.5, i.e. the average of the 0-to-1 interval, then the MCMC now returns a wildly wrong answer of 942. Is that replacement the kind of thing you have in mind when you say “just take the average of those fluctuations”?
Hmm, yeah, this is a good counterexample to my limited "just take the average of those fluctuations" claim.
If my algorithm needs a pseudorandom float between 0 and 1, and I don't have access to the particular PRNG that the algorithm calls, I could replace it with a different PRNG in my abstract description of the MCMC. It won't work exactly the same, but it will still run MCMC and give a correct answer.
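For instance (same toy sampler as before, now parameterised over where its 0-to-1 floats come from; the particular PRNGs are just examples, and the 942 from your code will obviously be specific to your setup):

```python
import math
import random
import numpy as np

def mcmc_mean(uniform, n_samples=50_000, burn_in=1_000, mean=8.0, sd=1.0):
    """Toy Metropolis sampler, parameterised over its source of floats in [0, 1)."""
    x, samples = 0.0, []
    for i in range(n_samples + burn_in):
        proposal = x + (2.0 * uniform() - 1.0)        # proposal step from the same source
        log_ratio = ((x - mean) ** 2 - (proposal - mean) ** 2) / (2 * sd ** 2)
        if uniform() < math.exp(min(0.0, log_ratio)):  # accept/reject draw
            x = proposal
        if i >= burn_in:
            samples.append(x)
    return sum(samples) / len(samples)

print(mcmc_mean(random.Random(1).random))          # Mersenne Twister, seed 1: ~8.0
print(mcmc_mean(random.Random(2).random))          # seed 2: still ~8.0, within error bars
print(mcmc_mean(np.random.default_rng(1).random))  # a different PRNG family (PCG64): still ~8.0
print(mcmc_mean(lambda: 0.5))                      # the 'average' of the interval: the chain
                                                   # never moves, so the estimate is stuck at
                                                   # the starting point and is wildly wrong
```

Swapping in a reasonable PRNG preserves the abstraction; collapsing the PRNG to its average does not.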
To connect it to the brain stuff: say I have a candidate abstraction of the brain that I hope explains the mind. Say temperatures fluctuate in the brain between 38°C and 39°C. Here are three possibilities for how this might affect the abstraction:
I'm not saying anything about MCMC. I'm saying random noise is not what I care about; the MCMC example is not capturing what I'm trying to get at when I talk about causal closure.
I don't disagree with anything you've said in this comment, and I'm quite confused about how we're able to talk past each other to this degree.
The most obvious examples are sensory inputs—vision, sounds, etc. I’m not sure why you don’t mention those.
Obviously algorithms are allowed to have inputs, and I agree that the fact that the brain takes in sensory input (and all other kinds of inputs) is not evidence against practical CF. The way I'm defining causal closure is that the algorithm is allowed to take in some narrow band of inputs (narrow relative to, say, the inputs being the dynamics of all the atoms in the atmosphere around the neurons, or whatever). My bad for not making this more explicit; I've gone back and edited the post to make it clearer.
Computer chips have a clear sense in which they exhibit causal closure (even though they are allowed to take in inputs through narrow channels). There is a useful level of abstraction of the chip: the charges in the transistors. We can fully describe all the computations executed by the chip at that level of abstraction plus inputs, because that level of abstraction is causally closed from lower-level details like the trajectories of individual charges. If it wasn't so, then that level of abstraction would not be helpful for understanding the behavior of the computer -- executions would branch conditional on specific charge trajectories, and it would be a rubbish computer.
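As a toy illustration of that level of abstraction (a hypothetical one-bit adder, not any particular chip), the bit-level description below fully determines the outputs given the inputs, with no reference to individual charge trajectories:

```python
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Bit-level description of a 1-bit adder: the outputs are fully determined
    by the input bits, i.e. by which transistors hold charge."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

# Any physical chip implementing this circuit has to agree with this table,
# whatever its individual charges are doing at finer-grained levels of description:
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, carry = full_adder(a, b, c)
            print(f"{a}+{b}+{c} -> sum={s}, carry={carry}")
```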
random noise enters in
I think this is a big source of the confusion, another case where I haven't been clear enough. I agree that algorithms are allowed to receive random noise. What I am worried about is the case where the signals entering the abstraction from smaller length scales are systematic rather than random.
If the information leaking into the abstraction can be safely averaged out (say, we just define a uniform temperature throughout the brain as an input to the algorithm), then we can just consider this a part of the abstraction: a temperature parameter you define as an input or whatever. Such an abstraction might be able to create consciousness on a practical classical computer.
But imagine instead that (for the sake of argument) it turned out that high-resolution details of temperature fluctuations throughout the brain had a causal effect on the execution of the algorithm, such that the algorithm doesn't do what it's meant to do if you just take the average of those fluctuations. In that case, the algorithm is not fully specified at that level of abstraction, and whatever dynamics are important for phenomenal consciousness might be encoded in the details of the temperature fluctuations and not captured by your abstraction.
If you believe there exists "a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness", you believe you can define consciousness with a computation.
I'm not arguing against the claim that you could "define consciousness with a computation". I am arguing against the claim that "consciousness is computation". These are distinct claims.
So, most people who take the materialist perspective believe the material world comes from a sort of "computational universe", e.g. Tegmark IV.
Massive claim, nothing to back it up.
This person's thinking is very loosey-goosey and someone needed to point it out.
when you define the terms properly (i.e. KL-divergence from the firings that would have happened)
I think I have a sense of what's happening here. You don't consider an argument precise enough unless I define things in more mathematical terms. I've been reading a lot more philosophy recently so I'm a lot more of a wordcell than I used to be. You are only comfortable with grounding everything in maths and computation, which is chill. But my view is that maths and computation are not the only symbols upon which constructive discussion can be built.
If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away.
I'd be excited to actually see this counterargument. Is it written down anywhere that you can link to?
I don't really understand the point of this thought experiment, because if it wasn't phrased in such a mysterious manner, it wouldn't seem relevant to computational functionalism.
I'm sorry my summary of the thought experiment wasn't precise enough for you. You're welcome to read Chalmers' original paper for more details, which I link to at the top of that section.
I also don't understand a single one of your arguments against computational functionalism
I gave very brief recaps of my arguments from the other posts in the sequence here so I can connect those arguments to more general CF (rather than theoretical & practical CF). Sorry if they're too fast. You are welcome to go into the previous posts I link to for more details.
and that's because I think you don't understand them either.
What am I supposed to do with this? The one effect this has is to piss me off and make me less interested in engaging with anything you've said.
You can't just claim that consciousness is "real"
This is an assumption I state at the top of this very article.
and computation is not
I don't "just claim" this; it's what I argue in the theoretical CF post I link to.
You haven't even defined what "real" is.
I define this when I state my "realism about phenomenal consciousness" assumption, to the precision I judge is necessary for this discussion.
most people actually take the opposite approach: computation is the most "real" thing out there, and the universe—and any consciousnesses therein—arise from it
Big claims. Nothing to back them up. Not sure why you expect me to update on this.
how is computation being fuzzy even related to this question? Consciousness can be the same way.
This is all covered in the theoretical CF post I link to.
Could you recommend any good (up-to-date) reading defending the neuron doctrine?
How would the alien know when they've found the correct encoding scheme?
I'm not sure I understand this. You're saying the alien could look at the initial conditions, since they're much simpler than the quantum fields as the simulation runs? In that case, how could it track down those initial conditions and interpret them?
Ah I see, thanks for clarifying.
Perhaps I should have also given the alien access to infinite compute. I think the alien still wouldn't be able to determine the correct simulation.
And also infinite X if you hit me with another bottleneck of the alien not having enough X in practice.
The thought experiment is intended to be about what's possible in principle rather than in practice.
I mean something along the lines of "if you specify all aspects of the mind (e.g. using a program), you have also specified all aspects of the conscious experience"
Eek, thanks for the heads up, fixed!