into digital sentience these days
Fixed, thanks. Careless exponent juggling on my part.
Thanks for the comment, Steven.
Your alternative wording of practical CF is indeed basically what I'm arguing against (although we could interpret the simulation having the "exact" same experience to different degrees, and I think the arguments here speak not only against the strongest versions but also against weaker ones, depending on how strong those arguments are).
I'll explain a bit more why I think practical CF is relevant to CF more generally.
Firstly, functionalists commonly say things like:
Computational functionalism: the mind is the software of the brain. (Piccinini)
Which, taken at face value, says that there is actually a program implemented by the brain that is meaningful to point to (i.e. not just a program in the trivial sense that any physical process could count as a program if you simulate it, assuming digital physics etc.). That program lives at a level of abstraction above biophysics.
Secondly, computational functionalism, taken at face value again, says that all details of the conscious experience should be encoded in the program that creates it. If this isn't true, then you can't say that the conscious experience is that program, because the experience has properties that the program does not.
Putnam advances an opposing functionalist view, on which mental states are functional states. (SEP)
He proposes that mental activity implements a probabilistic automaton and that particular mental states are machine states of the automaton’s central processor. (SEP)
the mind is constituted by the programs stored and executed by the brain (Piccinini)
I can accept the charge that this is still a stronger version of CF, though one that a number of functionalists do subscribe to. Which is fine! My plan was to address quite narrow claims at the start of the sequence and move on to broader claims later on.
I'd be curious to hear which of the above steps you think miss the mark in capturing common CF views.
Thanks, corrected.
If I understand your point correctly, that's what I try to establish here:
the speed of propagation of ATP molecules (for example) is sensitive to a web of more physical factors like electromagnetic fields, ion channels, thermal fluctuations, etc. If we ignore all these contingencies, we lose causal closure again. If we include them, our mental software becomes even more complicated.
i.e., the cost becomes high because you need to keep including more and more elements of the dynamics.
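To make that concrete, here's a toy sketch (my own illustrative construction with made-up dynamics, not anything from the post) of how ignoring even a weakly coupled hidden variable degrades a coarse-grained model's predictions:

```python
import numpy as np

# Toy illustration: an "abstraction" that tracks variable x but
# ignores a hidden variable y that x is weakly coupled to. As the
# coupling grows, predictions from x alone degrade, i.e. the
# abstraction loses causal closure by degrees.
rng = np.random.default_rng(0)

def simulate(coupling, steps=200):
    x, y = 1.0, 1.0
    xs = []
    for _ in range(steps):
        x = 0.9 * x + coupling * y                  # true dynamics of x
        y = 0.9 * y + 0.1 * rng.standard_normal()   # hidden dynamics of y
        xs.append(x)
    return np.array(xs)

for coupling in [0.0, 0.05, 0.2]:
    xs = simulate(coupling)
    preds = 0.9 * xs[:-1]   # the "causally closed" model: x alone
    err = np.mean((xs[1:] - preds) ** 2)
    print(f"coupling={coupling:.2f}  mean squared prediction error={err:.2e}")
```

Each hidden factor you fold back in to recover closure (here, y) makes the mental software strictly more complicated, which is the cost I mean.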
The statement I'm arguing against is:
Practical CF: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality, would cause the conscious experience of that brain.
i.e., the same conscious experience as that brain. I titled this "is the mind a program" rather than "can the mind be approximated by a program".
Whether or not a simulation can have consciousness at all is a broader discussion I'm saving for later in the sequence, and is relevant to a weaker version of CF.
I'll edit to make this more clear.
Yes, perfect causal closure is technically impossible, so it comes in degrees. My argument is that the degree of causal closure of possible abstractions in the brain is less than one might naively expect.
Are there any measures of approximate simulation that you think are useful here?
I have yet to read this, but I expect it will be very relevant! https://arxiv.org/abs/2402.09090
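For what it's worth, here's a toy sketch of the kind of measure I have in mind (entirely illustrative, not a worked-out proposal): score a coarse-graining by how much of the variance in its next macro-state is linearly predictable from the current macro-state alone.

```python
import numpy as np

def degree_of_causal_closure(macro_states):
    """Toy metric: R^2 of the best linear prediction of the next
    macro-state from the current one. 1.0 means the macro-dynamics
    are (linearly) causally closed; lower values mean hidden
    micro-level factors carry predictive information."""
    current, nxt = macro_states[:-1], macro_states[1:]
    coef, *_ = np.linalg.lstsq(current, nxt, rcond=None)
    residuals = nxt - current @ coef
    return 1.0 - residuals.var() / nxt.var()

# Example with fake data: a 2-dimensional macro-state trajectory.
rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.standard_normal((500, 2)), axis=0)
print(degree_of_causal_closure(trajectory))  # close to 1 for this toy case
```

On a measure like this, "perfect causal closure is technically impossible" just becomes: no realistic coarse-graining ever scores exactly 1.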
Especially if it's something as non-committal as "this mechanism could maybe matter". Does that really invalidate the neuron doctrine?
I agree that each of the "mechanisms that maybe matter" is tenuous by itself, but the argument I'm trying to make here is hits-based: there are so many mechanisms that maybe matter that the chance of at least one of them mattering in a relevant way is quite high.
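To put illustrative numbers on that (mine, not estimates of the actual probabilities):

```python
# Illustrative numbers only: if each of N candidate mechanisms
# independently matters with small probability p, the chance that
# at least one matters is 1 - (1 - p)^N.
p, N = 0.05, 40
print(1 - (1 - p) ** N)  # ~0.87
```

So even if you think each individual mechanism is unlikely to matter, many independent shots add up quickly.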
Thanks for the feedback, Garrett.
This was intended to be more of a technical report than a blog post, meaning I wanted to keep the discussion reasonably rigorous/thorough, which always comes with the downside of being a slog to read, so apologies for that!
I'll write a shortened version if I find the time!
Thanks James!
One failure mode is that the modification makes the model very dumb in all instances.
Yeah, good point. Perhaps an extra condition we'd need to include is that the "difficulty of meta-level questions" should be the same before and after the modification, e.g. the distribution over stuff it's good at and stuff it's bad at should be just as complex (not just good at everything or bad at everything) before and after.
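Here's roughly what I mean, as a toy sketch (hypothetical numbers, and standard deviation as a deliberately crude stand-in for "complexity of the distribution"):

```python
import numpy as np

def difficulty_profile_spread(per_topic_accuracies):
    """Crude check: the spread of per-topic accuracies. Near-zero
    spread means the model is uniformly good (or uniformly bad),
    the degenerate case we want to rule out."""
    return float(np.std(per_topic_accuracies))

before = [0.9, 0.4, 0.7, 0.2, 0.8]  # made-up accuracies before the modification
after  = [0.5, 0.5, 0.5, 0.5, 0.5]  # uniformly mediocre after: degenerate
print(difficulty_profile_spread(before))  # ~0.26: a varied profile
print(difficulty_profile_spread(after))   # 0.0: fails the condition
```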
Yeah, you might be hitting on at least a big generator of our disagreement. Well spotted.