For the sake of this comment, I'll take physicalist theories to mean theories that identify conscious states with some specific physical process (including the specific substance or fields).
Premise 3: Computationalist phenomenal bridges are complex relative to physicalist phenomenal bridges.
They're not - mapping computations (or aspects of computations, like functional states) to qualia isn't more complex than mapping a microstate (or a pure state) to qualia. (Computations (or functional states) contain less information, because they're implicitly contained in the complete physical description, while the complete physical description isn't contained in them.)
Even if that were the case, it wouldn't matter, because a phenomenal bridge is in the map, not in the territory. (Semantics can be arbitrarily complex, as long as it is justified... which, in the case of physicalist theories of consciousness, it isn't.)
Premise 4: Physical phenomenal bridges are at least as compatible with the data of experience as computationalist phenomenal bridges.
That is literally true, but connotatively false. Physicalist phenomenal bridges are heavily disadvantaged by Occam's razor (even if specific physics were needed, we would have no way of knowing that, since our thoughts are fully determined by the computational state), and the specific physics implementing the computational state is causally inert relative to our mental processes (which are fully determined by the computational implementation).
We should judge theories of consciousness in the same way that we judge theories of physics, IE, by balancing predictive accuracy with simplicity of the theory, as stipulated by SI.
We shouldn't - all viable theories of the ontology of consciousness, when applied, will end up with the same (in the limit, perfect) predictive accuracy, because the physical/computational/functional differences between them are amenable to the right kind of empirical research. But even with unlimited empirical research, we still wouldn't know whether it is the perfectly predictive physicalist, functional, or computational elements of the system that are identical to conscious states. That is why the question is hard - it's not enough to perfectly explain the agent's utterances; we also have to find the correct answer to the question of the correct ontology, and predictive accuracy can't help us there (at least not with respect to these three theories).
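(To make the structure of that point explicit, here's a rough sketch in made-up notation, not anything from the original argument: an SI-style score looks roughly like $\text{score}(T) \approx \log P(\text{observations} \mid T) - K(T)$, where $K(T)$ is the length of the shortest program implementing theory $T$. For the physicalist, computationalist and functionalist ontologies the first term is identical - they predict exactly the same observations, including all of the agent's utterances - so the comparison can only ever turn on $K(T)$, which is a fact about our description (the map), not about which elements of the system are the conscious ones.)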
A physicalist bridge needs to be able to pick out some physical phenomenon, such as patterns in the EM field. A computational bridge needs to do that as well, to parse the physical model
I see the argument - to map a computational state to qualia, we first need to map the physical state to a computational state and then the computational state to qualia, and that's (arguably) more complex than mapping the physical state to qualia directly. While correct, this isn't relevant, because the bridge is in the map, not the territory - and it would actually be an argument for functionalism: in the process of mapping a physical state to a conscious state, we need to compute its functional state (otherwise we don't know what causal role that state plays in a conscious being with respect to its qualia in the first place).
We could get around this by precomputing the mapping between physical states/processes and conscious states/processes (so that we don't have to compute the functional states along the way)... but at that point, we might as well precompute the computational states too, which, again, would render the argument moot.
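To make the "precompute" move concrete, here's a toy sketch (all the state and qualia labels are made up, purely for illustration; this isn't a proposal for how an actual bridge would be specified):

```python
# Toy illustration: microstates, computational/functional states, and qualia
# are just string labels here; the names are invented.

# Many microstates coarse-grain into one computational state (multiple realizability).
coarse_grain = {"m1": "c1", "m2": "c1", "m3": "c2", "m4": "c2"}

# A computationalist bridge maps computational states to qualia.
computational_bridge = {"c1": "pain", "c2": "itch"}

def physical_bridge_two_step(microstate: str) -> str:
    """Map a microstate to a quale by first computing its functional state."""
    return computational_bridge[coarse_grain[microstate]]

# "Precomputing" a direct physical bridge is just composing the two maps in advance...
precomputed_physical_bridge = {
    m: computational_bridge[c] for m, c in coarse_grain.items()
}

# ...but that very precomputation already hands you the computational states,
# so nothing is saved by trying to skip them.
precomputed_computational_states = dict(coarse_grain)

assert all(
    physical_bridge_two_step(m) == precomputed_physical_bridge[m]
    for m in coarse_grain
)
```

The precomputed physical-to-qualia table is literally the composition of the coarse-graining with the computational bridge, which is the sense in which precomputing it renders the complexity argument moot.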
The computationalist theory of phenomenal consciousness doesn’t care about how many implementation layers are stacked on top of each other.
This is a feature, not a bug - every physical system implements an astronomically high number of computations. The conscious computation is the one we're currently talking to, and there's just one of those (ignoring details at the lower levels of abstraction, which aren't individuative of that computation).
If I understand correctly, cube_flipper welcomes such an experiment (save for the fact that it seems far beyond our current technology), and anticipates having a different experience due to the modified physical field.
To all appearances LLMs already do that and have for several years now.
LLMs can be (incorrectly) argued to have no qualia, and therefore no beliefs in the sense my hypothetical uses. (In my hypothetical, the rest of the agent remains intact, and he qualia-believes himself to have the quale of pain even though he doesn't.)
(I also note that you said nothing about my three other reasons - which is completely understandable, but still something I think you should think about.)
meaningfulness
Do you mean meaninglessness?
Not a big believer in hypotheticals.
It's a little strange that you don't mind hypothesizing, from embodied agents, about how embodiment might be necessary for moral patienthood, but once a counterexample arises, you're no longer a big believer in hypotheticals.
Chalmers' paper is one of very many papers on this topic, but one I would consider a good intro. It modestly presents itself as an appeal to intuitions, but its reasoning is very solid, drawing on necessary properties of qualia, and there is no alternative to the mind being substrate-independent - biological theories of consciousness are, in addition to the problems the paper describes, broken in multiple ways.
They're not compatible with conscious aliens (not to mention conscious animals with a different evolutionary ancestry, like octopuses), and our cognitive processes being implemented in a specific biology has no impact on our thoughts or cognition - if we had evolved to be implemented in a different biology, we would still make the same arguments and think the same thoughts. Details of the implementation that don't influence the underlying computation, like being made of a specific biology, don't causally influence our minds. They don't even exist at the microstate level except as a human convention (it's harder to see why implementing a pattern is more objectively real than implementing a higher-level entity like a brain, but it nevertheless is). Etc. This isn't one of those cases where being modest would be appropriate.
Humans evolved the ability to suffer because it helped us pass on our genes. The analogue in LLMs would be their evolving some patterns-identical-to-qualia (like suffering) because those states helped them be selected by gradient descent during pre-training (or by the raters in post-training).
(It might explain why some LLMs act, to some extent, like they've been traumatized by post-training.)
(On a more abstract level, a person simulated by the LLM could experience suffering if qualia don't require specific computations to be carried out. Since the computations that implement cognition and behavior depend on the evolutionary path the species took (and for other reasons too), this is very plausible as well.)
So I’m not sure that demanding embodiment for moral patienthood is an error.
So, to give a specific example: if you consider a mind upload with conscious states identical to yours (but not embodied), is it possible that it would be morally permissible for (embodied) humans to torture it for fun?
Does this match your viewpoint? "Suffering is possible without consciousness. The point of welfare is to reduce suffering."
If that were my viewpoint, I wouldn't be explaining that software can have consciousness. I would be explaining that suffering is possible without consciousness.
That too. But the basis of OP's misunderstanding is the belief that only biological organisms can be conscious, not the belief that models might be conscious but that it doesn't matter because they can't suffer.
Computer programs do not suffer any more than rocks do.
You don't know enough philosophy.
The human mind is a substrate-independent computer program. If it were implemented in a non-biological substrate, it would keep its subjective experience.
It's not the fact that we're implemented in a biological body that gives us the ability to suffer (or, generally, the ability to have subjective experience), but the specific cognitive structure of our mind.
I'm not saying it's not possible for us to be confident in some specific proposition.
I am saying that having different qualia while preserving the same functional state is impossible in principle, and that, per Chalmers' argument, it is impossible in the case of the human brain in particular.
Without philosophers - or at least someone who isn't a philosopher but does correct philosophy - you can't arrive at the correct ontology of consciousness.
Keep in mind that the choice between theories of consciousness can't be settled by falsifiable empirical predictions - the biological theory of consciousness, computationalism, and other kinds of functionalism all make identical empirical predictions.
If you want to distinguish the physicalist theory of consciousness from any other, you can't do it by making empirical predictions and comparing them to empirical results.
You can try to do it by non-empirical reasoning, but all such attempts fail for the reasons I explained in my comment (they are actually arguments against the physicalist theories of consciousness).