Scientist by training, coder by previous session, philosopher by inclination, musician against public demand.
Team Piepgrass: "Worried that typical commenters at LW care way less than I expected about good epistemic practice. Hoping I’m wrong."
By far the best definition I’ve ever heard of the supernatural is Richard Carrier’s: A “supernatural” explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.
Physicalism, materialism, empiricism, and reductionism are clearly similar ideas, but not identical. Carrier's criterion captures something about a supernatural ontology, but nothing about supernatural epistemology. Surely the central claim of natural epistemology is that you have to look... you can't rely on faith, or on clear ideas implanted in our minds by God.
it seems that we have very good grounds for excluding supernatural explanations a priori
But making reductionism aprioristic arguably makes it less scientific...at least, what you gain in scientific ontology, you lose in scientific epistemology.
I mean, what would the universe look like if reductionism were false?
We wouldn't have reductive explanations of some apparently high level phenomena ... Which we don't.
I previously defined the reductionist thesis as follows: human minds create multi-level models of reality in which high-level patterns and low-level patterns are separately and explicitly represented. A physicist knows Newton’s equation for gravity, Einstein’s equation for gravity, and the derivation of the former as a low-speed approximation of the latter. But these three separate mental representations are only a convenience of human cognition. It is not that reality itself has an Einstein equation that governs at high speeds, a Newton equation that governs at low speeds, and a “bridging law” that smooths the interface. Reality itself has only a single level, Einsteinian gravity. It is only the Mind Projection Fallacy that makes some people talk as if the higher levels could have a separate existence—different levels of organization can have separate representations in human maps, but the territory itself is a single unified low-level mathematical object. Suppose this were wrong.
Suppose that the Mind Projection Fallacy was not a fallacy, but simply true.
Note that there are four possibilities here...
1. I assume a one-level universe; all further details are correct.
2. I assume a one-level universe; some details may be incorrect.
3. I assume a multi-level universe; all further details are correct.
4. I assume a multi-level universe; some details may be incorrect.
How do we know that the MPF is actually fallacious, and what does it mean anyway?
If all forms of mind projection are wrong, then reductive physicalism is wrong, because quarks, or whatever is ultimately real, should not be mind-projected either.
If no higher-level concept should be mind-projected, then reducible higher-level concepts shouldn't be... which is not EY's intention.
Well, maybe irreducible high level concepts are the ones that shouldn't be mind projected.
That certainly amounts to disbelieving in non-reductionism... but it doesn't have much to do with mind projection. If some examples of mind projection are acceptable, and the unacceptable ones coincide with the ones forbidden by reductionism, then the MPF is being used as a Trojan horse for reductionism.
And if reductionism is an obvious truth, it could have stood on its own as an a priori truth.
Suppose that a 747 had a fundamental physical existence apart from the quarks making up the 747. What experimental observations would you expect to make, if you found yourself in such a universe?
Science isn't 100% observation, it's a mixture of observation and explanation.
A reductionist ontology is a one-level universe: the evidence for it is the success of reductive explanation, the ability to explain higher-level phenomena entirely in terms of lower-level behaviour. And the existence of explanations is a posteriori, without being observational data in the usual sense. Explanations are abductive, not inductive or deductive.
As before, you should expect to be able to make reductive explanations of all high-level phenomena in a one-level universe... if you are sufficiently intelligent. It's like the Laplace's Demon illustration of determinism, only "vertical". If you find yourself unable to make reductive explanations of all phenomena, that might be because you lack the intelligence, or because you are in a non-reductive multi-level universe, or because you haven't had enough time...
Either way, it's doubtful and a posteriori, not certain and a priori.
If you can’t come up with a good answer to that, it’s not observation that’s ruling out “non-reductionist” beliefs, but a priori logical incoherence
I think I have answered that. I don't need observations to rule it out. Observations-rule-it-in and incoherence-rules-it-out aren't the only options.
People who live in reductionist universes cannot concretely envision non-reductionist universes.
Which is a funny thing to say, since science was non-reductionist till about 100 years ago.
One of the clinching arguments for reductionism was the Schrödinger equation, which showed that, in principle, the whole of chemistry is reducible to physics, while the rise of molecular biology showed the reducibility of biology to chemistry. Before that, educators would point to the de facto hierarchy of the sciences -- physics, chemistry, biology, psychology, sociology -- as evidence of a multi-layer reality.
Unless the point is about "concretely". What does it mean to concretely envision a reductionist universe? Perhaps it means you imagine all the prima facie layers, and also reductive explanations linking them. But then the non-reductionist universe would require less envisioning, because it's the same thing without the bridging explanations! Or maybe it means just envisioning huge arrays of quarks. Which you can't do. The reductionist world view, in combination with the limitations of the brain, implies that you pretty much have to use higher-level, summarised concepts... and that they are not necessarily wrong.
But now we get to the dilemma: if the staid conventional normal boring understanding of physics and the brain is correct, there’s no way in principle that a human being can concretely envision, and derive testable experimental predictions about, an alternate universe in which things are irreducibly mental. Because, if the boring old normal model is correct, your brain is made of quarks, and so your brain will only be able to envision and concretely predict things that can be predicted by quarks.
"Your brain is made of quarks" is aposteriori, not apriori.
Your brain being made of quarks doesn't imply anything about computability. In fact, the computability of the ultimately correct version of quantum physics is an open question.
Incomputability isn't the only thing that implies irreducibility, as @ChronoDas points out.
Non reductionism is conceivable, or there would be no need to argue for reductionism.
This Cartesian dualism in various disguises is at the heart of most “paradoxes” of consciousness. P-zombies are beings materially identical to humans but lacking this special res cogitans sauce, and their conceivability requires accepting substance dualism.
Only their physical possibility requires some kind of nonphysicality. Physically impossible things can be conceivable if you don't know why they are physically impossible, if you can't see the contradiction between their existence and the laws of physics. The conceivability of zombies is therefore evidence for phenomenal consciousness not having been explained, at least. Which it hasn't anyway: zombies are in no way necessary to state the HP.
The famous “hard problem of consciousness” asks how a “rich inner life” (i.e., res cogitans) can arise from mere “physical processing” and claims that no study of the physical could ever give a satisfying answer.
A rich inner life is something you have whatever your metaphysics. It doesn't go away when you stop believing in it. It's the phenomenon to be explained. Res Cogitans, or some other dualistic metaphysics, is among a number of ways of explaining it... not something needed to pose the problem.
The HP only claims that the problem of phenomenal consciousness is harder than other aspects of consciousness. Further arguments by Chalmers tend towards the lack of a physical solution, but you are telescoping them all into the same issue.
We have also solved the mystery of “the dress”:
But not the Hard Problem: the HP is about having any qualia at all, not about ambiguous or anomalous qualia. There would be an HP if everyone just saw the same uniform shade of red all the time.
As with life, consciousness can be broken into multiple components and aspects that can be explained, predicted, and controlled. If we can do all three we can claim a true understanding of each.
If. But we in fact lag in understanding the phenomenal aspect, compared to the others. In that sense, there is a de facto hard-er problem.
The important point here is that “redness” is a property of your brain’s best model for predicting the states of certain neurons. Redness is not “objective” in the sense of being “in the object".
No, that's not important. The HP starts with the subjectivity of qualia, it doesn't stop with it.
Subjectivity isn't just the trivial issue of being had by a subject, it is the serious issue of incommunicability, or ineffability.
Philosophers of consciousness have committed the same sins as “philosophers of life” before them: they have mistaken their own confusion for a fundamental mystery, and, as with élan vital, they smuggled in foreign substances to cover the gaps. This is René Descartes’ res cogitans, a mental substance that is separate from the material.
No, you can state and justify the HP without assuming dualism.
Are you truly exercising free will or merely following the laws of physics?
Or both?
And how is the topic of free will related to consciousness anyway?
There is no “spooky free will”
There could be non spooky free will...that is more than a mere feeling. Inasmuch as Seth has skipped that issue -- whether there is a physically plausible, naturalistic free will -- he hasn't solved free will.
There are ways in which you could have both, because there are multiple definitions of free will, as well as open questions about physics. Apart from compatibilist free will, which is obviously compatible with physics, including deterministic physics, naturalistic libertarian free will is possible in an indeterministic universe. NLFW is just an objectively determinable property of a system, a man-machine. Free will doesn't have to be explained away, and doesn't directly require an assumption of dualism.
But selfhood is itself just a bundle of perceptions, separable from each other and from experiences like pain or pleasure.
The subjective sense-of-self is subjective, pretty much by definition. Whether there are any further objective facts, that would answer questions about destructive teleportation and the like, is another question. As with free will, explaining the subjective aspect doesn't explain away the objective aspect.
First, computationalism doesn’t automatically imply that, without other assumptions, and indeed there are situations where you can’t clone data perfectly,
That's a rather small nit. The vast majority of computationalists are talking about classical computation.
Indeed, I was basically trying to say that computationalism is so general that it cannot predict any result that doesn’t follow from pure logic/tautologies,
That's not much of a boast: pure logic can't solve metaphysical problems about consciousness, time, space, identity, and so on. That's why they are still problems. There's a simple logical theory of identity, but it doesn't answer the metaphysical problems, what I have called the synchronic and diachronic problems.
Secondly, one could semi-reasonably argue that the inability to clone physical states is an artifact of our technological immaturity, and that in the far-future, it will be way easier to clone physical states to a level of fidelity that is way closer to the level of copyability of computer programs.
Physicalism doesn't answer the problems. You need some extra information about how similar or different physical things are in order to answer questions about whether they are the same or different individuals. At least, if you want to avoid the implications of raw physicalism -- along the lines of "if one atom changes, you're a different person". An abstraction would be useful -- but it needs to be the right one.
Third, I gave a somewhat more specific theory of identity in my linked answer, and it’s compatible with both computationalism and physicalism as presented, I just prefer the computationalist account for the general case and the physicalist answer for specialized questions.
You seem to be saying that consciousness is nothing but having a self model, and whatever the self believes about itself is the last word... that there are no inconvenient objective facts that could trump a self-assessment ("No, you're not the original Duncan Idaho, you're ghola number 476. You think you're the one and only Duncan because your brain state is a clone of the original Duncan's"). That makes things rather easy. But the rationalist approach to the problem of identity generally relies on bullet-biting about whatever solution is appealing -- if computationalism is correct, you can be cloned, and then you really are in two places at once.
My main non-trivial claim here is that the sense of a phenomenal experience/awareness fundamentally comes down to the fact that the brain needs to control the body, and vice-versa, so you need a self-model of yourself, which becomes a big part of why we say we have consciousness, because we are referring to our self models when we do that.
Well, how? If you could predict qualia from self control, you'd have a solution -- not a dissolution -- to the HP.
Another reason why the hard problem seems hard is that way too many philosophers are disinclined to gather any data on the phenomenon of interest at all, because they don’t have backgrounds in neuroscience, and instead want to purely define consciousness without reference to any empirical reality.
Granting that "empirical" means "outer empirical" .... not including introspection.
I don't think there is much evidence for the "purely". Chalmers doesn't disbelieve in the easy problem aspects of consciousness.
We’re talking about “physical processes”
We are talking about functionalism -- it's in the title. I am contrasting physical processes with abstract functions.
In ordinary parlance, the function of a physical thing is itself a physical effect...toasters toast, kettles boil, planes fly.
In the philosophy of mind, a function is an abstraction, more like the mathematical sense of a function. In maths, a function takes some inputs and produces some outputs. Well-known examples are familiar arithmetic operations like addition, multiplication, squaring, and so on. But the inputs and outputs are not concrete physical realities. In computation, the inputs and outputs of a functional unit, such as a NAND gate, always have some concrete value, some specific voltage, but not always the same one. Indeed, general Turing-complete computers don't even have to be electrical -- they can be implemented in clockwork, hydraulics, photonics, etc.
This is the basis for the idea that a computer programme can be the same as a mind, despite being made of different matter -- it implements the same abstract functions. The abstraction of the abstract, philosophy-of-mind concept of a function is part of its usefulness.
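To make the contrast concrete, here is a toy sketch (all names hypothetical, not any standard API) of a function in the abstract sense: a NAND gate characterised purely as an input-output mapping, realised by two different concrete mechanisms standing in for electronics versus clockwork.

```python
# A toy model of "function" in the abstract, philosophy-of-mind sense.

def nand_arithmetic(a: int, b: int) -> int:
    # One concrete realisation: arithmetic on 0/1 values
    # (a stand-in for, say, voltage-level electronics).
    return 1 - a * b

def nand_lookup(a: int, b: int) -> int:
    # A different concrete realisation: a lookup table
    # (a stand-in for, say, a clockwork cam mechanism).
    table = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    return table[(a, b)]

# Both realise the same abstract function: they agree on every input.
for a in (0, 1):
    for b in (0, 1):
        assert nand_arithmetic(a, b) == nand_lookup(a, b)

# NAND is universal, so higher-level functions can be built from it
# without caring which substrate is underneath:
def xor_via(nand, a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

assert [xor_via(nand_lookup, a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

The point of the sketch: nothing at the level of the input-output mapping distinguishes the two realisations, which is exactly what substrate independence claims, and exactly what a non-computationalist physicalist says leaves something out.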
Searle is a famous critic of computationalism, and his substitute for it is a biological essentialism in which the generation of consciousness is a brain function -- in the concrete sense of function. It's true that something whose concrete function is to generate consciousness will generate consciousness... but it's vacuously, trivially true.
The point is that the functions which this physical process is implementing are what’s required for consciousness not the actual physical properties themselves.
If you mean that abstract, computational functions are known to be sufficient to give rise to all aspects of consciousness, including qualia, that is what I am contesting.
I think I’m more optimistic than you that a moderately accurate functional isomorph of the brain could be built which preserves consciousness (largely due to the reasons I mentioned in my previous comment around robustness).
I'm less optimistic because of my arguments.
But putting this aside for a second, would you agree that if all the relevant functions could be implemented in silicon then a functional isomorph would be conscious?
No, not necessarily. That -- in the "not necessarily" form -- is what I've been arguing all along. I also don't think that consciousness has a single meaning, or that there is agreement about what it means, or that it is a simple binary.
The controversial point is whether consciousness in the hard problem sense -- phenomenal consciousness, qualia -- will be reproduced with reproduction of function. It's not controversial that easy problem consciousness -- capacities and behaviour -- will be reproduced by functional reproduction. I don't know which you believe, because you are only talking about consciousness not otherwise specified.
If you do mean that a functional duplicate will necessarily have phenomenal consciousness, and you are arguing the point, not just holding it as an opinion, you have a heavy burden:
You need to show some theory of how computation generates conscious experience. Or you need to show why the concrete physical implementation couldn't possibly make a difference.
@rife
Yes, I’m specifically focused on the behaviour of an honest self-report
Well, you're not rejecting phenomenal consciousness wholesale.
fine-grained information becomes irrelevant implementation details. If the neuron still fires, or doesn’t, smaller noise doesn’t matter. The only reason I point this out is specifically as it applies to the behaviour of a self-report (which we will circle back to in a moment). If it doesn’t materially affect the output so powerfully that it alters that final outcome, then it is not responsible for outward behaviour.
But outward behaviour is not what I am talking about. The question is whether functional duplication preserves (full) consciousness. And, as I have said, physicalism is not just about fine-grained details. There's also the basic fact of running on the metal.
I’m saying that we have ruled out that a functional duplicate could lack conscious experience because: we have established conscious experience as part of the causal chain
"In humans". Even if it's always the case that qualia are causal in humans, it doesn't follow that reports of qualia in any entity whatsoever are caused by qualia. Yudkowsky's argument is no help here, because he doesn't require reports of consciousness to be *directly" caused by consciousness -- a computational zombies reports would be caused , not by it's own consciousness, but by the programming and data created by humans.
to be able to feel something and then output a description through voice or typing that is based on that feeling. If conscious experience was part of that causal chain, and the causal chain consists purely of neuron firings, then conscious experience is contained in that functionality.
Neural firings are specific physical behaviour, not abstract function. Computationalism is about abstract function.
I don’t find this position compelling for several reasons:
First, if consciousness really required extremely precise physical conditions—so precise that we’d need atom-by-atom level duplication to preserve it—we’d expect it to be very fragile.
Don't assume that, then. Minimally, non-computationalist physicalism only requires that the physical substrate makes some sort of difference. Maybe approximate physical resemblance results in approximate qualia.
Yet consciousness is actually remarkably robust: it persists through significant brain damage, chemical alterations (drugs and hallucinogens) and even as neurons die and are replaced.
You seem to be assuming a maximally coarse-grained either-conscious-or-not model.
If you allow for fine-grained differences in functioning and behaviour, all those things produce fine-grained differences. There would be no point in administering anaesthesia if it made no difference to consciousness. Likewise, there would be no point in repairing brain injuries. Are you thinking of consciousness as a synonym for personhood?
We also see consciousness in different species with very different neural architectures.
We don't see that they have the same kind or level of consciousness.
Given this robustness, it seems more natural to assume that consciousness is about maintaining what the state is doing (implementing feedback loops, self-models, integrating information etc.) rather than their exact physical state.
Stability is nothing like a sufficient explanation of consciousness, particularly the hard problem of conscious experience... even if it is necessary. But it isn't necessary either, as the cycle of sleep and waking tells all of us every day.
Second, consider what happens during sleep or under anaesthesia. The physical properties of our brains remain largely unchanged, yet consciousness is dramatically altered or absent.
Obviously the electrical and chemical activity changes. You are narrowing "physical" to "connectome". Physicalism is compatible with the idea that specific kinds of physical activity are crucial.
Immediately after death (before decay sets in), most physical properties of the brain are still present, yet consciousness is gone. This suggests consciousness tracks what the brain is doing (its functions)
No, physical behaviour isn't function. Function is abstract, physical behaviour is concrete. Flight simulators functionally duplicate flight without flying. If function were not abstract, functionalism would not lead to substrate independence. You can build a model of ion channels and synaptic clefts, but the modelled sodium ions aren't actual sodium ions, and if the universe cares about activity being implemented by actual sodium ions, your model isn't going to be conscious.
Rather than what it physically is. The physical structure has not changed but the functional patterns have changed or ceased.
Physical activity is physical.
I acknowledge that functionalism struggles with the hard problem of consciousness—it’s difficult to explain how subjective experience could emerge from abstract computational processes. However, non-computationalist physicalism faces exactly the same challenge. Simply identifying a physical property common to all conscious systems doesn’t explain why that property gives rise to subjective experience.
I never said it did. I said it had more resources. It's badly off, but not as badly off.
Yet, we generally accept behavioural evidence (including sophisticated reasoning about consciousness) as evidence of consciousness in humans.
If we can see that someone is a human, we know that they have a high degree of biological similarity. So we have behavioural similarity and biological similarity, and it's not obvious how much lifting each is doing.
Functionalism doesn’t require giving up on qualia, but only acknowledging physics. If neuron firing behavior is preserved, the exact same outcome is preserved,
Well, the externally visible outcome is.
If I say “It’s difficult to describe what it feels like to taste wine, or even what it feels like to read the label, but it’s definitely like something”—there are two options: either it’s perpetual coincidence that my experience of attempting to translate the feeling of qualia into words always aligns with the words that actually come out of my mouth, or it is not. Since perpetual coincidence is statistically impossible, we know that experience had some type of causal effect.
In humans.
So far that tells us that epiphenomenalism is wrong, not that functionalism is right.
The binary conclusion of whether a neuron fires or not encapsulates any lower level details, from the quantum scale to the micro-biological scale
What does "encapsulates"means? Are you saying that fine grained information gets lost? Note that the basic fact of running on the metal is not lost.
—this means that the causal effect experience has is somehow contained in the actual firing patterns.
Yes. That doesn't mean the experience is, because a computational Zombie will produce the same outputs even if it lacks consciousness, uncoincidentally.
A computational duplicate of a believer in consciousness and qualia will continue to state that it has them, whether it does or not, because it's a computational duplicate, so it produces the same output in response to the same input.
We have already eliminated the possibility of happenstance or some parallel non-causal experience,
You haven't eliminated the possibility of a functional duplicate still being a functional duplicate if it lacks conscious experience.
Basically
Aren't the only options.
Imagine that we could successfully implement a functional isomorph of the human brain in silicon. A proponent of 2) would need to explain why this functional isomorph of the human brain which has all the same functional properties as an actual brain does not, in fact, have consciousness.
Physicalism can do that easily, because it implies that there can be something special about running unsimulated, on bare metal.
Computationalism, even very fine-grained computationalism, isn't a direct consequence of physicalism. Physicalism has it that an exact atom-by-atom duplicate of a person will be a person and not a zombie, because there is no nonphysical element to go missing. That's the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because CTM implies that the same algorithm running on different hardware will be sufficient. Physicalism doesn't imply computationalism, and arguments against p-zombies don't imply the non-existence of c-zombies -- unconscious duplicates that are identical computationally, but not physically.
So it is possible, given physicalism, for qualia to depend on the real physics, the physical level of granularity, not on the higher level of granularity that is computation.
Anil Seth, where he tries to pin down the properties X which biological systems may require for consciousness: https://osf.io/preprints/psyarxiv/tz6an. His argument suggests that extremely complex biological systems may implement functions which are non-Turing-computable.
It presupposes computationalism to assume that the only possible defeater for a computational theory is the wrong kind of computation.
My contention in this post is that if they’re able to reason about their internal experience and qualia in a sophisticated manner then this is at least circumstantial evidence that they’re not missing the “important function.”
There's no evidence that they are not stochastic-parrotting, since their training data wasn't pruned of statements about consciousness.
If the claim of consciousness is based on LLMs introspecting their own qualia and reporting on them, there's no clinching evidence that they are doing so at all. You've got the fact that computational functionalism isn't necessarily true, the fact that TT-type investigations don't pin down function, and the fact that there is another potential explanation of the results.
Whether computational functionalism is true or not depends on the nature of consciousness as well as the nature of computation.
While embracing computational functionalism and rejecting supernatural or dualist views of mind
As before, they also reject non-computationalist physicalism, e.g. biological essentialism, whether they realise it or not.
It seems to privilege biology without clear justification. If a silicon system can implement the same information processing as a biological system, what principled reason is there to deny it could be conscious?
The reason would be that there is more to consciousness than information processing...the idea that experience is more than information processing not-otherwise-specified, that drinking the wine is different to reading the label.
It struggles to explain why biological implementation specifically would be necessary for consciousness. What about biological neurons makes them uniquely capable of generating conscious experience?
Their specific physics. Computation is an abstraction from physics, so physics is richer than computation, and has more resources available to explain conscious experience. Computation has no resources to explain conscious experience -- there just isn't any computational theory of experience.
It appears to violate the principle of substrate independence that underlies much of computational theory.
Substrate independence is an implication of computationalism, not something that's independently known to be true. Arguments from substrate independence are therefore question begging.
Of course, there is minor substrate independence, in that brains which have biological differences are able to realise similar capacities and mental states. That could be explained by a coarse-graining or abstraction other than computationalism. A standard argument against computationalism, not mentioned here, is that it allows too much substrate independence and multiple realisability -- blockheads and so on.
It potentially leads to arbitrary distinctions. If only biological systems can be conscious, what about hybrid systems? Systems with some artificial neurons? Where exactly is the line?
Consciousness doesn't have to be a binary. We experience variations in our conscious experience every day.
However, this objection becomes less decisive under functionalism. If consciousness is about implementing certain functional patterns, then the way these patterns were acquired (through evolution, learning, or training) shouldn’t matter. What matters is that the system can actually perform the relevant functions
But that can't be inferred from responses alone, since, in general, more than one function can generate the same output for a given input.
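A toy illustration of this underdetermination (the functions are hypothetical, chosen only for the example): two different functions can agree on every probe you happen to test while still being different functions, so finite input-output testing alone cannot settle which one a system implements.

```python
# Two functions that agree on every tested input but differ elsewhere.

def f(x: int) -> int:
    return x * x

def g(x: int) -> int:
    # Identical outputs for x < 100; diverges beyond the tested range.
    return x * x if x < 100 else x * x + 1

probes = range(100)
assert all(f(x) == g(x) for x in probes)  # behaviourally indistinguishable on all probes
assert f(100) != g(100)                   # yet not the same function
```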
It’s not clear what would constitute the difference between “genuine” experience and sophisticated functional implementation of experience-like processing
You mean there is a difference to an outside observer, or to the subject themself?
The same objection could potentially apply to human consciousness—how do we know other humans aren’t philosophical zombies?
It's implausible given physicalism, so giving up computationalism in favour of physicalism doesn't mean embracing p-zombies.
If we accept functionalism, the distinction between “real” consciousness and a perfect functional simulation of consciousness becomes increasingly hard to maintain.
It's hard to see how you can accept functionalism without giving up qualia, and easy to see how zombies are imponderable once you have given up qualia. Whether you think qualia are necessary for consciousness is the most important crux here.
We de-emphasized QM in the post
You did a bit more than de-emphasize it in the title!
Also:
Like latitude and longitude, chances are helpful coordinates on our mental map, not fundamental properties of reality.
"Are"?
**Insofar as we assign positive probability to such theories, we should not rule out chance as being part of the world in a fundamental way.** Indeed, we tried to point out in the post that the de Finetti theorem doesn’t rule out chances, it just shows we don’t need them in order to apply our standard statistical reasoning. In many contexts—such as the first two bullet points in the comment to which I am replying—I think that the de Finetti result gives us strong evidence that we shouldn’t reify chance.
The perennial source of confusion here is the assumption that the question is whether chance/probability is in the map or the territory... but that framing sidelines the "both" option. If there were strong evidence of mutual exclusion, of an XOR rather than an inclusive OR, the question would be appropriate. But there isn't.
If there is no evidence of an XOR, no amount of evidence in favour of subjective probability is evidence against objective probability, and objective probability needs to be argued for (or against), on independent grounds. Since there is strong evidence for subjective probability, the choices are subjective+objective versus subjective only, not subjective versus objective.
(This goes right back to "probability is in the mind")
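De Finetti's point can be illustrated with a toy simulation (a sketch, assuming a uniform prior over a coin's bias; the variable names are mine): the same exchangeable sequence can be generated via a latent "objective chance" parameter, or read purely subjectively as correlated credences. The mathematics doesn't force either reading.

```python
import random

random.seed(0)

def exchangeable_pair():
    """Draw a latent bias p once, then make two flips i.i.d. given p."""
    p = random.random()  # the "objective chance", if one chooses to reify it
    return (random.random() < p, random.random() < p)

pairs = [exchangeable_pair() for _ in range(100_000)]

# Unconditional frequency of heads on the second flip...
p_h2 = sum(second for _, second in pairs) / len(pairs)
# ...versus its frequency given heads on the first flip.
p_h2_given_h1 = (sum(1 for first, second in pairs if first and second)
                 / sum(1 for first, _ in pairs if first))

# Given p the flips are independent; unconditionally they are correlated,
# because the first flip carries information about p.
assert p_h2_given_h1 > p_h2
```

With a uniform prior the analytic values are P(H2) = 1/2 and P(H2|H1) = 2/3; the simulation tracks these, whether or not one treats p as a real chance.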
Occam's razor isn't much help. If you assume determinism as the obvious default, objective uncertainty looks like an additional assumption... but if you assume randomness as the obvious default, then any deterministic or quasi-deterministic law seems like an additional thing.
In general, my understanding is that in many worlds you need to add some kind of rationality principle or constraint to an agent in the theory so that you get out the Born rule probabilities, either via self-locating uncertainty (as the previous comment suggested) or via a kind of decision theoretic argument.
There's a purely mathematical argument for the Born rule. The tricky thing is explaining why observations have a classical basis -- why observers who are entangled with a superposed system don't go into superposition themselves. There are multiple aspects to the measurement problem... the existence or otherwise of a fundamental measurement process, the justification of the Born rule, the reason for the emergence of sharp pointer states, and the reason for the appearance of a classical basis. Everett theory does rather badly on the last two.
If the authors claim that adding randomness to the territory in classical mechanics makes the theory more complex, they should also notice that in quantum mechanics, removing probability from the territory (as Bohmian mechanics does) tends to make the theory more complex.
OK, but people here tend to prefer many worlds to Bohmian mechanics... It isn't clear that MWI is more complex... but it also isn't clear that it is actually simpler than the alternatives, as it's stated to be in the rationalsphere.
Computationalism is a bad theory of synchronic non-identity (in the sense of "why am I a unique individual, even though I have an identical twin"), because computations are so easy to clone -- computational states are more cloneable than physical states.
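The cloneability point can be made concrete: any computational state that can be serialised can be duplicated perfectly and cheaply, whereas physical states resist exact duplication (in the quantum case, the no-cloning theorem forbids it outright). A minimal sketch, with an illustrative state of my own invention:

```python
import copy

# A "computational state": the complete variable bindings of a running
# computation, represented as plain data.
state = {"memories": ["red", "sunset"], "step": 42}

# Cloning it is a one-liner, and the clone is exactly as good as the
# original -- nothing distinguishes "the" computation from its copy.
clone = copy.deepcopy(state)
assert clone == state and clone is not state

# The two copies then evolve independently.
clone["step"] += 1
assert state["step"] == 42 and clone["step"] == 43
```

If personal identity were fixed by the computation alone, both copies would have an equal claim to be the original, which is the synchronic-identity problem in miniature.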
Computationalism might be a better theory of diachronic identity (in the sense of "why am I still the same person, even though I have physically changed"), since it's abstract, and so avoids the "one atom has changed" problem of naive physicalism. Other abstractions are available, though: "having the same memories" is a traditional alternative to unadulterated computation.
It's still a bad theory of consciousness-qua-awareness (phenomenal consciousness, qualia, hard-problem stuff) because, being an abstraction, it has fewer resources than physicalism to explain phenomenal experience. There is no computational theory of qualia whatsoever, no algorithm for seeRed().
It's still an ok explanation of consciousness-qua-function (easy problem stuff), but not obviously the best.
Most importantly: it's still the case that, if you answer one of these four questions, you don't get answers to the other three automatically.
I believe computationalism is a very general way to look at effectively everything,
Computation is an abstraction, and it's not guaranteed to be the best one.
This also answers andeslodes’s point around physicalism, as the physicalist ontology is recoverable as a special case of the computationalist ontology.
A perfect map has the same structure as the territory, but still is not the territory. The on-the-metalness is lacking. Flight simulators don't fly. You can't grow potatoes in a map, not even a 1:1 one.
...also hears that the largest map considered really useful would be six inches to the mile; although his country had learnt map-making from his host Nation, it had carried it much further, having gone through maps that are six feet to the mile, then six yards to the mile, next a hundred yards to the mile—finally, a mile to the mile (the farmers said that if such a map was to be spread out, it would block out the sun and crops would fail, so the project was abandoned).
https://en.m.wikipedia.org/wiki/Sylvie_and_Bruno
my biggest view on what consciousness actually is, in that it’s essentially a special case of modeling the world, where in order to keep your own body alive, you need to have a model of the body and brain, and that’s what consciousness basically is, a model of ourselves
So... it's nothing to do with qualia/phenomenality/HP stuff? Can't self-modelling and phenomenality be separate questions?
"it" isn't a single theory.
The argument that Everettian MW is favoured by Solomonoff induction is flawed.
If the program running the SWE outputs information about all worlds on a single output tape, the worlds are going to have to be concatenated or interleaved somehow. Which means that to make use of the information, you have to identify the subset of bits relating to your world. That's extra complexity which isn't accounted for, because it's being done by hand, as it were.
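The extraction cost can be sketched with a toy model (an illustration of the bookkeeping, not Solomonoff induction itself; the function names are mine): once the worlds' bits share one tape, recovering yours requires an extra extractor whose own description (the interleaving scheme, the number of worlds, your world's index) has to be paid for somewhere.

```python
def interleave(worlds):
    """Write the worlds' bit streams round-robin onto one tape."""
    return [bits[i] for i in range(len(worlds[0])) for bits in worlds]

def extract(tape, n_worlds, k):
    """Recover world k's bits -- this routine is the 'hidden' extra code."""
    return tape[k::n_worlds]

worlds = [[0, 0, 1], [1, 0, 1], [1, 1, 0]]
tape = interleave(worlds)

# The tape alone is useless to an observer in world 1 without the
# extractor plus the parameters (n_worlds=3, k=1).
assert extract(tape, 3, 1) == [1, 0, 1]
```

The point of the sketch: `extract` and its parameters are additional program, so a complexity comparison that charges the SWE program for its output but supplies the extraction "by hand" is not counting like with like.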