I'm confused about what you mean when you say GPT-4o is bad. In my experience it has been stronger than plain GPT-4, especially at more complex tasks. I do physics research, and it's the first model that can actually improve the computational efficiency of parts of my code that implement physical models. It has also become more useful for discussing my research, in the sense that it dives deeper into specialized topics, whereas the previous GPT-4 would just respond in a very handwavy way.
You're missing the bigger picture and pattern-matching in the wrong direction. I am not saying the above because I have a need to preserve my "soul" due to misguided intuitions. On the contrary, the reason for my disagreement is that I believe you are not staring into the abyss of physicalism hard enough. When I said I'm agnostic in my previous comment, I said it because physics and empiricism lead me to consider reality as more "unfamiliar" than you do (assuming that my model of your beliefs is accurate). From my perspective, your post and your conclusions are written with an unwarranted degree of certainty, because imo your conception of physics and physicalism is too limited. Your post makes it seem like your conclusions are obvious because "physics" makes them the only option, but they are actually a product of implicit and unacknowledged philosophical assumptions, which (imo) you inherited from intuitions based on classical physics. By this I mean the following:
It seems to me that when you think about physics, you are modeling reality (I intentionally avoid the word "universe" because it evokes specific mental imagery) as a "scene" with "things" in it. You mentally take the vantage point of a disembodied "observer/narrator/third person" observing the "things" (atoms, radiation etc) moving, interacting according to specific rules and coming together to create forms. However, you have to keep in mind that this conception of reality as a classical "scene" that is "out there" is first and foremost a model, one that is formed from your experiences obtained by interacting specifically with classical objects (billiard balls, chairs, water waves etc). You can extrapolate from this model and say that reality truly is like that, but the map is not the territory, so you at least have to keep track of this philosophical assumption. And it is an assumption, because "physics" doesn't force you to conclude such a thing. Seen through a cautious, empirical lens, physics is a set of rules that allows you to predict experiences. This set of rules is produced exclusively by distilling and extrapolating from first-person experiences. It could be (and it probably is) the case that reality is ontologically far weirder than we can conceive, but that it still leads to the observed first-person experiences. In this case, physics works fine to predict said experiences, and it also works as an approximation of reality, but this doesn't automatically mean that our (merely human) conceptual models are reality. So, if we want to be epistemically careful, we shouldn't think "An apple is falling" but instead "I am having the experience of seeing an apple fall", and we can add extra philosophical assumptions afterwards.
This may seem like I am philosophizing too much and being too strict, but it is extremely important to properly acknowledge subjective experience as the basis for our mental models, including that of the observer-independent world of classical physics. This is why the hard problem of consciousness is called "hard". And if you think that it should "obviously" be the other way around, meaning that this "scene" mental model is more fundamental than your subjective experiences, maybe you should reflect on why you developed this intuition in the first place. (It may be through extrapolating too much from your (first-person, subjective) experiences with objects that seemingly possess intrinsic, observer-independent properties, like the classical objects of everyday life.)
At this point it should be clearer why I am disagreeing with your post. Consciousness may be classical, it may be quantum, it may be something else. I have no issue with not having a soul, and I don't object to the idea of a bunch of gears and levers instantiating my consciousness merely because I find it a priori "preposterous" or "absurd" (though it is not a strong point of your theory). My issue is not with your conclusion; it's precisely with your absolute certainty, which imo you support with circular argumentation based on weak premises. And I find it confusing that your post is receiving so much positive attention on a forum where epistemic hygiene is supposedly of paramount importance.
First off, would you agree with my model of your beliefs? Would you consider it an accurate description?
Also, let me make clear that I don't believe in Cartesian souls. I, like you, lean towards physicalism; I just don't commit to the explanation of consciousness based on the idea of the brain as a **classical** electronic circuit. I don't fully dismiss it either, but I think it is worse on philosophical grounds than assuming that there is some (potentially minor) quantum effect going on inside the brain that is an integral part of the explanation for our conscious experience. However, even this doesn't feel fully satisfying to me, and this is why I say that I am agnostic. When responding to my points, you can assume that I am a physicalist, in the sense that I believe consciousness can probably be described using physical laws, with the added belief that these laws **may** not be fully understandable by humans. I mean this in the same way that a cat, for example, would not be able to understand the mechanism giving rise to consciousness, even if that mechanism turned out to be based on the laws of classical physics (for example, if you can just explain consciousness as software running on classical hardware).
To expand upon my model of your beliefs, it seems to me that what you do is that you first reject Cartesian souls and other such things on philosophical grounds and thus favour physicalism. I agree on this. However, I don't see why you immediately assume that physicalism means that your consciousness must be a result of classical computation. It could be the result of quantum computation. It could be something even subtler in some deeper theory of physics. At this point you may say that a quantum explanation may be more "unlikely" than a classical one, but I think we can both agree that the "absurdity distance" between the two is much smaller than, say, between a classical explanation and a soul-based one, and thus we now have to weigh the two options much more carefully, since we cannot dismiss one in favour of the other as easily. What I would like to argue is that a quantum-based consciousness is philosophically "nicer" than a classical one. Such an explanation does not violate physicalism, while at the same time rendering a lot of points of your post invalid.
Let's start by examining the copier argument again, but now with the assumption that conscious experience is the result of quantum effects in the brain, and see where it takes us. In this case, to fully copy a consciousness from one place to another you would have to copy an unknown quantum state. This is physically impossible even in theory, based on the no-cloning theorem. Thus the "best" copier that you can have is the copier from my previous comment, which just copies the classical connectivity of the brain and all the currents and voltages etc, but which now fails to copy the part that is integral to **your** first person experience. So what would be your first person experience if you were to enter the room? You would just go in, hear the scanner work, get out. You can do this again and again and again and always find yourself experiencing getting out of the same initial room. At the same time the copier does create copies of you, but they are new "entities" that share the same appearance as you and which would approximate to some (probably high) degree your external behaviour. These copies may or may not have their own first person experience (and we can debate this further), but this does not matter for our argument. Even if they have a first person experience, it would be essentially the same as the copier just creating entirely new people while leaving your first person experience unchanged. In this way, you can step into the room with zero expectation that you may walk out of a room on the other side of the copier, in the same way that you don't expect to suddenly find yourself in some random stranger's body while going about your daily routine. Even better, this belief is nicely consistent with physicalism, while still not violating our intuitions that we have private and uncopiable subjective experiences.
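For readers who haven't seen it, the no-cloning theorem invoked above has a short standard proof (this is textbook quantum mechanics, nothing specific to this debate), sketched here under the usual assumption that any physical copying process would have to be a linear (unitary) operation:

```latex
Suppose a unitary $U$ could clone an arbitrary unknown state into a blank register:
$$U\big(|\psi\rangle \otimes |0\rangle\big) = |\psi\rangle \otimes |\psi\rangle \quad \text{for all } |\psi\rangle.$$
Apply this to the superposition $|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big)$.
Linearity of $U$ gives
$$U\big(|\psi\rangle \otimes |0\rangle\big) = \tfrac{1}{\sqrt{2}}\big(|0\rangle|0\rangle + |1\rangle|1\rangle\big),$$
whereas cloning would require
$$|\psi\rangle \otimes |\psi\rangle = \tfrac{1}{2}\big(|0\rangle|0\rangle + |0\rangle|1\rangle + |1\rangle|0\rangle + |1\rangle|1\rangle\big).$$
These two states differ, so no such $U$ exists.
```

Copying known states (or classical data) is of course fine; the theorem only forbids copying an *unknown* quantum state, which is exactly the situation a brain-scanner would face.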
It also doesn't force us to believe that a bunch of water pipes or gears functioning as a classical computer can ever have our own first person experience. Going even further, unknown quantum states may not be copyable, but they are transferable (see quantum teleportation), meaning that while you cannot make a copier, you can make a transporter, though you always have to be in only one place at each instant.
Let me emphasize again that I am not arguing **for** quantum consciousness as a solution. I am using it as an example that a "philosophically nicer" physicalist option exists compared to what I assume you are arguing for. From this perspective, I don't see why you are so certain about the things you write in your post. In particular, you make a lot of arguments based on the properties of "physics", which in reality are properties of classical physics together with your assumption that consciousness must be classical. When I said that I find issue with the fact that you start from an unstated assumption, I didn't expect you to argue against Cartesian dualism. I expected you to start from physicalism and then motivate why you chose to only consider classical physics. Otherwise, the argumentation in your post seems lacking, even if I start from the physicalist position. To give one example of this:
You say that "there isn't an XML tag in the brain saying `this is a new brain, not the original`". By this I assume you mean that the physical state of the brain is fungible, it is copyable, there is nothing to serve as a label. But this is not a feature of physics in general. An unknown quantum state cannot be copied; it is not fungible. My model of what you mean: "(I assume that) first person experience can be fully attributed to some structure of the brain as a classical computer. It can be fully described by specifying the connectivity of the neurons and the magnitudes of the currents and voltages between each point. Since (I assume) consciousness physically manifests as a classical pattern, and since classical patterns can be copied, then by definition there can be many copies of "the same" consciousness". Thus, what you write about XML tags is not an argument for your position - physics does not force you to consider a fungible substrate for consciousness - it is just a manifestation of your assumption. It's circular. A lot of your arguments which invoke "physics" are like that.
I find myself strongly disagreeing with what is being said in your post. Let me preface by saying that I'm mostly agnostic with respect to the possible "explanations" of consciousness etc, but I think I fall squarely within camp 2. I say mostly because I lean moderately towards physicalism.
First, an attempt to describe my model of your ontology:
You implicitly assume that consciousness / subjective experience can be reduced to a physical description of the brain, which presumably you model as a classical (as opposed to quantum) biological electronic circuit. Physically, to specify some "brain-state" (which I assume is essentially the equivalent of a "software snapshot" in a classical computer) you just need to specify a circuit connectivity for the brain, along with the currents and voltages between the various parts of the circuit (between the neurons let's say). This would track with your mentions of reductionism and physicalism and the general "vibe" of your arguments. In this case I assume you treat conscious experience roughly as "what it feels like" to be software that is self-referential on top of taking in external stimuli from sensors. This software is instantiated on a biological classical computer instead of a silicon-based one.
With this in mind, we can revisit the teleporter scenario. Actually, let's consider a copier instead of a teleporter, in the sense that you don't destroy the original after finishing the procedure. Then, once a copy is made, you have two physical brains that have the same connectivity, the same currents and the same voltages between all appropriate positions. Therefore, based on the above ontology, the brains are physically the same in all the ways that matter, and thus the software / the experience is also the same. (Since software is just an abstract "grouping" which we use to refer to the current physical state of the hardware.)
Assuming this captures your view, let me move on to my disagreements:
My first issue with your post is that this initial ontological assumption is neither mentioned explicitly nor motivated. Nothing in your post can be used as proof of this initial assumption. On the contrary, the teleporter argument, for example, becomes simply a tautology if you start from your premise - it cannot be used to convince someone who doesn't already subscribe to your views on the topic. Even worse, it seems to me that your initial assumption forces you to contort (potential) empirical observations to fit your ontology, instead of doing the opposite.
To illustrate, let's assume we have the copier - say it's a room you walk into, you get scanned, and then a copy is reconstructed in some other room far away. Since you make no mention of quantum effects, I guess this is a classical copy, in the sense that it can copy essentially all of the high-level structure, but it cannot literally copy the positions of specific electrons, as this is physically impossible anyway. Nevertheless, this copier can be considered "powerful" enough to copy the connectivity of the brain and the associated currents and voltages. Now, what would be the experience of getting copied, seen from a first-person, "internal", perspective? I am pretty sure it would be something like: you walk into the room, you sit there, you hear, say, the scanner working for some time, it stops, you walk out. From my agnostic perspective, if I were the one to be scanned, it seems like nothing special would have happened to me in this procedure. I didn't feel anything weird, I didn't feel my "consciousness split into two" or anything like that. Namely, if I consider this procedure as an empirical experiment, from my first person perspective I don't get any new / unexpected observation compared to, say, just sitting in an ordinary room. Even if I were to go and find my copy, my experience would again be like meeting a different person who just happens to look like me and who claims to have similar memories up to the point when I entered the copying room. There would be no way to verify or to view things from their first person perspective.
At this point, we can declare by fiat that my copy and I are the same person / have the same consciousness because our brains, seen as classical computers, have the same structure, but this experiment will not have provided any more evidence to me that this should be true. On the contrary, I would be wary to, say, kill myself or be destroyed after the copying procedure, since no change will have occurred to my first person perspective, and it would thus seem less likely that my "experience" would somehow survive because of my copy.
Now you can insist that philosophically it is preferable to assume that brains are classical computers etc, in order to retain physicalism, which is preferable to souls and Cartesian dualism and other such things. Personally, I prefer to remain undecided, especially since making the assumption brain = classical hardware, consciousness = experience as software leads to weird results. It would force me to conclude that the copy is me even though I cannot access their first person perspective (which defeats the purpose), and it would also force me to accept that even a copy whose "circuit" is made of water pipes and pumps, or gears and levers, would also have an actual first person experience as "me", as long as the appropriate computations are being carried out.
One curious case where physicalism could be saved and all these weird conclusions could be avoided would be if somehow there is some part of the brain which does something quantum, and this quantum part is the essential ingredient for having a first person experience. The essence would be that, because of the no-cloning theorem, a quantum-based consciousness would be physically impossible to copy, even in theory. This would get around all the problems which come with the copyability implicit in classical structures. The brain would then be a hybrid of classical and quantum parts, with the classical parts doing most of the work (since neural networks which can already replicate a large part of human abilities are classical) with some quantum computation mixed in, presumably offering some yet unspecified fitness advantage. Still, the consensus is that it is improbable that quantum computation is taking place in the brain, since quantum states are extremely "fragile" and would decohere extremely rapidly in the environment of the brain...
Could you explain what made you change your mind and update back to zero? It's nice to write down your beliefs but it would be much more helpful for the rest of us if you could share what information actually helped you update.
Thank you for the kind words. I do think that the probability is too low, especially given the new revelations, but I believe that this is also due to the choice of wording. The "alien technology has visited our solar system" part smuggles in a few assumptions which "uncorrelate" the question a bit from the recent evidence. To clarify:
The "alien technology" part makes this refer to extraterrestrials and the "solar system" part seems to indicate that said extraterrestrials originate from outside our solar system. So the question alludes to the category of cases where "aliens advanced enough to cross interstellar distances come all the way here only to crash on our planet and to fail at observing us without being noticed" which, as Eliezer notes, does have strong arguments against it. So, I do think it should be higher, but imo the (hypothetical) question that would warrant the largest jump in probability after the publication of the UAP disclosure act, would be something along the lines of "Will this current UAP situation turn out to have an ontologically-shocking explanation?".
You are right in saying that the UAP topic has been discussed on Lesswrong. I acknowledge this in the introduction of my post. Could you indicate which post you want me to find by "using the search function"?
I also think it is unfair to say that nothing in my post is worth updating over. Has there been any other document where serious politicians so strongly signal towards a connection of UAPs and non-human intelligence?
Thank you for taking the time to clarify! With this new info I think I can now give a stronger outline of my argument.
First, let me say that I do agree to a high degree with what Eliezer is saying in his tweet. Based on this, I can see why your prior for specifically "Aliens with visible craft" is so low. However, I strongly believe that his argument is focusing too much on a specific case, namely of "extraterrestrials with advanced technology coming from far far away", which is why I also think he is overconfident in his bet. My point is similar to what you are saying about the breadth of Something Else but this time applied to your priors. Notice that I have been trying to refer to non-human intelligence and not aliens, exactly because I believe we have to be careful with our assumptions. For example, I could argue that what we are observing are malfunctioning Von Neumann probes from a long-extinct civilization, or glitches in the simulation, or aliens that are not actually super advanced, they just happened to invent warp drives early in their development and are now clumsily trying to observe other civilizations. I could even go as far as to suggest that our reality is created by our collective consciousness and UAPs are observed because some of us believe in them. I am not saying all this because I believe one such option is true, I am just trying to illustrate that in such topics, our priors should be selected carefully because there are many options that we could argue as likely using rhetorical arguments. Now my reading of what Eliezer is doing is that he is taking the most probable "incredible" explanation, namely "aliens with visible craft" and then he is giving strong counterarguments. However, the unspoken assumption is that this "most probable" explanation stems from our current ontology. If this ontology is wrong, for example, if consciousness is the fundamental substrate of our reality, then all these assumptions go out the window. 
What I am trying to say with all this is that, when we are trying to reason about events that would challenge our ontology if they were true and especially when there is credible evidence for such events, it is a bit of a shaky move to choose priors based on our current ontology and to hold on to them too strongly.
Another way to say this is that if you notice some evidence A, B, and C, and P(A|non-human intelligence (NHI)) is high, P(B|NHI) is high, and P(C|NHI) is high, but in the end you get that the probability of NHI given A, B, and C is really low because your prior P(NHI) is absurdly low, then maybe your choice of prior should be reconsidered. This is the crux of my disagreement. Because it seems to me that the community is doing the opposite, in that a low prior is used to subconsciously dismiss evidence which I believe to be strong and which, if considered carefully by itself, would indicate that maybe said priors should be reconsidered.
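To make the arithmetic of this concrete, here is a toy Bayesian update (all numbers are illustrative, not estimates of anything real): even three pieces of evidence, each individually favouring NHI by 10:1, barely move a sufficiently tiny prior.

```python
def posterior(prior, likelihood_ratios):
    """Update prior odds by a sequence of likelihood ratios,
    then convert back to a probability."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical numbers: prior of one in a million, and three independent
# observations each 10x more likely under NHI than otherwise.
p = posterior(prior=1e-6, likelihood_ratios=[10, 10, 10])
print(f"{p:.4f}")  # about 0.001: still tiny despite strong evidence
```

This is the point above in numerical form: with a sufficiently extreme prior, the conclusion is effectively fixed in advance, so the real argument has to be about whether the prior itself was chosen reasonably.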
I would argue that two pieces of strong evidence exist in the current situation:
What you are saying about your prior for "Actually non-human intelligence (NHI)" being tiny closely agrees with what I said about the community being too certain in its ontology. You have to keep in mind that your choice of prior by default assumes that your ontology is not wrong. While I agree at surface level with Eliezer's statements, I think this kind of reasoning is used to reject otherwise strong evidence before even thinking about it enough to realize it is strong evidence.
Having a look at your link, I see you give 3% to the probability that serious politicians would propose the UAP disclosure act if NHI did actually exist. I'm really puzzled by this. Could you explain why, in a world where NHI exists, you wouldn't expect politicians to pass a law disclosing information about it at some point? Do you expect that they would keep it a secret indefinitely or is it something else?
The examples you give sound to me like curiosity-stoppers and I don't find them convincing. Not to say that the reason politicians sponsor this amendment is definitely NHI and not something else, but it seems to me that you are handwaving away a strong signal of something going on. For example:
Your comment is an example of what I said initially, that because your prior is ultra-low, you don't notice confusion and you handwave away the evidence. Again, this is not to say that aliens are here, but at least that this topic warrants more serious discussion.
Could you expand upon your points in the second-to-last paragraph? I feel there are a lot of interesting thoughts leading to these conclusions, but it's not immediately clear to me what they are.