Thank you. You made me realize that I am blaming scientists here in a way. And as my ultimate goal is to create a better reality for everyone, blaming seems like a poor way of achieving that. I will try to remind myself that dividing and polarizing is not helpful to the cause. :) 🙏
There I presented three out of many reasons why science doesn't disprove God. I'm sorry, I don't understand what you are criticizing. Can you say what's wrong with it?
Hmm, I don't understand what you mean. Can you give some specific examples of my arguments that are inadequate?
2) Yes, that is true. I did leave out a sentence saying that "this assumes that there are no higher P(doom) realities in our list of plausible realities." I left it out for readability, given the audience of the original publication (Phi/AI). I concede that for LW I should have done a more rigorous version.
But I still think the logic of lowering our P(doom) holds in that specific analysis (all 3 alternatives might have some failsafes). And in my eyes it would also hold if we look at the current landscape of the most plausible metaphysics, where there really is not much that is more "unsafe" than physicalism in terms of human survival.
3) I think your conclusions about physicalism are not correct. Physicalism is, by its proper definition, a philosophical belief: "Physicalism is the philosophical view that everything in existence is fundamentally physical, or at least ultimately depends on the physical, meaning there is 'nothing over and above' the physical world." This means that physicalism goes beyond the "simple logic" you described. That simple logic can only ever explain the parts of our reality that can be subjected to experimental observation - i.e., it is limited by the descriptive scope of science. Physicalism goes beyond that by believing that nothing "extra" is added beyond it.
For example, if our world were a simulation with fixed rules (physical laws) run by an alien, your simple logic could not distinguish that from a scenario where our world just "popped up from nothing." So the only "special place" physicalism holds among philosophical views is that it introduces the fewest "extra assumptions." But that says nothing about its ultimate plausibility.
Another way to picture this: every time we want to build a complete model of reality, there will be two parts - one verifiable by experiment (science) and one inherently unverifiable (philosophy). The fact that physicalism picks the "simplest, least complicated philosophical framework" should in no way lead us to ignore all the other, equally unverifiable, alternatives.
4) I am not the one originally making the claim that the experiments that proved the non-locality of QM had profound implications for metaphysical and philosophical discourse. In the post, I link to the article "Enter experimental metaphysics" by Hans Busstra, which might help you understand the context.
5) This article is only the introduction to my ultimate goal of exploring alternative metaphysical frameworks to find novel approaches to AI safety. I'm sure the rationale will become clearer as I release further articles, and I warmly invite you to read the full series.
Thank you for the feedback. I'll try to address the key points.
1) I actually have looked into EY's metaphysical beliefs, and my conclusion is that they are inconsistent and opaque at best; they have been criticized for that here. In any case, when I say someone operates from a single metaphysical viewpoint like physicalism, that is not any kind of criticism of their inability to consider something properly. It just puts things into a wider context by noting that changing the metaphysical assumptions might change their conclusions or predictions.
2) The post in no way says that there is something that would "prevent" the existential risk; it clearly states such risk would not be mitigated. I could have made this more explicit. What the post says is that by introducing a "possibility," no matter how remote, of some higher coordination or power that would attempt to prevent X-risk because it is not in its interest, then in such a universe the expected P(doom) would be lower. Does that make sense?
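To make the arithmetic explicit, here is a toy sketch in Python (the scenario names and all numbers are my own illustration, not figures from the post):

```python
# Toy mixture-of-metaphysics calculation; all numbers are made up.
# If we are uncertain which metaphysical picture is true, the expected
# P(doom) is the credence-weighted average over the candidate pictures.
realities = {
    # name: (credence in this picture, P(doom) conditional on it)
    "physicalism":   (0.70, 0.30),
    "alternative_A": (0.20, 0.10),  # e.g. a picture with some failsafe
    "alternative_B": (0.10, 0.05),
}
expected_p_doom = sum(c * p for c, p in realities.values())
print(f"{expected_p_doom:.3f}")  # 0.235 < 0.30
# The expectation drops below the physicalism-only P(doom), but only
# because no candidate in the mixture has a *higher* conditional P(doom)
# than physicalism - the caveat I conceded above.
```

The risk is not mitigated in any single branch; only the expectation over branches moves.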
3) You say that
If you want to argue against physicalism, there's a very simple, inarguable method that would prove it. All you need to do is find one single reproducible example, anywhere, ever, of any part of the universe behaving differently than the laws of physics say they should
My reaction is that here you are conflating physicalism with the "descriptive scope of science" - exactly the category mistake I'm trying to point to! There will always be something unexplainable beyond the descriptive scope of science, and physicalism fills that with "nothing but more physical-like clockwork things." But that is a belief. It might be the "most logical belief with the fewest extra assumptions," but that should not grant it any special treatment among all the other metaphysical interpretations.
4) Yes, I used the phrase "share/transmit information across distance" while describing non-locality. And while you cannot "use entanglement to transmit information," I think it's correct to say that an entangled particle transmits information about its internal state to its entangled partner?
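For what it's worth, here is a minimal no-signaling sketch (plain Python/NumPy; the helper name, angles, and setup are my own illustration, not from the post or the linked article): the joint outcomes of a singlet pair are correlated in a way that depends on the relative measurement angle, yet each side's marginal statistics never change, which is why no message can be sent.

```python
import numpy as np

rng = np.random.default_rng(0)

def singlet_outcomes(theta_a, theta_b, n=200_000):
    """Sample joint spin outcomes (+1/-1) for a singlet pair measured
    along angles theta_a and theta_b, using the quantum joint distribution."""
    d = theta_a - theta_b
    p_same = 0.5 * np.sin(d / 2) ** 2   # P(++) = P(--)
    p_diff = 0.5 * np.cos(d / 2) ** 2   # P(+-) = P(-+)
    joint = [(1, 1), (-1, -1), (1, -1), (-1, 1)]
    idx = rng.choice(4, size=n, p=[p_same, p_same, p_diff, p_diff])
    a = np.array([joint[i][0] for i in idx])
    b = np.array([joint[i][1] for i in idx])
    return a, b

for theta_a in (0.0, np.pi / 2):            # Alice changes her setting...
    a, b = singlet_outcomes(theta_a, 0.0)
    print(f"theta_a={theta_a:.2f}  correlation E[ab]={np.mean(a*b):+.3f}  "
          f"Bob's P(+1)={np.mean(b == 1):.3f}")
# The correlation E[ab] swings with the relative angle (the non-local part),
# but Bob's marginal stays at 0.5 either way: nothing about Alice's setting
# or outcome is readable on Bob's side alone.
```

Under these standard assumptions, the correlations are real, but they only become visible once the two sides' measurement records are brought together and compared.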
5) Please don't treat this as an "attack on AI safety" - all I'm trying to do is open it up to wider metaphysical consideration.
Hmm, interesting that this has -5 karma five minutes after posting. That is not enough time to read the post. Can those downvoting explain? Thank you.
Responding to your last sentence: one thing I see as a cornerstone of the biomimetic AI architectures I propose is the non-fungibility of digital minds. Because such systems would be hardware-bound, humans could have an array of fail-safes to actually shut them down (in addition to other very important benefits, like reduced copyability and constrained recursive self-improvement).
Of course, this will not prevent covert influence, power accumulation, etc., but one can argue such things are already quite prevalent in human society. So if the human-AI equilibrium stabilizes with AIs being extremely influential yet "overthrowable" if they obviously overstep, then I think this could be acceptable.
Hmm, so it is even more troubling: initially it may seem like everything is fine, yet eventually it does not end well.
To me that gives one more reason why we should start experimenting with autonomous, unpredictable intelligent entities as soon as possible, to see whether arrangements other than master-slave are possible.
Thank you. What a coincidence, huh?
Well, I think now you're conflating two things - pointing out that someone made a mistake and telling someone they're being foolish. I have learned from my last relationship that it is crucial to separate the two. I would be extremely careful about labeling someone foolish, as that can be taken as an ad hominem attack. Pointing out that someone has made a mistake (like turning physicalism into a scientific dogma) is admittedly a lesser misstep, but I think it still strikes a nerve with many.
I'm just a bit disappointed that my last two posts have gotten so many downvotes, but not a single person has presented arguments showing that my argumentation is incorrect. Fingers crossed I'm in the 1% of contrarians that happens to be right? haha :D