Just to be clear, I am not arguing in favour of or against dualism. However, it is not true that if dualism were true, it would explain nothing: it is certainly an explanation of consciousness (something like “it arises out of immaterial minds”), just perhaps an unpopular one, or one that suffers from too many problems according to some. Secondly, while I may agree with what you are saying about AC being obvious, this does not really address any part of my argument: many things that seemed obvious in the past turned out to be wrong, so relying on our intuitions rather than arguments does not seem valid. And since there may be reasons that the two cannot turn out to be similar enough (this is the crux of my argument), this may contest your thesis that AC is simply obvious.

I know I am super late to the party but this seems like something along the lines of what you’re looking for: https://www.alignmentforum.org/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios

Hi Gerald, thanks for your comment! Note that I am arguing neither in favour of nor against doom. What I am arguing is the following: when you are trying to prove AI safety, it is not good practice to group AI with technologies that we were able to iteratively improve towards safety. The point here is that without further arguments, you could easily make the reverse argument, and it would have roughly equal force:

P1 Many new technologies are often unsafe and impossible to iteratively improve (e.g. airships).

P2 AI is a new technology.

C1 AI is probably unsafe and impossible to iteratively improve.

That is why I argue that this is not a good argument template: through survivorship bias in P1, you'll always be able to sneak in whatever it is you're trying to prove.

With respect to your arguments about doom scenarios, I think they are really interesting and I’d be excited to read a post with your thoughts (maybe you already have one?).