Štěpán Los
Comments

Sorted by
Newest
No wikitag contributions to display.
The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument
Štěpán Los · 2y · 10

Just to be clear, I am not arguing for or against dualism. However, it is not true that if dualism were true, it would explain nothing: it certainly is an explanation of consciousness (something like “it arises out of immaterial minds”), just perhaps an unpopular one, or one that suffers from too many problems according to some. Secondly, while I may agree with what you are saying about AC being obvious, this does not really address any part of my argument: many things that seemed obvious in the past turned out to be wrong, so relying on our intuitions rather than arguments does not seem valid. And since there may be reasons why the two cannot turn out to be similar enough (this is the crux of my argument), this may contest your thesis that AC is simply obvious.

Reply
Race to the Top: Benchmarks for AI Safety
Štěpán Los · 2y · 20

I know I am super late to the party, but this seems like something along the lines of what you’re looking for: https://www.alignmentforum.org/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios

Reply
AI Risk and Survivorship Bias - How Andreessen and LeCun got it wrong
Štěpán Los · 2y · 54

Hi Gerald, thanks for your comment! Note that I am arguing neither in favour of nor against doom. What I am arguing is the following: when you are trying to prove AI safety, it is not good practice to group AI with technologies that we were able to iteratively improve towards safety. The point here is that without further arguments, you could easily make the reverse argument and it would have roughly equal force:

P1 Many new technologies are unsafe and impossible to iteratively improve (e.g. airships).

P2 AI is a new technology.

C1 AI is probably unsafe and impossible to iteratively improve.

That is why I argue that this is not a good argument template: through survivorship bias in P1, you’ll always be able to sneak in whatever it is you’re trying to prove.

With respect to your arguments about doom scenarios, I think they are really interesting and I’d be excited to read a post with your thoughts (maybe you already have one?).

Reply
−6 · The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument · 2y · 9

13 · AI Risk and Survivorship Bias - How Andreessen and LeCun got it wrong · 2y · 2

4 · Biosafety Regulations (BMBL) and their relevance for AI · 2y · 0

20 · AI Incident Sharing - Best practices from other fields and a comprehensive list of existing platforms · 2y · 0