In my view, this is an example of the low-resolution thinking that occurs before we have direct experience with the actual mechanisms that would lead to danger.

It's easy to imagine bad outcomes and thereby activate the associated emotions; we then wish to urgently control those emotions through problem-solving.

However, our knowledge of the mechanism leading to the outcome is necessarily low-resolution before the mechanism actually exists. So we can only reason in terms of "switching it off" and other highly abstract concepts.

To address the argument: I don't think this is a case of the "AI winning" or an example of the "switch-off-ability" of AI. I think it just feels like it is.

As AI develops, I doubt this will ever be the useful question. The useful questions will be more boring ones that are hard to predict in advance. In other words, we'll keep asking this question until an actual mechanism of danger exists; at that point the right question will be fairly easy to identify, and we'll solve it.

I notice in myself the instinct to "play it safe", but I think this is downstream of a "fear of the unknown" emotional reflex.

With regard to the bigger question of AI safety, I think my argument here pushes me towards a lower risk assessment than I might have had before elucidating it. This is tangential, though.

As a side note: perhaps this is why Sam has commented that he thinks the most sensible approach to AI safety is to keep releasing things incrementally. This builds collective knowledge of actual mechanisms and thus progressively moves the debate into more and more useful territory.