" 'We're generating algorithms that we don't really understand, that can do more and more things and keep getting even harder to understand. 

They often become capable of doing things we didn't realize they'd be able to do- this is also happening more often. 

When we get them to solve a problem, we don't know how they're going to solve the problem and we don't know if one of the steps is going to be something we don't want it to do. 

We're concerned that eventually, one of the steps might be something that really harms a lot of people and we won't know until it's too late. 

This concern gets closer, the more capable we make the algorithms. 

Right now, it's very close. "

 

Leave sci-fi and fantasy stuff out of it. Otherwise you come across as someone obsessed with stories, making up a story of your own.

 

And even if the message gets through, it might provoke emotional fear or panic rather than a problem-solving response. The reaction to that fear won't be to problem-solve, but to look for a different explanation that relieves it.
