Decision theory (which includes the study of risks of that sort) has long been a core component of AI-alignment research.

"Decision theory (which includes the study of risks of that sort)"

No, it doesn't. Decision theory deals with abstract utility functions. It can talk about outcomes A, B, and C where A is preferred to B and B is preferred to C, but it doesn't care whether A represents the status quo, B represents death, and C represents extreme suffering, or whether A represents gaining lots of wealth and status, B represents the status quo, and C represents death, so long as the ratios of utility differences are the same in each case. That is because expected-utility maximization is invariant under positive affine transformations of the utility function, so only those ratios can affect an agent's choices (see the sketch below). Decision theory has nothing to do with the study of s-risks.
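As a minimal sketch of that invariance, with made-up utility numbers (the two assignments below are illustrative assumptions; the underlying von Neumann-Morgenstern fact, that utility functions are unique only up to positive affine transformation, is standard):

```python
# Two utility assignments related by a positive affine transformation
# (u2 = 0.5 * u1 + 10) rank every lottery over A, B, C identically,
# so decision theory cannot tell the two scenarios apart.

# Scenario 1: A = status quo, B = death, C = extreme suffering
u1 = {"A": 0.0, "B": -10.0, "C": -30.0}

# Scenario 2: A = wealth and status, B = status quo, C = death
u2 = {"A": 10.0, "B": 5.0, "C": -5.0}  # = 0.5 * u1 + 10

def difference_ratio(u):
    """(u(A) - u(B)) / (u(B) - u(C)); unchanged by affine transforms."""
    return (u["A"] - u["B"]) / (u["B"] - u["C"])

def indifference_prob(u):
    """The p at which a sure B equals the lottery 'A with probability p,
    else C' in expected utility: p = (u(B) - u(C)) / (u(A) - u(C))."""
    return (u["B"] - u["C"]) / (u["A"] - u["C"])

for u in (u1, u2):
    print(difference_ratio(u), indifference_prob(u))
# Both iterations print: 0.5 0.6666666666666666
```

Since both quantities agree, an expected-utility maximizer makes exactly the same choices over gambles in the two scenarios; nothing in the formalism marks one of them as involving death or extreme suffering.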

Kaj_Sotala: That doesn't seem to refute or change what Alex said?

S-risks: Why they are the worst existential risks, and how to prevent them
by Kaj_Sotala, 20th Jun 2017