Comments

Ludwig · 2y · 10

Interesting, I see what you mean regarding probability, and it makes sense. I guess what is perhaps missing is that when it comes to questions of people's lives, we may have a stronger imperative to be risk-averse.

I completely agree with you about effect size. I guess what I would say is that, given my point 1 from earlier about the variety of X-risks that coordination would contribute to solving, the effect size will always be greater. If we want to maximise utility, it's the best chance we have. The added bonuses are that it is comparatively tractable and immediate, avoiding the recent criticisms of longtermism, while simultaneously being a longtermist solution.

Regardless, it does seem that coordination problems are under-discussed in the community; I will try to make a decent main post once my academic commitments clear up a bit.

Ludwig · 2y · 10

Interesting, yes, I am interested in coordination problems. Let me follow this framework to make a better case. There are three considerations I would like to point out.

  1. The utility in addressing coordination problems is that they affect almost all X-risk scenarios (nuclear war, bioterror, pandemics, climate change and AGI). Working on coordination problems reduces not only current suffering but also X-risk of both AGI and non-AGI kinds.
  2. A 10% chance of something that may be an X-risk in 100 years is not simply 10 times less important than something that is certain to happen. The two are not even comparable, because one is a certainty and the other a probability, and we only get one roll of the dice (allocation of resources). It seems that the rational choice would always be the certainty.
  3. More succinctly: given two buttons, one with a 100% chance of addressing X-risk and one with a 10% chance, which one would you press?

Ludwig · 2y · 10

I understand and appreciate your discussion. I wonder whether we should consider that it may be more morally imperative to work on AI safety because of the hugely impactful problems AI is contributing to right now, if we assume that in finding solutions to these current and near-term AI problems we would also be lowering the risk of AGI X-risk (albeit indirectly).

Given that the likelihood of narrow AI risk is 1 and the likelihood of AGI in the next 10 years is (as in your example) <0.1, it seems obvious we should focus on addressing the former: not only will it reduce suffering that we know with certainty is already happening, as well as suffering that will certainly continue to happen, it will also indirectly reduce X-risk. If we combine this observation with the opportunity cost of not solving other, even more tractable issues (disease, education, etc.), it seems even less appealing to pour millions of dollars and the careers of the smartest people specifically into AGI X-risk.

A final point is that the worst issues caused by current and near-term AI appear to be the degradation of the coordination structures of Western democracies (disinformation, polarisation and so on). If, following Moloch, we understand coordination to be humanity's most vital tool for addressing problems, we see that focusing on current AI safety issues will improve our ability to address every other area of human suffering.

The opportunity costs of not focusing on improving coordination in Western countries seem to carry X-risk-level consequences, while the probability of the former is 1 and that of AGI is <1.

Ludwig · 2y · 70

Why should we throw immense resources at AGI X-risk when the world faces enormous issues with narrow AI right now (e.g. destabilised democracy, the mental health crisis, worsening inequality)?

Is it simply a matter of how imminent you think AGI is? Surely the opportunity cost is enormous, given the money and brainpower we are spending on AGI, something many don't even think is possible, versus something that is happening right now.