I have a slight problem with Existential Risk. Not with it as a thing to worry about: it makes lots of sense to worry about it. My problem will only emerge if it becomes an industry, with all the distortion from inadequacy and so on that entails. If we want more than the current handful of rationalist academics working on it, we need to make sure everyone's incentives are properly aligned when the field is scaled up.
A company in the Existential Risk industry has very little reason to be accurate about the risk. The greater it can make a potential risk sound, the more money it can extract from governments and other funders to study it. So it has a real incentive to inflate risks. What can we do to mitigate that problem?
I've had some ideas around creating a professional body that aligns its members' incentives with truth-seeking: things like giving prizes for the accuracy of models (in the short term), and helping retrain people when something society was worried about turns out not to be such a problem, giving them an economic line of retreat.
A government could then make sure it hires only companies that exclusively employ members of this body. I'll write this up properly at some point, so it can get a good kicking.
But I was wondering: does anyone else have thoughts on this?