Are there any good taxonomies or categorizations of risks from AI-enabled systems (broadly defined) that aren't focused solely on risks to society as a whole or global catastrophic risks? Ideally the taxonomy would cover things like accident risks from individual factory robots, algorithmic bias against individuals or groups, privacy and cybersecurity issues, misuse by hackers or terrorists, incidental job loss due to AI, etc. It should ideally also cover the big society-wide or global catastrophic risks; it just shouldn't be exclusively about those.

zeshen · Mar 07, 2024

I'm pretty sure you have come across this already, but just in case you haven't:

https://incidentdatabase.ai/taxonomy/gmf/