Institutions Cannot Restrain Dark-Triad AI Exploitation
On how institutional designs for governance cannot restrain corporate, criminal, and political groups of humans who gain increasing power from training and exploiting increasingly capable – eventually power-seeking – machine learning architectures. This elaborates on the human-to-human stage of the previous post. Excerpts compiled below:

> > How can governments and companies
> > cooperate to reduce misuse risks from AI
> > and equitably distribute its benefits?
>
> * that people and governments
>   simply do not have any way
>   to understand and mitigate
>   the significant advantage
>   that companies having and deploying AI
>   have over their own capabilities.
>
> * that the governance challenges perceived
>   by many/most regular American people
>   to be the most likely to be impactful
>   to individual people "soon"
>   (i.e., within the next decade),
>   and which were also considered
>   by them to be "the highest priority,
>   most significant/important issues", etc.,
>   in regards to AI, machine learning,
>   and enhanced use of technology in general,
>   were:
>
>   1. "Preventing AI-assisted surveillance
>      from violating privacy and civil liberties".
>
>   2. "Preventing AI from being used
>      to spread fake and harmful content online".
>
>   3. "Preventing AI cyber attacks
>      against governments, companies,
>      organizations, and individuals".
>
>   4. "Protecting data privacy".
>
> * that the issues listed by the common people
>   as being of 'primary concern'
>   are actually less likely to be personally noticeable
>   than the use of AI tools by criminals,
>   by businesses engaged in predatory practices,
>   by cultural/cult leaders (operating in their own interests),
>   by politicians with predominantly private interests, etc.,
>   to do the kind of sense-making (perception)
>   and action-taking (expression)
>   as necessary to cre