On how governance institutional designs cannot restrain corporate, criminal, and political groups of humans who gain increasing power from training and exploiting increasingly capable, and eventually power-seeking, machine learning architectures.

Elaborating on the human-to-human stage of the previous post. Excerpts compiled below:
> How can governments and companies cooperate to reduce misuse risks from AI and equitably distribute its benefits?

that people and governments simply do not have any way to understand and mitigate the significant advantage that companies having and deploying AI have, in regards to their own capabilities.

that the governance challenges perceived by many/most regular American people to be the most likely to be impactful to individual people "soon" (ie, within the next decade), and which were also considered by them to be "the highest priority, most significant/important issues", etc, in regards to AI, machine learning, and enhanced use of technology in general, were:

> 1. "Preventing AI-assisted surveillance from violating privacy and civil liberties".
> 2. "Preventing AI from being used to spread fake and harmful content online".
> 3. "Preventing AI cyber attacks against governments, companies, organizations, and individuals".
> 4. "Protecting data privacy".

that the issues listed by the common people as being of 'primary concern' are actually less likely to be personally noticeable than the use of AI tools by criminals, by businesses engaged in predatory practices, by cultural/cult leaders (operating in their own interests), and by politicians with predominantly private interests, etc, to do the kind of sense making (perception) and action taking (expression) necessary to create better honeypots; more, deeper, and more complex entanglements; and to implement more effective and efficient extraction and extortion of resources at higher frequency, intensity, and consequentiality, at larger scale, more quickly, more invisibly/covertly, etc, in more and more ways that are more and more difficult to avoid, prevent, mitigate, and heal/restore from, for larger and larger and more varied fractions of the population, etc.

where/moreover, as these tools become more and more widespread, effective, etc, that more and more people will be using them, and/or will find that they have to use them, in order to remain competitive with their neighbors (ie, as per any other/similar multi-polar trap scenario), such that the prevalence and variety of such traps, risks, harms, costs, etc, everywhere increases, systemically, with universal extraction occurring to such an extent, in so many ways, for so many resource kinds, with so many degrees of resource motion that is overall globally unconscious, that the net effect is eventual inexorable system/civilization/cultural collapse.

that the overall effect of introducing AI/machine learning is that it ends up being used for more effective social pathology, as evidenced in the increasing occurrence of sophisticated bank fraud, stock market manipulation, back room dealing, government bailouts, etc.

that most people do not actually realize/understand the actual most likely risks/hazards/costs associated with widespread AI use/deployment.

- as including people in government.

> Can existing governments be used to prevent or regulate the use of AI and/or other machine learning tech by predatory people in predatory ways?

as in: Can governments make certain harmful, risky, or socially costly activities illegal, and yet also be able to effectively enforce those new laws, so as to actually/effectively protect individuals/groups from the predatory actions of other AI/machine/weapon empowered individuals/groups, in ways that favor:

- making the right outcome (as individually and socially beneficial) much more likely than the wrong/harmful outcome;
- early detection of risks, harms, costs, law violations, etc;
- the effective, complete, proactive mitigation of such risks/harms/costs, etc;
- the restoration and healing of harm, reparation of cost, etc, as needed to restore actual holistic wholeness, of individuals, families, communities, cultures, etc.
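The multi-polar trap described above has the structure of a prisoner's dilemma, which can be made concrete with a toy payoff model. The numbers below are illustrative assumptions, not figures from the source: they only encode the claim that adopting extractive AI tooling is individually rational no matter what others do, yet universal adoption leaves everyone worse off than mutual restraint.

```python
# Toy payoff model of the multi-polar trap (illustrative numbers only).
# Each actor chooses to "adopt" extractive AI tooling or "abstain".
# Payoff tuples are (row player, column player).

PAYOFF = {
    ("abstain", "abstain"): (3, 3),   # shared, sustainable baseline
    ("abstain", "adopt"):   (0, 5),   # the abstainer is out-competed
    ("adopt",   "abstain"): (5, 0),
    ("adopt",   "adopt"):   (1, 1),   # mutual extraction degrades the commons
}

def best_response(other_choice: str) -> str:
    """Return the row player's payoff-maximizing reply to the other's choice."""
    return max(("abstain", "adopt"),
               key=lambda mine: PAYOFF[(mine, other_choice)][0])

# Adoption is the dominant strategy against either choice by the other actor...
assert best_response("abstain") == "adopt"
assert best_response("adopt") == "adopt"

# ...yet the resulting equilibrium pays everyone less than mutual restraint,
# which is exactly why each actor "has to use them" to keep up.
assert PAYOFF[("adopt", "adopt")][0] < PAYOFF[("abstain", "abstain")][0]
```

This is why the trap "everywhere increases, systemically": no unilateral choice escapes it, only a change to the payoff structure itself (ie, governance) can.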
^– where in general: no; not with the governance structures/methodologies currently in place.

that only much more effective, actually good governance structures will have any hope of actually mitigating the real risks/costs/harms of any substantial new technology based on complexity itself (ie, examples as AI, machine learning, biotech, pharmaceuticals, and all intersections and variants of these).

where, in any contest between people savvy with AI use, the rate of change of that technology and its use, the likely naivety of people in government attempting to regulate that AI and its use, and the fact of extensive, very well funded industry lobbyists, all being (much) more knowledgeable, skillful, and moreover themselves empowered with the use of the tech itself, so as to either influence the policy makers, or to be/become the policy makers themselves, and thus to be serving their own interests (rather than the interests of the actual public good); that anyone who actually has the public interest in mind, and who somehow manages, by complete accident, to find themselves at a government post, will for sure have too many things, of way too much complexity, concurrently occurring, for such ostensible government regulators to have or provide the sufficient amount of attention and understanding that would actually be needed to regulate the AI and machine learning industry, and/or its applications and/or uses, in anything at all approaching any sort of effective and actually risk mitigating manner, even when considered for acute problems only, leaving aside the complete non-address of long term problems.
- as consistent with nearly all historical precedent.

that artificial intelligences are tools in the hands of psychopaths: those who, in the completeness of being incapable of feeling the pain of others, are characteristically and unusually unable to relate to such feelings, or to feelings/meaning/value in, of, or in association with, other humans at all, as conscious, alive, and worthwhile beings with value, meaning, and agency, a will and sovereignty of their own.

where psychopaths have aligned tendencies with the nature of the machines themselves, such that this near perfect mating of solely-personal-benefit agency with the soulless yet adaptable, responsive nature of the machine intelligence process makes for a significantly enhanced psychopath with new superpowers.

where leaders (either in business or governance, though more typically in business) have learned to 'do whatever it takes' to climb the social ladder (on the backs of whomever), to 'win regardless of whatever cost' (to others, and maybe to (future) self), that such machine learning tools become indispensable to the operations of the business itself, enabling increased efficiency of extraction across all networks of capability.

as combining Metcalfe's law with network commerce to build the ultimate parasitic system, intimately hostile to all humans, and possibly to all of life, through the will and agency of the humans who elect to use such tools, and/or who moreover may be required to use such tools, so as to effectively continue to compete with (their illusion of) (the capabilities of) "the other guy".

that new governance (and economic) architectures will be needed, which are inherently anti-psychopathic and anti-corruptible, to be anywhere near capable of dealing with situations like this.
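For reference, Metcalfe's law, invoked above, holds that the value of a network grows roughly as the square of its number of users, since the count of possible pairwise connections among $n$ users is

$$\binom{n}{2} = \frac{n(n-1)}{2} \sim \frac{n^2}{2}, \qquad \text{so } V(n) \propto n^2 .$$

The relevance here is that a parasitic system coupled to such network effects scales its reach quadratically with participation, not linearly.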
→ Read link to Forrest Landry's blog for more.
Note: Text is laid out in his precise research note-taking format.