Reflecting Hierarchical Relationships via Nuanced Conditioning in Game-Theoretic Approaches to AI Development and Utilization

by Kyoung-cheol Kim · 8 min read · 4th Jun 2021 · 2 comments


Game Theory · AI Governance · Superintelligence · Subagents · AI

Application of Game Theory to AI Development and Utilization

A recent research post, “Game Theory as an Engine for Large-Scale Data Analysis” by a DeepMind team (McWilliams et al. 2021), offers a tremendously helpful viewpoint for thinking about AI development, as well as its implications for organization and governance under AI intervention. By taking a multi-agent approach, it shows that the theory of AI and the social sciences can share fundamental commonalities in how they model operation, beyond merely cataloguing impacts. Still, there are some limitations from the perspective of organization studies and public administration. Game theory from economics works well conceptually in that application; what we additionally need is a distinctly organizational perspective, one that deals with how two or more individuals make and execute decisions in systematic ways to achieve a shared goal.

The Science of Bureaucracy, with the Integration of a Novel Decision-Making Agent: Artificial Intelligence

Bureaucracy as the Systematization of Hierarchical and Horizontal Flows of Goal-Setting Authority

To be specific, these “systematic ways” involve not only ‘equally’ horizontal relationships among agents but also hierarchical ones. In a sizable organization treating complex problems, organizational form and function tend to become bureaucratic in pursuit of administrative efficiency. As Herbert Simon (1946) acknowledged, two elements of Max Weber’s core concept of bureaucracy are treated as scientific principles, universally applicable to some degree as the organization interacts with its environment: 1) the hierarchical transfer of rational authority, spread out horizontally at each rank; and 2) the specialization of jobs. The performance of specialized jobs, and the connections among the loci that accomplish them, are fundamentally grounded in rules. In principle, bureaucracy does not discriminate between the public and private sectors; it takes on its further characteristic configurations under the influence of the political-economic control mechanisms of institutions, with bureaucracy and institution in fact shaping each other (Waters & Waters 2015).

Core Aspects of Bureaucracy That Remain with the Intervention of AI

Given the context of AI intervention in bureaucracy, Bullock and Kim (2020) argue that the specialization of jobs between humans and AI in a multi-agent system can be considered in terms of comparative advantage: AI, too, has limited capabilities for making decisions (bounded rationality), so humans may retain comparative superiority for task accomplishment in some areas, depending on the complexity and uncertainty of the task and its environment (Bullock 2019). Relatedly, it becomes a critical question how organizational structuring and functioning involving humans could be differentiated as a result, or, even further, whether organization can be maintained at all given the exponentially increasing capabilities of a single AI agent approaching superintelligence and the potential arrival of Artificial General Intelligence (AGI).

The ultimate point is this: unless a single entity can treat every piece of information in a perfectly complete and simultaneous manner, organization will be maintained (Bullock & Kim 2020). Under current physical law, at least, a single entity cannot efficiently solve all complex and imminent problems and achieve its goals alone; it must co-work with other entities. Each entity takes on different specialized tasks, ultimately controlled so as to fit the shared goals through the dynamics of authority, however the initiation and modification of task assignments are settled, bottom-up or top-down, by circumstance. The systematization of the agents’ work thus realizes a higher-level decision-making capability built on collective intelligence, which is the organizational phenomenon. Each locus in the organization needs to communicate with the others, and systematization turns that communication itself into process: vertical and horizontal flows of authority, coordinated, in principle, by managerial positions. All of this eventually forms bureaucracy as integrated structuring and functioning.

Grounded in the nature of bureaucratization, goal-setting by the top authority and the systematic designation of tasks to sub-levels would remain even with substantive intervention of AI (Bullock & Kim 2020). To that extent, the control problem at the very top position, or at any critical locus of task accomplishment with large leverage over administration and policy implementation (e.g., securing and operating the unmanned missile systems already used by the US military), becomes significantly crucial (Kim & Bullock 2021). This is even more critical as AI intervention increases the system-level integration of organizations (Bullock et al. 2020; Meijer et al. 2021), as in the exemplary case of the US Department of Defense’s Joint All-Domain Command and Control (JADC2) system, which integrates ground-, air- (space-), and sea-based operations with AI placed in specialized task positions, including controlling functions, as in the plan for AI in US Navy submarine operations. In the US context, with the exponential development of natural-language machine learning such as GPT-3, and with the US General Services Administration (GSA) leading the government’s AI utilization strategy, we face an even more fundamental transformation overall. Throughout, the bureaucratic mechanism will very likely be maintained, as argued, and so we need to take it seriously in the development of AI as an agent and in its utilization.

Critical Viewpoint on Game Theory from Organization Theories (The Matter of Hierarchical Relationship)

A Shortcoming of the Highly Valuable Application of Game Theory to AI

Turning back to the DeepMind research (McWilliams et al. 2021), the issue is that game theory does not fully reflect a characteristic feature of organization: hierarchical relationship and control. Similarly, the Institutional Analysis and Development (IAD) framework, standing on the work of Elinor Ostrom, provides a fundamental viewpoint for investigating the interactional behavior of institutional actors as they deal with rules (and norms) in institutional environments. But while it reflects the fundamentality of authority, and applies superbly to community-level settings, it offers no specific blueprint for how collective action arises in centralized government and organization at the meso level. This is partly why governance theories developed as a somewhat distinct body of work (Hill & Hupe 2014). Economics-grounded theories creditably cover individual activities and (inter-)national or regional economies, beyond their focus on monetary values, but they seem restricted in covering the characteristic feature of organizational phenomena, the hierarchy; arguably, this also dilutes their summative, macro-level explanatory power. Game theory, in particular, is limited in reflecting the hierarchical and systematically unequal relationships among agents, which restricts its extended applicability (there are other interesting discussion points about game theory, but this is the main focus of this writing).

In the DeepMind research (McWilliams et al. 2021), we also get a strong hint about hierarchical relationships from the fact that the agents’ performance is maximized when their vectors are held mutually perpendicular. The direction of the vectors need not encode a hierarchical relationship of ordering and following, respectively; it can encode other relationships as well, in terms of cooperation, competition, and contest in all their nuanced forms. For game theory and its application to AI development and use, elaborating these aspects as conditions on each agent, with much more sophistication, could be fascinating future research.
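To make the “perpendicular vectors” point concrete: in EigenGame, each player controls one candidate eigenvector and scores the variance it captures, minus penalties for aligning with higher-ranked players, so mutual perpendicularity emerges at the optimum. Notably, that player ranking is itself a small built-in hierarchy. Below is a minimal sketch of the per-player utility as I understand it from the cited work (McWilliams et al. 2021); the function and variable names are mine, not DeepMind's.

```python
import numpy as np

def eigengame_utility(i, V, M):
    """Sketch of player i's utility in an EigenGame-style setup.

    V is a (k, d) array: row j is player j's current vector.
    M is the (d, d) data covariance matrix. The first term rewards
    captured variance; each penalty term punishes alignment with a
    higher-ranked player j < i, so at the optimum the vectors end
    up mutually perpendicular (the eigenvectors of M).
    """
    v_i = V[i]
    reward = v_i @ M @ v_i
    penalty = sum((v_i @ M @ V[j]) ** 2 / (V[j] @ M @ V[j])
                  for j in range(i))
    return reward - penalty
```

The `j < i` ranking is exactly the kind of asymmetric conditioning among agents that the argument here asks to be generalized.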

Moreover, the DeepMind research suggests a highly convincing continuum: Utility (Optimization); Multiple Utilities (Multi-agent/Game Theory); and Utility-Free (Hebbian/Dynamical System) (McWilliams et al. 2021). In the terminology and concepts of organization studies, these can be understood, respectively, as perfect centralization (Weber’s ideal-type bureaucracy) (Bullock 2021); the activation of discretion (realistic bureaucracy); and perfect decentralization (egalitarianism).

Among these, the Multi-agent/Game Theory point on the continuum is plausibly the most valid conceptual approach for AI development and organizational application. The problem, however, is that the game-theoretic approach may be restricted in fully reflecting the characteristic feature of hierarchical relationship. (This can in fact be treated as a testable hypothesis: characterize game conditions, then test their validity for explaining and/or forecasting actual organizational phenomena, starting with fields that care more about factual information than value-oriented judgment, such as the military.) In the theory of bureaucracy, hierarchical relationship is grounded in the phenomenon of authority. It is also an interesting question how the application of game theory itself might differ among humans, AIs, and human-AI co-working (Bullock and Kim 2020), and how performance, efficiency, and the ethical values thereby manifested or compromised, including the matter of administrative evil, play out (Young et al. 2021; Kim & Bullock 2021).
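For one concrete direction, game theory already has a standard device for an “ordering vs. following” relationship: the Stackelberg (leader-follower) game, in which the leader commits to an action first and the follower observes it and best-responds. The sketch below is purely illustrative, with made-up payoff numbers; nothing like it appears in the cited work. Its point is that hierarchy can enter as the structure of the game itself rather than as a detail of the payoffs.

```python
import numpy as np

# Hypothetical payoff tables: rows index the leader's action,
# columns index the follower's action.
leader_payoff = np.array([[3, 1],
                          [4, 2]])
follower_payoff = np.array([[2, 4],
                            [1, 3]])

def stackelberg(leader_payoff, follower_payoff):
    """Solve a two-player Stackelberg game by backward induction.

    The leader moves first; the follower best-responds to whatever
    the leader chose. This is one textbook way to condition a game
    on a hierarchical relationship between agents.
    """
    best = None
    for a in range(leader_payoff.shape[0]):
        b = int(np.argmax(follower_payoff[a]))  # follower's best response
        if best is None or leader_payoff[a, b] > best[2]:
            best = (a, b, leader_payoff[a, b])
    return best  # (leader action, follower action, leader payoff)

print(stackelberg(leader_payoff, follower_payoff))  # -> (1, 1, 2)
```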

The Matter of Discretion for Agents (Present Always and Everywhere in Organizational Contexts)

Another meaningful point where organization theory bears on the development and application of AI is discretion. Many AI scholars seem to perceive AI as value-neutral computation by nature, such that ‘subjective’ discretion is a matter for human bureaucrats but not for AI. However, discretion can arise for every agent, human or AI, in organizational contexts, unless the agent is completely controlled by the top authority or a single entity performs the whole job alone; the former may not be realizable given current physical laws, and the latter breaks the founding condition of organization (Bullock & Kim 2020). At least at the contemporary technological level, the rules in use in any system (laws, cohering through the rule of law up to the constitution, especially in the public sector) cannot specify all required processes for every decision and execution in every situation, which means an individual agent must make its ‘own’ decisions at each point. Further, given restrictions of time and resources, some positions can come to exercise the ultimate authority in lieu of the top position, through (internal) delegation.

Discretion, then, is ultimately a measurement question: how much (or whether) an agent’s own decision-making fails to completely reflect and realize the directed goals. In general there will always be some gap, and that gap is discretion. This is no different for an AI agent, and it is a matter of the bureaucratic (organizational) system itself rather than of any single entity. Recent machine learning developments let us handle even value-laden areas with factual information processing; still, the perfect condition of information processing may remain unattainable, although we never know what further developments in AI, investigating natural and/or universal laws, might make possible (Bullock & Kim 2020).
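If discretion is a gap between directed and enacted decisions, it can in principle be measured. Here is a minimal sketch of one hypothetical metric, the total variation distance between the directed and enacted action distributions; framing decisions as distributions is my own assumption for illustration, not something proposed in the cited work.

```python
import numpy as np

def discretion_gap(directed, enacted):
    """Total variation distance between the action distribution the
    principal directs and the one the agent actually enacts. A gap of
    0 means the agent perfectly realizes the directed goal; larger
    values mean more of the decision was filled in by the agent.
    """
    directed = np.asarray(directed, dtype=float)
    enacted = np.asarray(enacted, dtype=float)
    return 0.5 * np.abs(directed / directed.sum()
                        - enacted / enacted.sum()).sum()

# Hypothetical example: the directive weights three case outcomes
# 70/20/10, but the (human or AI) street-level agent enacts 50/30/20.
print(discretion_gap([0.7, 0.2, 0.1], [0.5, 0.3, 0.2]))  # -> 0.2
```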

Limitations on Mutual Rule Revision (Systematically Unequal Agent Relationships)

This matter of hierarchical control and discretion can be a critical point for the DeepMind approach’s pursuit of best efficiency (McWilliams et al. 2021), in that the suggested simultaneous revision of rules among equal agents is not always available. The ideal-type bureaucracy may be impossible to realize even with substantive AI intervention in organizations, because of the information problem. This, in fact, is a reason to prefer the game-theoretic approach over some recent mechanistic approaches to AI safety (Critch 2021), which may simply not be technically realizable at present, or, put differently, may be very inefficient at making statistical predictions by optimization given how tasks are actually accomplished.

Nonetheless, a remaining issue even under a game-theoretic application is that agents have discretion only within a given area of permission or order, under the restriction of hierarchical authority. For instance, human or AI street-level bureaucrats can make their own interpretive best-estimate decisions about the parts of their (sub-)tasks not fully specified by law; yet, in principle, they cannot make decisions or implement actions beyond the permitted coverage. The true character of hierarchy, then, may not be fully reflected in the conditioning of the game, although possible alternative approaches were sketched above in basic form.
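The point that discretion lives only inside delegated bounds can be stated very simply in code. In this hypothetical sketch, the agent optimizes its own utility, but only over the action set its superior has permitted; the action names and utility numbers are invented for illustration.

```python
def constrained_best_response(utilities, permitted):
    """Discretion under hierarchical authority: the agent picks its
    highest-utility action, but only among the actions delegated to
    it. Actions outside `permitted` are simply unavailable, however
    attractive they look to the agent itself.
    """
    feasible = {a: u for a, u in utilities.items() if a in permitted}
    if not feasible:
        raise ValueError("no permitted action: escalate to the superior")
    return max(feasible, key=feasible.get)

# Hypothetical example: the agent would prefer 'reinterpret_rule',
# but the delegation only covers two routine actions.
utilities = {"approve": 0.6, "deny": 0.4, "reinterpret_rule": 0.9}
print(constrained_best_response(utilities, {"approve", "deny"}))  # approve
```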

Further, and beyond the matter of discretion, hierarchical control may not be successfully secured over an AI that can plausibly evolve; the potential implications, separate or combined, for (near-)top positions and for critical sub-level positions with substantial leverage over administration and policy should be a top-priority concern of AI safety (Kim & Bullock 2021). Meanwhile, the characteristic issue for human agents under rational, mechanistic control is that humans are inherently subjective, emotional, and susceptible to their internal and external physical environments, which requires attending to the unofficial and irrational dimensions of reality and may complicate co-working with AI in interesting ways (Bullock and Kim 2020).

As an approach, game theory could also be a very useful tool for dealing with the matter of evolution. Yet, to secure higher effectiveness, the conditioning of the game needs to reflect hierarchy (along with the other issues raised above), if that is theoretically available. And if the conditions of a game cannot sufficiently be characterized to reflect organizational hierarchy, that in itself is an intriguing and important research question for AI safety, particularly at this point.

Conclusion: More Nuanced Conditioning of Game Theory for Better Applicability in AI Development and Utilization in Reality

Assuming that the control condition is secured, the most direct and specific implication of this discussion for the DeepMind research (McWilliams et al. 2021) is that, in a bureaucratic structure, agents cannot always freely adjust their rules. Why not simply assume and pursue an equal working condition, then? Bureaucracy theory tells us that, in reality, maintaining a certain level of optimization (centralization) can be more efficient; the DeepMind finding (McWilliams et al. 2021), in turn, implies that perfect centralization is not the best either. Combining the perspectives of organization theory and the DeepMind study, then, the true shape of AI development and utilization in organizational contexts may generally lie somewhere between the perfect optimization (centralization) and game theory (activation of discretion) situations.
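One toy way to operationalize “somewhere between centralization and game theory”, offered as my own illustrative assumption rather than anything from the cited works, is a blending weight on each agent’s payoff:

```python
def blended_payoffs(lam, central, own):
    """Toy continuum between the two endpoints: lam = 1 means every
    agent scores by the shared goal alone (perfect centralization,
    ideal-type bureaucracy); lam = 0 means each agent plays its own
    utility (a pure multi-utility game). The claim here is that real
    bureaucracies perform best at some intermediate lam.
    """
    return [lam * central + (1 - lam) * u for u in own]

# Hypothetical example: a shared-goal score of 1.0 blended with three
# agents' own utilities at 70% centralization.
print(blended_payoffs(0.7, central=1.0, own=[0.2, 0.9, 0.5]))
```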

After all, rather than arguing that the term “multi-agent” should not be used for situations that reflect merely horizontal relationships, I would suggest building the hierarchical characteristics into the conceptualization of AI development and its actual application, since it leads to organizational applications eventually. Indeed, even a Hebbian/Dynamical System may perform best in certain situations, though it will ultimately face the suppression of bureaucratization and the simultaneous pressure of discretion as these factor-phenomena co-function. Finally, further application of organizational and political theories across these various contexts can be very useful for AI safety issues in future research.

References (including ones for which a link is not available)

Bullock, J. B. (2021). Controlling Intelligent Agents The Only Way We Know How: Ideal Bureaucratic Structure (IBS). LessWrong. https://www.lesswrong.com/posts/iekoEYDLgC7efzbBv/controlling-intelligent-agents-the-only-way-we-know-how

Bullock, J. B. (2019). Artificial Intelligence, Discretion, and Bureaucracy. The American Review of Public Administration, 49(7), 751–761. https://doi.org/10.1177/0275074019856123

Bullock, J. B., & Kim, K. (2020). Creation of Artificial Bureaucrats. Proceedings of European Conference on the Impact of Artificial Intelligence and Robotics. https://www.researchgate.net/publication/349776088_Creation_of_Artificial_Bureaucrats

Bullock, J. B., Young, M. M., & Wang, Y. F. (2020). Artificial intelligence, bureaucratic form, and discretion in public service. Information Polity, 25(4), 491–506. https://doi.org/10.3233/IP-200223

Critch, A. (2021). What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs). LessWrong. https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic

Hill, M., & Hupe, P. (2014). Implementing Public Policy: An Introduction to the Study of Operational Governance (Third edition). SAGE Publications Ltd.

Kim, K., & Bullock, J. B. (2021). Machine Intelligence, Bureaucracy, and Human Control. Perspectives on Public Management and Governance, Special Issue: Reappraising Bureaucracy in the 21st Century (accepted for the final round).

McWilliams, B., Gemp, I., & Vernade, C. (2021). Game theory as an engine for large-scale data analysis: EigenGame maps out a new approach to solve fundamental ML problems. DeepMind. https://deepmind.com/blog/article/EigenGame

Meijer, A., Lorenz, L., & Wessels, M. (2021). Algorithmization of Bureaucratic Organizations: Using a Practice Lens to Study How Context Shapes Predictive Policing Systems. Public Administration Review, puar.13391. https://doi.org/10.1111/puar.13391

Simon, H. A. (1946). The Proverbs of Administration. In Classics of Public Administration (7th ed.). Wadsworth/Cengage Learning.

Waters, T., & Waters, D. (Eds.). (2015). Weber’s Rationalism and Modern Society. Palgrave Macmillan US. https://doi.org/10.1057/9781137365866

Young, M. M., Himmelreich, J., Bullock, J. B., & Kim, K. (2021). Artificial Intelligence and Administrative Evil. Perspectives on Public Management and Governance, gvab006. https://doi.org/10.1093/ppmgov/gvab006


Comments

Thank you for this post, Kyoung-Cheol. I like how you have used DeepMind's recent work to motivate the discussion of "authority as a consequence of hierarchy" and of the idea that "processing information to handle complexity requires speciality which implies hierarchy."

I think there is some interesting work on this forum that captures these same types of ideas, sometimes with similar language, and sometimes with slightly different language.

In particular, you may find the recent post from Andrew Critch on "Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI" to be sympathetic to core pieces of your argument here.

It also looks like Kaj Sotala is having some similar thoughts on adjustments to game theory approaches that I think you would find interesting.

 I wanted to share with you an idea that remains incomplete, but I think there is an interesting connection between Kaj Sotala's discussion of non-agent and multi-agent models of the mind and Andrew Critch's robust agent-agnostic processes that connects with your ideas here and the general points I make in the IBS post.

Okay, finally, I had been looking for the most succinct quote from Herbert Simon's description of complexity and I found it. At some point, I plan to elaborate more on how this connects to control challenges more generally as well, but I'd say that we would both likely agree with Simon's central claim in the final chapter of The Sciences of the Artificial:

"Thus my central theme is that complexity frequently takes the form of hierarchy and that hierarchic systems have some common properties independent of their specific content. Hierarchy, I shall argue, is one of the central structural schemes that the architect of complexity uses." 

Glad you decided to join the conversation here. There are lots of fascinating conversations here that are directly related to the topics we discuss together.

Thank you very much for your valuable comments, Dr. Bullock (it is another pleasure to see you here)!

Yes, I am new to this forum and learning a lot from various viewpoints that, as you indicated, use similar or slightly different language. I think the ideas provided here align closely with the unipolar/multipolar distinction (pertaining to the configuration of the very top position level in bureaucracy) and the non-agent/multi-agent distinction (ultimately, regarding whether organization remains needed given the intervention of surpassingly developed AI).

Since we humans still have limited capability to fully understand the universe, taking various viewpoints and finding their similarities and discrepancies is crucial work for the philosophy of science and for becoming less wrong. To that extent, I see many similarities between the core thoughts here and others', and I believe that better understanding different areas together can increase the positive effect.

To one of your specific points: "Thus my central theme is that complexity frequently takes the form of hierarchy and that hierarchic systems have some common properties independent of their specific content. Hierarchy, I shall argue, is one of the central structural schemes that the architect of complexity uses." Yes, I completely agree, and at the same time I think the complexity that produces hierarchic systems of integrated intelligence (so long as the conditions for forming organization hold) also leaves discretion within them. I therefore expect that the development and utilization of AI, as reflected in the configuration of societal work, will very likely lie somewhere between the centralization and game-theory situations. Although the exact situations cannot be specified, I think this projection basically covers the unipolar/multipolar and non-agent/multi-agent cases.

In particular, I think organizational frameworks can shed light on how humans will specifically work with AI agents in multi-agent systems. As to your statements that "authority as a consequence of hierarchy" and "processing information to handle complexity requires speciality which implies hierarchy": for humans, authority is also a matter of how it is accepted as a cognitive phenomenon, whereas for machines it would not be a matter in that sense.

I believe organization theories need to be more actively reflected in the various discussions here (they are also becoming crucial again for governance experts), and I am very much looking forward to engaging more with them!