Justin Bullock

Writer, Reader, Thinker, Researcher, Podcaster


Comments

Thank you for the comment and for reading the sequence! I posted Chapter 7 Welcome to Analogia! (https://www.lesswrong.com/posts/PKeAzkKnbuwQeuGtJ/welcome-to-analogia-chapter-7) yesterday and updated the main sequence page just now to reflect that. I think this post starts to shed some light on ways of navigating this world of aligning humans to the interests of algorithms, but I doubt it will fully satisfy your desire for a call to action. 

I think there are both macro policies and micro choices that can help.

At the macro level, there is an over-accumulation of power and property by non-human intelligences (machine intelligences, large organizations, and mass-market production). The best guiding remedy here that I've found comes from Huxley. The idea is pretty straightforward in theory: spread the power and property around, away from these non-human intelligences and towards as many humans as possible. This seems to be the only reasonable cure for organized lovelessness and its consequence of massive dehumanization.

At the micro level, there is some practical advice in Chapter 7 that also originates with Huxley. The suggestion here is that to avoid being an algorithmically aligned human, choose to live with love, intelligence, and freedom as your guideposts. Pragmatically, one must live in the present, here and now, to experience those things fully.

I hope this helps, but I'm not sure it will. 

The final thing I'd add at this point is that I think there's something to reshaping our technological narratives around machine intelligence away from its current extractive and competitive logics and directed more generally towards humanizing and cooperative logics. The Erewhonians from Chapter 7 (found in Samuel Butler's Erewhon) have a more extreme remedy: stop technological evolution, turn it backwards. But short of global revolution, this seems like proposing that natural evolution should stop.

I'll be editing these 7 chapters, adding a new introduction and conclusion, and publishing Part I as a standalone book later this year. And as part of that process, I intend to spend more time continuing to think about this.

Thanks again for reading and for the comment!

There is a growing academic field of "governance" that would variously be described as a branch of political science, public administration, or policy studies. It is a relatively small field, but it has several academic journals that fit the description of the literature you're looking for. The best of these journals, in my opinion, is Perspectives on Public Management & Governance (although it focuses on public governance structures to the point of ignoring corporate governance structures).

In addition to this, there is a 50-chapter OUP AI Governance Handbook that I've co-edited with leading scholars from economics, political science, international affairs, and other fields of social science who are interested in these exact ideal governance questions as you describe them. Ten of the chapters are currently available, but I also have complete copies of essentially every chapter that I would be happy to share directly with you or anyone else who comments here and is interested. Here's the Table of Contents. I'm certainly biased, but I think this book contains the cutting-edge dialogue around both how ideal governance may be applied to controlling AI and how the development of increasingly powerful AI presents new opportunities and challenges for ideal governance.

I have contributed to these questions both by trying to understand what the elements of ideal governance structures and processes might be for social insurance programs, AI systems, and space settlement, and by trying to understand the concerns of integrating autonomous and intelligent decision-making systems into our current governance structures and processes.

I think there are some helpful insights into how to make governance adaptive (the reset/jubilee you described) and into defining the elements of the hierarchy (the various levels) of the governance structure. The governance literature looks at micro/meso/macro levels of governance structures to help illustrate how some governance elements are best described and understood at different levels of emergence or description. Another useful construct from governance scholars is that of discretion, the breadth of the choice set given to an agent carrying out the required decision making of the various governance entities. This is where much of my own interest lies, which you can see in work I have with colleagues on topics including discretion, the evolution of bureaucratic form, artificial discretion, administrative evil, and artificial bureaucrats. This work builds on the notion of bounded rational actors and how they execute decisions in response to constitutional rules and institutional structures. Here & here in the AI Governance Handbook, colleagues and I look at how Herbert Simon's and Max Weber's classic answers to these ideal governance questions hold up in a world with machine intelligence, and we examine what new governance tools, structures, and processes may be needed now. I've also done some very initial work here on the LessWrong forum looking at how Weber's ideal bureaucratic structure might be helpful for considering how to control intelligent machine agents.

In brief recap: there is a relatively small interdisciplinary field/community of scholars looking at these questions, and it is a community that has done some brainstorming, some empirical work, and some economics-style thinking to address some of these ideal governance questions. There are also classic works by thinkers such as Max Weber, Herbert Simon, and Elinor Ostrom that touch on these topics.

I hope this is helpful. I'm sure I've focused too much on my own work here, but I hope the Handbook in particular gives you some sense of some of the work out there. I would be happy to connect you with other writers and thinkers who I believe are taking these questions of ideal governance seriously. I find these to be among the most interesting and important questions for our moment in time.

Thank you! I’m looking forward to the process of writing it, synthesizing my own thoughts, and sharing them here. I’ll also be hoping to receive your insightful feedback, comments, and discussion along the way!

Thank you for this post, Kyoung-Cheol. I like how you have used DeepMind's recent work to motivate the discussion of "authority as a consequence of hierarchy" and the idea that "processing information to handle complexity requires speciality which implies hierarchy."

I think there is some interesting work on this forum that captures these same types of ideas, sometimes with similar language, and sometimes with slightly different language.

In particular, you may find the recent post from Andrew Critch on "Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI" to be sympathetic to core pieces of your argument here.

It also looks like Kaj Sotala is having some similar thoughts on adjustments to game theory approaches that I think you would find interesting.

I wanted to share an idea that remains incomplete, but I think there is an interesting connection between Kaj Sotala's discussion of non-agent and multi-agent models of the mind and Andrew Critch's robust agent-agnostic processes that ties into your ideas here and the general points I make in the IBS post.

Okay, finally, I had been looking for the most succinct quote from Herbert Simon's description of complexity and I found it. At some point, I plan to elaborate more on how this connects to control challenges more generally as well, but I'd say that we would both likely agree with Simon's central claim in the final chapter of The Sciences of the Artificial:

"Thus my central theme is that complexity frequently takes the form of hierarchy and that hierarchic systems have some common properties independent of their specific content. Hierarchy, I shall argue, is one of the central structural schemes that the architect of complexity uses." 

Glad you decided to join the conversation here. There are lots of fascinating conversations directly related to the topics we discuss together.

Thanks for this. I tabbed the Immoral Mazes sequence. On a cursory view it seems very relevant. I'll be working my way through it. Thanks again.

Thanks. I think your insight is correct that governance requires answers to the "how" and "what" questions, and that the bureaucratic structure is one answer, but it leaves the "how" unanswered. I don't have a good technical answer, but there is an interesting proposal I like, called Complete Freedom Democracy, from Hannes Alfvén's book "The End of Man?", which he published under the pseudonym Olof Johannesson. The short book is worth the read, but hard to find. The basic idea is a parliamentary system in which all humans, through something akin to a smartphone, rank-vote proposals. I'll write up the details some time!
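To make the rank-voting idea a bit more concrete, here is a minimal sketch. Alfvén doesn't specify a tallying rule, so the instant-runoff scheme below, and every name in it, is my own illustrative assumption rather than anything from the book:

```python
from collections import Counter

def instant_runoff(ballots):
    """Pick a winning proposal from ranked ballots by instant-runoff voting.

    ballots: list of rankings, each a list of proposal ids ordered
    from most to least preferred.
    """
    remaining = {p for ballot in ballots for p in ballot}
    while remaining:
        # Count each ballot toward its highest-ranked remaining proposal.
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tally[choice] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:  # a strict majority wins outright
            return leader
        # Otherwise eliminate the proposal with the fewest top-choice votes.
        remaining.remove(min(remaining, key=lambda p: tally[p]))
    return None

# Example: five voters rank three proposals; "B" wins after "C" is eliminated.
print(instant_runoff([["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["B", "A"]]))
```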

Thank you for the comment. There are several interesting points I want to comment on. Here are my thoughts in no particular order of importance:

  • I think your insight on rigidity versus flexibility (rigid predictable rules vs. innovation) is helpful more generally, and it is something my post does not address well. My own sense is that an ideal bureaucracy structure could be rationally constructed to balance the tradeoffs between rigidity and innovation. Here I would take Weber's rule 6, which you highlight, as an example. As represented in the post it states: "The duties of the position are based on general learnable rules and regulation, which are more or less firm and more or less comprehensive." I take this to mean that rules and regulations need to be "learnable," not static. A machine beamte (generally intelligent AI) should be able to quickly update on new rules and regulations. The condition of "more or less firm and more or less comprehensive" seems more akin to a coherence condition than a static one.
  • This builds towards what I see as your concern that an ideal bureaucracy structure would consist of fixed rules, ossify, and be generally unable to adapt to changes in the type and character of complexity in the environment in which the bureaucracy is embedded. My sense is that these are not fundamental features of a rationally applied bureaucratic structure, but rather of the limited information and communication capabilities of the agents that hold the positions within it. My sense is that AIs could overcome these challenges given some flexibility in structure based on a weighted voting mechanism among the AIs (see the sketch after this list).
  • One note here is that, for me, an ideal bureaucracy structure doesn't need to perfectly replicate Weber's description. Instead it would take into account what I see as the underlying fact that complexity demands specialization and coordination, which implies hierarchy. An ideal bureaucracy structure would be one that requires multiple agents to specialize and coordinate to solve problems of any arbitrary level of complexity, which requires specifying both horizontal and vertical coordination. Weber's conceptualization as described in the post, I think, deserves more attention for the alignment problem, given that bureaucracies' limitations can mostly be understood in terms of human limitations in information processing and communication.
  • I think I share your concern that a single bureaucracy of AIs would be suboptimal, unless the path to superintelligence is through iterated amplification of narrower AIs that eventually leads to joint emergent superintelligence constrained in an underlying way by the bureaucratic structure, training, and task specialization. This is a case where (I think) the emergence of a superintelligent AI that in reality functions like a bureaucracy would not necessarily be suboptimal. If the bureaucratic norms and training could be updated so that better rules and regulations are imposed upon it, it's not clear to me why it would need to be overthrown.
  • I would suggest that market competition and bureaucratic structure lie along a continuum of structures for effectively and efficiently processing information. One takes a more decentralized approach, relying largely on prices to convey relevant value and information; the other takes a more centralized approach, implied by loosely organized hierarchical structures that allow for reliable specialization. It seems to me that market mechanisms also have their own tradeoffs between innovation and controllability. In other words, I do not see that the market structure dominates the bureaucratic or centralized approach across these particular tradeoffs.
  • There are other governance models that I think are helpful for the discussion as well; Weber's is one of the oldest in the club. One is Herbert Simon's Administrative Behavior (generalized to other types of contexts in his The Sciences of the Artificial). Another is Elinor Ostrom's Institutional Analysis and Development Framework. My hope is to build out posts in the near future taking these adjustments in structure into consideration and discussing the tradeoffs.
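As for the weighted voting mechanism mentioned in the second bullet, here is a minimal sketch of the kind of thing I have in mind. The weights, the majority threshold, and all names here are illustrative assumptions, not anything specified by Weber or in the post:

```python
def weighted_vote(agents, proposal):
    """Decide whether to adopt a structural change by weighted vote.

    agents: list of (weight, vote_fn) pairs, where each weight might
    reflect track record, tenure, or calibration (an assumption here),
    and vote_fn(proposal) -> bool is that agent's judgment.
    """
    yes = sum(weight for weight, vote in agents if vote(proposal))
    total = sum(weight for weight, _ in agents)
    return yes > total / 2  # adopt only on a weighted majority

# Example: three agents with unequal weights vote on relaxing a rule.
agents = [(0.5, lambda p: True), (0.3, lambda p: False), (0.2, lambda p: True)]
print(weighted_vote(agents, "relax rule 6"))  # True: 0.7 > 0.5
```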

 

Thanks again for the comment. I hope my responses have been helpful. Additional feedback and discussion are certainly welcome!

I think this approach may have something to add to Christiano's method, but I need to give it more thought. 

I don't think it is yet clear how this structure could help with the big problem of superintelligent AI. The only contributions I see clearly enough at this point are redundant with arguments made elsewhere. For example, there is the notion of a "machine beamte" as an agent that can be controlled through (1) appropriate training and certification, (2) various motivations and incentives for aligning behavior with the knowledge from training, and (3) nomination by a higher authority for more influence. These are not novel considerations, of course, but I think they do point to the same types of concerns about how to control agent behavior in an aligned way when the individual intelligent agents may have components that are not completely aligned with the goal function of the principal (the organization in this context; humanity keeping superintelligent AI under control in another potential context).

Thanks for the follow up.

Thank you for this. I pulled up the thread. I think you're right that there are a lot of open questions to look into at the level of group dynamics. I'm still familiarizing myself with the technical conversation around the iterated prisoner's dilemma and other ways to look at these challenges through a game theory lens. My understanding so far is that some basic concepts of coordination and group dynamics, like authority and specialization, are not yet well formulated, but again, I don't consider myself up to date in this conversation.

From the thread you shared, I came across this organizing post I found helpful: https://medium.com/@ThingMaker/open-problems-in-group-rationality-5636440a2cd1

Thanks for the comment.

Thank you for the insights. I agree with your insight that "bureaucracies are notorious homes to Goodhart effects and they have as yet found no way to totally control them." I also agree with your intuition that "to be fair bureaucracies do manage to achieve a limited level of alignment, and they can use various mechanisms that generate more vs. less alignment."

I do, however, believe that an ideal type of bureaucratic structure helps with at least some forms of the alignment problem. If, for example, Drexler is right, and my understanding of the theory is right, CAIS expects a slow takeoff of increasingly intelligent narrow AIs that work together on different components of intelligence or on completing intelligent tasks. In this case, I think Weber's suggestions, both on how to create generally controllable intelligent agents (Beamte) and on constraining individual agents' authority to certain tasks, with agents nominated to higher tasks by those with more authority (weight, success, tenure, etc.), have something helpful to say about the design of narrow agents that might work together towards a common goal (see the sketch below).
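Here is a toy sketch of that structure, under the assumption that "authority" can be modeled as an explicit set of authorized tasks and "nomination" as a promotion conditioned on track record. The class name, fields, and threshold below are my own illustrative inventions, not anything from Weber or Drexler:

```python
from dataclasses import dataclass, field

@dataclass
class MachineBeamte:
    """A narrow agent confined to an explicitly authorized set of tasks."""
    name: str
    authorized_tasks: set
    track_record: float = 0.0  # e.g., fraction of past tasks done correctly
    subordinates: list = field(default_factory=list)

    def handle(self, task):
        # Act only within the agent's own authorized competence...
        if task in self.authorized_tasks:
            return f"{self.name} performs {task}"
        # ...otherwise delegate down the hierarchy.
        for sub in self.subordinates:
            result = sub.handle(task)
            if result is not None:
                return result
        return None  # no one below is authorized; escalate upward

    def nominate(self, sub, new_task, threshold=0.9):
        # Promotion to a broader task set comes from above, conditioned
        # on demonstrated reliability (weight, success, tenure, etc.).
        if sub in self.subordinates and sub.track_record >= threshold:
            sub.authorized_tasks.add(new_task)

# Example: a supervisor delegates, then broadens a reliable subordinate's remit.
clerk = MachineBeamte("clerk", {"file report"}, track_record=0.95)
chief = MachineBeamte("chief", {"approve budget"}, subordinates=[clerk])
print(chief.handle("file report"))   # clerk performs file report
chief.nominate(clerk, "approve budget")
print(clerk.authorized_tasks)        # now includes "approve budget"
```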

My thoughts here are still in progress and I'm planning to spend time with these two recent posts in particular to help my understanding:

https://www.lesswrong.com/posts/Fji2nHBaB6SjdSscr/safer-sandboxing-via-collective-separation

https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models

 

One final thing I would add is that I think many of the problems with bureaucracies can be characterized in terms of limits on information and communication (and how agents are trained, how they are motivated, and what the most practical or useful levels of hierarchy or discretion are). I think the growth of increasingly intelligent narrow AIs could (under the right circumstances) drastically reduce these information and communication problems.

Thanks again for your comment. The feedback is helpful. I hope to make additional posts in the near future to try and further develop these ideas.
