All of Justin Bullock's Comments + Replies

Ideal governance (for companies, countries and more)

There is a growing academic field of "governance" that would variously be described as a branch of political science, public administration, or policy studies. It is a relatively small field, but it has several academic journals that fit the description of the literature you're looking for. The best of these journals, in my opinion, is Perspectives on Public Management & Governance (although it focuses on public governance structures to the point of ignoring corporate governance structures).

In addition to this, there is a 50 chapter OU... (read more)

How Humanity Lost Control and Humans Lost Liberty: From Our Brave New World to Analogia (Sequence Introduction)

Thank you! I’m looking forward to the process of writing it, synthesizing my own thoughts, and sharing them here. I’ll also be hoping to receive your insightful feedback, comments, and discussion along the way!

Reflection of Hierarchical Relationship via Nuanced Conditioning of Game Theory Approach for AI Development and Utilization

Thank you for this post, Kyoung-Cheol. I like how you have used DeepMind's recent work to motivate the discussion of "authority as a consequence of hierarchy" and the idea that "processing information to handle complexity requires speciality which implies hierarchy."

I think there is some interesting work on this forum that captures these same types of ideas, sometimes with similar language, and sometimes with slightly different language.

In particular, you may find the recent post from Andrew Critch on "Power dynamics as a blind spot or b... (read more)

2 · Kyoung-cheol Kim · 1y
Thank you very much for your valuable comments, Dr. Bullock (it is another pleasure to see you here)! Yes, I am new to this forum and learning a lot from various viewpoints that, as you indicated, use similar or slightly different language. In doing so, I think the ideas provided here are highly aligned with unipolar/multipolar systems (pertaining to the configuration of the very top position level in a bureaucracy) and non-agent/multi-agent systems (ultimately, regarding whether organization remains needed given the intervention of a surpassingly developed AI). Since we humans still have limited capabilities to fully understand the universe, taking various viewpoints and finding the similarities and discrepancies among them is crucial work for the philosophy of science and for becoming less wrong. To that extent, I see many similarities between the core thoughts here and those of others, and I believe that better mutual understanding across different areas could increase the positive aspects.

To one of your specific points:

"Thus my central theme is that complexity frequently takes the form of hierarchy and that hierarchic systems have some common properties independent of their specific content. Hierarchy, I shall argue, is one of the central structural schemes that the architect of complexity uses."

Yes, I completely agree with this, and, at the same time, I think the complexity resulting in hierarchic systems for integrated intelligence (if the conditions for forming an organization hold) also leaves discretion within them. Therefore, my view is that the development of AI, and its utilization as reflected in the configuration of societal work, will most likely lie somewhere between centralization and a game theory situation. Although it is difficult to specify exact situations, I think this projection basically includes the unipolar/multipolar and non-agent/multi-agent cases. Particularly, I think dealing with organizational frameworks can shed light on how h
Controlling Intelligent Agents The Only Way We Know How: Ideal Bureaucratic Structure (IBS)

Thanks for this. I have tabbed the Immoral Mazes sequence. On a cursory view it seems very relevant. I'll be working my way through it. Thanks again.

Controlling Intelligent Agents The Only Way We Know How: Ideal Bureaucratic Structure (IBS)

Thanks. I think your insight is correct that governance requires answers to both the "how" and "what" questions, and that the bureaucratic structure is one answer to the "what" but leaves the "how" unanswered. I don't have a good technical answer, but I do have an interesting proposal that I like, called Complete Freedom Democracy, from Hannes Alfvén in the book "The End of Man?", which he published under the pseudonym Olof Johannesson. The short book is worth the read, but hard to find. The basic idea is a parliamentary system in which all humans, through something akin to a smartphone, rank-vote proposals. I'll write up the details some time!
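The rank-voting idea can be made concrete with a toy Borda-style tally. This is my own illustration of one way ranked ballots could be aggregated, not a mechanism taken from Alfvén's book:

```python
from collections import defaultdict

def borda_tally(ballots):
    """Score proposals from ranked ballots: a proposal ranked i-th
    (0-indexed) on a ballot of n proposals receives n - 1 - i points."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for i, proposal in enumerate(ballot):
            scores[proposal] += n - 1 - i
    return dict(scores)

# Three voters rank three proposals; the highest total score wins.
ballots = [
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["A", "C", "B"],
]
scores = borda_tally(ballots)
print(max(scores, key=scores.get))  # prints "A" (5 points vs. 3 and 1)
```

Many other aggregation rules (instant-runoff, Condorcet methods) fit the same ballot format; which one Alfvén had in mind is not specified here.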

Controlling Intelligent Agents The Only Way We Know How: Ideal Bureaucratic Structure (IBS)

Thank you for the comment. There are several interesting points I want to comment on. Here are my thoughts in no particular order of importance:

  • I think your insight on rigidity versus flexibility (rigid, predictable rules vs. innovation) is helpful more generally, and it is something my post does not address well. My own sense is that an ideal bureaucratic structure could be rationally constructed to balance the tradeoffs between rigidity and innovation. Here I would also take Weber's rule 6, which you highlight, as an example. As represented in th
... (read more)
3 · Logan Zoellner · 1y
I think this is the essential question that needs to be answered: is the stratification of bureaucracies a result of the fixed limit on human cognitive capacity, or is it an inherent limitation of bureaucracy?

One way to answer such a question might be to look at the asymptotics of the situation. Suppose that the number of "rules" governing an organization is proportional [https://twitter.com/devonzuegel/status/1395729263937179651] to the size of the organization. The question is then whether the complexity of the coordination problem also increases only linearly. If so, it is reasonable to suppose that humans (with a finite capacity) would face a coordination problem but AI would not.

Suppose instead that the complexity of the coordination problem increases with the square [https://en.wikipedia.org/wiki/Metcalfe%27s_law] of organization size. In this case, as the size of an organization grows, AI might find the coordination harder and harder, but still tractable [https://en.wikipedia.org/wiki/P_versus_NP_problem].

Finally, what if the AI must consider all possible interactions between all possible rules in order to resolve the coordination problem? In this case, the complexity of "fixing" a stratified bureaucracy is exponential [https://en.wikipedia.org/wiki/Time_complexity#Exponential_time] in the size of the bureaucracy, and beyond a certain (slowly rising) threshold the coordination problem is intractable.

If weighted voting is indeed a solution to the problem of bureaucratic stratification, we would expect this to be true of both human and AI organizations. In this case, great effort should be put into discovering such structures, because they would be of use in the present and not only in our AI-dominated future.

Suppose the coordination problem is indeed intractable. That is to say, once a bureaucracy has become sufficiently complex, it is impossible to reduce the complexity of the system without unpredictable and undesirable side-effects. In
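The three growth regimes in this comment can be compared with a quick back-of-the-envelope calculation. This is my own illustration; the organization sizes are arbitrary and the functions stand in for whatever the true coordination cost turns out to be:

```python
# Three hypotheses for how coordination cost scales with organization size n:
def linear(n):       # cost proportional to the number of rules
    return n

def quadratic(n):    # pairwise rule interactions (Metcalfe-style)
    return n * n

def exponential(n):  # every subset of rules may interact
    return 2 ** n

# Even at modest sizes the regimes diverge sharply:
for n in (10, 20, 40):
    print(n, linear(n), quadratic(n), exponential(n))
# At n = 40 the quadratic cost is 1,600, while the exponential
# cost exceeds 10^12 -- beyond any fixed capacity, human or AI.
```

The point of the comparison is that in the first two regimes a sufficiently capable AI keeps pace with organizational growth, while in the exponential regime every fixed capacity is eventually overwhelmed.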
Controlling Intelligent Agents The Only Way We Know How: Ideal Bureaucratic Structure (IBS)

I think this approach may have something to add to Christiano's method, but I need to give it more thought. 

I don't think it is yet clear how this structure could help with the big problem of superintelligent AI. The only contributions I see clearly enough at this point are redundant to arguments made elsewhere. For example, the notion of a "machine beamte" as one that can be controlled through (1) the appropriate training and certification, (2) various motivations and incentives for aligning behavior with the knowledge from training, and (3) nominate... (read more)

Open and Welcome Thread - May 2021

Thank you for this. I pulled up the thread. I think you're right that there are a lot of open questions to look into at the level of group dynamics. I'm still familiarizing myself with the technical conversation around the iterated prisoner's dilemma and other ways to look at these challenges through a game theory lens. My understanding so far is that some basic concepts of coordination and group dynamics, like authority and specialization, are not yet well formulated, but again, I don't consider myself up to date in this conversation.

From the thread you sh... (read more)

Controlling Intelligent Agents The Only Way We Know How: Ideal Bureaucratic Structure (IBS)

Thank you for the insights. I agree with your observation that "bureaucracies are notorious homes to Goodhart effects and they have as yet found no way to totally control them." I also agree with your intuition that "to be fair bureaucracies do manage to achieve a limited level of alignment, and they can use various mechanisms that generate more vs. less alignment."

I do, however, believe that an ideal type of bureaucratic structure helps with at least some forms of the alignment problem. If, for example, Drexler is right, and my conceptualization of the theo... (read more)

3 · G Gordon Worley III · 1y
Yeah, I guess I should say that I'm often worried about the big problem of superintelligent AI and not thinking much about how to control narrow, not generally capable AI. For weak AI, this kind of prosaic control mechanism might be reasonable. Christiano thinks this class of methods might work on stronger AI.
Open and Welcome Thread - May 2021

My name is Justin Bullock. I live in the Seattle area after 27 years in Georgia and 7 years in Texas. I have a PhD in Public Administration and Policy Analysis, where I focused on decision making within complex, hierarchical public programs. For example, in my dissertation I attempted to model how errors (measured as improper payments) are built into the US Unemployment Insurance Program. I spent time looking at how agents are motivated within these complex systems, trying to develop general insights into how errors occur in them. Until about 2016... (read more)

3 · gilch · 1y
See the Group Rationality topic [https://www.lesswrong.com/tag/group-rationality]. The rationalists, as a culture, still haven't quite figured out how to coordinate groups very well, in my opinion. It's something we should work on.