In this sequence of posts, we will examine how our global civilization has found itself wandering, staggering, and rocketing through, and now beyond, Our Brave New World.

Superintelligences exist and they are misaligned with the interests of both humanity and individual humans. These superintelligences have their own emergent goals and patterns of behavior. In fact, humans have already lost much meaningful control over these superintelligences and their behavior. This is the birth of Analogia.

Across ~25 posts gathered across ~6 major parts, this sequence will explore how humans, machines, and civilization have coevolved to present a world of increasing complexity, one dominated by hybrid superintelligences, and one in the process of developing machine superintelligences.

In Part I: The Pathway to Analogia, we will map how over-digitalization, coexisting with over-population and over-organization, has brought us to the fulfillment of Aldous Huxley’s Brave New World: mass manipulation of the general population by the narrower interests of both particular groups of humans and of machines. It is from Our Brave New World that Analogia has arisen. Analogia is a world in which much of humanity has embraced over-digitalization and its continual encroachment into our analog, natural world, the world that throughout all of human history has been considered the “real world”, the one our biological minds and bodies inhabit. It is a world where the digital world has begun to assert control over the analog world. As a brief detour in Part I, we visit an abridged version of Olaf Stapledon’s “The Other Earth,” an early cautionary tale. (Chapters 1-7, planned titles with planned posting dates)

Ch 1: Machine Agents, Hybrid Superintelligences, and The Loss of Human Control 

Ch 2: The Other Earth 

Ch 3: Revisiting A Brave New World Revisited 

Ch 4: The Age of Over-population 

Ch 5: Over-organization 

Ch 6: Over-digitalization

Ch 7: Welcome to Analogia!

In Part II: The Machines, we will step away from the diagnosis of Our Brave New World and the troubling rise of Analogia. Here, we will turn to speculative fiction for inspiration on the range of outcomes, challenges, and opportunities of living in Analogia. First, I turn to Samuel Butler's description of The Book of The Machines for a speculation on how a society settles its uneasy relationship with machine evolution. Then, we visit E.M. Forster's short story "The Machine Stops," which depicts a world where humans have become reliant on their machines in an altogether unhealthy manner. Finally, we take a journey through history through the eyes of Olof Johannesson; his viewpoint is not one to be dismissed lightly, and we do so at our own peril. I share these narratives to help us create our own vision of the future, one kinder to freedom, love, and intelligence. (Chapters 8-10)

Ch 8: The Book of The Machines 

Ch 9: The Machine Stops 

Ch 10: GPT-7: The Tale of the Big Computer

In Part III, we will examine the landscape where information, evolution, and machines meet. I first provide a brief overview of the evolution of machines as it has been documented by recent historians and observers, then a discussion of the evolution of organizations. As we will see, these two things, organizations and machines, have been coevolving into new and less well described things. With this illustrated, we examine descriptions of some of these new hybrid things so that we can better understand their emergent behavior along with their power and influence. This discussion will force us to zoom out to a broader discussion of the conceptual differences between information, knowledge, and understanding. As we will see, each of these distinct concepts plays an important and differing role in shaping the structure of control and power in our current world. (Chapters 11-14)

Ch 11: Machine Evolution

Ch 12: The Evolution of Organizations 

Ch 13: Hybrid Intelligences, Hybrid Organizations, and Cyborgs 

Ch 14: Information, Knowledge, Understanding

 

Part IV examines governance, control, and symbolic systems themselves. Here we will take up cybernetics, control, and governance together so that we can better see the important questions each of them poses. Then, we will focus specifically on the incompleteness theorems, decision making generally, and discretionary agent behavior within organizations specifically. This will require us to examine the differences between knowledge and the symbols that represent knowledge, as these distinctions will further aid us in understanding how to think about AI Governance and how we might govern superintelligence. (Chapters 15-19)

Ch 15: Governance, Cybernetics, Control 

Ch 16: Decision Making, Incompleteness, and Discretion 

Ch 17: Knowledge & Symbols 

Ch 18: AI Governance 

Ch 19: Governing Superintelligence


Part V examines guiding value frameworks for navigating the world, including the perennial philosophy, effective altruism, and long-termism, with commentary from The Mystic, The Wizard, The Saint, The Revolutionary, and The Optimizer. (Chapters 20-23)

Ch 20: Ethics, Morality, Religion 

Ch 21: The Perennial Philosophy 

Ch 22: Long-termism and Effective Altruism

Ch 23: The Mystic, The Wizard, The Saint, The Revolutionary, and The Optimizer

 

Part VI provides some Tentative Conclusions. (Chapters 24 & 25)

Ch 24: Pharmako-AI

Ch 25: Tentative Conclusions

Posts are planned to appear monthly for approximately two years, beginning November 30, 2021. After the completion of each major part, I will take a one-month break from new posts and instead respond to comments, reflect, and research further before beginning the next part. The first post will be a more extended introduction to the broad contours of the arguments presented here.


What an exciting project! I can't wait to read more about your incisive thinking on human, machine, and organizational intelligence, how they relate to each other, to what extent humanity may still have a chance to govern and control them, and how we may best go about it!

Thank you! I’m looking forward to the process of writing it, synthesizing my own thoughts, and sharing them here. I’ll also be hoping to receive your insightful feedback, comments, and discussion along the way!


I enjoyed reading parts 1-6. Is there any chance you could discuss some of your conclusions sooner than 2023? I'd love to tie my feelings of gloom at humans-aligning-themselves-to-algorithms to some call to action...

Thank you for the comment and for reading the sequence! I posted Chapter 7 Welcome to Analogia! (https://www.lesswrong.com/posts/PKeAzkKnbuwQeuGtJ/welcome-to-analogia-chapter-7) yesterday and updated the main sequence page just now to reflect that. I think this post starts to shed some light on ways of navigating this world of aligning humans to the interests of algorithms, but I doubt it will fully satisfy your desire for a call to action. 

I think there are both macro policies and micro choices that can help.

At the macro level, there is an over-accumulation of power and property by non-human intelligences (machine intelligences, large organizations, and mass-market production). The best guiding remedy here that I've found comes from Huxley. The idea is pretty straightforward in theory: spread the power and property around, away from these non-human intelligences and towards as many humans as possible. This seems to be the only reasonable cure for organized lovelessness and its consequence of massive dehumanization.

At the micro level, there is some practical advice in Chapter 7 that also originates with Huxley. The suggestion here is that to avoid becoming an algorithmically aligned human, choose to live with love, intelligence, and freedom as your guideposts. Pragmatically, one must live in the present, here and now, to experience those things fully.

I hope this helps, but I'm not sure it will. 

The final thing I'd add at this point is that I think there's something to reshaping our technological narratives around machine intelligence away from its current extractive and competitive logics and directed more generally towards humanizing and cooperative logics. The Erewhonians from Chapter 7 (found in Samuel Butler's Erewhon) have a more extreme remedy: stop technological evolution, turn it backwards. But short of global revolution, this seems like proposing that natural evolution should stop.

I'll be editing these 7 chapters, adding a new introduction and conclusion, and publishing Part I as a standalone book later this year. And as part of that process, I intend to spend more time continuing to think about this.

Thanks again for reading and for the comment!