The New Age of Social Engineering

by VivaLaPanda · 11 min read · 7th Dec 2019 · 7 comments

Why have so many online social networks failed to form healthy communities, and instead gained notoriety as hostile spaces? I argue that these platforms have failed because they didn’t learn the lessons taught by the High Moderns when humans were first faced with the challenge of engineering alongside systems built through millennia of natural evolution. In a chaotic environment such as human social relations, a different engineering approach is necessary to ensure that more good is done than harm. To gain the skills necessary to make these projects a success, we need to learn from the history of social environments themselves, and of human engineering strategies. What follows is the story of social evolution becoming social engineering, how the meaning of both has changed radically in the last 20 years, and what this means for designers in the new Information Era.

Part 1 — Ten millennia of social engineering

A key part of my thesis is that the way our social environment is formed has changed over the course of human history, and more rapidly in recent years. How do we know that to be true? Much of the work I’m building on comes out of the accounts provided by The Secret of Our Success by Joseph Henrich, as well as Seeing Like a State by James C. Scott. There are many things I disagree with in these works, but I think they both get at the core idea that there are two main ways in which human society develops. One of those ways is via an evolutionary process, where some societies develop a technique that aids in survival and flourishing, pass it on, and end up growing and outcompeting other societies. The people practicing these traditions often don’t have concrete knowledge of why they work, but the traditions become enshrined because they help the group succeed. This ranges from knowledge of which plants are edible to complex ideas about how the group should be structured. On the other hand, there is social engineering. In social engineering, explicit models of human behavior are used to derive new social conventions and structures. Usually this doesn’t mean designing something new from whole cloth, but rather an effective synthesis of ideas the culture has generated over time into a compelling ideological canon or into new, distinct institutions.

For most of human history, we relied primarily on social evolution rather than social engineering. This was for good reason: social engineering done poorly is very often worse than social evolution. A mother who breaks tradition and feeds new plants to her children, because she doesn’t know of any reason those plants are harmful, may discover unexpected side effects of their consumption. This reality is often referred to as Chesterton’s fence, and is often invoked as an argument in favor of traditionalism. However, much of the social and technological progress of the industrial era has come through the rejection of tradition. How do we square these conflicting forces? I think a key reason is simply that for a society to succeed at social engineering, a detailed historical record and careful specialists are usually necessary. Only in this way can new first-principles knowledge be solidified and built upon. It’s for that reason that the societies that appear the most engineered also tend, in general, to be those with more detailed historical records and more information about other, differing societies. When one is able to see a culture “from above”, that unique perspective can enable one to design effective institutions.

One excellent example is perhaps one of the most successful early cultural engineers, Confucius. The Great Teacher developed his unique philosophy while traveling around China, observing various social issues, their causes, and the variety of social structures present in China at the time. By synthesizing these insights into a central canon, he created an enduring cultural institution that was central to Chinese administration for centuries. It is true that Confucianism relies heavily on tradition, and can in some ways be considered no more than a collection of preexisting traditions, but its success indicates that it must have some quality beyond that of its constituent parts. Ultimately, Confucianism and the society it created lost supremacy because, while it was itself a result of synthesis, it became unable to assimilate or change at the pace of the world it inhabited. What once created a powerful bureaucratic class capable of financing great discoveries ended up as a chain that left the society unable to appreciate the possibility of learning from outside influences. Innovation became tradition, and tradition cannot change course by its very nature.

Part 2 — Modernity

We’re going to leave behind ancient societies, because although they are a rich source of insight, others have studied those trends in more depth than I. Instead we turn to the relatively modern, and I am going to focus for a minute on the United States. For all that American Exceptionalism is a real risk, I think there is something somewhat unique and interesting about the formation of the US. Specifically, the US is one of the best examples of what I consider full social engineering. A group of people sat down in a room and set out to design, in a written legal document, how their society would function. It was a group of what can only be called engineers who set out to design structures that improved upon the governments they were aware of. They didn’t just say “tyranny is bad so we won’t be tyrants”; they tried to engineer complex social structures that took advantage of human behavior in order to guide the behavior of the government, independent of any single political actor. The idea of applying contractual thinking to the structure of our nations and institutions didn’t start with the US, but the US can be seen as a culmination of those ideas. This project has had varied success, to say the least, but it’s notable that so many modern institutions function in this way: a group of founders get together and try to set the community’s direction at both the object level and the meta level. Just as the objects and tools we use have become increasingly engineered, so too have our institutions.

The evolution and design of social norms is at the heart of what we call society. I would venture to say that it is the defining feature of the human species, and of intelligent species in general. The Machiavellian Intelligence Hypothesis posits that intelligence arose not to better use tools or better hunt prey, but to better compete in the social arena.
Therefore, I think it’s fair to say that top-down engineering in the world of social norms faces an uphill battle to outperform the metis of traditional culture. However, this applies much less when we turn our eyes toward the engineering of the physical spaces our cultures occupy.

Social engineering in the context of the physical is about the design of objects and spaces in the traditional engineering sense, but with consideration of how that design influences the group rather than just the individual. It’s obvious that a designer making a chair must consider how the chair interacts with the behaviors and preferences of the person who will eventually use it. This same thoughtfulness should be applied, and usually is, when dealing with objects and spaces that drive social interaction. Someone trying to build a successful bar will think carefully about the layout and decoration of the space and how they will influence the patrons. They may think about other layouts they’ve seen and how they might improve on those designs in order to give the space the mood they want. The pub is undeniably a social institution, and it is designed to both encourage and discourage certain types of social behavior. Every space you interact with, from the supermarket to the sidewalk, has generations of trial, error, and improvement behind it. That doesn’t mean every space is perfect, but it’s easy to forget the marvel that is present all around us. However, the process of conscious engineering has also introduced many institutions that are detrimental to healthy communities. One frequently discussed piece of design is the American shopping mall.

In 1954, architect Victor Gruen built the first modern shopping mall in Michigan. In 1978, two years before his death, he would describe malls as destroyers of cities. The suburbanization of America, a process due in large part to decisions made by urban engineers, killed key social spaces and is widely seen as a social engineering mistake of the highest order. Many of these engineers would see the damage within their lifetimes and, like Gruen, spend the rest of their lives trying to reverse course. For much of human history, cities didn’t have designers, and even when city design began, it satisfied itself with general zoning, usually to protect and enforce class divisions, as often seen in early Chinese urban environments. City planning has introduced big improvements in quality of life and health, but also class segregation and the destruction of communities. Much of Seeing Like a State is focused on this phenomenon. The core conclusion of the book is that top-down planning is deeply flawed, as evidenced by a variety of failures at such tasks, from farming to the city of Brasilia. The points Scott makes are well argued, and I recommend at the very least taking a look at the analyses of the book by Scott Alexander or Lou Keep. I agree with Scott that top-down planning of human environments is an incredibly difficult task to do successfully, and there is a mountain of failures left behind by the High Moderns for us to learn from. As Scott Alexander once said, “The road we’re on is littered with the skulls of the people who tried to do this before us.”

To me the key takeaway is that the High Moderns failed because of the particular approaches they took when undertaking their design projects. They regularly ignored the actual desires of the people living in the cities they were to redesign, and their motivations ran counter to the goals of the populace. The government wanted more legible, easier-to-tax cities; the citizens wanted community and more local control (features which notably go hand in hand with organized resistance). Today, most of our designers are deeply aware of the failures of the High Moderns. Books such as Seeing Like a State detail these past failures and provide guidance for avoiding these pitfalls. Arguably, cities at the forefront of growth have overlearned these lessons, with any attempt to demolish old buildings met with fierce opposition. We aren’t perfect, but lessons have been learned in the way we design our social spaces, with one massive exception.

Part 3 — The Internet

The internet has opened up a new frontier in the design of social institutions. We are designing platforms that are used by vast masses of people, that grow more quickly than any historical analog, and that provide unprecedented levers of control over discourse. Facebook was founded in 2004; ten years later there were more monthly active Facebook users than Catholics. The ability to implement a new social institution with basically no startup cost and end up with this much influence is unprecedented, and thus it’s not surprising that we’ve seen so many instances of the new social internet having issues with healthy discourse. So much of the evolutionary work that went into shaping our meatspace social institutions has been ignored during the construction of these new online spaces. Platform designers often repeat the mistakes of the High Moderns. The users of the platforms are rarely given a voice in discussions, and are frequently treated as antagonists rather than stakeholders. Worst of all, centralized platforms have very little immediate incentive to improve the quality of discourse, as network effects prevent users from easily moving to a nicer competitor. Network operators are encouraged to gain users as fast as possible, keep them in the space as long as possible to view ads, and completely ignore the social well-being of the communities that form. Additionally, online platforms provide a nigh-microscopic level of control over the interactions between users. Someone designing a bar can choose the layout of the tables and the lighting, but an online platform designer can run automated testing to determine which text, fonts, and layouts best guide the user into behaving in a desired way. In the case of platforms like YouTube, complex AI systems are constantly optimizing every nanometer of the system to maximize ad revenue.

In the early days of the internet there was a lot of optimism about the ability of the web to bring people together, to form new understanding between distant peoples, and to provide an escape from tyranny. It’s important to recognize some of the successes on these fronts. I regularly communicate with people from other nations, and that has helped give me a broader view of the world and of differing cultural norms. However, anybody can see that we have failed to live up to this early promise. The design of most online social spaces was driven by technological constraints and financial motives, not by a consistent dedication to building prosocial institutions. The rapid expansion of the internet has left engineers struggling to make their websites function at all, let alone spend resources on deep analysis of user behavior. Such work is only done by established players with the goal of increasing revenue, and thus almost inevitably results in a worsening of the user experience because of misaligned incentives. To fix these issues we need both philosophical and technological changes.

On the ideological level, we need to ensure that programmers, and the managers who direct them, are thinking carefully about how their platform encourages users to act when designing social spaces online. This requires a long view of platform design, because user engagement metrics can’t tell us whether we’re making people happier or more informed. A key part of this is to respect the history of human social evolution, and to use existing institutions as a starting place. For example, in the case of Facebook it’s unclear what exactly a Facebook friend maps to. Clearly it’s not analogous to a real-life friend, because even someone you consider a friend can’t listen in on most of your conversations and interject at will, a behavior mode that “friending” on Facebook enables. As a result, anyone who used Facebook during its height knew the pain of ending up with too many “friends” who could jump into and derail any conversation, as well as the strange pressures around someone offline asking to be added as a Facebook friend. The failure of Google+ was unfortunate, because its circles concept of sorting individuals into overlapping groups such as “family”, “friends”, and “acquaintances” maps much more clearly onto real-world relationships. Once you can approximate an existing healthy institution, you can start making modifications and consider how they might make the communities better or worse. Approach projects with respect for your potential users and their values, but don’t be naive and assume everyone is a good-faith actor. Understand the challenges presented by potentially web-scale networks, where you could have anywhere from 50 to 50 thousand users, such as content moderation. This failure to scale up moderation is at the heart of YouTube’s difficulty with copyright and Twitter’s losing battle over what kind of speech is necessary to censor.
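The circles idea can be made concrete with a small sketch. This is a hypothetical illustration of the general design, not Google+’s actual data model; all names and methods here are invented for the example. The key property is that contacts can belong to several overlapping named groups, and a post’s audience is the union of the circles it was shared with, rather than one flat friend list.

```python
class Profile:
    """Toy model of circles-style audience control (illustrative only)."""

    def __init__(self, owner):
        self.owner = owner
        self.circles = {}   # circle name -> set of contact names
        self.posts = []     # list of (text, audience) pairs

    def add_to_circle(self, circle, contact):
        # Contacts may appear in any number of circles.
        self.circles.setdefault(circle, set()).add(contact)

    def share(self, text, circle_names):
        # The post's audience is the union of the chosen circles.
        audience = set().union(*(self.circles.get(c, set()) for c in circle_names))
        self.posts.append((text, audience))

    def feed_for(self, contact):
        # A contact sees only posts whose audience includes them.
        return [text for text, audience in self.posts if contact in audience]


p = Profile("alice")
p.add_to_circle("family", "mom")
p.add_to_circle("friends", "bob")
p.add_to_circle("friends", "mom")   # overlap: mom is family *and* a friend
p.share("vacation photos", ["family"])
p.share("party invite", ["friends"])
```

With this structure, "mom" sees both posts while "bob" sees only the party invite, which is closer to how offline relationships gate conversation than a single undifferentiated "friends" set that hears everything.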
Platform owners have tried to react to users attacking one another with more aggressive moderation, but this often results in innocent users being caught in the crossfire. Users are encouraged to use block/mute features, but such tools aren’t effective enough when facing a mob of potentially thousands of users. Users are forced into increasingly defensive behavior, which means even good-faith critics may be blocked out. There probably isn’t a one-size-fits-all solution to many of the problems faced during this design process, but if we get programmers and designers to start taking these issues seriously, we’re already improving. As an example, Yik Yak was developed fairly late in the social network timeline. Even a cursory glance at its design could have revealed the inevitable issues it would have with moderation and abuse. Somehow, the project still went ahead, and not two years later the platform was basically dead, with an unknowable amount of social collateral damage. Changes to the way engineers approach these problems are important, but there are reasons many of these somewhat obvious ideas haven’t already been implemented, and why seemingly obvious improvements to platforms get left on the table. That being said, anybody who works in the software world knows that there are already legions of designers poring over how these platforms are put together, trying to improve them every working hour. Clearly, it isn’t enough. Some of these problems are fundamental to the technologies and business models relied upon by platform operators.

The combination of centralization and the advertiser revenue model results in a world where essentially every online social space has goals that run counter to the values of its users, at least to some degree. Platform providers are only incentivized to keep their platform nice enough to prevent a total collapse of the user base, an astoundingly low bar because of the network effect. If YouTube were based on a monthly subscription model, the company’s main incentive would be to keep viewers and content creators happy. Instead, YouTube’s primary goal is keeping advertisers happy, which means doing things like penalizing external linking. The company’s obsessive AI-driven tweaks all work toward increasing the length of time users spend on the site, and by extension revenue. After years of users protesting that such narrow optimization hurts the community, YouTube has responded and stated it will try to change the way it manages content, but the fact remains that ad revenue is YouTube’s primary incentive. YouTube Red can even be considered an acknowledgement of these issues. Already we are seeing cracks in the technological and business structures of online platforms. Brave/BAT and others are pioneering a microtransaction alternative to ads for funding online content. Across basically every social platform, struggles with content moderation are revealing the impossibility of centralized solutions to online social spaces, and decentralized media platforms like the Fediverse are waiting in the wings to step in and fill the void. To be honest, many of these alternatives are far from ready to pick up the mantle of the web giants, but they grow more attractive with each passing year. Most importantly, users want alternatives. People are sick of dealing with companies that don’t care about them and of ceding control over their communities to some distant office of, at best, overworked engineers just trying to avoid a lawsuit.

I’d like to take a moment to go back to those early internet idealists, and to imagine the potential the internet provides. Imagine settlers arriving at a continent where there is unlimited space for new communities and cultures, where physical violence is impossible, and where nobody can be prevented from leaving a community they don’t like. We aren’t there, and maybe it will be a long, long time before we get there. Despite that, I’m an optimist at heart. I believe in human ingenuity, and in the potential for people to rise to the challenge and opportunity they are presented with. The internet is still young, and we have time to make sure that future generations will benefit from it in ways we can’t even imagine today.

Part 4 — Closing Remarks

I tried to focus on a descriptive approach in this essay. It’s obviously informed by my own perspective, but I avoid spending a lot of time on the particulars of how I would design a social platform, instead focusing on the technological and incentive structures that provide the foundation for any platform. In the next part of this series on the design of social platforms, I intend to dive more deeply into the specifics of how I might design a platform given the sentiments expressed above, as well as my own thoughts on the nature of community.

Footnotes

This essay is heavily inspired by The Uruk Series by Sam[ ]zdat, also known as Lou Keep, whose writing was deeply inspirational to me, a person who is more high modernist by nature.
