I was asked to review a draft that critiques the broader rationality movement for a tendency to engineer elegant solutions to fix the world in ways that bypass 'inefficient' democratic dialogue.
However, the authors worried that their explanations would trigger ingroup-outgroup conflict among readers rather than engage rationalists in discussing and collaborating on reforms. They deliberately decided to shelve this version and not publish it in the media.
IMO though, they made some solid points about the broad sociology of the rationalist movement. So I selected and edited excerpts for clarity and constructiveness (skipping the intro, which rehashed the already much-discussed controversy between New York Times journalist Cade Metz and Scott Alexander), and posted them below with the authors’ permission.
Do ask friends for their two cents, but please share the link in private. Understanding this piece requires some context on how our community works, and throwing out a message on Twitter, Reddit, and the like is a poor choice for sharing that context.
I hope that you will find some insightful nuggets amongst the commentary.
What's your sense of what the article describes accurately, and what it fails to capture?
Edit: I should have said more plainly that, yes, this draft contains a fair amount of commentary that is vague and antipathetic; I hope you will get something out of it anyway.
The deep connections between Silicon Valley, rationalism, and the anti-democratic right are more complicated, and more revealing, than either Metz or his critics allow for. If Silicon Valley simply harbored a virulent minority of semi-closeted white supremacists it would be … exactly like much of America, as many have woken up to in the last decade. What is more important is what specifically links SV’s apparent rationalism to NRx attitudes. Given the way technologists’ dreams are increasingly shaping our future, we have a right to know what these dreams hold.
SV’s mythos and Rationalism start from the optimistic premise that technology and reason can fix the world, helping people understand what they have in common. NRx instead fixates on the misanthropic principle that democracy is always a ruse for the rule of the powerful and those who rule should be “worthy” (viz. brilliant CEO-godkings). Standard democracy is, in their minds, simply a cover under which the mediocre and corrupt displace the brilliant and able.
But both these ways of thinking about the world have a common origin in the engineer’s faith that logic and reason can transform the complex, messy challenges of the world into solvable problems. This faith inspires the civic religion of Silicon Valley, which preaches that new technologies spread liberal values and allow consumers to build in common. It leads Rationalists to argue that individual human reason, once purged of thinking errors, can transform the world. Equally, it inspires neo-reactionaries to argue that, when most people don’t follow this path, it shows they don’t know what is good for them, and that they would be better off ruled by those who do. In their heretical interpretation, the secret gospel of the Valley isn’t that technology frees consumers, but that it empowers the founders and CEOs who have the brilliance and ruthlessness to do what is necessary.
Both the bright and dark versions of the religion of the engineers are dismayed by the pluralistic messiness of democracy, the inevitability of disagreement and talking past each other between people with different cultures, values and backgrounds. Both want to substitute something cleaner and simpler for this disorder, whether it be the bland corporate civicism of the social network, or the enforced order of the absolutist.
And this frames the political challenge for those who want a different vision of technology. If the great machineries of Silicon Valley are ever to serve democracy rather than undermine it, the people who are building and rebuilding it will need to come to a proper understanding of the virtues that are inseparable from its disorder, and embrace them. This would start from a much more humble understanding of rationality, one that recognizes the messiness and diversity of the ways in which people think about the world as a strength, not a weakness. A better understanding of how human reasoning actually works would help inoculate the makers of technology against both the blithe faith that they can build a world without disagreement, and its distorted shadow version justifying tyranny.
SV is a microcosm of what the world would look like if it were run by engineers. Engineers take complicated situations and formalize them mathematically, abstracting away as much of the mess as they can. Their models can often be used to figure out what is going wrong or what could be improved, revealing how complex and creaking machineries can be replaced with simpler, cleaner mechanisms. Engineers, then, tend to frame their work as a series of “optimization problems”: reduce an apparently complex situation to an “objective function” that ranks possible solutions given the tradeoffs and constraints, then find the best one and implement it.
This is the mainspring of Silicon Valley culture - the faith that complicated problems can be made mathematically tractable and resolved, and that a focus on doing so is the path to a much brighter world. The first step is to identify what you want (figuring out some quantity that you want to maximize or minimize). The second is to use whatever data you have to provisionally identify the resources that you can employ to reach that goal, and the hard constraints that stand in your way. The third is to identify the best possible solution, given those resources and constraints, and try to implement it. And the fourth is to keep updating your understanding of the resources and constraints, as you gather more information and try to solve the problem better. This way of thinking is rational (it closely resembles how economists model decision making), and it is decidedly goal oriented.
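The four steps above can be sketched as a toy optimization loop. Everything here (the objective, the options, the budget constraint) is invented for illustration, not taken from the original:

```python
# A toy illustration of the engineer's four-step loop described above.

def best_option(options, objective, constraints):
    """Step 3: pick the feasible option that maximizes the objective."""
    feasible = [o for o in options if all(c(o) for c in constraints)]
    return max(feasible, key=objective)

# Step 1: decide what to maximize -- say, users reached per dollar spent.
objective = lambda plan: plan["users"] / plan["cost"]

# Step 2: enumerate the resources and hard constraints from current data.
options = [
    {"name": "ads",      "users": 5000, "cost": 2000},
    {"name": "referral", "users": 3000, "cost":  500},
    {"name": "tv",       "users": 9000, "cost": 8000},
]
constraints = [lambda plan: plan["cost"] <= 4000]  # budget cap rules out "tv"

plan = best_option(options, objective, constraints)

# Step 4: as new data arrives, revise the inputs and re-solve.
options[1]["users"] = 3500  # updated measurement
plan = best_option(options, objective, constraints)
```

The point of the sketch is the shape of the reasoning, not the numbers: everything that matters has been compressed into one ranking function and a list of hard constraints.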
This approach underlies what used to be exuberant and refreshing about SV, and very often still is. Engineers, almost by definition, want to get things done. They are impatient with people who seek to immerse themselves in and integrate different perspectives on an issue rather than just solve the “primary” problem laid out in front of them. The closer a problem statement is to something formally optimizable, the more excited they are to engage with it.
And when engineers unleashed their energies on big social problems, it turned out that a lot of things could and did get done. Many of the great achievements of the modern age are the product of this kind of ingenuity. Google search – a means for combing through a vast, distributed repository of the world’s information and providing useful results within a fraction of a second – would have seemed like a ludicrous impossibility only three decades ago. Google’s founders used a set of mathematical techniques to leverage the Internet’s own latent information structures, ranking online resources in terms of their likely usefulness and unleashing a knowledge revolution. More recently, faster semiconductors have allowed the application of a wide variety of machine learning techniques – many of them based around optimization – to social needs and (often not at all the same thing) business models.
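The core of the ranking technique mentioned above can be shown in miniature. This is a stripped-down sketch of PageRank-style power iteration on a tiny invented link graph; the graph and iteration count are illustrative, not Google's actual system:

```python
# Minimal PageRank-style power iteration on a tiny illustrative link graph.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}  # page -> pages it links to
pages = list(links)
d = 0.85  # the damping factor commonly cited for PageRank
rank = {p: 1 / len(pages) for p in pages}  # start with uniform rank

for _ in range(50):  # iterate until the ranks stabilize
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            # Each page passes its rank, split evenly, to the pages it links to.
            new[q] += d * rank[p] / len(outs)
    rank = new

# Pages with more (and better-ranked) inbound links end up ranked higher:
# here "C", linked by both "A" and "B", beats "B", linked only by half of "A".
```

The "latent information structure" the text refers to is exactly this: the link graph itself, treated as a vast collective vote on which resources are useful.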
Many people have written about how this idealistic vision has been dragged down into a slough of despond by predatory profit models, monopolistic tendencies, and the deficiencies of algorithmic capitalism. Less attention has been paid to the vision’s own internal blind spots, even though some of the wiser leaders in the community recognized them early. For example, Terry Winograd (2006), PhD advisor to Larry Page, who with Sergey Brin invented Google’s PageRank algorithm, famously highlighted the dangers of rationalism. While he appreciated the power and attraction of formalism, he saw how it could easily destroy the very values it aimed to formalize and serve, especially in areas of social and economic life that we understand very poorly.
This is why Silicon Valley does so badly at comprehending politics, where people not only disagree over what the best solutions are to problems or the precise setting of valuation parameters, but clash over the fundamental terms in which the problem is conceptualized. Is racism a characteristic of individual preferences or an arrangement of social forces? Is fairness a property of a whole society or of a particular algorithm? Is human flourishing captured by economic output? What do “power” and its “decentralization” mean?
The millenarian bet of SV was that these problems would dissipate when confronted with the power of optimization, so that advances in measurement and computational capacity would finally build a Tower of Babel that could reach the heavens. Facebook’s corporate religion held that cooperation would blossom as its social network drew the world together. Google’s founder Sergey Brin argued that the politicians who won national elections should “withdraw from [their] respective parties and govern as independents in name and in spirit.” These truths seemed so obvious that they barely needed to be defended. Reasonable people, once they got away from artificial disagreement, would surely converge upon the right solutions. The engineer’s model of how to think about a complex world became their model of how everyone ought to, and eventually would, think about a complex world, once technology had enabled them. Everyone was an engineer at heart, even if some didn’t know it yet.
As the historian Margaret O’Mara has documented, this faith made it hard for Silicon Valley to understand its own workings. The technically powerful frameworks that engineers created often failed because they weren’t responsive to what people actually wanted, or sometimes succeeded in unexpected ways. Companies with mediocre solutions that went viral could triumph over companies with much better products.
As O’Mara shows, success often depended less on technical prowess than on the ability to tell compelling stories about the products that were being sold. Social networks spun out from Stanford University, from the nascent venture capital industry and from other nexuses, shaping who got funded and who did not. Silicon Valley had a dirty intellectual secret: its model of entrepreneurial success depended significantly on the unequal social connections and primate grooming rituals that it loudly promised to replace. A culture that trumpeted its separation from traditional academic hierarchies recreated its own self-perpetuating networks of privilege – graduating from Y Combinator was every bit as much an elite credential as getting tapped for Yale’s Skull and Bones.
The growing gap between how the Valley worked and how it told itself it worked generated extravagant tapestries of myth to cover the fissures. Founders like Steve Jobs and Mark Zuckerberg were idolized for their brilliance, even though others who were as bright or brighter had failed through bad luck (e.g. General Magic’s 90s pioneering of what became the iPhone), discrimination (e.g. the first programmers were women and minorities, as celebrated in Hidden Figures) or from being so innovative that they were ahead of their time (e.g. Xerox PARC). Those who constructed and sold these myths, often women from very different backgrounds from the men they helped build cults around, were written out of the success stories as well. Bargain-basement ideologies, such as the new Stoicism or Girard’s mimetic theory, provided justification for why things were as they were, or lent a spurious intellectual luster to an economy built as much around gladhanding as intellectual flair. And always, the master-myth was the notion that the Valley helped people to cooperate to make the world a better place.
When things took a turn for the worse, Silicon Valley companies clung to these myths. In a 2016 internal memo that later became notorious, senior Facebook executive Andrew Bosworth argued that Facebook’s power “to connect people” was a global mission of transformation, which justified the questionable privacy practices and occasional lives lost through people bullied into suicide, or terrorist attacks organized via social media. Connecting people together via Facebook was “de facto good,” unifying a world that was divided by borders and languages.
This rhetoric wore thin, as it became obvious that Facebook and other Silicon Valley platforms could amplify profound social divides, enabling the persecution of Rohingya minorities in Myanmar, allowing India’s BJP party to spur on ethnic hatred towards fellow citizens without repercussions, and magnifying the influence of America’s far right. As the writer Anna Wiener showed, the street found its own uses for things. Platforms such as GitHub, originally built for programmers to collaborate on coding open source software, unexpectedly provided a place for the far right to organize.
The machineries of optimization that SV built weren’t the only cause of this polarization, but they likely helped it spread and deepen. Social media giants devised algorithms that would learn to optimize “engagement” by pushing out content that made their consumers keep clicking and scrolling through the interface, and look at the profit-making ads that popped up. Users often engaged most with posts or videos that shocked or surprised them. Enticed down an individualized rabbit-hole, each would enter a land of dark wonder where agreed facts were obvious lies, and logic was turned upside down. Instead of providing a rational alternative to divisive politics, SV’s products deepened the divisions.
Rationalism was deeply entwined with SV thinking and replicated its flaws on a more intimate scale. Rationalists didn’t see themselves as a cult (though they did know that many outsiders saw them that way, and even joked about it now and then). Instead, they believed that they were pioneering a transformative and universalizable approach to reasoning. Human beings could be washed clean of the original sin of unbridled cognitive bias by being bathed in rational thinking. Maladaptive bias could gradually be overcome, making those who followed Rationalist practice “less wrong.” As a Rationalist learned over time, she would more closely approximate a true understanding of the world. And as others too followed the same learning path, they would all converge more closely together.
Viewed from one angle, Rationalism was optimization turned into a philosophy of personal self-improvement. Viewed from another, it was a complex intellectual amalgam of evolutionary cognitive psychology, Bayesian statistics, and mathematical game theory, with bits and pieces of epistemology (philosophical inquiry into how we know things) thrown in.
Evolutionary psychology explained man’s fallen state. Evolutionary forces had shaped human thinking to make it prone to a variety of cognitive biases – mental shortcuts that didn’t make much sense in the modern world. For example, we are all prone to double down on mediocre projects simply because we have already sunk unrecoverable costs into them. Or we stick with beliefs that we or those close to us hold dear, even when the evidence starts to show that those beliefs are false.
But redemption was possible, thanks to a theorem that provided a mathematical method for updating how confident you were that a statement (or its negation) was true, as you discovered new evidence. As you continuously tested and retested your beliefs, Bayes’ theorem would guide you on a path towards rationality. And as more disciples embraced Bayesian reasoning, game theory suggested that they should converge on a common understanding of the truth.
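The update rule in question is compact enough to show directly. This sketch applies Bayes' theorem three times to an invented hypothesis, with likelihoods chosen purely for illustration:

```python
# Bayes' theorem as a belief update: P(H|E) = P(E|H) * P(H) / P(E).
# The hypothesis and the numbers below are invented for illustration.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior probability of H after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

belief = 0.5  # start agnostic about some claim H
# Each piece of evidence is twice as likely if H is true as if it is false.
for _ in range(3):
    belief = update(belief, 0.8, 0.4)
# belief climbs 0.5 -> 0.667 -> 0.8 -> 0.889 as the evidence accumulates.
```

This is the mechanical core of the "path towards rationality" the paragraph describes: repeated application of the same rule pushes confidence steadily toward whichever hypothesis the evidence favors.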
These ideas spurred a gospel of self-improvement, whose evangelists sought new converts on online discussion forums such as LessWrong, and blogs such as Overcoming Bias. Their ideas had enormous indirect consequences. Rationalists took up the philosopher Peter Singer’s nascent ideas about charity and helped turn them into “effective altruism” – a way to think systematically and rationally about charity, weighing up the value of goals (in increasing human happiness) and the relative effectiveness of various means of reaching those goals. This approach spread to SV, reshaping philanthropy through organizations such as GiveWell and (less directly) the Gates Foundation, each seeking to discover the most effective organizations that you could give money to.
The fundamental Rationalist bet – that clear logical thinking would allow you to better understand the world – was validated by the work of Philip Tetlock and his colleagues, who set out to turn “forecasting” into a science. Tetlock found that conventional experts tied to a particular domain were overrated in their ability to predict the future. Instead, open-minded generalists could train to become “superforecasters,” able to predict future events relatively accurately by reasoning from similar cases in the past (e.g. how frequently such an event had been observed, and what caused it to happen).
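The "similar cases in the past" step can be made concrete as a base-rate calculation. This is a deliberately crude sketch of reference-class forecasting with an invented historical record:

```python
# A crude sketch of reference-class ("outside view") forecasting:
# estimate an event's probability from the frequency of similar past cases.
# The historical record below is entirely invented for illustration.
past_cases = [
    {"similar_conditions": True,  "event_happened": True},
    {"similar_conditions": True,  "event_happened": False},
    {"similar_conditions": True,  "event_happened": True},
    {"similar_conditions": False, "event_happened": True},  # excluded below
]

# Restrict attention to cases that resemble the one being forecast.
reference_class = [c for c in past_cases if c["similar_conditions"]]
base_rate = sum(c["event_happened"] for c in reference_class) / len(reference_class)
# A forecaster starts from this base rate, then adjusts for case specifics.
```

Tetlock's finding, roughly, is that starting from such a base rate and adjusting cautiously beats the domain expert's habit of reasoning from the vividness of the case at hand.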
However, sometimes Rationalism didn’t counter the biases of its practitioners or speed their convergence on wisdom available in other traditions… it instead took them on strange and often circular intellectual journeys. Many Rationalists became obsessed by threats to, and opportunities for, the long-term future of humanity, driven by the notion that small changes in our society’s trajectory today might dramatically alter the life prospects of our distant descendants. George Mason University economist Tyler Cowen used this notion in his case for maximizing the economic growth rate, arguing that if corporations make sustained improvements in how much they produce and sell relative to years before, the benefits to consumers will compound substantially over the long run. Others, such as Oxford-based philosophers Toby Ord and Nick Bostrom, instead argued for minimizing existential threats (e.g. the risk that a pandemic, an asteroid, or a powerfully optimizing machine eliminates all humans).
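The compounding arithmetic behind the growth-maximization argument is simple to verify. The starting value and rates here are illustrative, not Cowen's figures:

```python
# Compound growth: a small difference in annual growth rate becomes an
# enormous difference in outcomes over a long horizon.

def grow(initial, rate, years):
    """Value of `initial` after `years` of steady annual growth at `rate`."""
    return initial * (1 + rate) ** years

a = grow(100, 0.02, 100)  # 2% annual growth sustained for a century
b = grow(100, 0.03, 100)  # 3% annual growth sustained for a century
# After 100 years the 3% economy is roughly 2.6x the size of the 2% one,
# which is why long-termists treat the growth rate itself as the prize.
```

The same exponential logic drives the existential-risk side of the argument: if the future compounds, then anything that truncates it forfeits almost all of the total.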
Eliezer Yudkowsky, a blogger so prominent early on that Alexander joked that the community’s central tenet was Yudkowsky’s status as its “rightful caliph,” sounded the alarm about the possibility that an ultra-powerful machine intelligence would run wild. However, a key conclusion from the line of “AI safety” research he later stimulated was that machines must be hardcoded to be uncertain about which objectives to optimize for, and thus kept constantly dependent on feedback from their human overseers. Effectively, this conclusion undermines Yudkowsky’s original imperative to model autonomous machines, returning them to their humdrum role as tools and aids to human cooperation and communication (areas already researched for decades in non-AI-focused fields like human-computer interaction).
Rationalists gathered in such intellectual culs-de-sac because they were convinced they had found a better way to think than the groups around them, almost by definition. Yudkowsky saw Rationalism as a revolt against conformity: no one could become a true Rationalist until “their parents have failed them, their gods are dead, and their tools have shattered in their hand.” They needed to have their “trust broken in the sanity of the people around them.” Even standard approaches to science were inadequate, having formed long before the role of bias was understood properly. Such an isolating posture naturally leads adherents to be impatient with established but unfamiliar traditions of knowledge, or the slow, pluralistic social understanding that results from coalition and consensus-building.
This spawned a movement that was sometimes inward-looking and self-congratulatory, and gave rise to its own special shibboleths (as Cowen has politely but firmly suggested, Rationalism had some notably irrational elements, and was best considered as “just another kind of religion”). It also primed its members to believe that there were stark inequalities in humans’ ability to reason, and that they were on the right end of the inequality.
The notion of natural and enduring inequalities of reason might explain why Rationalism has had little appeal to the broader public. Few outsiders seemed interested in grand proposals to remake politics along Rationalist lines. The proposal for a “futarchy” by Robin Hanson, author of the blog Overcoming Bias and Cowen’s university colleague, would rebuild the political system around betting markets. Yet speculation on world events, even within today’s heavily regulated betting pools and derivative markets, is frequently seen by the public as an unscrupulous form of profiteering that accelerates inequality and financial instability.
Contests for epistemic supremacy as the foundation of a better future made sense to many Rationalists, but not to many others. But Rationalists were well used to believing in things such as cryonics and the ‘many worlds’ theory of quantum physics that the public scoffed at. Therefore, they took such scoffing as a sign of the public’s irrationality or disingenuousness, rather than a suggestion that ideas ought to be re-examined. This theme loomed large in Hanson’s later work, which suggested that most communication in and around democracy was a kind of social show, concealing deeper power struggles and dynamics.
Such cynicism made it easier for a minority of Rationalists to drift gradually towards a darker kind of politics. If there was a crucial distinction between a small cognitive elite that was capable of reasoning and thinking well, and a much larger population that was so deranged by cognitive bias that it could barely think at all, then why should you believe in democracy in the first place? It would be better to allow those who could think clearly to rule over those who could not. If democratic communication was always overrun by disingenuous signaling, the best we could hope for is to be ruled by our epistemic betters.
This was the point where Rationalism, NRx and SV joined together. Where Rationalism went sour, it inclined towards a dogmatic elitism, suggesting that the few who could think clearly ought to re-engineer society for the many who could not. The reactionary argument that democracy was a fundamental mistake seemed quite attractive to some Rationalists, especially those who had seen their ideas wither in the face of public indifference.
And SV – together with authoritarian societies such as China, and quasi-democratic societies such as Singapore – provided a model of how this could be done. Curtis Yarvin, who had played a significant role in the early days of LessWrong, advocated an idiosyncratic model of rule that combined absolutist monarchy with Silicon Valley founder-worship. CEO-kings, like the Roman emperor Septimius Severus in Gibbon’s Decline and Fall, would recognize that the “true interest of an absolute monarch generally coincides with that of his people” and govern to provide stability and prosperity.
In short, Rationalist debate was pulled backwards and forwards between two points of attraction. The more powerful one was optimistic about the possibility of making the world more rational. The less powerful concluded that most people were incapable of being converted, and moreover didn’t deserve it. Rationalists weren’t simply more tolerant of contrarian arguments about, for example, racial or gendered differences in intelligence or capacity to learn and work. They were fascinated by these arguments, which spoke to the core divide in their community: could all find the truth of Rationalism and be saved, or was salvation reserved for a tiny Elect, condemning everyone else to perdition?
These core disagreements reflected and helped shape how SV thought about politics. Again, a majority held to their version of the liberal creed, anticipating that technology and connection would make the world into a great thrumming hive of thought and activity. Individual differences of class, creed and interest would fade into insignificance. And yet again, a small but influential minority sought to speed the return of the Outer Gods and their bloody-handed servants. The leaders of the two factions mingled together, serving on the same boards, investing in the same firms, and arguing with each other in common terminology over contrary ends.
When democratic crisis came, the former had no good way to react, or even, initially, to understand what was happening. How could their tools, which were designed to draw people and societies together, instead be helping to tear them apart? The latter, although they often despised Trump and those around him as useful idiots, saw his victory as justifying their darkest hopes and beliefs. Trump was no-one’s idea of a ruthlessly competent CEO-priestking – but he might clear the path for one by tearing away the organizing myths of liberal equality. The result was a kind of doublethink. SV finds it hard to think about helping to fix democracy, because it has a hard time thinking about democracy itself.