Here are some hypotheticals to consider, with a common theme. Note that in each case I’m asking what you would do, rather than what you should do.
Your answers to these questions depend on your personal circumstances. How uncomfortable is wearing a mask? Do you find a fork slightly or significantly easier to use? How strong is your preference to have kids? Is there a Justin Bieber poster in your background?
But — unless your answer to each question didn’t depend on the specific percentage — your decision also depends on others’ behavior. If everyone else has their video on, you’ll probably feel obligated to keep yours on too. Maybe you’ll move the poster first. This applies not just to behavior, but also to preferences, beliefs, and opinions.
So far, everything I’ve said is pretty obvious. But let’s throw a model at this observation and see if we can discover anything interesting.
Let’s take our example with 20 people in a video call and line all of the participants up, from bottom to top, based on how many other people need to have their video on in order for them to choose to keep their video on.
In this example, Alex and Betty will keep their video on no matter what; on the other hand, Riley, Steve, and Tara will turn their video off no matter what. Most others are somewhere in between: Isaac, for instance, will keep his video on if at least 7 of the other 19 participants have their video on.
Take a moment to think about what will happen in this call. As a hint, you might want to consider the diagonal line I’ve drawn on the chart.
The answer is that nine participants — Alex through Isaac — will have their video on, and the rest will have it off. Why? Well, if everyone has their video on at the start, then Riley, Steve and Tara will turn their video off right away. A cascade will follow: Quinn and Pete, who are only willing to have their video on if everyone else has theirs on, will turn their video off. And so on — up through Jenny. Now Alex through Isaac (but no one else) will have their video on. But at this point the cascade stops: Isaac is happy to keep his video on, as is everyone else.
This would also be the end state if everyone started with their video off (assuming there’s no status quo bias). Alex and Betty would turn their video on, Charlie and Diana would follow, and so on, up through Isaac. Indeed, no matter who has their camera on at the start, this will be the end state.
Let’s look at a different example.
Now, Alex is willing to keep his video on so long as at least one other participant does as well. At the other end of the spectrum, Tara is willing to keep her video on if at least 17 other participants do. Now what will happen?
This time, the answer depends on the starting state. If everyone starts with their video on, everyone will keep it on. If everyone has it off, everyone will keep it off. In fact, small changes in the starting state cause dramatic differences in the end state. If Alex through Henry start with their video on, then Fred, Grace, and Henry will turn their video off, and there will be a downward cascade (ending with no one having their video on). If Alex through Jenny start with their video on, there will instead be an upward cascade, with Kevin turning his video on, followed by Lisa, and so on. (Exercise for the reader: what happens if Alex through Isaac start with their video on?)
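These cascade dynamics are easy to simulate. Here's a minimal sketch in Python; the threshold lists are invented to roughly match the two examples above (the actual charts aren't reproduced here), so treat the specific numbers as assumptions rather than the real data.

```python
def settle(thresholds, on):
    """Iterate until no participant wants to change their video state.
    thresholds[i] = how many OTHER participants must have video on
    for person i to keep (or turn) theirs on."""
    on = list(on)
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(thresholds):
            want = (sum(on) - on[i]) >= t
            if want != on[i]:
                on[i] = want
                changed = True
    return on

# First example: Alex and Betty (threshold 0) always keep video on;
# Riley, Steve, and Tara (20) never do; Isaac needs 7 others.
# (Invented numbers loosely matching the description.)
first = [0, 0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 17, 19, 19, 20, 20, 20]
print(sum(settle(first, [True] * 20)))   # 9 end up with video on
print(sum(settle(first, [False] * 20)))  # 9 again: the start doesn't matter

# Second example: thresholds climb steadily, so the outcome depends on
# the starting state. (Again, invented numbers.)
second = [1, 2, 3, 4, 5, 8, 8, 8, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 17, 17]
print(sum(settle(second, [True] * 8 + [False] * 12)))   # 0: downward cascade
print(sum(settle(second, [True] * 10 + [False] * 10)))  # 20: upward cascade
```

With the first threshold list, every starting state funnels into the same nine-people-on equilibrium; with the second, flipping just two starting participants flips the entire end state.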
Drawing a 20 by 20 chart is very particular to our example. Let’s abstract that away; instead of plotting dots on a 20 by 20 grid, we’ll plot curves on a square. That’ll look something like this.
Just like in the video call example, we’re ordering people by how willing they are to do a thing. In the video call example, the thing was having their video on. But now we’re abstracting away the number of people and instead want to think about the percentage of other people doing the thing that is necessary for someone to choose to do the thing. I’ll be calling these curves social behavior curves.
Perhaps a concrete example would be helpful: let’s say that in the above plot, the “thing” is wearing a mask. In the plot, the people who are most willing to wear a mask will wear one if at least 3% or so of people are wearing a mask (these are the people toward the bottom). On the other hand, 3% or so aren’t willing to wear a mask no matter what (these are the people at the very top). In the middle we have the median person in terms of willingness to wear a mask, who will wear a mask if at least 35% of others are wearing one.
Armed with a social behavior curve, it’s pretty easy to reason about the social equilibria of mask-wearing. For example, suppose that just the 50% most willing-to-wear-a-mask people wear masks. This is not an equilibrium. That’s because the person who is just a little bit above the 50% mark (i.e. the person who’s just slightly less willing than median to wear a mask) will put on a mask: after all, they’re willing to wear a mask if at least 35% of others are wearing masks, which is the case. And the next most willing person will put on a mask, and so forth: we’ll have an upward cascade of mask-wearing until… when?
Until the purple point near the middle (around 65%) is reached. And at that point, the cascade will stop because people above the 65% line will need more than 65% of people to wear a mask in order to themselves wear a mask. And similarly, if the starting state were that 75% of people wore masks, there would be a downward cascade until the 65% mark was reached. (In general, an upward cascade will happen if you’re at a point on the y-axis where the red curve is to the left of the blue line. And if you’re at a point where the red curve is to the right of the blue line, you’ll get a downward cascade.)
In this sense, the 65% mark is a social equilibrium. More precisely, the state of the world where the 65% most willing-to-wear-a-mask people wear a mask, and the rest don’t, is a social equilibrium. There are other social equilibria in this example. One is 0%: if no one is wearing a mask, no one will put on a mask. And there’s another one around 96% or so.
What about the points in orange? They’re kind of weird! Consider the bottom-left one, which is around 25%. If 24% of people are wearing a mask, there will be a downward cascade. But if 26% of people are wearing a mask, there will be an upward cascade. In that sense, this point is an unstable social equilibrium. If you’re right at 25%, everyone wearing a mask is happy to continue wearing one and everyone not wearing one won’t put one on. Go just above or below 25% and you get a cascade. The same is true of the orange point around the 90% mark.
To summarize, points where the social behavior (red) curve crosses the blue line from left to right are stable social equilibria. Points where the curve crosses the blue line from bottom to top are unstable equilibria.
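This crossing rule can be checked numerically: treat the social behavior curve as a function giving the fraction of people whose threshold is at most p, and scan for crossings of the diagonal. The particular curve below (a normal CDF with assumed mean and spread) is an invented stand-in for illustration; it is not the curve from the figures.

```python
from math import erf, sqrt

def curve(p):
    """Assumed social behavior curve: the fraction of people whose
    threshold is at most p. A normal CDF (mean 0.5, sd 0.15), chosen
    purely for illustration."""
    return 0.5 * (1 + erf((p - 0.5) / (0.15 * sqrt(2))))

def equilibria(curve, steps=10_000):
    """Scan [0, 1] for crossings of the diagonal. Crossing from above
    the diagonal to below it gives a stable equilibrium; crossing from
    below to above gives an unstable one."""
    found = []
    prev = curve(0.0)  # curve(0) - 0
    for k in range(1, steps + 1):
        p = k / steps
        diff = curve(p) - p
        if prev > 0 >= diff:
            found.append((p, "stable"))
        elif prev < 0 <= diff:
            found.append((p, "unstable"))
        prev = diff
    return found

# For this curve: stable equilibria near 0% and 100%, unstable at 50%.
print(equilibria(curve))
```

Above the diagonal means more people are willing than are currently doing it (upward cascade); below means the reverse, which is exactly the left-of/right-of-the-blue-line rule described above.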
As with all mathematical models of social behavior, this one is incomplete. You might want to take a minute to think about the various things this model fails to capture. Still, I wish to make the case that this model is useful for understanding phenomena such as persuasion, radicalism, and rapid cultural shifts. Let’s dive in.
In our model, persuasion is the act of shifting the social behavior curve horizontally. That’s because a persuasive argument in favor of doing X lowers the percentage of other people who need to be doing X in order for you to join in and start doing X yourself.
To see this, bring yourself back into the very early days of the pandemic, when the virus was spreading but no one was wearing a mask. Now suppose you read a compelling argument in favor of wearing a mask. This alone probably wouldn’t be sufficient for you to start wearing a mask. (If this isn’t true for you personally, consider the “you” to be generic.) Instead, it would lower your threshold for how many other people need to be wearing a mask in order for you to be willing to wear one. Maybe beforehand you would have started wearing a mask if 30% of people around you were wearing one, but now a 25% masking rate would be sufficient. In other words, your point on the social behavior curve used to have an x-value of 30%, but now it’s 25%. Your point on the curve shifted leftward.
(Contrast this with a more naïve model of persuasion, in which hearing a persuasive argument in favor of X makes you start doing/believing X. I think basically no one works that way; we’re all shaped not just by arguments but by the beliefs and behaviors of those around us.)
Now, if everyone reads the argument then the entire curve will shift to the left. And if some fraction of the population comes across the argument, then you can still model the curve as shifting left — you just need to multiply the amount of the shift by the fraction of people who come across the argument.
(If the argument systematically affects people in different spots of the curve differently, then the leftward shift won’t be uniform. But I’ll be assuming a uniform shift to avoid overcomplicating the model.)
Typically, the effect of persuasion looks something like this:
You come up with a really clever argument in favor of X — enough to shift the red curve leftward by 5 percentage points. People who previously needed 80% of people around them to do X in order to themselves do X now only need 75%, and so on. This causes the equilibrium to shift from the purple point… up just a few percentage points to the grey point. Congratulations: you’ve successfully disseminated your super persuasive argument, and 3% more people believe X.
That’s what typically happens. But in some cases, an equally persuasive argument can have dramatic effects.
In this example, the same 5% shift displaces the equilibrium shown in purple. After the social behavior curve shifts from the red curve to the green curve, there is no longer any equilibrium near the purple point, since the green curve does not cross the blue line there. There’s an upward cascade of people who start to do X, and society moves to a new equilibrium around 85%. A 5% persuasion shift causing 50% of people to change their behavior: that’s some serious return on investment!
An alternative perspective: the effect of gradual persuasion is that the social behavior curve gradually shifts to the left, and so the equilibrium gradually shifts to the right (as illustrated in Figure 6). But then at some point, the curve moves past a point of tangency with the blue line — that’s what happens in Figure 7 — and then there’s a dramatic shift, where the equilibrium switches to a different, possibly far-away one.
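One way to see this tangency effect numerically: model the cascade as repeatedly replacing the current participation rate p with the fraction of people willing to participate at rate p, and watch the resulting equilibrium as the curve shifts left. The curve shape and shift sizes below are invented for illustration, not taken from the figures.

```python
from math import erf, sqrt

def curve(p, shift=0.0):
    """Assumed S-shaped social behavior curve (normal CDF, mean 0.5,
    sd 0.15), moved left by `shift` to model persuasion. All numbers
    here are illustrative assumptions."""
    return 0.5 * (1 + erf((p + shift - 0.5) / (0.15 * sqrt(2))))

def settle(shift, p=0.0, iters=100_000):
    """Cascade dynamics: repeatedly replace the participation rate p
    with the fraction of people willing to participate at rate p."""
    for _ in range(iters):
        p = curve(p, shift)
    return p

# Moderate persuasion nudges the low equilibrium up only slightly...
print(settle(0.10), settle(0.20))
# ...but once the shift passes the point of tangency, the low
# equilibrium disappears and society cascades to a far-away one.
print(settle(0.22))
```

For this curve the tangency sits at a shift of about 0.21: a shift of 0.20 leaves the equilibrium below 10%, while a shift of 0.22 tips it above 90%.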
Is this realistic? I believe it is! In the real world you won’t see such a seismic shift, because different people belong to different communities with different social behavior curves. But I would posit that when society sees a rapid change in social norms and behaviors, it’s often due to an effect like the one in Figure 7.
Here are some possible examples (speculative; take with approximately 1.5 tablespoons of salt):
Interestingly, this model posits that once a rapid social change has happened, it’s usually really difficult to undo. To see this, suppose that the dictator decides to placate the populace (or perhaps crack down hard and increase the cost of revolting), effectively moving the green curve back to the right. The result will not be a shift from the grey point back to the purple point. Instead, the equilibrium will shift back down just a little, to the yellow point.
This accords with my intuition for how rapid social changes tend to work out: once they happen, things rarely go back to how they were.
(What about revolutions that fizzle out? I think these tend to be small in size, i.e. the green curve is never reached. If a revolution gets really large, I think it rarely fizzles out; instead it leads to war or regime change. Ideally I’d like to phrase this hypothesis in a way where I can’t weasel out of it by claiming that any particular revolution that fizzled out just didn’t get big enough, but I’m not sure how to do that.)
The fact that the shape of the curve matters a lot has implications for activists and influencers: focus your energy on causes where the social behavior curve makes it possible for you to tip society into a new equilibrium. You might have ten or a hundred times more leverage than if you just choose the issue that’s most compelling to you!
Of course, estimating the shape of the curve is a huge challenge. One place to start is to try to infer social behavior curves from historical behavior changes and draw some general conclusions (e.g. “social behavior curves regarding public opinion on civil liberties tend to be S-shaped”). Or maybe you could conduct a survey that asks people questions of the form “If your neighbors started doing X, do you think you would?”. That might give you some mileage, but overall I’d guess that people don’t understand themselves well enough to answer that question accurately.
There’s a lot of intuition to be gained about a social behavior curve by looking at its slope (derivative) at different points. The slope of a social behavior curve at 30% (for example) represents, loosely speaking, how many people have 30% as their “tipping point”, i.e. how many people will switch from not doing X to doing X once 30% of people are already doing X. (For math people: the social behavior curve is a CDF, so its derivative is the corresponding PDF.)
For example, here’s the initial (pre-oppression) curve in Figure 9.
And here’s what the derivative of the red curve looks like. This captures the notion of most people deciding to revolt when a certain “critical mass” (around 15%) is reached.
It is often easier to think about what the orange curve should look like (I’ll be calling these behavior density curves from now on) and then extrapolate the social behavior curve; indeed, that’s how I reasoned about what the curve in Figure 9b should look like.
What do behavior density curves look like? It obviously depends on what X is, but it stands to reason that many behavior density curves are bell-shaped. (After all, many distributions are bell-shaped.)
This produces a social behavior curve that looks like an S-curve. The tighter the bell curve, the steeper the social behavior curve. In the limit, everyone’s at 50%, which means that everyone is going to do whatever the majority is currently doing. A good example of this is network effects. Imagine two identical platforms, Facebook 1 and Facebook 2. People want to talk to their friends, so they’ll join whichever platform has more of their friends.
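The steepness claim can be checked directly: the social behavior curve is the CDF of the behavior density curve, so its slope at the median equals the density's peak height. Assuming a normal density (an illustrative choice, not something the post specifies):

```python
from math import pi, sqrt

def s_curve_slope_at_median(sd):
    """Slope of the social behavior curve (the CDF) at its median, for
    a normal behavior density curve with standard deviation sd. The
    CDF's slope is the density itself, whose peak is 1/(sd*sqrt(2*pi))."""
    return 1 / (sd * sqrt(2 * pi))

# Tighter bell curves (smaller sd) give steeper S-curves; in the limit
# sd -> 0 the curve becomes a step at 50%, and everyone simply follows
# whatever the majority is doing.
for sd in (0.3, 0.15, 0.05):
    print(sd, round(s_curve_slope_at_median(sd), 2))
```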
On the other hand, you could imagine a reverse situation, where most people have a strong preference either to do X or not, such that others’ behavior matters only a little. The behavior density curve would look like this:
Examples of this tend to be things that are pretty ingrained in people, as opposed to being socially influenced. A good example of this is left- and right-handedness (though in this example the behavior density curve isn’t centered at 50%). The corresponding social behavior curve has this sort of shape:
So far I’ve been talking about these curves descriptively: making guesses about what the world actually looks like. But for fun, let’s talk about the prescriptive question: what is the best shape for a social behavior curve?
On its face this is a pretty silly question. The best possible social behavior curve for “being a serial killer” looks a lot different from the best possible social behavior curve for “donating to charity.” But let’s set these examples aside and think about what we might want out of a social behavior curve where the two possibilities (doing X and not doing X) are both reasonable, but one might be substantially better than the other.
There are lots of examples of this — it’s the case basically whenever reasonable people disagree on what social norms they want. One example of this is ask culture versus guess culture. In ask culture, it’s totally polite to request a favor (“Hey, remember me from high school? I’m visiting your town, can I stay at your place?”), and it is likewise completely fine to say no. In guess culture, people are expected to only ask for a favor if they think that the person they’re asking would be comfortable granting it, and likewise one is expected to say yes unless there’s a good reason not to.
Imagine if the social behavior curve for X = “behaves as if in ask culture” looked like the one in Figure 13 (Facebook 1 vs. Facebook 2): almost everyone behaves like the majority. This is the “collective society” approach, where people are expected to closely follow societal norms. Such an approach would be good for social cohesion: everyone follows the same norm, so there’s no conflict resulting from people misunderstanding each other’s intentions. But it would be bad from the perspective of getting stuck in a bad equilibrium: maybe the current equilibrium is “everyone follows guess culture norms” but in fact ask culture is better and there’s no way to discover this and switch.
Conversely, imagine if the social behavior curve looked like Figure 15 (handedness). This would be the “individualistic society” approach, where people behave according to their own intrinsic preferences. Then society would have lots of askers and guessers (so it would be easy to get a sense of the relative merits and drawbacks of each), but it would be hard for society to act on what it learns. (Think about how far to the left you’d need to shift the social behavior curve for ask vs. guess culture if it had the same steep-flat-steep shape as the curve for handedness. It would be really difficult for society to come to a collective decision on which approach it prefers.)
What’s the best way to balance this trade-off? I’d argue in favor of something like this:
I see this as the best of both worlds. On the one hand, there are a few people who have a strong preference for ask culture, and a few for guess culture, so society gets to experience and learn from both — or at least knows that both norms are theoretical possibilities. On the other hand, if society gets evidence that ask culture is better, a relatively small leftward shift in the curve will cause most of society to be on board with ask culture (that’s because the slope of the red curve is close to 1). This confers the benefit of social cohesion. It also means that very sudden shifts like in Figure 7 can’t happen; society can respond to new information relatively quickly, but does so smoothly. This seems like a good thing.
What does the corresponding behavior density curve look like?
Something like this (though maybe steeper at the edges). There are lots of people everywhere along the spectrum: people who strongly prefer ask culture, people who strongly prefer guess culture, and also those who are happy to go with whatever norm is the current default. Such a density curve — which is “in between” the “collective society” curve (Figure 12) and the “individualistic society” curve (Figure 14) (though perhaps closer to the latter) — gets you the best of both worlds.
The fact that Figures 16 and 17 are symmetric around 50%, by the way, is not an important feature. The curve below has the same nice properties, even though its equilibrium is around 20% instead. So when I talk about curves with the “general shape” of the curve in Figure 16, I’m including curves like this one.
(I think there’s a lot more that could be said here. We could analyze free speech norms from the same perspective. Totally free speech allows for exploration of a vast swathe of ideas at the expense of societal cohesion, while a lack of free speech inhibits exploration and progress; maybe there’s an optimal happy medium? Also, perhaps differences in collectivism versus individualism could serve as an explanation for why some societies and communities have been more successful than others. But this is way above my pay grade so I’ll let others speculate.)
For me, the main takeaway from the previous section is this: society needs both radicals and conformists, as well as people in between.
When I say “radical”, think Vermin Supreme. A radical does things their own way, to hell with what society thinks. A radical wears a boot on their head and prepares for the zombie apocalypse. A radical bucks the establishment — social, political, scientific, you name it — in pursuit of their own weird beliefs and inclinations.
Society stands to gain very little from most radicals. A typical radical is someone who markets a new form of pseudo-medicine, or espouses a nonsensical economic policy. In the worst case, a radical becomes convinced that societal ills can only be remedied through violence and crashes airplanes into buildings.
But occasionally — not usually but not never — a radical invents a new form of medicine that saves millions of lives, or causes a major scientific paradigm shift, or helps society make substantial moral progress. Without radicals, we’d be stuck with wrong beliefs and bad equilibria forever. (See also: Scott Alexander’s Rule Thinkers In, Not Out.)
I’m not sure how deep this analogy goes, but think genetic mutations. Most are bad, but it’s really good to have some nonzero level of mutation, as this makes evolutionary progress possible.
A radical is someone who, for many different values of X, is on the far-left or far-right of the social behavior curve for X. They’ll do X, or think X, even if no one else does or thinks X. Their existence gives society the opportunity to ponder X.
And in particular, radicals’ existence gives radical-adjacent people the opportunity to join in doing or believing X if it seems like a good idea. Radical-adjacents are people who tend to be pretty close to the left or right extremes of a social behavior curve, but not all the way on the edge. They are the people who don’t necessarily do really weird things or promote strange ideas themselves, but are open to such habits and ideas once entertained by a few radicals.
And so on and so on, down the respectability cascade, all the way to the conformists: those who will go with the prevailing norm or belief. And conformists are important too. Without them, a social behavior curve might look something like this.
You need to move the red curve really far to the left or right (i.e. come up with an incredibly convincing argument or effective movement in favor of or against X) to shift society from “50% of people do X” to “everyone (or no one) does X” — which is quite detrimental if society would be a lot better off with everyone (or no one) doing X.
How many radicals and how many conformists is ideal? I’ve already sort of answered that in Figure 17. People toward the left and right edges of that figure are more radical, people in the middle are conformists. So the ideal distribution looks perhaps something like this:
In my ideal world people are about evenly distributed on the spectrum between radical and conformist, with perhaps a slight radical-ward bias. (Even in such a world, there are very few true radicals: they are represented only by the leftmost 1% or so of the chart in Figure 20.)
[Epistemic status: progressively more and more trolling]
I’ve already talked about one way that social behavior curves can help you think about how to make the world a better place: figuring out when persuasion and activism are effective. I want to finish by talking about another way social behavior curves can help you: namely, figuring out how radical you ought to be.
Let’s say that no one is doing X, but — at least if you disregard that fact — X seems like a good idea to you. An example of X might be becoming vegan, if you’re in a community where everyone eats meat. Should you become vegan or go along with your community’s norm of meat-eating?
The answer depends on whether you think there are too many or too few radicals in your community. If there are too few, then by increasing the “mutation rate” you realize the upside that you might eventually convert everyone to veganism and make the world a better place. If on the other hand there are too many radicals, then an outside view argument is likely more applicable here: society has likely already considered and rejected veganism, even if you don’t know why they did so.
But how can you figure out if there are too many or too few radicals in society? Answering this question seems really hard, just like estimating the shape of a social behavior curve is super difficult.
One approach might be to examine the question empirically: see whether societies with higher levels of radicalism have fared better. But this seems extremely hard and noise-prone.
My radical answer to this question is: if you are inclined (from an inside view perspective) to adopt a behavior or belief, decide how radical to be at random. To explain why, let’s talk about rule utilitarianism.
Rule utilitarianism says that you should act according to whatever set of rules results in the most good. This contrasts with classical utilitarianism, which says that for every decision you should take whichever action results in the most good. I like rule utilitarianism because it’s realistic to follow: it would be exhausting to do utility calculations at every turn, but if your community has some rules of thumb worked out then you can follow those. Spend some time working out good rules of thumb, and you’ll be able to make good moral decisions without excessive overhead.
As an example of rule utilitarianism, a pretty good rule might be “Donate 10% of your income to the charity where a marginal dollar will have the greatest positive impact”.
Crucially, this rule works because we have a pretty good sense of how well-funded different charities are. Suppose instead that people had no idea how much money each charity had, and for that matter didn’t know where anyone else was donating.
This would make things a lot harder. I might do the best I can to follow the rule with the information I have and decide that AI risk is probably the most important cause area. Everyone else who shares my basic thought process might come to the same conclusion, we’ll all donate to MIRI, and MIRI will become oversaturated, while other important charities go neglected.
The key remedy to this problem is either to spread your dollars between multiple charities, or else to randomize your donation, choosing a charity in proportion to how much money it would ideally get. And if everyone randomizes, the outcome will be pretty good! So in the absence of information about others’ charitable donations, a better rule would be “Donate 10% of your income to a charity selected at random in proportion to how much money each charity would ideally get”.
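The randomized rule is straightforward to implement. The charity names and ideal funding levels below are made up for illustration:

```python
import random

# Hypothetical ideal funding levels, in arbitrary units (invented).
ideal_funding = {"Charity A": 50, "Charity B": 30, "Charity C": 20}

def pick_charity(rng=random):
    """Pick a charity at random, in proportion to its ideal funding."""
    names = list(ideal_funding)
    weights = [ideal_funding[n] for n in names]
    return rng.choices(names, weights=weights)[0]

# If everyone follows this rule, donations land in roughly the ideal
# proportions (by the law of large numbers):
counts = {name: 0 for name in ideal_funding}
for _ in range(100_000):
    counts[pick_charity()] += 1
print({name: counts[name] / 100_000 for name in counts})
```

No individual donor needs to know where anyone else is donating; the randomization does the coordination for them.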
This is the situation with social behavior curves. You have no idea what the social behavior curve for not eating animal products looks like; you just know that you’re in an “everyone eats meat” equilibrium. Nor do you know the distribution of radicals versus radical-adjacents versus conformists. In the absence of such information, you can’t take the strategy of “adopt whichever disposition is most neglected”. Nor is there an approach analogous to “spread your money between charities”: you can’t be a mixture of different levels of radical on the same issue. So the rule that, if adopted, would do the most good is “Select how radical you’ll be at random”.1 If this rule is followed, your community will end up with the right number of radicals and conformists and in-betweens!
How seriously should you take the argument I’ve just made? Should you literally flip a coin next time you decide whether to do something no one else is doing? I’m not sure; as far as I can tell, no one is flipping coins to decide these sorts of things. But maybe some small number of people take my argument seriously and start flipping coins. And if they get good results, maybe some other people will join in the fun. And then eventually, maybe everyone will be flipping coins.
So should you, personally, start flipping coins in such situations? Flip a coin to find out!
[Edit: Ben Edelman points out that sociologists use social behavior curves, see e.g. this paper and these Wikipedia pages. I guess it shouldn't come as a surprise that these concepts are already well-known. I suppose it's nice to have a bit of confirmation that these models are considered reasonable/interesting!]
1. More precisely, the rule is “Select how radical you’ll be at random, according to the ideal distribution of radicals versus conformists”. Here, “ideal distribution” is what an outside observer would prefer for the distribution to be in general. (I’ve posited a guess at this distribution in Figure 20.) The “outside observer” bit is important: of course you’d prefer for there to be lots of radicals on your pet issue, but it’s not a good rule if you wouldn’t want for it to be universalized to everyone’s pet issue.
I think this works well to describe the behavior of small, well-mixed groups, but as you look at larger societies, it gets more complicated because of the structure of social networks. You don't get to see how many people overall are wearing face-masks in the whole country, only among the people you interact with in your life. So it's totally possible that different equilibria will be reached in different locations/socio-economic classes/communities. That's probably one reason why revolutions fizzle out more often than the model suggests. Another problem arising from the structure of social networks is that the sample of people you interact with is not representative of your real surroundings: people with tons of friends are over-represented among your friends (I had a blog post about this statistical phenomenon a while ago). I'm not sure how one could expand the social behavior curve model to account for that, but it would be interesting.
What a beautiful model! Indeed it seems like a rediscovery of Granovetter's threshold model, but still, great work finding it.
I'm not sure "radical" is the best word for people at the edges of the curve, since figure 19 shows that the more of them you have, the more society is resistant to change. Maybe "independent" instead?
I was always surprised that small changes in public perception, such as a slight shift in consumption or political opinion, can have large effects. This post introduced the concept of social behaviour curves to me, and it feels like it explains quite a lot of things. The writer presents some example behaviours and movements (like why revolutions start slowly or why societal changes are sticky), and then provides clear explanations for them using this model, which both shows how to use social behaviour curves and verifies some of the model's predictions at the same time.
The second half of the post builds a theory of what an ideal society would look like, and how you should act on the radical-conformist axis. "Be a radical, except if only radicals are around you" is a cool slogan for a punk band, but even the author admits he's gonna do a little trolling. I feel like there are some missing assumptions about why he chooses these curves.
In the addendum, there are references to other uses in the literature, which can be used as a jumping-off point for further understanding. What I'm missing from this post is a discussion of large networks. Everyone knows everyone in a small group, but changes propagate over time in large ones; it also matters whether someone has few or many connections. There is some kind of criticality in large networks too, but it's a bit different. Also, the math gets much more complicated; in fact, graph criticality results are few and hard, and most places use computer simulations instead of closed equations. All in all, I think social behaviour curves are a simple and good tool for understanding an aspect of social reality.
I noticed (while reading your great modeling exercise about an important topic) a sort of gestalt presumption of "one big compartment (which is society itself)" in the write-up and this wasn't challenged by the end.
Maybe this is totally valid? The Internet is a series of tubes, but most of the tubes connect to each other eventually, so it is kinda like all one big place maybe? Perhaps we shall all be assimilated one day.
But most of my thoughts about modeling how to cope with differences in preference and behavior focus a lot on the importance of spatial or topological or social separations to minimize conflicts and handle variations in context.
My general attitude is roughly: things in general are not "well mixed" and (considering how broken various things can be in some compartments) thank goodness for that!
This is a figure from this research, where every cell basically represents a spatially embedded agent, and agents do iterated game playing with their neighbors and then react somehow.
In many similar bits of research (which vary in subtle ways, and partly reveal what the simulation maker wanted to see) a thing that often falls out is places where most agents are being (cooperatively?) gullible, or (defensively?) cheating, or doing tit-for-tat... basically you get regions of tragic conflict, and regions of simple goodness, and tit-for-tat is often at the boundaries (sometimes converting cheaters through incentives... sometimes with agents becoming complacent because T4T neighbors cooperate so much that it is easy to relax into gullibility... and so on).
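For readers who haven't seen one of these, here's a minimal sketch of the kind of grid simulation being described. The payoff values, grid size, seed, and "imitate your best-scoring neighbor" update rule are all my own arbitrary choices for illustration, not from any particular paper:

```python
import random

# Toy spatial game: agents on a wrap-around grid play one round of
# prisoner's dilemma with each of their four neighbors, then copy the
# strategy of whichever neighbor (or themselves) scored highest.

R, S, T, P = 3, 0, 5, 1   # reward, sucker, temptation, punishment
N = 20                    # grid is N x N; True = cooperate, False = defect

def payoff(me, other):
    if me and other: return R          # both cooperate
    if me and not other: return S      # I cooperate, they defect
    if not me and other: return T      # I defect, they cooperate
    return P                           # both defect

def neighbors(i, j):
    return [((i + di) % N, (j + dj) % N)
            for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]]

def step(grid):
    score = {(i, j): sum(payoff(grid[i][j], grid[x][y])
                         for x, y in neighbors(i, j))
             for i in range(N) for j in range(N)}
    new = [[grid[i][j] for j in range(N)] for i in range(N)]
    for i in range(N):
        for j in range(N):
            best = max(neighbors(i, j) + [(i, j)], key=lambda c: score[c])
            new[i][j] = grid[best[0]][best[1]]
    return new

random.seed(0)
grid = [[random.random() < 0.5 for _ in range(N)] for _ in range(N)]
for _ in range(30):
    grid = step(grid)
# Strategies tend to clump into regions; the final pattern depends
# heavily on the payoff values and update rule chosen above.
```

Changing the temptation payoff `T` relative to `R` is the usual knob that shifts the balance between cooperating and defecting regions.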
A lot depends on the details, but the practical upshot for me is that it is helpful to remember that the right thing in one placetime is not always the right thing everywhere or forever.
Arguably, "reminding people about context" is just a useful bravery debate position local to my context? ;-)
With a very simple little prisoner's dilemma setup, utility is utility, and it is clear what "the actual right thing" is: lots of bilateral cooperate/cooperate interactions are Simply Good.
However in real life there is substantial variation in cultures and preferences and logistical challenges and coordinating details and so on.
It is pretty common, in my experience, for people to have coping strategies for local problems that they project out on others who are far from them, which they imagine to be morally universal rules. However, when particular local coping strategies are transported to new contexts, they often fail to translate to actual practical local benefits, because the world is big and details matter.
Putting on a sort of "engineering hat", my general preference then is to focus on small specific situations, and just reason about "what ought to be done here and now" directly, based on local details and the direct perception of objective goodness.
The REASON I would care about "copying others" is generally either (1) they figured out objectively good behavior that I can cheaply add to my repertoire, or (2) they are dangerous monsters who will try to hurt me if they see me acting differently. (There are of course many other possibilities, and subtleties, and figuring out why people are copying each other can be tricky sometimes.)
Your models here seem to be mostly about social contagion, and information cascades, and these mechanisms read to me as central causes of "why 'we' often can't have nice things in practice" ...because cascading contagion is usually anti-epistemic and often outright anti-social.
You’re having dinner with a party of 10 at a Chinese restaurant. Everyone else is using chopsticks. You know how to use chopsticks but prefer a fork. Do you ask for a fork? What if two other people are using a fork?
I struggled with this one because I tend to use chopsticks at Chinese restaurants for fun, and sometimes I'm the only one using them, and several times I've had the opportunity to teach someone how to use them. The alternative preference in this story would be COUNTERFACTUAL to my normal life in numerous ways.
Trying to not fight the hypothetical too much, I could perhaps "prefer a fork" (as per the example) in two different ways:
(1) Maybe I "prefer a fork" as a brute fact of what makes me happy for no reason. In this case, you're asking me about "a story person's meta-social preferences whose object-level preferences are like mine but modified for the story situation" and I'm a bit confused by how to imagine that person answering the rest of the question. After making an imaginary person be like me but "prefer a fork as a brute emotional fact"... maybe the new mind would also be different in other ways as well? I couldn't even figure out an answer to the question, basically. If this was my only way to play along, I would simply have directly "fought the hypothetical" forthrightly.
(2) However, another way to "prefer a fork" would be if the food wasn't made properly for eating with chopsticks. Maybe there's only rice, and the rice is all non-sticky separated grains, and with chopsticks I can only eat one grain at a time. This is a way that I could hypothetically "still have my actual dietary theories intact" and naturally "prefer a fork"... and in this external situation I would probably ask for a fork no matter how unfun or "not in the spirit of the experience" it seems? Plausibly, I would be miffed, and explain things to people close to me who had the same kind of rice, and I would predict that they would realize I was right, nod at my good sense, and probably ask the waiter to give them a fork as well.
But in that second attempt to generate an answer, it might LOOK like the people I predicted might copy me would be changing because "I was +1 to fork users and this mapped through a well-defined social behavior curve feeling in them", but in my mental model the beginning of the cascade was actually caused by "I verbalized a real fact and explained an actually good method of coping with the objective problem" and the idea was objectively convincing.
I'm not saying that peer pressure should always be resisted. It would probably be inefficient for everyone to think from first principles all the time about everything. Also there are various "package deal" reasons to play along with group insanity, especially when you are relatively weak or ignorant or trying to make a customer happy or whatever. But... maybe don't fall asleep while doing so, if you can help it? Elsewise you might get an objectively bad result before you wake up from sleepwalking :-(
> A lot depends on the details, but the practical upshot for me is that it is helpful to remember that the right thing in one placetime is not always the right thing everywhere or forever. [...] However in real life there is substantial variation in cultures and preferences and logistical challenges and coordinating details and so on.
Martin Sustrik's "Anti-Social Punishment" post is a great real-life example of this.
Something triggered in me by this response -- and maybe similar to part of what you were saying in the later part: sometimes preferences aren't affected much by the social context, within a given space of social contexts. People may just want to use chopsticks because they are fun, rather than caring about what other people think about them.
Also, societal preferences for a given thing might actually decrease when more and more people are interested in them. For example, demand for a thing might cause the price to rise. With orchestras: if lots of people are already playing violin, that increases the relative incentive for others to learn viola.
Enjoyable. I'm surprised Asch's conformity experiments are not mentioned, e.g. https://www.lesswrong.com/posts/WHK94zXkQm7qm7wXk/asch-s-conformity-experiment.
Thanks for mentioning Asch's conformity experiment -- it's a great example of this sort of thing! I might come back and revise it a bit to mention the experiment.
(Though here, interestingly, a participant's action isn't exactly based on the percentage of people giving the wrong answer. It sounds like having one person give the right answer was enough to make people give the right answer, almost regardless of how many people gave the wrong answer. Nevertheless, it illustrates the point that other people's behavior totally does influence most people's behavior to quite a large degree, even in pretty unexpected settings.)
The descriptive part is great, but the prescriptive part is a little iffy. The optimal strategy is not choosing to be "radical" or "conformist". The optimal strategy is: do a Bayesian update on the fact that many other people are doing X, and then take the highest expected utility action. Even better, try to figure out why they are doing X (for example, by asking them) and update on that. It's true that Bayesian inference is hard and heuristics such as "be at such-and-such point on the radical-conformist axis" might be helpful, but there's no reason why this heuristic is always the best you can do.
This model makes explicit something I’ve had intuitions about for a while (though I wasn’t able to crystallise them nearly as perspicaciously or usefully as UnexpectedValues). Beyond the examples given in the post, I'm reminded of Zvi’s discussion of control systems in his covid series, and also am curious about how this model might apply to valuing cryptocurrencies, which I think display some of the same dynamics.
The post is also very well-written. It has the wonderful flavour of a friend explaining something to you by a whiteboard, building up a compelling story almost from first principles with clear diagrams. I find this really triggers my curiosity -- I want to go out and survey housemates to pin down the social behavior curves around me; go up to the whiteboard and sketch some new graphs and figure out what they imply, and so forth.
To jump in on people naming related things, some specific consequences of this type of thing are discussed in Timur Kuran's "Private Truths, Public Lies" https://www.goodreads.com/book/show/1016932.Private_Truths_Public_Lies.
> A radical is someone who, for many different values of X, is on the far-left or far-right of the social behavior curve for X.

> “Select how radical you’ll be at random”.
I don't see why being stubborn about one value of X should have to be correlated with being stubborn about any other value of X, so I'm confused about why there would have to be capital-R "Radicals" who are stubborn about everything, as opposed to having a relatively even division where everybody is radical about some issues and not about others. Being radical can be pretty exhausting, and it seems like a good idea to distribute that workload. I mean, I'm sure that people do tend to have natural styles, but you're also talking about which style a person should consciously adopt.
Why not either randomly choose how radical you're going to be on each specific issue independent of all others, or even try to be more radical about issues where you are most sure your view of the ideal condition is correct?
How does all of this hold up when there's a lot of hysteresis in how people behave? I can think of lots of cases where I'd expect that to happen. Maybe some people just never change the random initial state of their video...
Yeah -- to clarify, in the last section I meant "select how radical you'll be for that issue at random." In the previous section I used "radical" to refer to a kind of person (observing that some people do have a more radical disposition than others), but yeah, I agree that there's nothing wrong with choosing your level of radicalism independently for different issues!
And yeah, there are many ways this model is incomplete. Status quo bias is one. Another is that some decisions have more than two outcomes. A third is that really this should be modeled as a network, where people are influenced by their neighbors (and I'm assuming that the network is a giant complete graph). A simple answer to your question might be "draw a separate curve for 'keep camera on if default state is on' and 'turn camera on if default state is off'", but there's more to say here for sure.
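The "separate curve for each direction" idea is easy to sketch. Here's a toy version (all thresholds are my own invention) where everyone needs more social proof to turn their camera on than to keep it on; the same population then settles into different equilibria depending on the initial state, which is exactly hysteresis:

```python
# Hysteresis sketch: each person has a higher threshold for turning
# their camera ON than for keeping it on (a form of status quo bias).
# Threshold values below are arbitrary illustrative choices.

def settle(on_thresh, off_thresh, state):
    """Iterate until no one changes. state[i] True = camera on."""
    n = len(state)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            others_on = sum(state) - state[i]
            if state[i] and others_on < off_thresh[i]:
                state[i] = False; changed = True
            elif not state[i] and others_on >= on_thresh[i]:
                state[i] = True; changed = True
    return state

n = 20
off_thresh = list(range(n))              # others needed to KEEP video on
on_thresh = [t + 4 for t in off_thresh]  # need 4 more others to TURN it on

from_all_on = settle(on_thresh, off_thresh, [True] * n)
from_all_off = settle(on_thresh, off_thresh, [False] * n)
print(sum(from_all_on), sum(from_all_off))  # different stable outcomes
```

With these numbers, starting from everyone-on the call stays fully on, while starting from everyone-off nobody ever switches on: the "random initial state" someone mentioned really can persist indefinitely.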
Interesting! I have something to say about the questions at the beginning of the text. I tried very, very hard to answer them, but I found out I just can’t do it without lying to myself! I literally can’t imagine the situation in order to feel it and make a decision. It’s impossible to imagine myself deciding, rationally, when I do something because someone else does it, or what is the % needed (like in the mask question) in order for me to change my behaviour. But, like everybody else, I change my behavior based on what others do. So, my question is: do you think that some people do think more rationally about how to act in those situations, or do they act impulsively like everyone else and rationalize their behavior later? Does it have something to do with their inclination towards more math-heavy sciences? Or maybe it depends on the biological characteristics of the individual, like personality, for example.
And, in a broader sense, how many of our decisions do you guys think are rational and how many are just rationalized?
I hope it makes sense. Cheers.
> or what is the % needed (like in the mask question)
I was excited to be reminded by this post of Louis Sachar's classic Wayside School series: in one section of the first puzzle-book installment, Sideways Arithmetic from Wayside School, you read about students at various places on the social behavior curve for participating in a game of basketball, and are asked to determine who will play under the changing circumstances of that day's recess.
I wonder where the Spiral of Silence fits in here. I guess opposite the Respectability cascade?
> society can respond to new information relatively quickly, but does so smoothly.

This seems like a good thing.
This makes me think of the Concave Disposition.
I guess it shouldn't come as a surprise that these concepts are already well-known.
Well I think independent discovery is underrated anyway.
This is a nice essay. I think it could benefit from engaging a bit more with the literature, though. I remember seeing a keynote lecture at the WCERE in Istanbul some years ago that included social learning models with quite similar results, probably by E. Somanathan. You may also want to check https://www.nber.org/papers/w23110.