Writes Putanumonit.com and helps run the New York LW meetup.
I wrote about this post extensively as part of my essay on Rationalist self-improvement. The general idea of this post is excellent: gathering data for a clever natural experiment of whether Rationalists actually win. Unfortunately, the analysis itself is lacking and not especially data-driven.
The core result is: 15% of SSC readers who were referred by LessWrong made over $1,000 in crypto, and 3% made over $100,000. These quantities require quantitative analysis: Is 15%/3% a lot or a little compared to matched groups like the Silicon Valley or Libertarian blogosphere? How good a proxy is Scott's selection for people who were on LessWrong when Bitcoin was launching and had the means to take advantage of the opportunity? How much of a consensus on LessWrong was the advice to buy cryptocurrencies? These are all questions that one could find data on (I did a bit of it in my own post), but the essay does no such thing. Scott declares by fiat that 15% earns the community a C grade, with very little justification provided. This conclusion aligns perfectly with Scott's previously expressed opinions on the utility of Rationality for things like making money, which doesn't engender confidence in the objectivity of his evaluation.
The idea behind this essay is very admirable; testing ourselves against real-world outcomes is one of the main things we fail to do enough of as a community. And the fact that Scott gathered the data himself is laudable as well. But the essay is more of a suggestion for a good research post than a good work of analysis in itself.
In my opinion, the biggest shift in the study of rationality since the Sequences were published was a change in focus from "bad math" biases (anchoring, availability, base rate neglect, etc.) to socially-driven biases. And with good reason: while a crash course in Bayes' Law can alleviate many of the issues with intuitive math, group politics are a deep and inextricable part of everything our brains do.
There has been a lot of great writing describing the issue, like Scott’s essays on ingroups and outgroups and Robin Hanson’s theory of signaling. There are excellent posts summarizing the problem of socially-driven bias on a high level, like Kevin Simler’s post on crony beliefs. But The Intelligent Social Web offers something that all of the above don’t: a lens that looks into the very heart of social reality, makes you feel its power on an immediate and intuitive level, and gives you the tools to actually manipulate and change your reaction to it.
Valentine’s structure of treating this as a “fake framework” is invaluable in this context. A high-level rigorous description of social reality doesn’t really empower you to do anything about it. But seeing social interactions as an improv scene, while not literally true, offers actionable insight.
The specific examples in the post hit very close to home for me, like the example of one’s family tugging a person back into their old role. I noticed that I quite often lose my temper around my parents, something that happens basically never around my wife or friends. I realized that much of it is caused by a role conflict with my father about who gets to be the “authority” on living well. I further recognized that my temper is triggered by “should” statements, even innocuous ones like “you should have the Cabernet with this dish” over dinner. Seeing these interactions through the lens of both of us negotiating and claiming our roles allowed me to control how I feel and react, rather than being driven by an anger whose source I didn’t understand. An issue that I struggled with for years was mostly resolved after reading this post and thinking about it for a while.
The post’s focus on salient examples (family roles, the convert boyfriend, the white man’s role) also has a downside, in that it’s somewhat difficult to keep track of the main thrust of Valentine’s argument. The entire introductory section also does nothing to help the essay cohere; it makes claims about personal benefits Valentine has acquired by using this framework. These claims are neither substantiated nor explored further in the essay, and they are also unnecessary — the essay is compelling by the force of its insight and not by promising a laundry list of results.
Valentine does not go into detail about the reasons that people “need the scene to work” above all other considerations. This is for two reasons: the essay is long enough as it is, and the underlying structure is more speculative than established. I hope to see more people exploring this underlying structure as a follow-up. I recommend Sarah Constantin’s look at abusive relationships through the lens of playing out familiar roles; I have also written an essay fitting Valentine’s idea into a broader framework of how predictive processing shapes how we think about identity and social interaction.
But again: The Intelligent Social Web didn’t just inspire me to write about ideas, it changed how I live my life. Whenever I feel a discordant emotion in a social interaction or have a goal that is thwarted, I put on the framework of improv scenes and social roles to understand what is happening. And every time I reread the post after trying out the framework in real life, I glean more from it. If the post were slightly better structured and focused it could reach more readers, but it is already the most impactful thing I read on LessWrong in 2018.
As I said, someone who is 100% in thrall to social reality will probably not be reading this. But once you peek outside the bubble there is still a long way to enlightenment: first learning how signaling, social roles, tribal impulses etc. shape your behavior so you can avoid their worst effects, then learning to shape the rules of social reality to suit your own goals. Our community is very helpful for getting the first part right, it certainly has been for me. And hopefully we can continue fruitfully exploring the second part too.
Somewhat unrelated, but one can think of RSI as being a *meta* self-improvement approach — it's what allows you to pick and choose between many competing theories of self-improvement.
Aside from that, I didn't read the academic literature on TAPs before trying them out. I tried them out and measured how well they work for me, and then decided when and where to use them. Good Rationalist advice is to know when to read meta-analyses and when to run a cheap experiment yourself :)
I have several friends in New York who are a match to my Rationalist friends in age, class, intelligence etc. and who:
Now perhaps Rationalist self-improvement can't help them, but if you're reading LessWrong you may be someone who can snap out of social reality long enough for Rationality to change your life significantly.
> if you want to propose some kind of rationalist self-help exercise that I should try
Different strokes for different folks. You can go through alkjash's Hammertime Sequence and pick one, although even there the one that he rates lowest (goal factoring) is the one that was the most influential in my own life. You may also be friends with CFAR instructors/mentors who know your personality and pressing issues better than I do and who can recommend and teach a useful exercise.
Thank you for the detailed reply. I'm not going to reply point by point because you made a lot of points, but also because I don't disagree with a lot of it. I do want to offer a couple of intuitions that run counter to your pessimism.
While you're right that we shouldn't expect Rationalists to be 10x better at starting companies because of efficient markets, the same is not true of things that contribute to personal happiness. For example: how many people have a strong incentive to help you build fulfilling romantic relationships? Not the government, not capitalism, not most of your family or friends, often not even your potential partners. Even dating apps make money when you *don't* successfully seduce your soulmate. But Rationality can be a huge help: learning that your emotions are information, learning about biases and intuitions, learning about communication styles, learning to take 5-minute timers to make plans — all of those can 10x your romantic life.
Going back to efficient markets, I get the sense that a lot of things out there are designed by the 1% most intelligent and ruthless people to take advantage of the 95% and their psychological biases. Outrage media, predatory finance, conspicuous brand consumption and other expensive status ladders, etc. Rationality doesn't help me design a better YouTube algorithm or finance scam, but at least it allows me to escape the 95% and keeps me away from outrage and in index funds.
Finally, I do believe that the world is getting weirder faster, and the thousands of years of human tradition are becoming obsolete at an accelerating pace. We are moving ever further from our "design specs". In this weirding world, I already hit the jackpot with Bitcoin and polyamory, two things that couldn't really exist successfully 100 years ago. Rationality guided me to both. You hit the jackpot with blogging: can you imagine your great-granduncle telling you that you'll become a famous intellectual by writing about cactus people and armchair sociology for free? And we're both still very young.
For any particular achievement, like basketball or making your first million, there are more dedicated practices that will get you to your goal faster than Rationality. But for taking advantage of unknown unknowns, the only two things I know that work are Rationality and making friends.
Another idea is that intelligence is valued more when a society feels threatened by an outside force, against which it needs competent people to protect itself.
Building on this, virtue is valued more when a society is threatened from the inside. If people are worried about being betrayed or undermined by those who appear to be part of their tribe, they will look for virtue signals. We see this a lot in the high correlation of virtue signaling with signals of ingroup loyalty, while intelligence signaling often takes the shape of disagreeing with the group.
In general, an outside threat or goal allows people to measure themselves against it. Status is set by the number of enemy scalps one collects, for example. But without an external measuring stick, people will jockey for relative status by showing loyalty and virtue.
This post changed how I think about everything from what creativity is to why my friend loves talking one-on-one but falls silent in five-person groups. I will write a longer review in December.
LSD doesn't make your brain do anything your brain is incapable of doing, just many things that your brain hasn't done in a long while. The best description I can give is that it gives you the intellectual openness of a 5-year-old, the emotional openness of a 3-year-old, and the sensory experience of perhaps a baby who has not formed strong enough predictions of things like "the clouds don't shift in shape while I look at them". All of these are in your brain, but they're usually suppressed by the strong top-down predictions and ego-narrative that are generated by parts of your brain like the Default Mode Network. Psychedelics suppress the DMN and let the rest of your brain run free.
I missed the importance of that sentence in the actual conversation and moved on to the next topic, but then when I listened to the recording it made me go "Holy $&@%!" This is absolutely the biggest disagreement between me and Aella. To me, the fact that the sense of insight is the same is *absolutely terrifying*. It's not a good thing.