Jacobian

Writes Putanumonit.com and helps run the New York LW meetup.

Jacobian's Comments

Go F*** Someone

Attractiveness comes in many forms. I'm extroverted and write better than I look, so I do well at dinner parties and OKCupid. You can be attractive in dancing skill, in spiritual practice, in demonstrable expertise, in an artistic pursuit... guitar players get laid even if they're not that good looking.

And yet, everyone's first association when talking about "aim for 100 dates" is Tinder, which works only for the men who are top 20% in the one aspect of attractiveness that's crowded and hard to improve - physical looks. This includes men who self-report as unattractive, like this commenter (and presumably, "Simon").

The minimum threshold of attractiveness on Tinder is incredibly high, much higher than in almost any other place to look for dates. It's certainly higher than my own good looks — I only turn Tinder on when I leave the country.

Go F*** Someone

I was thinking of people who write comments without reading the post, which pollutes the conversation. Or people who form broad opinions about a writer or a blog without reading. I deal with those people all day every day on Twitter and in the blog comments.

I didn't mean people deciding what to read based on the title. Of course everyone does that! Someone seeing 'Go F*** Someone' may assume that the post will be somewhat vulgar and that it will talk about sex. Both things are true. People not interested in vulgar writing about sex shouldn't read it. If I titled it 'A Consideration of Narcissism as it Affects the Formation of Long Term Bonds' that would actually be more misleading, since people would not expect a vulgar post about sex and would get upset.

Go F*** Someone

I understand your concerns.

I cross-post everything I write on Putanumonit to LW by default, which I understood to be the intention of "personal blogposts". I didn't write this for LW. If anyone on the mod team tells me that this would be better as a link post or off LW entirely, not because it's bad but because it's not aligned with LW's reputation, I'll be happy to comply.

I could imagine casual readers quickly looking at this and assuming it's related to the PUA community

With that said, my personal opinion is that LW shouldn't cater to people who form opinions on things before reading them and we should discourage them from hanging out here.

Go F*** Someone

95%+ of people who drop out of the workforce to raise children are women

Citation needed.

Other than that, you are supporting my general argument by writing from within the very framework that I lay out here. Why is the choice to leave work "destructive"? Why is it OK for a man to depend on a woman for the biological necessities of having a family, but not OK for either partner to depend on the other for the financial necessities?

Accomplished women who drop out to raise families usually don't surrender the spending of money to their husbands (I agree that demanding that they do so is patriarchal and bad). They only surrender the making of the money. The ability to spend money is what lets people build good lives and families, but making money is what contributes to their status*. Post-divorce, it's usually much easier for a woman (particularly an accomplished one) to make money again than it is for a man to have children again.

*At least, their status among some people. I personally care about LW karma more than income :)

Caring less

"Caring less" was in the air. People were noticing the phenomenon. People were trying to explain it. In a comment, I realized that I was in effect telling people to care less about things without realizing what I was doing. All we needed was a concise post to crystallize the concept, and eukaryote obliged.

The post, especially the beginning, gets straight to the point. It asks why we don't hear more persuasion in the form of "care less", offers a realistic example and a memorable graphic, and issues a call to action. This is the part that was most useful to me - it gave me a clear handle on something that I've been thinking about for a while. I'm a big fan of telling people to care less, and once I realized that this is what I was doing I learned to expect more psychological resistance from people. I'm less direct now when encouraging people to care less, and I often phrase it in terms of trade-offs: caring less about something (usually national politics and culture wars) frees up energy to care more about things they already endorse as more important (usually communities and relationships).

The post talks about the guilt and anxiety induced by ubiquitous "care more" messaging, and I think it takes this too much for granted. An alternative explanation is that people who are not scrupulous utilitarian Effective Altruists are quite good at not feeling guilt and anxiety, which leaves room for "care more" messaging to proliferate. I wish the post drew a sharper distinction between the narrow world of EA and the broader cultural landscape; I fear that it may be typical-minding somewhat.

Finally, eukaryote throws out some hypotheses that explain the asymmetry. This part seems somewhat rushed and not fully thought out. As a quick brainstorming exercise it could be better as just a series of bullet points, as the 1-2 paragraph explanations don't really add much. As some commenters pointed out and as I wrote in an essay inspired by this post, eukaryote doesn't quite suggest the "Hansonian" explanation that seems obviously central to me. Namely: "care more about X" is a claim for status on behalf of the speaker, who is usually someone who has strong opinions and status tied up with X. This is more natural and more tolerable to people than "care less about Y", which reads as an attack on someone else's status and identity - often the listener themselves since they presumably care about Y.

Instead of theorizing about the cause of the phenomenon, I think the most useful follow-ups to this post would be figuring out ways to better communicate "care less" messages and observing what actually happens when such messages are received. Even if one does not buy the premise that "care less" messaging is relaxing and therapeutic, it is important to have it in one's repertoire. And the first step towards that is having the concept clearly explained in a public way that one can point to, and that is the value of this post.


Expressive Vocabulary

I feel like this post is missing an important piece.

When people say "chemicals" or "technology" they are very often not talking about the term in question, but communicating an emotional fact about themselves: "I am disgusted by foods that feel artificially produced", "I want you not to be distracted by devices during dinner". Coming up with better and more precise terms won't help at all, since the thing being communicated has little to do with the referent of the imprecise term.

You can notice this when the conversation switches from personal experience to a more general and technical discussion. If someone proposes a "ban on technology use in school", everyone will be quick to focus on what is actually in the category.

What determines the balance between intelligence signaling and virtue signaling?

This is a great example. During the Cultural Revolution and similar periods (e.g., Stalinist Russia) you not only wanted to signal virtue above intelligence, you actively wanted to signal *lack* of intelligence as vigorously as you could. The intelligentsia are always suspect.

A LessWrong Crypto Autopsy

I wrote about this post extensively as part of my essay on Rationalist self-improvement. The general idea of this post is excellent: gathering data for a clever natural experiment on whether Rationalists actually win. Unfortunately, the analysis itself is lacking and not particularly data-driven.

The core result: 15% of SSC readers who were referred by LessWrong made over $1,000 in crypto, and 3% made $100,000. These quantities require quantitative analysis: Is 15%/3% a lot or a little compared to matched groups like the Silicon Valley or Libertarian blogosphere? How good a proxy is Scott's selection for people who were on LessWrong when Bitcoin was launching and had the means to take advantage of the opportunity? How much of a consensus on LessWrong was the advice to buy cryptocurrencies? These are all questions that one could find data on (I did a bit of that in my own post), but the essay does no such thing. Scott declares by fiat that 15% earns the community a C grade, with very little justification provided. This conclusion aligns perfectly with what Scott has previously opined about the utility of Rationality for things like making money, which doesn't engender confidence in the objectivity of his evaluation.

The idea behind this essay is very admirable; one of the main things we fail to do as a community is test ourselves against real-world outcomes. And the fact that Scott gathered the data himself is laudable as well. But the essay is more of a suggestion for a good research post than a finished work of analysis in itself.

The Intelligent Social Web

In my opinion, the biggest shift in the study of rationality since the Sequences were published was a change in focus from "bad math" biases (anchoring, availability, base rate neglect, etc.) to socially-driven biases. And with good reason: while a crash course in Bayes' Law can alleviate many of the issues with intuitive math, group politics are a deep and inextricable part of everything our brains do.

There has been a lot of great writing describing the issue, like Scott’s essays on ingroups and outgroups and Robin Hanson’s theory of signaling. There are excellent posts summarizing the problem of socially-driven bias on a high level, like Kevin Simler’s post on crony beliefs. But The Intelligent Social Web offers something that all of the above don’t: a lens that looks into the very heart of social reality, makes you feel its power on an immediate and intuitive level, and gives you the tools to actually manipulate and change your reaction to it.

Valentine’s structure of treating this as a “fake framework” is invaluable in this context. A high-level rigorous description of social reality doesn’t really empower you to do anything about it. But seeing social interactions as an improv scene, while not literally true, offers actionable insight.

The specific examples in the post hit very close to home for me, like the example of one’s family tugging a person back into their old role. I noticed that I quite often lose my temper around my parents, something that happens basically never around my wife or friends. I realized that much of it is caused by a role conflict with my father about who gets to be the “authority” on living well. I further recognized that my temper is triggered by “should” statements, even innocuous ones like “you should have the Cabernet with this dish” over dinner. Seeing these interactions through the lens of both of us negotiating and claiming our roles allowed me to control how I feel and react rather than being driven by an anger that I don’t understand the source of. An issue that I struggled with for years was mostly resolved after reading this post and thinking about it for a while.

The post’s focus on salient examples (family roles, the convert boyfriend, the white man’s role) also has a downside, in that it’s somewhat difficult to keep track of the main thrust of Valentine’s argument. The entire introductory section also does nothing to help the essay cohere; it makes claims about personal benefits Valentine has acquired by using this framework. These claims are neither substantiated nor explored further in the essay, and they are also unnecessary — the essay is compelling by the force of its insight and not by promising a laundry list of results.

Valentine does not go into detail about the reasons that people “need the scene to work” above all other considerations. This is for two reasons: the essay is long enough as it is, and the underlying structure is more speculative than established. I hope to see more people exploring this underlying structure as a follow-up. I recommend Sarah Constantin’s look at abusive relationships through the lens of playing out familiar roles; I have also written an essay fitting Valentine’s idea into a broader framework of how predictive processing shapes how we think about identity and social interaction.

But again: The Intelligent Social Web didn’t just inspire me to write about ideas, it changed how I live my life. Whenever I feel a discordant emotion in a social interaction or have a goal that is thwarted, I put on the framework of improv scenes and social roles to understand what is happening. And every time I reread the post after trying out the framework in real life, I glean more from it. If the post were slightly better structured and focused it could reach more readers, but it is already the most impactful thing I read on LessWrong in 2018.

Is Rationalist Self-Improvement Real?

As I said, someone who is 100% in thrall to social reality will probably not be reading this. But once you peek outside the bubble there is still a long way to enlightenment: first learning how signaling, social roles, tribal impulses, etc. shape your behavior so you can avoid their worst effects, then learning to shape the rules of social reality to suit your own goals. Our community is very helpful for getting the first part right; it certainly has been for me. And hopefully we can continue fruitfully exploring the second part too.
