habryka

Running Lightcone Infrastructure, which runs LessWrong. You can reach me at habryka@lesswrong.com. I have signed no contracts or agreements whose existence I cannot mention.

Sequences

A Moderate Update to your Artificial Priors
A Moderate Update to your Organic Priors
Concepts in formal epistemology

Comments

Is this someone who has a parasocial relationship with Vassar, or a more direct relationship? I was under the impression that the idea that Michael Vassar supports this sort of thing was a malicious lie spread by rationalist leaders in order to purge the Vassarites from the community.

I think "psychosis is underrated" and/or "psychosis is often the sign of a good kind of cognitive processing" are things I have heard from at least people very close to Michael (I think @jessicata made some arguments in this direction): 

"Psychosis" doesn't have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject to navigate reality in a new way; some of R.D. Liang's work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time.

(To be clear, I don't think "jessicata is in favor of psychosis" is at all a reasonable gloss here, but I do think there is an attitude towards things like psychosis, common in the relevant circles, that I disagree with.)

I explained it a bit here: https://www.lesswrong.com/posts/fjfWrKhEawwBGCTGs/a-simple-case-for-extreme-inner-misalignment?commentId=tXPrvXihTwp2hKYME 

Yeah, the principled reason (though I am not like super confident of this) is that posts are almost always too big and have too many claims in them to make a single agree/disagree vote make sense. Inline reacts are the intended way for people to express agreement and disagreement on posts.

I am not super sure this is right, but I do want to avoid agreement/disagreement becoming disconnected from truth values, and I think applying them to elements that clearly don't have a single truth value weakens that connection.

Makes sense. My experience has been that in-person conversations are helpful for getting on the same page, but they also often come with confidentiality requests that then make it very hard for information to propagate back out into the broader social fabric, and that often makes those conversations more costly than beneficial. But I do think it's a good starting point if you don't do the very costly confidentiality stuff.

Fwiw, the contents of this original post actually have nothing to do with EA itself, or the past articles that mentioned me.

Yep, that makes sense. I wasn't trying to imply that it was (but still seems good to clarify).

Sure, happy to chat sometime. 

I haven't looked into the things I mentioned in a ton of detail (though I have spent a few hours on it), but I have learned to err on the side of sharing my takes here (where even if they are wrong, it seems better to have them be in the open so that people can correct them and can track what I believe, even if they think it's dumb/wrong).

Do you know whether the person who wrote this would be OK with crossposting the complete content of the article to LW? I would be interested in curating it and sending it out in our 30,000 subscriber curation newsletter, if they were up for it.

I think people were happy to have the conversation happen. I did strong-downvote it, but I don't think upvotes are the correct measure here. If we had something like agree/disagree-votes on posts, that would have been the right measure, and my guess is it would have overall been skewed pretty strongly in the disagree-vote direction.

Reputation is lazily evaluated

When evaluating the reputation of your organization, community, or project, many people flock to surveys in which they ask randomly selected people what they think of your thing, or what their associations with your organization, community, or project are.

If you do this, you will very reliably get back data that looks like people are indifferent to you and your projects, and your results will probably be dominated by extremely shallow things like "do the words in your name evoke positive or negative associations".

People largely only form opinions of you or your projects when they have some reason to do so, like trying to figure out whether to buy your product, join your social movement, or vote for you in an election. You basically never care what people think about you while they are engaged in activities completely unrelated to you; you care about what people will do when they have to take an action that is related to your goals. But the former is exactly what you are measuring in attitude surveys.

As an example of this (used here for illustrative purposes, and what caused me to form strong opinions on this, but not intended as the central point of this post): Many leaders in the Effective Altruism community ran various surveys after the collapse of FTX, trying to understand what the reputation of "Effective Altruism" was. The results were basically always the same: people mostly didn't know what EA was, and had vaguely positive associations with the term when asked. The people who had recently become familiar with it (of whom there weren't that many) did lower their opinions of EA, but the vast majority of people did not (because they mostly didn't know what it was).

As far as I can tell, these surveys left most EA leaders thinking that the reputational effects of FTX were limited. After all, most people had never heard about EA in the context of FTX, seemed to mostly have positive associations with the term, and the average like or dislike in surveys barely budged. In reflections at the time, conclusions looked like this:

  1. The fact that most people don't really care much about EA is both a blessing and a curse. But either way, it's a fact of life; and even as we internally try to learn what lessons we can from FTX, we should keep in mind that people outside EA mostly can't be bothered to pay attention.
  2. An incident rate in the single digit percents means that most community builders will have at least one example of someone raising FTX-related concerns—but our guess is that negative brand-related reactions are more likely to come from things like EA's perceived affiliation with tech or earning to give than FTX.
  3. We have some uncertainty about how well these results generalize outside the sample populations. E.g. we have heard claims that people who work in policy were unusually spooked by FTX. That seems plausible to us, though Ben would guess that policy EAs similarly overestimate the extent to which people outside EA care about EA drama.

Or this:

Yes, my best understanding is still that people mostly don't know what EA is, the small fraction that do mostly have a mildly positive opinion, and that neither of these points were affected much by FTX.[1] 

This, I think, was an extremely costly mistake to make. Since then, practically all metrics of the EA community's health and growth have sharply declined, and the extremely large and negative reputational effects have become clear.

Most programmers are familiar with the idea of a "lazily evaluated variable" - a value that isn't computed until the exact moment you try to use it. Instead of calculating the value upfront, the system maintains just enough information to be able to calculate it when needed. If you never end up using that value, you never pay the computational cost of calculating it. Similarly, most people don't form meaningful opinions about organizations or projects until the moment they need to make a decision that involves that organization. Just as a lazy variable suddenly gets evaluated when you first try to read its value, people's real opinions about EA don't materialize until they're in a position where that opinion matters - like when deciding whether to donate, join, or support EA initiatives.
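For readers less familiar with the programming concept, here is a minimal sketch of lazy evaluation in Python (not from the original post; the `LazyValue` wrapper and `form_opinion` function are hypothetical names used purely for illustration): the expensive computation runs only when the value is first actually read.

```python
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")


class LazyValue(Generic[T]):
    """Wraps a computation and defers running it until the value is first read."""

    def __init__(self, compute: Callable[[], T]) -> None:
        self._compute = compute            # how to produce the value, if it is ever needed
        self._value: Optional[T] = None
        self._evaluated = False

    def get(self) -> T:
        # The cost is only paid on first access; later reads reuse the cached result.
        if not self._evaluated:
            self._value = self._compute()
            self._evaluated = True
        return self._value


def form_opinion() -> str:
    # Stand-in for the costly work of actually forming an opinion:
    # asking friends, searching online, weighing the decision at hand.
    print("...doing the costly research...")
    return "a considered opinion"


reputation = LazyValue(form_opinion)   # nothing is computed yet
# Only when some real decision forces a read does the computation happen:
print(reputation.get())
```

In this analogy, a broad attitude survey reads the variable before anything has forced it to be evaluated, so what you get back is closer to the uninitialized placeholder than to the value a real decision would produce.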

Reputation is lazily evaluated. People conserve their mental energy, time, and social capital by not forming detailed opinions about things until those opinions become relevant to their decisions. When surveys try to force early evaluation of these "lazy" opinions, they get something more like a placeholder value than the actual opinion that would form in a real decision-making situation.

This computation is not purely cognitive. As people encounter a product, organization, or community that they are considering doing something with, they will ask their friends whether they have any opinions, perform online searches, and generally seek out information to help them with whatever decision they are facing. This is part of why this metaphorical computation is costly and is put off until it's necessary.

So when you are trying to understand what people think of you, or how people's opinions of you are changing, pay much more attention to the attitudes of people who have recently put in the effort to learn about you, or who were facing some decision related to you, and so are more representative of the people who will actually face decisions related to you. These will be much better indicators of your actual latent reputation than what you get when you ask people on a survey.

For the EA surveys, these indicators looked very bleak: 

"Results demonstrated that FTX had decreased satisfaction by 0.5-1 points on a 10-point scale within the EA community"

"Among those aware of EA, attitudes remain positive and actually maybe increased post-FTX —though they were lower (d = -1.5, with large uncertainty) among those who were additionally aware of FTX."

"Most respondents reported continuing to trust EA organizations, though over 30% said they had substantially lost trust in EA public figures or leadership."

If various people in EA had paid attention to these indicators, instead of to the approximately meaningless placeholder values you get when you ask people what they think of you without actually getting them to perform the costly computation associated with forming an opinion of you, I think they would have made substantially better predictions.

FWIW, if anyone is interested in my take, my guess is it doesn't make sense to support this (and I mild-downvoted the post).

I am pretty worried about some of your past reporting/activism in the space somewhat intentionally conflating some broader Bay Area VC and tech culture with the "EA community", in a way that IMO ended up being more misleading than informing (and then you ended up promoting media articles that I think were misleading, despite, I think, many people pointing this out).

People can form their own opinions on this: https://forum.effectivealtruism.org/posts/JCyX29F77Jak5gbwq/ea-sexual-harassment-and-abuse?commentId=DAxFgmWe3acigvTfi 

I might also be wrong here, and I don't feel super confident, but I at least have some of my flags firing and would have a prior that lawsuits in the space, driven by the people who currently seem involved, would be bad. I think it's reasonable for people to have very different takes on this. 

I am obviously generally quite in favor of people sharing bad experiences they had, but would currently make bets that most people on LW would regret getting involved with this (but am also open to argument and don't feel super robust in this).

We have a few kinds of potential bonus a post could get, but yeah, something seems very off about your sort order, and I would really like to dig into it. A screenshot would still be quite valuable.
