Straight-edge Warning Against Physical Intimacy

I'm coming to this article by way of being linked from a Facebook group though I am also an occasional LessWrong user. I would have asked this question in the comments of the FB post where this post was linked, but since the comments were closed there, I'll ask it here: What was (or were) the reason(s) behind:

  1. Posting this to a FB group with the comments open;
  2. Waiting until a few comments had been made, then closing the comments on FB and asking commenters to continue on this LW post instead?

I understand why someone would do this if they thought a platform with a higher variance for quality of discourse, like FB or another social media website, was delivering a significantly lower quality of feedback than one would hope or expect to receive on LW. Yet I read the comments on the FB post in question, in a group frequented by members of the rationality community, and none of them stuck out to me as defying what have become the expected norms and standards for discourse on LW.

Does the Berkeley Existential Risk Initiative (self-)identify as an EA-aligned organization?
What seems to matter is (1) that such a focus was chosen because interventions in that area are believed to be the most impactful, and (2) that this belief was reached from (a) welfarist premises and (b) rigorous reasoning of the sort one generally associates with EA.

This seems like a thin concept of EA. I know of organizations that choose to pursue interventions because they believe those interventions are in (among) the most impactful areas, and that do so based on welfarist premises and rigorous reasoning. Yet they don't identify as EA organizations. That would be because they disagree with the consensus in EA about what constitutes 'the most impactful,' 'the greatest welfare,' and/or 'rigorous reasoning.' So, the consensus position(s) in EA on how to interpret all those notions could be thought of as the thick concept of EA.

Also, this seems to be a prescriptive definition of "EA organizations," as opposed to a descriptive one. That is, all the features you mentioned seem necessary to define EA-aligned organizations as they exist, but I'm not convinced they're sufficient to capture all the characteristics of the typical EA-aligned organization. If they were sufficient, any NPO that could identify as an EA-aligned organization would do so. Yet there are some that don't. A typical feature of EA-aligned NPOs that is superficial but describes them in practice is receiving most of their funding from sources also aligned with EA (e.g., the Open Philanthropy Project, the EA Funds, EA-aligned donors, etc.).

Does the Berkeley Existential Risk Initiative (self-)identify as an EA-aligned organization?

Technical Aside: Upvoted for being a thoughtful albeit challenging response that impelled me to clarify why I'm asking this as part of a framework for a broader project of analysis I'm currently pursuing.

Does the Berkeley Existential Risk Initiative (self-)identify as an EA-aligned organization?


I'm working on a global comparative analysis of funding/granting orgs not only in EA, but also in those movements/communities that overlap with EA, including x-risk.

Many in EA may evaluate/assess the relative effectiveness of the orgs in question according to the standard normative framework(s) of EA, as opposed to the lens(es)/framework(s) through which such orgs evaluate/assess themselves, or would prefer to be evaluated/assessed by other principals and agencies.

I expect that the EA community will want to know to what extent various orgs are amenable to changing their practice or self-evaluation/self-assessment according to the standard normative framework(s) of EA, however more reductive those may be than the frameworks other communities with a stake in x-risk, such as the rationality community, employ for evaluating the effectiveness of funding allocation.

Ergo, it may be in the self-interest of any funding/granting org in the x-risk space to precisely clarify their relationship to the EA community/movement, perhaps as operationalized through the heuristic framework of "(self-)identification as an EA-aligned organization." I assume that includes BERI.

I care because I'm working on a comparative analysis of funds and grants among EA-aligned organizations.

For the sake of completion, this will extend to funding and grantmaking organizations that are part of other movements that have overlap with or are constituent movements of effective altruism. This includes existential risk reduction.

Most of this series of analyses will be a review, as opposed to an evaluation or assessment. I intend to leave most of those normative judgements out of the analysis, and to leave them to the community. I'm not confident it's feasible to produce such a comparative analysis competently without at least a minimum of normative comparison. Yet, more importantly, the information could, and likely would, be used by various communities/movements with a stake in x-risk reduction (e.g., EA, rationality, Long-Term World Improvement, transhumanism, etc.) to make normative judgements far beyond what is contained in my own analysis.

I will include in a discussion section a variety of standards by which each of those communities might evaluate or assess BERI in relation to other funds and grants focused on x-risk reduction, most of which are run by EA-aligned organizations that form the structural core, not only of EA, but also of x-risk reduction. Hundreds, if not thousands, of individuals, including donors, vocal supporters, and managers of the funds and grants run by EA-aligned organizations, will be inclined to evaluate/assess each of these funds/grants through an EA lens. Some of the funding/granting orgs in x-risk reduction may diverge in opinion about what is best in the practice and evaluation/assessment of funding allocation in x-risk reduction.

Out of respect for those funding/granting orgs in x-risk reduction that do diverge in opinion from those standards in EA, I would like to know that, so as to include those details in the discussion section. This is important because it will inform how the EA community engages with the orgs in question after my comparative analysis is complete. One shouldn't realistically expect that many in the EA community will evaluate/assess such orgs with a normative framework the orgs themselves share, e.g., where the norms of the rationality community diverge from those of EA. My experience is they won't have the patience to read many blog posts about how the rationality community, as separate from EA, practices and evaluates/assesses x-risk reduction efforts differently than EA does, and why the rationality community's approaches are potentially superior. I expect many in EA will prefer a framework that is, however unfortunately, more reductive than applying conceptual tools like factorization and 'caching out' to parse more nuanced frameworks for evaluating x-risk reduction efforts.

So, it's less about what I, Evan Gaensbauer, care about and more about what hundreds, if not thousands, of others in EA and beyond care about in terms of evaluating/assessing funding/granting orgs in x-risk reduction. Things will go more smoothly, both for the funding/granting orgs in question and for x-risk reducers in the EA community, if it's known whether those orgs fit into the heuristic framework of "identifying (or not) as an EA-aligned organization." Ergo, it may be in the interest of those funding/granting orgs to clarify their relationship to EA as a movement/community, even if there are trade-offs, real or perceived, before I publish this series of comparative analyses. I imagine that includes BERI.

Dialogue on Appeals to Consequences

Summary: I'm aware of a lot of examples of real debates that inspired this dialogue. In those real cases, disagreements with, or criticisms of, public claims or accusations of lying directed at different professional organizations in effective altruism, or AI risk, have repeatedly been interpreted as a blanket refusal to honestly engage with the claims being made. Instead of a good-faith effort to resolve the various disputes over the public accusations of lying, repeat accusations, and justifications for them, are made into long, complicated theories. These theories don't appear to respond at all to the content of the disagreements with the public accusations of lying and dishonesty, and that's why the repeat accusations and the justifications for them are poorly received.

These complicated theories don't have anything to do with what people actually want when public accusations of dishonesty or lying are made: what is typically called 'hard' (e.g., robust, empirical, etc.) evidence. If you were to make narrow claims of dishonesty in more modest language, based on just the best evidence you have, and be willing to defend the claims on that basis, instead of making broad claims of dishonesty in ambiguous language based on complicated theories, they would be received better. That doesn't mean the theories of how dishonesty functions in communities, as an exploration of social epistemology, shouldn't be written. It's just that they don't come across as the most compelling evidence to substantiate public accusations of dishonesty.

For me it's never been so complicated as to require involving decision theory. The problem is as simple as basic claims being inflated into much larger, more exaggerated or hyperbolic claims. Those claims also assume that readers, presumably a general audience among the effective altruism or rationality communities, have prior knowledge of a bunch of things they may not be familiar with. Readers will only be able to parse the claims being made by reading a series of long, dense blog posts that don't really emphasize the thing these communities should be most concerned about.

Sometimes the claims being made are that GiveWell is being dishonest, and sometimes they are something like: because of this, the entire effective altruism movement has been totally compromised, and is also incorrigibly dishonest. There is disagreement, some of it disputing how the numbers were used in the counterpoint to GiveWell, and some of it about the hyperbolic claims that appear intended to smear more people than whoever at GiveWell, or elsewhere in the EA community, is responsible. It appears as though people like you or Ben don't sort through, parse, and work through these different disagreements or criticisms. It appears as though you just take it all at face value as confirmation that the rest of the EA community doesn't want to hear the truth, and that people worship GiveWell at the expense of any honesty, or something.

It's in my experience too that, in these discussions of complicated subjects that appear very truncated to those unfamiliar with them, the instruction is just to go read some much larger body of writing or theory to understand why and how people are deceiving themselves, each other, and the public in the ways you're claiming. This is often said as if it's completely reasonable to claim it's the responsibility of a bunch of people with criticisms of, or disagreements with, what you're saying to go read tons of other content, when you are calling people liars, instead of you being able to say what you're trying to say in a different way.

I'm not even saying that you shouldn't publicly accuse people of being liars if you really think they're lying. If you believe GiveWell or other actors in effective altruism have failed to change their public messaging after being correctly pointed out, by their own convictions, as wrong, then just say that. It's not necessary to claim that the entire effective altruism community is therefore also dishonest. That is especially the case for members of the EA community who disagree with you, not because they dishonestly refused the facts they were confronted with, but because they disputed the claims being made, and their interlocutor refused to engage, or deflected all kinds of disagreements.

I'm sure there are lots of responses to criticisms of EA which have been needlessly hostile. Yet reacting, and writing strings of posts, as though the whole body of responses were consistently garbage is just not an accurate characterization of the responses you and Ben have received. Again, if you want to write long essays about what people's reactions to public accusations of dishonesty imply for social epistemology, that's fine. It would just suit most people better if that was done entirely separately from the accusations of dishonesty. If you're publicly accusing some people of being dishonest, accuse those and only those people of being dishonest, very specifically. Stop tarring so many other people with such a broad brush.

I haven't read your recent article accusing some actors in AI alignment of being liars. This dialogue seems like it is both about that and a response to other examples. I'm mostly going off those other examples. If you want to say someone is being dishonest, just say that. Substantiate it with the closest thing you have to hard or empirical evidence that some kind of dishonesty is going on. It's not going to work with an idiosyncratic theory of how what someone is saying meets some technical definition of dishonesty that defies common sense. I'm very critical of a lot of things that happen in effective altruism myself. It's just that the way you and Ben have gone about it is so poorly executed, and backfires so much, that I don't think there is any chance of you resolving the problems you're trying to resolve with your typical approaches.

So, I've given up on keeping up with the articles you're writing criticizing things happening in effective altruism, at least on a regular basis. Sometimes others nudge me to look at them. I might get around to them eventually. It's honestly at the point, though, where the pattern I've learned to follow is to not be open-minded about whether the criticisms being made of effective altruism are worth taking seriously.

The problem I have isn't the problems being pointed out, or that different organizations are being criticized for their alleged mistakes. It's that the presentation of the problem, and the criticism being made, are often so convoluted I can't understand them, and that's before I can figure out whether I agree or not. I find that I am generally more open-minded than most people in effective altruism about taking seriously criticisms made of the community, or of related organizations. Yet I've learned to suspend that for the criticisms you and Ben make, for the reasons I gave, because it's just not worth the time and effort.

How Much Do Different Users Really Care About Upvotes?
BTW, it might be worth separating out the case where controversial topics are being discussed vs boring everyday stuff. If you say something on a controversial topic, you are likely to get downvotes regardless of your position. "strong, consistent, vocal support" for a position which is controversial in society at large typically only happens if the forum has become an echo chamber, in my observation.

On a society-wide scale, "boring everyday stuff" is uncontroversial by definition. Conversely, articles that have a high total number of votes, but a close-to-even upvote:downvote ratio, are by definition controversial to at least several people. If wrong-headed views of boring everyday stuff aren't heavily downvoted, and are "controversial" to the point that half or more of the readers supported someone spreading supposedly universally recognizable nonsense, that's a serious problem.

Also, regarding the EA Forum and LW at least, "controversial topics" vs. "boring everyday stuff" is a false dichotomy. These are fora for all kinds of "weird" stuff, by societal standards. Some popular positions on the EA Forum and LW are also controversial in society at large, but that's normal for EA and LW. What going by societal standards doesn't reflect is why different positions are or aren't controversial on the EA Forum or LW. There are heated disagreements in EA, or on LW, over questions where most people outside those fora don't care about any side of the debate. For the examples I have in mind, some of the articles were on topics that were controversial in society at large, and some were only controversial in the more limited sense of being disagreements on the EA Forum or LW.

How Much Do Different Users Really Care About Upvotes?

You make a good point I forgot to add: the function that karma on an article or comment serves in providing info to other users, as opposed to just the submitting user. That's something people should keep in mind.

How Much Do Different Users Really Care About Upvotes?

What bugs me is when people who ostensibly aspire to understand reality better let their sensitivity get in the way, and let their feelings colour their perception of how their ideas are being received. It seems to me this should be a basic debiasing skill that people would employ if they were as serious about being effective or rational thinkers as they claim to be. If there is anything that bugs me that you're suspicious of, it's that.

Typically, I agree with an OP who is upset about the low quality of negative comments, but I disagree with how upset they get about it. The things they say as a result are often inaccurate. For example, on the basis of a few comments' worth of low-quality negative feedback on a post that's otherwise decently upvoted, people will say that a negative reception is typical of LW, or the EA Forum. They may not be satisfied with the reception their article received, but that's just a different claim than that the reception was extremely negative.

I don't agree with how upset people are getting, though I do think they're typically correct that the quality of some responses to their posts is disappointingly low. I wasn't looking for a solution to a problem. I was asking an open-ended question to seek answers that would explain some behaviour on others' part that doesn't fully make sense to me. Some other answers I've gotten are just people speaking from their own experience, like G Gordon's, and that's fine by me too.

How Can Rationalists Join Other Communities Interested in Truth-Seeking?

Some but not all academics also seek truth in terms of their own beliefs about the world, and their own processes (including hidden ones) for selecting the best model for any given decision. From a Hansonian perspective, that's at least what scientists and philosophers are telling themselves. Yet from a Hansonian perspective, that's what everyone is telling themselves about their ability to seek truth, especially if a lot of their ego is bound up in 'truth-seeking', including rationalists. So the Hansonian argument here would appear to be a perfectly symmetrical one.

I don't have a survey on hand for what proportion of academics seek truth both in a theoretical sense and in the more pragmatic sense rationalists aspire to. Yet "academia", considered as a population, is much larger than the rationality community, or a lot of other intellectual communities. So, even if the relative proportion of academics who could be considered a "truth-seeking community" in the eyes of rationalists is small, the absolute/total number of academics who would be considered part of a "genuine truth-seeking community" in those same eyes would be large enough to take seriously.

To be fair, the friends I have in mind who are more academically minded, and are critical of the rationality community and LessWrong, are critical of much of academia as well. For them it's more about aspiring to a greater and evermore critical intellectualism than about sticking to academic norms. Philosophy tends to be more like this than most other academic fields, because philosophy has a tradition of being the most willing to criticize the epistemic practices of other academic fields. Again, this is a primary application of philosophy. There are different branches and specializations in philosophy, like the philosophies of: physics; biology; economics; art (i.e., aesthetics); psychology; politics; morality (i.e., ethics); and more.

The practice of philosophy at its most elementary level is a practice of 'going meta', which is an art many rationalists seek to master. So I think truth-seekers in philosophy, and in academia more broadly, are the ones rationalists should seek to interact with more, even if finding academics like that is hard. Of course, the most common way rationalists could find such academics is to look to the academics already in the rationality community (there are plenty), and ask them if they know other people/communities they enjoy interacting with for reasons similar to why they enjoy interacting with rationalists.

There is more I could say on the subject of how learning from philosophy, academia, and other communities in a more charitable way could benefit the rationality community. Those points are really only applicable if you either are part of an in-person/'irl' local rationalist community, or are intellectually and emotionally open to criticisms of, and recommendations for improvement to, the culture of the rationality community. If one or both of those conditions apply to you, I can go on.

How Can Rationalists Join Other Communities Interested in Truth-Seeking?

One thing about this comment that really sticks out to me is the fact I know several people who think LessWrong and/or the rationality community aren't that great at truth-seeking. There are a lot of specific domains where rationalists aren't reported to be particularly good at truth-seeking. Presumably, that could be excused by the fact rationalists are generalists. However, I still know people who think the rationality community is generally bad at truth-seeking.

Those people tend to hail from philosophy. To be fair, 'philosophy', as a community, is one of the only other communities I can think of that is interested in truth-seeking in as generalized a way as the rationality community. You can ask the mods about it, but they've got some thoughts on how 'LessWrong' is a project strongly tied to, but distinct from, the 'rationality community'. I'd associate LessWrong more with truth-seeking than 'the rationality community', since if you ask a lot of rationalists, truth-seeking isn't nearly all of what the community is about these days, and it isn't even a primary draw for a lot of people.

Anyway, most philosophers don't tend to think LessWrong is very good at seeking truth much of the time either. Again, to be fair, philosophers think lots of different kinds of people aren't nearly as good at truth-seeking as they make themselves out to be, including all kinds of scientists. Doing that kind of thing comes with the territory of philosophy, but I digress.

The thing about 'philosophy', as a human community, is that, unlike the rationality community that originated from LessWrong, it is blended into the rest of the culture: 'philosophers' don't congregate outside of academia like 'rationalists' do. 'Scientists' seem to do that more than philosophers, but not more than rationalists. Yet not everyone who wants to surround themselves with a whole community of like-minded others would want to join academia to get that. Even for rationalists who have worked in academia, truth-seeking there is more a part of the profession than something woven into the fabric of their lifestyles.

Of course, the whole point of this question was to figure out what truth-seeking communities are out there that rationalists would get along with. If rationalists aren't perceived as good enough at truth-seeking for others to want to get along with them, which oftentimes appears to be the case, I don't know what a rationalist should do about that. Of course, you didn't mention truth-seeking, and I mentioned there are plenty of things rationalists are interested in other than truth-seeking. So, the solution I would suggest is for rationalists to route around that, and see if they can't get along with people who share something in common with them, and appreciate something about them, other than truth-seeking.
