
Evan_Gaensbauer's Comments

Dialogue on Appeals to Consequences

Summary: I'm aware of many of the real debates that inspired this dialogue. In those real cases, disagreement with, or criticism of, public accusations of lying aimed at various professional organizations in effective altruism or AI risk has repeatedly been interpreted as a blanket refusal to honestly engage with the claims being made. Instead of a good-faith effort to resolve the disputes those public accusations provoke, the accusations are repeated and justified with long, complicated theories. Those theories don't appear to respond at all to the content of the disagreements with the accusations of lying and dishonesty, and that's why the repeated accusations, and the justifications for them, are poorly received.

These complicated theories don't provide what people actually want when public accusations of dishonesty or lying are made: what is typically called 'hard' (i.e., robust, empirical) evidence. If you made narrow claims of dishonesty in more modest language, based only on the best evidence you have, and were willing to defend the claims on that basis, instead of making broad claims of dishonesty in ambiguous language based on complicated theories, they would be received better. That doesn't mean the theories of how dishonesty functions in communities shouldn't be written as explorations of social epistemology. It's just that they don't come across as the most compelling evidence for substantiating public accusations of dishonesty.

For me it's never been so complicated as to require decision theory. The problem is as simple as basic claims being inflated into much larger, exaggerated or hyperbolic ones. Those claims also assume readers, presumably a general audience in the effective altruism or rationality communities, have prior knowledge of things they may not be familiar with. Readers can only parse the claims by working through a series of long, dense blog posts that don't really emphasize the thing these communities should be most concerned about.

Sometimes the claim is that GiveWell is being dishonest, and sometimes it's something like: because of this, the entire effective altruism movement has been totally compromised and is also incorrigibly dishonest. There is disagreement, sometimes with how the numbers were used in the counterpoint to GiveWell, and sometimes with the hyperbolic claims that appear intended to smear more people than whoever at GiveWell, or elsewhere in the EA community, is actually responsible. It appears as though people like you or Ben don't sort through, parse, and work through these different disagreements or criticisms. It appears as though you just take them at face value as confirmation that the rest of the EA community doesn't want to hear the truth, and that people worship GiveWell at the expense of any honesty, or something.

It's also been my experience that, with these discussions of complicated subjects that read as very truncated to anyone unfamiliar with them, the instruction is simply to go read some much larger body of writing or theory to understand why and how people are deceiving themselves, each other, and the public in the ways you're claiming. This is said as if it's completely reasonable to make it the responsibility of everyone with criticisms or disagreements to go read tons of other content while you are calling people liars, instead of you finding a different way to say what you're trying to say.

I'm not even saying you shouldn't publicly accuse people of being liars if you really think they're lying. If you believe GiveWell or other actors in effective altruism have failed to change their public messaging after being correctly shown, by their own standards, to be wrong, then just say that. It's not necessary to claim that the entire effective altruism community is therefore also dishonest. That's especially true of members of the EA community who disagree with you not because they dishonestly rejected the facts they were confronted with, but because they disputed the claims being made and their interlocutor refused to engage with, or deflected, all kinds of disagreements.

I'm sure lots of responses to criticisms of EA have been needlessly hostile. Yet reacting, and writing strings of posts, as though the whole body of responses were uniformly garbage isn't an accurate picture of the responses you and Ben have received. Again, if you want to write long essays about what people's reactions to public accusations of dishonesty imply for social epistemology, that's fine. It would just suit most people better if that were done entirely separately from the accusations themselves. If you're publicly accusing some people of being dishonest, accuse those and only those people, very specifically. Stop tarring so many other people with such a broad brush.

I haven't read your recent article accusing some actors in AI alignment of being liars. This dialogue seems to be both about that and a response to other examples; I'm mostly going off those other examples. If you want to say someone is being dishonest, just say that. Substantiate it with the closest thing you have to hard or empirical evidence that some kind of dishonesty is going on. It won't work with an idiosyncratic theory of how what someone is saying meets a technical definition of dishonesty that defies common sense. I'm very critical of a lot of things that happen in effective altruism myself. It's just that the way you and Ben have gone about it is so poorly executed, and backfires so much, that I don't think there's any chance of you resolving the problems you're trying to resolve with your typical approaches.

So, I've given up on keeping up with the articles you write criticizing things happening in effective altruism, at least on a regular basis. Sometimes others nudge me to look at them, and I might get around to them eventually. Honestly, though, it's at the point where the pattern I've learned to follow is not to be open-minded that the criticisms being made of effective altruism are worth taking seriously.

The problem I have isn't with the problems being pointed out, or with different organizations being criticized for their alleged mistakes. It's that the presentation of the problem, and of the criticism being made, is often so convoluted I can't understand it, and that's before I can figure out whether I agree. I'm generally more open-minded than most people in effective altruism about taking seriously criticisms of the community or related organizations. Yet I've learned to suspend that for the criticisms you and Ben make, for the reasons I gave, because it's just not worth the time and effort.

How Much Do Different Users Really Care About Upvotes?

BTW, it might be worth separating out the case where controversial topics are being discussed vs boring everyday stuff. If you say something on a controversial topic, you are likely to get downvotes regardless of your position. "strong, consistent, vocal support" for a position which is controversial in society at large typically only happens if the forum has become an echo chamber, in my observation.

On a society-wide scale, "boring everyday stuff" is uncontroversial by definition. Conversely, articles that have a high total number of votes, but a close-to-even upvote:downvote ratio, are by definition controversial to at least several people. If wrong-headed takes on boring everyday stuff aren't heavily downvoted, and are "controversial" to the point that half or more of the readers supported someone spreading supposedly universally recognizable nonsense, that's a serious problem.

Also, on the EA Forum and LW at least, "controversial topics" vs. "boring everyday stuff" is a false dichotomy. These fora host all kinds of "weird" stuff by societal standards. Some popular positions on the EA Forum and LW are also controversial in society at large, but that's normal for EA and LW. Going by societal standards doesn't explain which positions are or aren't controversial on the EA Forum or LW, or why. There are heated disagreements in EA, or on LW, where most people outside those fora don't care about either side of the debate. Of the examples I have in mind, some articles were on topics controversial in society at large, and some were only controversial in a more limited sense on the EA Forum or LW.

How Much Do Different Users Really Care About Upvotes?

You make a good point I forgot to add: karma on an article or comment serves to provide information to other users, not just to the submitting user. That's something people should keep in mind.

How Much Do Different Users Really Care About Upvotes?

What bugs me is when people who ostensibly aspire to understand reality better let their sensitivity get in the way, and let their feelings colour the reality of how their ideas are being received. It seems to me this should be a basic debiasing skill that people would employ if they were as serious about being effective or rational thinkers as they claim to be. If there's anything bugging me that you're suspicious of, it's that.

Typically, I agree with an OP who is upset about the low quality of negative comments, but I disagree with how upset they get about it. The things they say as a result are often inaccurate. For example, on the basis of a few low-quality negative comments on a post that's otherwise decently upvoted, people will say that a negative reception is typical of LW or the EA Forum. They may not be satisfied with the reception an article received, but that's a different claim than that the reception was extremely negative.

I don't agree with how upset people are getting, though I do think they're typically correct that the quality of some responses to their posts is disappointingly low. I wasn't looking for a solution to a problem. I was asking an open-ended question, seeking answers that would explain behaviour on others' part that doesn't fully make sense to me. Some of the other answers I've gotten are just people speaking from their own experience, like G Gordon, and that's fine by me too.

How Can Rationalists Join Other Communities Interested in Truth-Seeking?

Some, but not all, academics also seek truth in terms of their own beliefs about the world and their own processes (including hidden ones) for selecting the best model for any given decision. From a Hansonian perspective, that's at least what scientists and philosophers tell themselves. Yet from a Hansonian perspective, that's what everyone tells themselves about their ability to seek truth, especially if a lot of their ego is bound up in 'truth-seeking', and that includes rationalists. So the Hansonian argument here appears to be perfectly symmetrical.

I don't have a survey on hand for what proportion of academics seek truth both in a theoretical sense and in the more pragmatic sense rationalists aspire to. Yet "academia", considered as a population, is much larger than the rationality community or a lot of other intellectual communities. So even if the relative proportion of academics who would count as a "truth-seeking community" in the eyes of rationalists is small, the absolute number of academics who would count as part of a "genuine truth-seeking community" in those same eyes is large enough to take seriously.

To be fair, the friends I have in mind who are more academically minded, and who are critical of the rationality community and LessWrong, are also critical of much of academia. For them it's more about aspiring to a greater, ever more critical intellectualism than it is about sticking to academic norms. Philosophy tends to be more like this than most other academic fields, because it has a tradition of being the most willing to criticize the epistemic practices of other fields. Again, this is a primary application of philosophy. There are different branches and specializations in philosophy, like the philosophies of physics, biology, economics, art (i.e., aesthetics), psychology, politics, morality (i.e., ethics), and more.

The practice of philosophy at its most elementary level is a practice of 'going meta', which is an art many rationalists seek to master. So I think truth-seekers in philosophy, and in academia more broadly, are the ones rationalists should seek to interact with more, even if finding academics like that is hard. Of course, the easiest way for rationalists to find such academics is to look to the academics already in the rationality community (there are plenty), and ask them whether they know other people or communities they enjoy interacting with for reasons similar to why they enjoy interacting with rationalists.

There is more I could say about how learning from philosophy, academia, and other communities in a more charitable way could benefit the rationality community. Those points are really only applicable if you're part of an in-person/'irl' local rationalist community, or if you're intellectually and emotionally open to criticisms of, and recommendations for improving, the culture of the rationality community. If one or both of those conditions apply to you, I can go on.

How Can Rationalists Join Other Communities Interested in Truth-Seeking?

One thing about this comment that really sticks out to me is that I know several people who think LessWrong and/or the rationality community aren't that great at truth-seeking. There are a lot of specific domains where rationalists aren't reported to be particularly good at it. Presumably that could be excused by the fact that rationalists are generalists, but I still know people who think the rationality community is generally bad at truth-seeking.

Those people tend to hail from philosophy. To be fair, 'philosophy', as a community, is one of the only other communities I can think of that is interested in truth-seeking in as generalized a way as the rationality community. You can ask the mods about it, but they've got some thoughts on how 'LessWrong' is a project strongly tied to, but distinct from, the 'rationality community'. I'd associate LessWrong more with truth-seeking than 'the rationality community', since if you ask a lot of rationalists, truth-seeking isn't nearly all of what the community is about these days, and it isn't even a primary draw for a lot of people.

Anyway, most philosophers don't think LessWrong is very good at seeking truth much of the time either. Again, to be fair, philosophers think lots of different kinds of people, including all kinds of scientists, aren't nearly as good at truth-seeking as they make themselves out to be. That kind of thing comes with the territory of philosophy, but I digress.

The thing about 'philosophy' as a human community is that, unlike the rationality community that originated from LessWrong, it is so blended into the rest of the culture that 'philosophers' don't congregate outside of academia the way 'rationalists' do. 'Scientists' seem to do that more than philosophers, but not more than rationalists. Yet among people who want to surround themselves with a whole community of like-minded others, not all of them would want to join academia to get that. Even for rationalists who have worked in academia, truth-seeking there is more a part of the profession than something woven into the fabric of their lifestyles.

Of course, the whole point of this question was to figure out what truth-seeking communities are out there that rationalists would get along with. If rationalists aren't perceived as good enough at truth-seeking for others to want to get along with them, which often appears to be the case, I don't know what a rationalist should do about that. Then again, you didn't mention truth-seeking, and I mentioned there are plenty of things rationalists are interested in other than truth-seeking. So the solution I'd suggest is for rationalists to route around the problem, and see if they can't get along with people who share, and appreciate, something in common with rationalists other than truth-seeking.

How Can Rationalists Join Other Communities Interested in Truth-Seeking?

Hi Dayne. I'd like to join the Facebook group. How do I join?

Diversify Your Friendship Portfolio

The first thing I would look at to solve this problem is the cultural gaps between the rationality community and adjacent communities like effective altruism, startup culture, and transhumanism, especially based on how they interact in person.

Schism Begets Schism

One thing I find interesting, as an example that may be particularly pertinent to some rationalists, is how effective altruism has, in spite of everything else, been robust to the kinds of schisms you're talking about. In spite of all the differences between its factions, EA remains a grand coalition/alliance (of a sort). Each of the following subgroups of EA, usually built around a specific preferred cause, has at least a few hundred if not a couple thousand adherents in EA, and I expect each would be able to command millions of dollars in donations to its preferred charities each year:

  • high-impact/evidence-based global poverty alleviation (aka global health and development)
  • AI risk/alignment/safety
  • existential risk reduction (inclusive of AI risk as a distinct and primary subgroup, but focused on other potential x-risks as well)
  • effective animal advocacy (focused on farm animal welfare)
  • reducing wild animal suffering (focused on wild animal welfare)
  • rationality
  • transhumanism

While none of these subgroups is wholly within EA, it's very possible that a majority of each community's members also identify as part of the EA community. An easy explanation is that everyone is sticking around for the Open Phil bucks, or the chance of receiving Open Phil bucks in the future, since a cause area's increased prominence in EA is moderately-to-highly correlated with it receiving ≥ $10^7/year within a few years, where before each area's annual funding was probably ≤ $10^5. Yet there is no guarantee, and the barriers to accessing these resources have been such that I've seen multiple of these subgroups openly and seriously consider splitting with EA. Any or all of these causes might conceivably sustain and grow themselves to the point where one or more of them would do better by investing its own resources into growing outside of EA and securing its independence. However, as far as I can tell, no single, whole cause area of EA has ever 'exited' the community. As the movement has existed for ~10 years, that seems unlikely to be the case if there weren't other factors contributing to the cohesion of such otherwise disparate groups.

Schism Begets Schism

I was thinking about something similar the other day. I was wondering whether, from a historical perspective, it would be valid to view not just specific sects, but all the Abrahamic religions, as 'schisms' from the original Judaism. One caveat is that religious studies scholars and historians may see the transformation of one sect into an unambiguously distinct religion as more of an 'evolution', like speciation in biology, than a 'schism' as we typically think of them in human societies.
