Alfred

most posts will be taken from my facebook or some other website and posted here because I think the ideas need proliferation.

you can reach me at fb.com/a.macdonald.iv

I occasionally use twitter, but most of us shouldn't.

Comments

But the genre also comes with a lot of strong beliefs that do not replicate. (Talking for 10 minutes with someone who reads Taleb's tweets regularly makes me want to scream.)


By this criterion, absolutely no one should be using LessWrong as a vehicle for learning. The Malcolm Gladwell reader you proposed might have been a comparable misinformation vehicle, in, say, 2011, but as of 2022 LessWrong is by a chasmic margin worse about this. It's debatable whether the average LessWrong user even reads what they're talking about anymore.

I can name a real-life example: in a local discord of about 100 people, Aella argued that the MBTI is better understood holistically under the framework of Jungian psychology, and that looking at the validity of each subtest (e.g. "E/I", "N/S", "T/F", "J/P") is wrongly reductive. This is not just incorrect; it is the opposite of true, and it fundamentally misunderstands what psychometric validity even is. I wrote a fairly long correction of this, but I am not sure anyone bothered to read it; most people will take what community leaders say at face value, because the mission statement of the LessWrong ingroup is "people who are rational," and the thinking goes that someone who is rational would surely have taken care of this. (This was not at all the case.)
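For readers unfamiliar with what "looking at the validity of each subtest" means in practice, here is a minimal sketch of one standard per-dimension check, test-retest reliability, using made-up data. It is illustrative only and is not the correction referenced above; the dimension labels and simulated scores are assumptions for the example.

```python
# Illustrative sketch only: per-subtest (per-dimension) reliability analysis
# on simulated data. All numbers here are made up for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of respondents

# Hypothetical continuous scores on each MBTI dimension at two sessions.
dimensions = ["E/I", "N/S", "T/F", "J/P"]
time1 = rng.normal(size=(n, 4))
time2 = 0.7 * time1 + 0.3 * rng.normal(size=(n, 4))  # simulated retest with noise

# Test-retest reliability: correlate each dimension's scores across sessions.
for i, dim in enumerate(dimensions):
    r = np.corrcoef(time1[:, i], time2[:, i])[0, 1]
    print(f"{dim}: test-retest r = {r:.2f}")
```

Validity work proceeds the same way, dimension by dimension (e.g., correlating each scale with external criteria); that per-subtest analysis is exactly what the holistic framing dismisses as reductive.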

I don't think further examples will help, but they are abundant throughout this sphere; there is a reason I spent 30 minutes of that audio debunking the pseudoscientific and even quasi-mystical beliefs common to Alexander Kruel's sphere of influence.

A bias is an error in weighting, proportion, or emphasis. This differs from a fallacy, which is an error in reasoning specifically. To make up an example, an attentional bias would be a misapplication of attention (as in the famous invisible-gorilla experiment), but there would be no reasoning underlying the error per se. The ad hominem fallacy, by contrast, contains at least implicit reasoning about truth-valued claims.

Yes, it's possible that AI could be a concern for rationality. But AI is an object of rationality; in this sense it is like carbon emissions: there is room for applied rationality, absolutely, but it is not rationality itself. People who read about AI through this medium are not necessarily learning about rationality. They may be, but they also may not be. As such, the overfocus on AI is a massive departure from the original subject matter, much as it would be if LessWrong became overwhelmed with ways to reduce carbon emissions.

That aside, I actually don't disagree much at all with most of what you said.

The issue is that when these concerns have been applied to the foundation of a community concerned with the same things, they have been staggeringly wrongheaded and have produced the disparities between mission statements and practical realities that are more or less the basis of my objection. I am no stranger to criticizing intellectual communities; I have outright argued that we should expand the federal defunding criteria to include certain major universities, such as UC Berkeley itself. For all of the criticisms that have been leveled against academia (and I have been enough of a critic of these norms to appear in Tucker Carlson's book, "Ship of Fools," p. 130, as a Person Rebelling Against Academic Norms), I have never had a discussion as absurd as the one I had when questioning why MIRI should receive Effective Altruism funding. It was and still is one of the most bizarre and frankly concerning lines of reasoning I have ever encountered, especially when contrasted with EA leaders' positions on addressing homelessness or the drug war. The concept of LessWrong and much of EA is not objectionable on its face; what has resulted from it absolutely is.
