Creating better infrastructure for controversial discourse

by Rudi C · 2 min read · 16th Jun 2020 · 11 comments


Tags: Conversation (topic), Disagreement, Public Discourse, Site Meta, Personal Blog

Currently there are three active forks of LessWrong: itself, the Alignment Forum, and the EA Forum. Would adding a new fork, one more focused on good discourse about controversial/taboo topics, be a good idea?

I am Iranian, and have had my fair share of experience with bad epistemic conditions. Disengagement from politics is always a retreat, though it can be strategic at times. As time goes on, the politics of a meme-based elite will increasingly infringe on your rights and freedoms. Aside from this, as your community's status grows, your epistemic freedoms get a lot worse: the community itself will be increasingly infiltrated by the aggressive memetic structure, as the elite control education, the media, and the distribution of status/resources.

The fact (IMHO) that the current object-level neo-religion(s) is also vastly superior in its dogma is bad news for us. The better a religion is, the harder it is to fight. I myself believe a bit in the neo-religion.

The rationalist/EA community has not invested much in creating a medium for controversial discourse, so there should be quite a bit of low-hanging fruit there. Different incentive mechanisms can be tried to see what works. I think anonymity will help a lot. (As a datapoint, Quora had anonymous posting in the past; I don't know if they still do.) Perhaps hiding all karma scores might help as well (an anonymized score of the user's karma on LessWrong could be shown to distinguish "the old guard"). Without a karma system, outsiders won't have as much incentive to engage, and posts that are, e.g., just ripping on the outgroup won't be as attractive without the scores.

A big issue is keeping the controversial forum from tainting LessWrong's brand. I don't have good ideas for this; the problem is that we need to somehow connect the new forum to LessWrong's community.

Another direction for tackling the controversy problem is introducing jargon. Jargon repels newcomers, attracts academically minded people, gives protection against witch-hunters, and raises the community's status. (Its shortcomings should be obvious.) One way it protects against witch-hunters is by signaling that we are politically passive and merely "philosophizing." Another reason (possibly what gives the signal its credibility) is that we automatically lose most of the population by going down this route: we have introduced artificial inferential distance, and thus made ourselves exclusive.

I was going to post this as a reply to Wei Dai here, but I thought posting it as a top-level post might be wiser. This is less a post about my ideas and more a reminder that investing in defensive epistemic infrastructure is a worthwhile endeavor in which this community has a comparative advantage. This is true even if things don't get worse, and even if the bad epistemic conditions have historical precedent. If things do get worse, the importance of these defenses obviously shoots up. I am not well acquainted with the priorities the EA community is pursuing, but creating better epistemic conditions seems like a good cause to me. People's needs aren't just material/hedonic: they also need freedom of expression and thought, and they deserve the infrastructure to deal critically with invasive memes. There is a power imbalance between people and memes whose wrongness can be compared to child brides. I often feel that most people who have not experienced being enthralled by aggressive memetic structures will underestimate their sheer grossness: the betrayal you feel after losing cherished beliefs that you held for years, beliefs that colored so much of your perception and identity, that shaped your social life so much. It's like you had lived as a slave to the meme.
