I'm an admin of LessWrong. Here are a few things about me.
Randomly: if you ever want to talk to me about anything you like for an hour, I am happy to be paid $1k to do that.
I think most of the people involved like working with the smartest and most competent people alive today, on the hardest problems, in order to build a new general intelligence for the first time since the dawn of humanity, in exchange for massive amounts of money, prestige, fame, and power. This is what I refer to as 'glory'.
Perhaps a react for "I wish this idea/sentence/comment was a post" would improve things.
I felt confused at first when you said that this framing is leaning into polarization. I thought "I don't see any clear red-tribe / blue-tribe affiliations here."
Then I remembered that polarization doesn't mean tying this issue to existing big coalitions (a la Hanson's Policy Tug-O-War), but simply that it causes people to factionalize and creates a conflict and a divide between them.
It seems to me that Max has correctly pointed out a significant crux about policy preferences among people who care about AI existential risk, and it also seems worth polling people to find out who thinks what.
It does seem to me that the post is attempting to cause some factionalization here. I am interested in hearing whether this is a good or bad faction to exist (relative to other divides), rather than simply being told that division is costly (which it is). I'd like to see some argument about whether this is worth it / whether this faction is a real one.
Or perhaps you/others think it should ~never be actively pushed for in the way Max does in this post (or perhaps just not in this way in a place with high standards for discourse like LW).
That's right. One exception: sometimes I upvote posts/comments written to low standards in order to reward the discussion happening at all. As an example, I initially upvoted Gary Marcus's first LW post in order to welcome his participation in the dialogue, even though I think the post is very low quality for LW.
(150+ karma is high enough, and I've since removed the vote. Or there's some chance I'm misremembering and never upvoted it because it was already doing well, in which case this serves as a hypothetical that I endorse.)
The effect seems natural and hard to prevent. Basically, certain authors get reputations for being high (quality * writing), and then it makes more sense for people to read their posts because both the floor and the ceiling are higher in expectation. Then their worse posts get more readers (who vote) than posts of similar quality by another author, whose floor and ceiling are probably lower.
I'm not sure of the magnitude of the cost, or that one can realistically expect to ever prevent this effect. For instance, ~all Scott Alexander blogposts get more readership than the best posts by many other authors who haven't built a reputation and readership, and this just seems to be part of how the reading landscape works.
Of course, it can be frustrating as an author to sometimes see similar quality posts on LW get different karma. I think part of the answer here is to do more to celebrate the best posts by new authors. The main thing that comes to mind here is curation, where we celebrate and get more readership on the best posts. Perhaps I should also have a term here for "and this is a new author, so I want to bias toward curating them for the first time so that they're more invested in writing more good content".
I'm not really clear that I should be worried on the scale of decades? If we're doing a calculation of expected future years of a flourishing, technologically mature civilization, then slowing down for 1,000 years here in order to increase the chance of success by something like 1 percentage point is totally worth it in expectation.
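To make the expected-value arithmetic explicit (a rough sketch: the million-year figure for the length of a flourishing future is an assumption purely for illustration, and I'm treating the cost of the delay as at most 1,000 years of flourishing):

$$
\underbrace{0.01 \times 10^{6}\ \text{years}}_{\text{expected gain from +1pp chance of success}} = 10{,}000\ \text{years} \;\gg\; \underbrace{1{,}000\ \text{years}}_{\text{cost of the delay}}
$$

More generally, the trade comes out positive whenever the expected length of the flourishing future exceeds roughly 100,000 years.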
Given this, it seems plausible to me that one should spend 200 years trying to improve civilizational wisdom and decision-making rather than attempting to specifically unlock regulation on AI (of course the specifics here are cruxy).
There is a strong force in web forums to slide toward news and inside baseball; the primary goal here is to fight against that. It's a bad filter for new users if a lot of what they see on first visiting the LessWrong homepage is discussion of news, recent politics, and the epistemic standards of LessWrong. Many good users are not attracted by these topics, and for those who aren't put off, it sets a bad culture to make this the default topic of discussion.
(Forgive me if I'm explaining what's already known; I'm posting in case people hadn't heard this explanation before. We talked about it a lot when designing the frontpage distinction in 2017/8.)