I found the community in spring 2020 through HPMOR, which I came across while bored and reading stories online. When I learned that there were other people using such witchcraft as "not only using reasoning on math exercises, but also on issues in the real world", I was sold.
Crocker's Rules and Metahonesty are in effect (on me) at all times.
You can always message me and I will not be upset. No anxiety needed around "bugging" me.
If I say something and you think "Wow! He sounds like a moron who doesn't understand humans!", you've misunderstood and I was trying to make a more subtle point. Extend me charity and I'll find it low cost to extend it to you.
I don't know what to do about the general case where there's a good tool, properly integrating that tool makes you more effective overall, but learning to use it will likely lead to some mistakes, and it's hard to get enough good practice in to smooth those out.
I really like your response but want to highlight that I'm concerned about a different thing. I don't think this is a Valley of Bad Rationality per se, but more a tightrope walk. You made it to the other end, but a lot of people fall off before then and become paranoid about social interactions.
Because of this, I'm not worried about minimizing risk while getting people onboarded, but about mitigating risk that is perpetually ongoing.
I don't know how to solve this issue either FWIW.
I really like the concept and agree that a lot of Rationalist-community folks are walking around without it. I also think the concept is likely to be mishandled by a lot of people and to really make a mess of community dynamics, because the current presentation isn't paired with any helpful tips on how to use it.
I've seen this play out a couple times since the post's publication:
The concept of Thresholding is really useful: it lets you stop doing rules lawyering and start using common sense. However, the people who need to learn about Thresholding do not have social common sense, and can often apply the idea in a very rigid way, making things even more exploitable for bad actors.
I do not have a good enough view of the whole of the community to know whether this second-order effect is small enough that the first-order effect is a good purchase at this price, and I really really liked Duncan's essay, but I'm reluctant to promote the essay further for this reason.
Thanks for the reply. I'm going to dip out (after reading your reply, if you choose to leave one), but I will in my last reply suggest you revisit my initial and revised wording, which said that nothing "compels" or "requires" you to change in this manner, not that you are forbidden from doing so.
You are fighting against a position I do not hold, and so I cannot offer you the satisfaction of convincing me to abandon it. You made a claim which I interpreted as "You MUST follow this procedure". I replied that nothing "MUST" be done, and now you are, if I'm reading you correctly, explaining that you MAY. I agree! You MAY! You also MAY NOT, so I think MUST is overblown.
I have strong upvoted this because of how helpful I think it is for understanding the post (even for people who disagree with your methods, knowing what they are should be very helpful!).
I think those sound like awesome ways to facilitate high-quality conversation between many more people than otherwise. You need a way for beginners to get good, of course, but sometimes it's hard to build those skills without some help. It almost sounds like your goal (getting lots of people involved and sharing ideas) is intentionally opposed to a discussion that's solely among practiced debaters with extremely coherent arguments (and little room for disagreement, since "correct" thinkers shouldn't diverge by much, presumably). That seems OK to me, but I'm not certain you'd agree.
Apologies for the offense, but luckily (perhaps reciprocally to an error I made first), the sentence I set out to write differs greatly from the sentence you read. Perhaps a reword will help:
One is not required by facts to change one's (terminal-ish) values.
Sorry, I know that's messy in that there's a grammatical correction and a philosophical objection, but I think it would muddy the waters to leave either out of my reply.
I think this is mostly wrong in ways that make it hard to reform it. From the top:
It is experimentally verifiable that most of them are not very smart.
is a very silly thing to say! What did you mean by this: that most of the world can't cook broccoli? Learning to cook broccoli is genuinely hard, but because almost all of us know how to do it, we don't find it impressive. This sentence boils down to "Most people are not the smartest people", a statement that I'd argue is directionally true but largely oversells itself (if you know how to graft trees, write a database, and repair a washing machine, you're a very unusual person! Smugly declaring that the Common Rube is obviously dumber than you elides a lot of important information about the guy who fixed your car a few years back).
Continuing, we have
You have effectively been brought up in a church that indoctrinated you to think that all surviving hominins are "equal" and that those who disagree are evil and edgy.
an attempt to insulate this post from critique (after all, anyone who disagrees is just deluded!).
Then we have an is–ought confusion:
If you feel the value of an entity is increased by having a given level of intellectual ability, you're just, allowed to think that.
And we end with one last attempt to make sure any criticism is read as an attack instead of a correction:
Just don't say it out loud or the stupid apes will eat you. (Typical.)
In general, I think a LessWrong comment should at least attempt to approach its subject matter with a truth-seeking lens, not a conspiratorial bundle of shadowy assertions. It is the case that people have different skill levels at different tasks, and it's further the case that a lot of these different skill levels are correlated ("G factor"). These facts are not enough to compel you to change your valuation of people, and they are not something that must be darkly alluded to on this website.
smug intellectual superiority is really difficult to maintain when you think about all the moronic buffoonery that you have committed in your life
I'd charge that it's an error of reasoning to treat "Have I ever made a mistake?" and "Are my insights shallow or nonsensical?" as if they're the same question.
Beyond merely flattening two questions into one, your method suggests giving them both an indeterminate answer ("sometimes")!
Not in a sarcastic way, but because I'm ignorant (and almost everyone here will be) and would appreciate you drilling down:
We don't reward name or concept dropping, we don't make people feel bad if they don't express themselves cogently, and we encourage people who have a disagreement to talk it out later so that we can hear from more people who want to contribute something to the discussion.
This all sounds rather wishy-washy to me, but I suspect it's not the full story. Could you paint a picture explaining how your group is investigating things more deeply than just chatting with people at a party? (Again, I'm not presupposing the answer is that you're bad; I'm just trying to get you to add details at the top of the comments section.)
Wow, I've wondered this for a while and hadn't gotten around to even attempting to figure it out. Thanks for writing it up!
It's because a good model should fail to explain falsehoods. You can think of this in a mathy way as a model taking data and using it to narrow down which possible world you're in. If it doesn't narrow it down, it's rather uninformative!
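To put a rough Bayesian gloss on that "narrowing down" intuition (my own framing, not something the comment above spells out): how much an observation $E$ favors a model $M$ over its rivals is captured by the likelihood ratio

$$\frac{P(E \mid M)}{P(E \mid \neg M)}$$

If $M$ "explains" every possible observation equally well, that ratio is 1 no matter what you actually see, so the posterior odds equal the prior odds and the observation tells you nothing about which possible world you're in.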