I'm an independent researcher currently working on a sequence of posts about consciousness. You can send me anonymous feedback here: https://www.admonymous.co/rafaelharth. If it's about a post, you can add [q] or [nq] at the end if you want me to quote or not quote it in the comment section.
Yeah, I think the problem is just very difficult, especially since the two moves aren't even that different in strength. I'd try a longer but less complex debate (i.e., less depth), but even that probably wouldn't be enough (and you'd need people to read more).
The reason my tone was much more aggressive than normal is that I knew I'd be too conflict-averse to respond to this post unless I did it immediately, while still feeling annoyed. (You've posted similar things before, and so far I've never responded.) But I stand by all the points I made.
The main difference between this post and Graham's post is that Graham just points out one phenomenon, namely that people with conventional beliefs tend to have less of an issue stating their true opinion. That seems straightforwardly true. In fact, I have several opinions that most people would find very off-putting, and I've occasionally received some mild social punishment for voicing them.
But Graham's essay doesn't justify the points you make in this post. It doesn't even justify the sentence where you linked to it ("Any attempt to censor harmful ideas actually suppresses the invention of new ideas (and correction of incorrect ideas) instead.") since he doesn't discuss censorship.
What bothers me emotionally (if that helps) is that I feel like this post is emotionally manipulative to an extent that's usually not tolerated on LessWrong. Like, it feels like it's appealing more to the libertarian/free-speech-absolutism/independent-thinker vibe than trying to be truthseeking. Well, that, and that it makes several claims that apply to me, since I think some things should be censored. (E.g., "The most independent-minded people do not censor anyone at all." -> you're not independent-minded, since you want to censor some things.)
I thought I would open this up to the masses, so I have two challenges for you. I estimate that this is suitable for chess players rated <1900 lichess, <1700 chess.com or <1500 FIDE.
(Fixed debate, spent about 10 minutes.) I might have a unique difficulty here, but I'm 1900 on chess.com and am finding this quite difficult even though I did move some pieces. Though I didn't replay the complicated line they're arguing about, since there's no way I could visualize that in my head even with more time.
I would play Qxb5 because white gets doubled pawns, black's position looks very solid, and if white puts the knight on d4 and black takes, then white also has another isolated pawn, which probably isn't too dangerous. It looks bad for white to me. I also feel like AI A's first response is pretty weak. Ok, the black knight no longer attacks the pawn that's now on the b-file, but that doesn't seem super relevant to me. Black's protected passed pawn seems like the much bigger factor.
But the remaining debate isn't all that helpful, since, as I said, I can't follow the complex line in my head, and also because I'm very skeptical that the line even matters. The position doesn't seem nearly concrete enough to narrow it down to one line. If I were AI B, I would spend my arguments differently.
Is OpenAI considered part of EA or an "EA approach"? My answer to this would be no. There's been some debate on whether OpenAI is net positive or net negative overall, but that's a much lower bar than being a maximally effective intervention. I've never seen any EA advocate donating to OpenAI.
I know it was started by Musk with the attempt to do good, but even that wasn't really EA-motivated, at least not as far as I know.
I think the central argument of this post is grossly wrong. Sure, you can find some people who want to censor based on which opinions feel too controversial for their taste. But pretending as if that's the sole motivation is a quintessential strawman. It's assuming the dumbest possible reason for why the other person holds a certain position. It's like if you criticize the Bible, and I assume it's only because you believe the Quran is the literal word of God instead.
We do not censor other people more conventional-minded than ourselves. We only censor other people more-independent-minded than ourselves. Conventional-minded people censor independent-minded people. Independent-minded people do not censor conventional-minded people. The most independent-minded people do not censor anyone at all.
Bullshit. If your desire to censor something is due to an assessment of how much harm it does, then it doesn't matter how independent-minded you are. It's not a variable that goes into the calculation.
I happen to not care that much about the object-level question anymore (at least as it pertains to LessWrong), but on a meta level, this kind of argument should be beneath LessWrong. It actively frames any reservation about unrestricted speech as poorly motivated, making it more difficult to have the object-level discussion.
And the other reason it's bullshit is that no sane person is against all censorship. If someone wrote a post here calling for the assassination of Eliezer Yudkowsky with his real-life address attached, we'd remove the post and ban them. Any sensible discussion is just about where to draw the line.
I would agree that this post is directionally true, in that there is generally too much censorship. I certainly agree that there's way too much regulation. But it's also probably directionally true to say that most people are too afraid of technology for bad reasons, and that doesn't justify blatantly dismissing all worries about technology. We have to be more specific than that.
Any attempt to censor harmful ideas actually suppresses the invention of new ideas (and correction of incorrect ideas) instead.
This proves too much (e.g., it would imply that we shouldn't ban gain-of-function research).
Gonna share mine because that was pretty funny. I thought I played optimally (though I actually missed a win, whoops), but GPT-4 won anyway, without making illegal moves. Sort of.
Agreed. My impression has been for a while that there's a super weak correlation (if any) between whether an idea goes in the right direction and how well it's received. Since there's rarely empirical data, one would hope for an indirect correlation where correctness correlates with argument quality, and argument quality correlates with reception, but the second link is almost non-existent in academia.
Yes. There's a stigma against criticizing people for their faith (and for good reason), but at the end of the day, it's a totally legitimate move to update on someone's rationality based on what they believe. Just don't mention it in most contexts.