Coding day in and out on LessWrong 2.0
Yeah, almost everyone we ban who has any real content on the site is warned first. It didn't feel necessary for curi, because he has already received so much feedback about his activity on the site over the years (from many users as well as mods), and I saw very little probability of things changing because of a warning.
This is the definition that I had in mind when I wrote the notice above, sorry for any confusion it might have caused.
Additionally, I think that while a ban is sometimes necessary (e.g. for harassment), a 2-year ban seems like quite a jump. I could think of a number of different sanctions, e.g. blocking someone from commenting in general; giving users the option to block someone from commenting; blocking someone from writing anything; limiting someone to their own shortform; all of these for some period of time.
I am not sure. I really don't like the world where someone is banned from commenting on other people's posts but can still make top-level posts, or is banned from making top-level posts but can still comment. Both of these end up in really weird equilibria where you sometimes can't reply to conversations you started, or respond to objections other people make to your arguments, and that just seems really bad.
I also don't really know what those things would have done. I don't think those things would have reduced the uncertainty of whether curi is a good fit for LessWrong super much, and feel like they could have just dragged things out into a long period of conflict that would have been more stressful for everyone.
The "blocking someone from writing anything" does feel like an option. Like, at least you can still vote and read. I do think that seems potentially like the better option, but I don't think we currently actually have the technical infrastructure to make that happen. I might consider building that for future occasions like this.
"I don't want others to update on this as being much evidence about whether it makes sense to have curi in their communities" seems a bit weird to me. "a propensity for long unproductive discussions, a history of threats against people who engage with him" and "I assign too high of a probability that old patterns will repeat themselves" seem like quite a judgement, so why would someone else not update on this?
The key thing I wanted to communicate is that it seems quite plausible to me that these patterns are the result of curi interfacing specifically with the LessWrong culture in unhealthy ways. I can imagine him interfacing with other cultures with much less bad results.
I also said "I don't want others to think this is much evidence", not "this is no evidence". Of course it is some evidence, but I think overall I would expect people to update a bit too much on this, and as I said, I wouldn't be very surprised to see curi participate well in other online communities.
Yep, after we are done with the import, we are going to redirect all the pages we imported. And then probably make all the remaining pages on the old wiki read-only, so we don't have to maintain a whole separate wiki system forever.
This sentence really makes no sense to me. The proof that it can have an incentive to allow itself to be switched off even if it isn't uncertain is trivial.
Just create a utility function that assigns intrinsic reward to shutting itself off, or create a payoff matrix that punishes it really hard if it doesn't turn itself off. In this context using this kind of technical language feels actively deceitful to me, since it's really obvious that the argument he is making in that chapter cannot actually be a proof.
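To make the triviality concrete, here is a toy sketch (my own illustration, not anything from Russell's book): an agent whose utility function intrinsically rewards the shutdown action will choose to shut itself off with zero uncertainty about its objective.

```python
# Toy example: a fully certain agent that prefers to shut itself off,
# because its utility function assigns intrinsic reward to doing so.

def utility(action: str) -> float:
    """Deterministic utility function that rewards only shutdown."""
    return 1.0 if action == "shutdown" else 0.0

actions = ["continue", "shutdown", "resist"]

# The agent simply maximizes utility; no uncertainty over objectives
# is needed for it to allow (indeed, choose) being switched off.
best_action = max(actions, key=utility)
print(best_action)  # -> shutdown
```

This is of course a degenerate agent, but it suffices to show that "an agent only has an incentive to allow shutdown if it is uncertain about its objective" cannot be a theorem without much stronger assumptions.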
In general, I... really don't understand Stuart Russell's thoughts on AI Alignment. The whole "uncertainty over utility functions" thing just doesn't really help with solving any part of the AI Alignment problem that I care about. I find myself really frustrated with the degree to which both this preface and Human Compatible repeatedly indicate that it is somehow a solution to the AI Alignment problem (not merely a helpful contribution). Both repeatedly say things that to me read like "if you make the AI uncertain about the objective in the right way, then the AI Alignment problem is solved", which just seems obviously wrong to me, since it doesn't even deal with inner alignment problems, and it also doesn't solve really any major outer alignment problems (but that requires a bit more writing).
The sailing ships one sounds fun. GWP as terrible metric also sounds interesting. The others also seem good, but those two seem marginally better.
Today we have banned two users, curi and Periergo, from LessWrong for two years each. The reasoning for the two is a bit entangled but overall almost completely separate, so let me go through them individually:
Periergo is an account that is pretty easily traceable to a person curi has been in conflict with for a long time, and who seems to have signed up with the primary purpose of attacking curi. I don't think there is anything fundamentally wrong with signing up to LessWrong to warn other users of the potentially bad behavior of an existing user on some other part of the internet, but I do think it should be done transparently.
It also appears to be the case that he has done a bunch of things that go beyond merely warning others (like mailbombing curi, i.e. signing him up for tons of email spam he never requested, and lots of sockpuppeting on forums that curi frequents) that seem better classified as harassment, and overall it seemed to me that this isn't the right place for Periergo.
Curi has been a user on LessWrong for a long time, and has made many posts and comments. He also has the dubious honor of being by far the most downvoted account in all of LessWrong history at -675 karma.
The biggest problem with his participation is that he has a history of pulling people into discussions that drag on for an incredibly long time without seeming particularly productive, while also having a history of pretty aggressively attacking people who stop responding to him. On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community but then stopped, in a way that is very hard to read as anything but a public attack. Its first sentence is "This is a list of ppl who had discussion contact with FI and then quit/evaded/lied/etc.", and in particular the framing of "quit/evaded/lied" sure sets the framing for the rest of the post as a kind of "wall of shame".
Those three things in combination (a propensity for long unproductive discussions, a history of threats against people who engage with him, and being by far the most downvoted account in LessWrong history) make me overall think it's better for curi to find other places as potential discussion venues.
I do really want to make clear that this is not a personal judgement of curi. While I do find the "List of Fallible Ideas Evaders" post pretty tasteless, and don't like discussing things with him particularly much, he seems well-intentioned, and it's quite plausible that he could be an amazing contributor to other online forums and communities. Many of the things he is building over on his blog seem pretty cool to me, and I don't want others to update on this as being much evidence about whether it makes sense to have curi in their communities.
I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from), but they still don't strike me as great contributions to the LessWrong canon, are all low-karma, and I assign too high of a probability that old patterns will repeat themselves (and also that his presence will generally make people averse to being around, because of those past patterns). He has also explicitly written a post in which he updates his LW commenting policy towards something less demanding, and I do think that was the right move, but I don't think it's enough to tip the scales on this issue.
More broadly, LessWrong has seen a pretty significant growth of new users in the past few months, mostly driven by interest in Coronavirus discussion and the discussion we hosted on GPT3. I continue to think that "Well-Kept Gardens Die By Pacifism", and that it is essential for us to be very careful with handling that growth, and to generally err on the side of curating our userbase pretty heavily and maintaining high standards. This means making difficult moderation decisions long before it is proven "beyond a reasonable doubt" that someone is not a net-positive contributor to the site.
In this case, I think it is definitely not proven beyond a reasonable doubt that curi is overall net-negative for the site, and banning him might well be a mistake, but I think the probabilities weigh heavily enough in favor of the net-negative, and the worst-case outcomes are bad enough, that on net I think this is the right choice.
Sorry for the delay! Here it is: https://www.facebook.com/bshlgrs/posts/10218388194790943
A recently released paper that seems kind of relevant: https://www.researchgate.net/publication/337275911_Taking_a_disagreeing_perspective_improves_the_accuracy_of_people%27s_quantitative_estimates