Hear that sound beneath your feet? It's the high ground falling out from under you.
I'm offended by the censorship as well, and was voting a number of your comments up previously. But as long as discussions of the censorship itself aren't being censored, peaceful advocacy for a policy change and skirting the censors are the best strategies. And when the discussions of censorship start being censored, the best strategy is for everyone to leave the site. This increasing-risk nonsense is insanely disproportionate. Traditionally, the way to get back at censors is to spread the censored material, not blow up 2 1/2 World Trade Centers.
At this point I must conclude either that you have no grasp whatsoever of the math involved here or that you're completely insane. Assuming your claim is correct (which I sincerely doubt), you just killed ~6,790 people (on average) because someone deleted a blog post. If you believe that this is a commensurate and appropriate response, I'm not sure what to say to you.
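(Back-of-envelope, assuming the claimed increase is the 0.0001% figure quoted elsewhere in this thread and a world population of roughly 6.79 billion: 10^-6 × 6.79 × 10^9 ≈ 6,790 expected deaths.)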
Honestly, if you believe that attempting to increase the chance that mankind is destroyed is a good response to anything and are willing to brag about it in public, I think something is very clearly wrong.
Maybe they are of the belief that censorship on LessWrong is severely detrimental to the singularity. Then such a response might be justified.
In that case they should present their evidence and/or a strong argument for this, not attempt to blackmail moderators.
It seems like the sanest response would be to find some way of preventing waitingforgodel from viewing this site.
No, because then you have to think of what a troll would do, i.e. whatever would upset people for great lulz. The correct answer is to ignore future silly persons, and hence the present silly person.
(Note that this does not require waitingforgodel to be trolling - I see no reason not to assume complete sincerity. This is about the example set by the reaction/response.)
I'm curious.
I am in the following epistemic situation: a) I missed, and thus don't know, BANNED TOPIC; b) I do, however, understand enough of the context to grasp why it was banned (basing this confidence on the upvotes to my old comment here).
Out of the members here who share roughly this position, am I the only one who - having strong evidence that EY is a better decision theorist than me, and understanding enough of previous LW discussions to realise that yes, information can hurt you in certain circumstances - is PLEASED that the topic was censored?
I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.
Of course, maybe I'm miscalibrated. It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses.
(David Gerard, I'd be grateful if you could let me know if the above trips any cultishness flags.)
I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.
I award you +1 sanity point.
(I note that the Langford Basilisk in question is the only information that I know and wish I did not know. People acquainted with me and my attitude towards secrecy and not-knowing-things in general may make all appropriate inferences about how unpleasant I must find knowing the information, given that I state I would prefer not to.)
Why wish for:
I wish I wasn't as intelligent as I am, wish I was more normal mentally
and had less innate ability for math?
Why not just wish for being better at socializing/communicating?
Not really :-) If you keep awareness of the cult attractor and can think of how thinking these things about an idea might trip you up, that's not a flawless defence but will help your defences against the dark arts.
What inspired you to the phrase "invincible mind fortresses"? I like it. Everyone thinks they live in one, that they're far too intelligent/knowledgeable/rational/Bayesian/aware of their biases/expert on cults/etc to fall into cultishness. They are of course wrong, but try telling them that. (It's like being smart enough to be quite aware of at least some of your own blithering stupidities.)
(I read the forbidden idea. It appears I'm dumb and ignorant enough to have thought it was just really silly, and this reaction appears common. This is why some people find the entire incident ridiculous. I admit my opinion could be wrong, and I don't actually find it interesting enough to have remembered the details.)
I would be interested in hearing about your evidence for the existence of people who are "most certainly invincible" to cultishness, as I'm not sure how I would go about testing that.
It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses
In general, I treat attempts to focus my attention on any particular highly-unlikely-but-really-bad scenario as an invitation to inappropriately privilege the hypothesis, probably a motivated one, and I discount accordingly. So on balance, yeah, you can count me as "playing along" the way you mean it here.
I don't think my mind-fortress is invincible, and I am perfectly capable of being hurt by stuff on the Internet. I'm also perfectly capable of being hurt by a moving car, and yet I drive to work every morning.
And yes, if the dangerousness of the Dangerous Idea seems more relevant to you in this case than the politics of the community, I think you're miscalibrated. The odds of a power struggle in a community in which you have transient membership affecting your life negatively are very small, but I'd be astonished if they were anything short of astronomically higher than the odds of the Dangerous Idea itself affecting your life at all.
I also regret contact with the basilisk, but would not say it's the only information I wish I didn't know, nor am I entirely sure it was a good idea to censor it.
When it was originally posted I did not take it seriously; it only triggered "severe mental trauma", as others are calling it, when I later read someone referring to it being censored, felt some curiosity about it, and updated on the fact that it was being taken that seriously by others here.
I do not think the idea holds water, and I feel I owe much of my severe mental trauma to an ongoing anxiety and depression stemming from a host of ordinary factors, isolation chief among them. I would STRONGLY advise everyone in this community to take their mental health more seriously, not so much in terms of basilisks as in terms of being human beings.
This community is, as it stands, ill-equipped to charge forth valiantly into the unknown. It is neurotic at best.
I would also like to apologize for whatever extent I was a player in the early formation of the cauldron of ideas which spawned the basilisk and I'm sure will spawn other basilisks in due time. I participated with a fairly callous abandon in the SL4 threads which pr...
I saw the original post. I had trouble taking the problem that seriously in the general case. In particular, there seemed to be two obvious problems that arose from the post in question. One was a direct decision theoretic basilisk, the other was a closely associated problem that was empirically causing basilisk-like results to some people who knew about the problem in question. I consider the first problem (the obvious decision-theoretic basilisk) to be extremely unlikely. But since then I've talked to at least one person (not Eliezer) who knows a lot more about the idea who has asserted that there are more subtle aspects of the basilisk which could make it or related basilisks more likely. I don't know if that person has better understanding of decision theory than I do, but he's certainly thought about these issues a lot more than I do so it did move my estimate that there was a real threat here upwards. But even given that, I still consider the problems to be unlikely. I'm much more concerned about the pseudo-basilisk which empirically has struck some people. The pseudo-basilisk itself might justify the censorship. Overall, I'm unconvinced.
I agree.
Like Alicorn, I find this is the only thing I know that I wish I did not know.
On the plus side, it made me realise my utility function is not monotonic in knowledge.
(EDIT2: Looking at the discussion here, I am now reminded that it is not just potentially toxic due to decision theoretic oddities, but actually already known to be severely psychologically toxic to at least some people. This, of course, changes things significantly, and I am retracting my "being bugged" by the removal.)
The thing that's been bugging me about this whole issue is even given that a certain piece of information MAY (with really tiny probability) be highly (for lack of a better word), toxic... should we as humans really be in the habit of "this seems like dangerous idea, don't think about it"?
I can't help but think this must violate something analogous (though not identical) to an ethical injunction. I.e., the chances of a human encountering an inherently toxic idea are so small vs. the cost of smothering one's own curiosity/allowing censorship not due to trollishness or even revelation of technical details that could be used to do a really dangerous thing, but simply because it is judged dangerous to even think about...
I get why this was perhaps a very particular special circumstance, but am still of several minds about this one. "Don't think about deliciously f...
Hi Eliezer. It took me way too long to figure out the right question to ask about this mess, but here it is: do you regret knowing about the basilisk?
I regret that I work in a job which will, at some future point, require me to be one of maybe 2 or 3 people who have to think about this matter in order to confirm whether any damage has probably been done and maximize the chances of repairing the damage after the fact. No one who is not directly working on the exact code of a foomgoing AI has any legitimate reason to think about this, and from my perspective the thoughts involved are not even that interesting or complicated.
The existence of this class of basilisks was obvious to me in 2004-2005 or thereabouts. At the time I did not believe that anyone could possibly be smart enough to see the possibility of such a basilisk and stupid enough to talk about it publicly, or at all for that matter. As a result of this affair I have updated in the direction of "people are genuinely that stupid and that incapable of shutting up".
This is not a difficult research problem on which I require assistance. This is other people being stupid and me getting stuck cleaning up the mess, in what will be a fairly straightforward fashion if it can be done at all.
Oh goody, lesswrong finally has its own supervillain. Is any community really complete without one?
WHAT?! We need a much better supervillain. Ideally a sympathetic well-intentioned one so we can have some black and grey morality flying back and forth. Someone like.... Yvain.
Dammit. Someone beat me to the punch on taking up the 'Clippy' role and it looks like someone beat me to it on roleplaying the supervillain too. I have to work on my reaction time if I'm going to get in on any of the fun stuff!
Next time, Gadget! Next time!
I can't really imagine a resolution at this point that doesn't signal vulnerability to trolls in the future.
edit: How about a script that prefaces waitingforgodel's posts with "meanwhile, at the Hall of Doom:"
I agree with Eliezer's comment asking you to leave. Even if LW had heavy censorship, I'd still read it (and hopefully participate) because of the great signal to noise ratio, which is what you're hurting with all your posts and comments - they don't add anything to my understanding.
You're participating in a flamewar here, though it's a credit to you, EY, and LessWrong that nobody has yet posted in all caps. Tempers are running high all around; I recommend that one or all parties involved stop fighting before someone gets hurt. (read: is banned, has their reputation irrevocably damaged, or otherwise has their ability to argue compromised).
0.0001% is a huge amount of risk, enough that if one person in six thousand did what you just did, humanity would be doomed to certain extinction. Even murder doesn't have such a huge effect. I think you overestimate the impact of your actions. Sending a few emails to a blogger has an impact I would estimate to be 10^(-15) or less.
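(Roughly: one person in six thousand out of ~6.8 billion is on the order of a million people, and a million increments of 10^-6 sum to about 1, i.e. near-certain doom, assuming the individual risk increases simply add up.)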
Certainly making this post has little purpose beyond inciting an argument. All you'll do is polarise LessWrong and turn us against each other.
Reversed stupidity not being intelligence, I'll point out that I "side with" waitingforgodel to the extent of disapproving of the censorship that occurred yesterday (though I haven't complained about the original censorship from July).
Needless to say, of course, I also think this post is silly.
I haven't followed the whole thing, because I couldn't. How can I decide whether he is right or not? I don't know what was censored, and why. Reading the thread on academic careers just had some big holes where, presumably, things were deleted, and I couldn't reconstruct why.
Other forums have some kind of policy, where they explicitly say what kind of post will be censored. I'm not against censoring stuff, but knowing what is worthy of being censored and what isn't would be nice.
With the knowledge I currently have about this whole thing, I still feel slightly sympathetic to WaitingForGoedel's cause. The "Free Speech is important" heuristic that Vladimir Nesov mentioned in the other thread is pretty useful, in my opinion, and without knowing the reason for posts being deleted, I can't decide for myself whether it made sense or not.
I intend to stick around, anyway, because I don't feel very strongly about this issue, so I won't frustrate anybody, I hope. But an answer would still be nice.
I do know what was censored and why, and I think Eliezer was wrong to delete the material in question.
That's a separate issue from whether waitingforgodel's method of expressing his (correct) disagreement with the censorship is sane or reasonable -- of course it isn't.
It's true that the basilisk in question is a wild fantasy even by Singularitarian standards, and the fact that people took it seriously enough to get upset about it could well be considered cause for alarm.
But that's not why people are telling waitingforgodel they'd rather he left. People are telling him that because he took action he sincerely (perhaps wrongly, but sincerely) believed would reduce humanity's chances of survival. That's a lot crazier than believing in basilisks!
And the pity is, it's not true he couldn't effect change. The right thing to do in a scenario like this is propose reasonable compromises (like the idea of rot13'ing posts on topics people find upsetting) and if those fail then, with the moral high ground under your feet, find or create an alternative site for discussion of the banned topics. Not only would that be morally better than this nutty blackmail scheme, it would also be more effective.
This is a great example of the general rule that if you think you need to do something crazy or evil for the greater good, you are probably wrong -- keep looking for a better solution instead.
All four examples involve threats - one party threatening to punish another unless the other party obeys some rule - but the last threat (threatening to increase existential risk contingent on acts of forum moderation) sticks out as different from the others in several ways:
And 5. Ridiculousness. "He threatened what? ... And they took it seriously?"
(Posted as an example of a way this is notably different to the typical example. Note that this is also my reaction, but I might well be wrong.)
As someone who only now found out about this whole nonsense, and who believes that the maximum existential risk increase you can cause on a whim has a lot more decimal zeros in front of it, I'd like to thank you for providing a quarter-hour of genuine entertainment in the form of quality Internet drama.
With regards to Eliezer deleting what he regards as Langford Basilisks, I don't think he should do it *, but I also think their regular deletion does not cause perceptible harm to the LessWrong site as I care about it. Now, if he were to censor people who oppose his positions on various pet issues, even only if they brought particularly stupid reasons, that would be different (I could see him eventually degenerating into "a post saying that uFAI isn't dangerous increases existential risk"), but as far as I know that's currently not the case and he has stated so outright.
* (I read Roko's banned post, and while I wouldn't confidently state that I suffered zero damage, I am confident I suffered less damage than I did half an hour ago by eating some store-bought salmon without previously doing extensive research on its provenance.)
Enough with the hypothetical, this one's real: The moderator of one of your favorite online forums declares that if you post things he feels are dangerous to read, he will censor them. He may or may not tell you when he does this. If you post such things repeatedly, you will be banned.
Does this count as blackmail? Does this count as terrorism? Should we not comply with him to prevent similar future abuses of power?
Have you considered that not everyone feels as strongly as you do about moderators deleting posts in online communities?
To those of us who think that moderators deleting stupid or dangerous content can be an essential ingredient to maintaining the level of quality, your post comes off as silly as threatening to kill a kitten unless LessWrong.com is made W3C compliant by 2011.
(That isn't to say moderation can't have problems - after all, lesswrong's voting system is a mechanism to improve on it. But it's a far cry from "can be improved" to "must be punished".)
I'm curious, would you object if similar censorship occurred of instructions on how to make a nuclear weapon? What if someone posted code that they thought would likely lead to a very unFriendly AI if it were run? What if there were some close to nonsense phrase in English that causes permanent mental damage to people who read it?
I'm incidentally curious if you are familiar with the notion that there's a distinction between censorship by governments as opposed to private organizations. In general, most people who are against censorship agree that private organizations can decide what content they do and do not allow. Thus for example, you probably don't object to Less Wrong moderators removing spam. And we've had a few people posting who simply damaged the signal to noise ratio (like the fellow who claimed that he had ancient Egyptian nanotechnology that had been stolen by the rapper Jay-Z). Is there any difference between those cases and the case you are objecting to? As far as I can tell, the primary difference is that the probability of very bad things happening if the comments are around is much higher in the case you object to. It seems that that's causing some sort of cognitive bias where you regard everything related to those remarks (including censorship of those remarks) as more serious issues than you might otherwise claim.
Incidentally, as a matter of instrumental rationality, using a title that complains about the karma system is likely making people less likely to take your remarks seriously.
Thank you. It's fantastic.
I went to school at my family's Kingdom of Oyotunji Royal Academy where we learn about the ancient science of astral physics.
This was even more hilarious after I found out that Oyotunji is in South Carolina.
Is there anything I, as an individual you have chosen to hold hostage to Eliezer's compliance via your attempts at increasing existential risk, can do to placate you? Or are you simply notifying us that resistance is futile, we will be put at risk until you get the moderation policy you want?
Would the comment have been deleted if the author had ROT13'd it?
Would the anti-censors have been incensed by the moderator ROT13-ing the content instead of deleting it?
Upvoted - this is an eminently sensible suggestion on how to deal with comments that some people would rather not view because they find the topic upsetting.
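For concreteness, here is a minimal sketch of what ROT13'ing a flagged comment instead of deleting it could look like, assuming a moderation hook written in Python; the function name and warning text are made up for illustration.

```python
import codecs


def rot13_comment(body: str, warning: str = "[rot13'd at the moderator's discretion]") -> str:
    """Return the comment body obscured with ROT13, behind a visible warning line."""
    return warning + "\n" + codecs.encode(body, "rot_13")


# Example: the obscured text is trivially reversible for anyone who opts in to reading it.
obscured = rot13_comment("This paragraph discusses the forbidden topic.")
print(obscured)
print(codecs.decode(obscured.splitlines()[1], "rot_13"))
```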
waitingforgodel: see, there usually are at least somewhat reasonable ways to deal with this sort of conflict. If you'd reacted to "I can't think of a reasonable way yet" with "I'll keep thinking about it" instead of "I'm going to go off and do something completely loony like pretending to destroy the world" you might have been the one to make this suggestion, or something even better, and you wouldn't be shooting for a record number of (deserved) downvotes.
Don't you have better things to do than fight a turf war over a blog? Start your own if you think your rules make more sense -- the code is mostly open source.
Not commenting on the content (which others have done satisfactorily), the formatting of this post is horrible. There are more blank lines than lines of text. I would downvote for that alone.
You actually lost me before you even got to the main point, since record companies have good reasons to try to protect their intellectual property and governments have good reasons to institute seat belt laws. By the time I read the angry part I was already in disagreement; everything after that only made it worse.
Laws are not comparable to blackmail because they have process behind them. If one lone individual told me that if I didn't wear my seatbelt, he'd bust my kneecaps, then that would be blackmail. Might even qualify as terrorism, since he is trying to constrain my actions by threat of illegitimate force.
A lone individual making a threat against the main moderator of a site if he uses his discretion in a certain way is indeed blackmail/terrorism, particularly when the threat is over a thing substantially outside the purview of the site, and the act threatene...
Of course actual religious believers who accept that doctrine don't usually bite the bullet
I know a number of believers in various "homegrown" faiths who conclude essentially this, actually. That is, they assert that being aware of the spiritual organizing principle of existence without acting on it leaves one worse off than being ignorant of it, and they assert that consequently they refuse to share their knowledge of that principle.
Outside view question for anyone with relevant expertise:
It seems that lesswrong has some features of an early cult (belief that the rest of the world is totally wrong about a wide range of subjects, a messiah figure, a secretive inner circle, a mission to save the world, etc.). Are ridiculous challenges to a group's leadership, met with a similarly ridiculous response from it, a typical feature of a group's gradual transformation into a fully developed cult?
My intuitive guess is yes, but I'm no expert in cults. Does anyone have relevant knowledge?
This is outside view questio...
I know a thing or two (expert on Scientology, knowledgeable about lesser nasty memetic infections). In my opinion as someone who knows a thing or two about the subject, LW really isn't in danger or the source of danger. It has plenty of weird bits, which set off people's "this person appears to be suffering a damaging memetic infection" alarms ("has Bob joined a cult?"), but it's really not off on crack.
SIAI, I can't comment on. I'd hope enough people there (preferably every single one) are expressly mindful of Every Cause Wants To Be A Cult and of the dangers of small closed groups with confidential knowledge and the aim to achieve something big pulling members toward the cult attractor.
I was chatting with ciphergoth about this last night, while he worked at chipping away my disinterest in signing up for cryonics. I'm actually excessively cautious about new ideas and extremely conservative about changing my mind. I think I've turned myself into Mad Eye Moody when it comes to infectious memes. (At least in paranoia; I'm not bragging about my defences.) On the other hand, this doesn't feel like it's actually hampered my life. On the other other hand, I would not of course know.
SIAI, I can't comment on. I'd hope enough people there (preferably every single one) are expressly mindful of Every Cause Wants To Be A Cult and of the dangers of small closed groups with confidential knowledge and the aim to achieve something big pulling members toward the cult attractor.
I don't have extensive personal experience with SIAI (spent two weekends at their Visiting Fellows house, attended two meetups there, and talked to plenty of SIAI-affiliated people), but the following have been my impressions:
People there are generally expected to have read most of the Sequences... which could be a point for cultishness in some sense, but at least they've all read the Death Spirals & Cult Attractor sequence. :P
There's a whole lot of disagreement there. They don't consider that a good thing, of course, but any attempts to resolve disagreement are done by debating, looking at evidence, etc., not by adjusting toward any kind of "party line". I don't know of any beliefs that people there are required or expected to profess (other than basic things like taking seriously the ideas of technological singularity, existential risk, FAI, etc., not because it's an officia
I don't have a quick comment-length intro to how cults work. Every Cause Wants To Be A Cult will give you some idea.
Humans have a natural tendency to form close-knit ingroups. This can turn into the cult attractor. If the group starts going a bit weird, evaporative cooling makes it weirder. edit: jimrandomh nailed it: it's isolation from outside social calibration that lets a group go weird.
Predatory infectious memes are mostly not constructed, they evolve. Hence the cult attractor.
Scientology was actually constructed - Hubbard had a keen understanding of human psychology (and no moral compass and no concern as to the difference between truth and falsity, but anyway) and stitched it together entirely from existing components. He started with Dianetics and then he bolted more stuff onto it as he went.
But talking about Scientology is actually not helpful for the question you're asking, because Scientology is the Godwin example of bad infectious memes - it's so bad (one of the most damaging, in terms of how long it takes ex-members to recover - I couldn't quickly find the cite) that it makes lesser nasty cults look really quite benign by comparison. It is literally as if your only exa...
Is there mostly a single way in which groups gradually turn into cults, or does it vary a lot?
Yes, there is. One of the key features of cults is that they make their members sever all social ties to people outside the cult, so that they lose the safeguard of friends and family who can see what's happening and pull them out if necessary. Sci*****ogy was doing that from the very beginning, and Less Wrong has never done anything like that.
Not all, just enough. Weakening their mental ties so they get their social calibration from the small group is the key point. But that's just detail, you've nailed the biggie. Good one.
and Less Wrong has never done anything like that.
SIAI staff will have learnt to think in ways that are hard to calibrate against the outside world (singularitarian ideas, home-brewed decision theories). Also, they're working on a project they think is really important. Also, they have information they can't tell everyone (e.g. things they consider decision-theoretic basilisks). So there's a few untoward forces there. As I said, hope they all have their wits about them.
/me makes mental note to reread piles of stuff on Scientology. I wonder who would be a good consulting expert, i.e. more than me.
I'd like to see some solid evidence for or against the claim that typical developing cults make their members cut off communication with their former friends and families entirely.
I don't think they necessarily make them - all that's needed is for the person to loosen the ties in their head, and strengthen them to the group.
An example is terrorist cells, which are small groups with a goal who have gone weird together. They may not cut themselves off from their families, but their bad idea grips them enough that their social calibrator goes group-focused. I suspect this is part of why people who decompartmentalise toxic waste go funny. (I haven't worked out precisely how to get from the first to the second.)
There are small Christian churches that also go cultish in the same way. Note that in this case, the religious ideas are apparently mainstream - but there's enough weird stuff in the Bible to justify all manner of strangeness.
At some stage cohesion of the group becomes very important, possibly more important than the supposed point of the group. (I'm not sure how to measure that.)
I need to ask some people about this. Unfortunately, the real experts on cult thinking include sever...
(a) I have lower karma than you
You get karma mostly for contributing more, not for higher quality. Posts and comments both have positive expected karma.
Also you get more karma for more alignment with groupthink. I even recall how, in the early days of lesswrong, I stated based on very solid outside view evidence (from every single subreddit I've been to) that karma and reception would come to correlate not only with quality but also with alignment with groupthink - that on a reddit-style karma system, downvoting-as-disagreement / upvoting-as-agreement becomes very significant at some point. People disagreed, but the outside view prevailed.
This unfortunately means that one needs to put a lot more effort into writing something that disagrees with groupthink than something that agrees with it - and such trivial inconveniences matter.
(b) There are some LW posters with whom I feel strong affinity
I don't think I feel particular "affinity" with anyone here, but I find many posters highly enjoyable to read and/or having a lot of insightful ideas.
I mostly write when I disagree with someone, so for a change (I don't hate everyone all the time, honestly :-p) here are two among the best writi...
Here's my view of current lesswrong situation.
On compartmentalization failure and related issues there are two schools present on less wrong:
Right now there doesn't seem to be any hope of reaching Aumann agreement between these points of view, and at least some members of both camps view many of other camp's ideas with contempt. The primary reason seems to be that the kind of arguments that people on one end of the spectrum find convincing people on the other end see as total nonsense, and with full reciprocity.
Of course there's plenty of issues on which both views agree as well - like religion, evolution, akrasia, proper approach to statistics, and various biases (I think outside viewers seem to demand more evidence that these are also a problem outside laboratory than inside viewers, but it's not a huge disagreement). And many other disa...
The Wikipedia articles on Scientology are pretty good, by the way. (If I say so myself. I started WikiProject Scientology :-) Mostly started by critics but with lots of input from Scientologists, and the Neutral Point Of View turns out to be a fantastically effective way of writing about the stuff - before Wikipedia, there were CoS sites which were friendly and pleasant but rather glaringly incomplete in important ways, and critics' sites which were highly informative but frequently so bitter as to be all but unreadable.
(Despite the key rule of NPOV - write for your opponent - I doubt the CoS is a fan of WP's Scientology articles. Ah well!)
Moved post to Discussion section. Note that user's karma has now dropped below what's necessary to submit to the main site.
At karma 0 I can't reply to each of you one at a time (rate limited - 10 min per post), so here are my replies in a single large comment:
I would feel differently about nuke designs. As I said in the "why" links, I believe that EY has a bug when it comes to tail risks. This is an attempt to fix that bug.
Basically non-nuke censorship isn't necessary when you use a reddit engine... and Roko's post isn't a nuke.
Yes, though you'd have to say more.
Incredible, thanks for the link
Incredible. Where were y...
In this case my estimate is a 5% chance that EY wants to spread the censored material, and used censoring for publicity. Therefore spreading the censored material is questionable as a tactic.
Be careful to keep your eye on the ball. This isn't some zero-sum contest of wills, where if EY gets what he wants that's bad. The ball is human welfare, or should be.
Wow, that's even more impressive than the claim made by some Christian theologians that part of the enjoyment in heaven is getting to watch the damned be tormented. If any AI thinks anything even close to this then we have failed Friendliness even more than if we made a simple object maximizer.
Really? That seems odd. It would be pretty silly for it to affect those who don't know about it. That would just be pointless.
Instead of trying to convince right wingers to ban FAI, how about trying to convince Peter Thiel to defund SIAI proportional to the number of comments in a certain period of time.
Advantages:
Better [incentive to Eliezer]/[increase in existential risk as estimated by waitingforgodel] ratio
Reversible if an equitable agreement is reached.
Smaller risk increase, as the problem warrants.