
None of the points on this list about which things are bad, good, effective, etc. come with corroborating evidence that the problems they address are generally real or that the solutions they describe are effective.

Moreover, as someone who knows next to nothing about the subject matter, all of the supposed positives on this list seem to paint an actively negative picture of the Twitter postrat community: a group of people with a special need to avoid acknowledging reality, who only ever communicate in political dog whistles, and who complain about things on the internet with poor standards for determining whether their criticisms actually apply, while systematically avoiding ever submitting their beliefs to any test of rigor.


None of these seem to reflect on EY, unless you would expect him to be able to predict two things: first, that a journalist would write an incoherent, almost maximally inaccurate description of an event in which he criticized an idea for being implausible and then banned its discussion for being off-topic and pointlessly disruptive to something like two people; and second, that his clearly written rationale for not releasing the transcripts of the AI box experiments would be interpreted as a recruiting tool for the only "cult" that requires no contributions from its members, doesn't promise them salvation or supernatural powers, has no formal hierarchy, and is based on a central part of economics.


How did you conclude, from Nate Soares saying that the tools to create AGI likely already exist, that he wanted people to believe he knew how to construct one?

Why were none of these examples mentioned in the original discussion thread and comment section from which a lot of the quoted sections come?


What string of posts about behavior are you referring to?

The only remotely similar things I know of concern the management of Leverage Research (which doesn't seem related to rationalism at all beyond geographical proximity), and those only ever seem to have been discussed on LW in terms of criticism.

The only other is one semi-recent thread where the author inferred coordinated malicious intent on the part of MIRI and the existence of self-described demons from extremely shaky grounds of reasoning, none of which involve any "weird, abusive, and cultish behavior among some community leader rationalists".


People already implicitly consider your example acceptable, given that patients in persistent vegetative states are held in conditions of isolation that would be considered torture if they were counterfactually conscious, and many people support being allowed to kill or euthanize such patients, as in the case of Terri Schiavo.


None of the arguments in this post actually indict anything about MIRI or CFAR. The first claim, that CFAR/MIRI somehow motivated four suicides, provides no evidence that CFAR is unique in this regard or especially conducive to this kind of outcome, and it seems like a bizarre framing of events: stories about someone committing suicide out of suspicion of the post office's nefarious agenda generally aren't seen as an indictment of the postal service.

Additionally, the focus on Roko's-Basilisk-style "info hazards" as a part of MIRI/CFAR reduces the credibility of this point, seeing as the original basilisk thought experiment was invented as a criticism of SIAI, and according to every LDT the basilisk has no incentive to actually carry out any threats. The second part is even weaker: it essentially posits, without argument, that the formation of a conspiracy mindset is a foreseeable hazard of one's coworkers disagreeing with them on something important for possibly malevolent reasons, combined with there being secrecy in a workplace. The claim that someone other than CFAR calling the police on CFAR-opposed people who were doing something illegal to them is evidence of authoritarianism on the part of CFAR and the broader rationality community is, charitably speaking, a bizarre argument to stake a claim on.

The section on world-saving/scarcity narratives again provides no counterarguments or evidence against MIRI's reasons for considering alignment especially important, nor anything showing that this sense of importance is, in aggregate, especially different from that of people who enjoy recycling or are seriously concerned about anthropogenic global warming.

The evidence presented for the existence of a scarcity narrative is similarly weak: it essentially amounts to the author imagining that the people around them would incorrectly disagree about how good a philosopher someone was, plus the unsupported assertion that the works of Kant, as opposed to, say, Dan Brown, are systematically overlooked and extremely important for AI alignment.

There are other issues I have with this post, such as the claim that an organization's members being motivated by mental models of its leadership is evidence of cultishness, which is a bad argument because it applies to almost every organization that exists, but I don't feel like writing more about this subject than I already have.


To my knowledge this claim seems to be almost entirely fabricated. The only text in the original thread even vaguely reminiscent of it is a claim from Roko: "One might think that the possibility of CEV punishing people couldn’t possibly be taken seriously enough by anyone to actually motivate them. But in fact one person at SIAI was severely worried by this, to the point of having terrible nightmares, though ve wishes to remain anonymous." This, besides describing the experience of a grand total of one person, refers only to anxiety resulting from an idea related to the premises of the thought experiment, not the thought experiment itself.