Occupational Infohazards

How did you conclude from Nate Soares saying that the tools to create AGI likely already exist that he wanted people to believe he knew how to construct one?

Why were none of these examples mentioned in the original discussion thread and comment section from which many of the quoted passages come?

Frame Control

What string of posts about this behavior are you referring to?

The only remotely similar things I know of concern the management of Leverage Research (which doesn't seem related to rationalism at all beyond geographical proximity), and those only ever seem to have been discussed on LW in terms of criticism.

The only other is one semi-recent thread whose author inferred the coordinated malicious intent of MIRI and the existence of self-described demons on extremely shaky grounds of reasoning, none of which involve any "weird, abusive, and cultish behavior among some community leader rationalists".

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

People already implicitly consider your example acceptable: patients in vegetative states are held in conditions of isolation that would be considered torture if they were counterfactually conscious, and many people support being allowed to kill/euthanize such patients in cases like Terri Schiavo's.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

None of the arguments in this post seem to actually indict anything about MIRI or CFAR. The first claim, that CFAR/MIRI somehow motivated four suicides, provides no evidence that CFAR is unique in this regard or conducive to this kind of outcome. It is also a bizarre framing of events: stories about things like someone committing suicide out of suspicion over the post office's nefarious agenda generally aren't seen as an issue on the part of the postal service.

Additionally, the focus on Roko's Basilisk-esque "infohazards" as a part of MIRI/CFAR reduces the credibility of this point, since the original basilisk thought experiment was invented as a criticism of SIAI, and according to every logical decision theory the basilisk has no incentive to actually carry out any threats. The second part is even weaker: it essentially posits, without argument, that the formation of a conspiracy mindset would be a foreseeable hazard of one's coworkers disagreeing with them on something important for possibly malevolent reasons and of there being secrecy in a workplace. The point that someone other than CFAR calling the police on CFAR-opposed people who were doing something illegal to them was evidence of authoritarianism on the part of CFAR and the broader rationality community is, charitably speaking, a bizarre argument to stake a claim on.

The section on world-saving/scarcity narratives again provides no counterarguments or evidence against any of the reasons MIRI would be right to consider alignment especially important, or any explanation of how this sense of importance differs in aggregate from that of people who enjoy recycling or are seriously concerned about anthropogenic global warming.

The evidence presented for the existence of a scarcity narrative is similarly weak: it essentially amounts to a statement that the author imagined the people around them would incorrectly disagree about how good a philosopher someone was, plus the assertion, for no given reason, that the works of Kant (as opposed to, say, Dan Brown) are systematically overlooked and extremely important for AI alignment.

There are other issues I have with this post, such as the argument that an organization's members being motivated by mental models of their leadership is evidence of it being cult-like, which is a bad argument because it applies to almost every organization that exists, but I don't feel like writing more about this subject than I already have.

In Wikipedia — reading about Roko's basilisk causing "nervous breakdowns" ...

To my knowledge this claim seems to be almost entirely fabricated. The only text in the original thread vaguely reminiscent of it is a claim from Roko that "One might think that the possibility of CEV punishing people couldn't possibly be taken seriously enough by anyone to actually motivate them. But in fact one person at SIAI was severely worried by this, to the point of having terrible nightmares, though ve wishes to remain anonymous." Along with describing the experience of a grand total of one person, this refers only to anxiety resulting from an idea related to the premises of the thought experiment, not from the thought experiment itself.