Here's a thing: https://beff.substack.com/p/notes-on-eacc-principles-and-tenets
"If you are interested in more content on e/acc, check out recent posts by Bayeslord and the original post by Swarthy, or follow any of us on Twitter as we often host Twitter spaces discussing these ideas."
Neither the original post nor @bayeslord seems to exist anymore (I found the same was true of another account named something like "Bitalik Vuterin"). Seems fishy. I suspect, at the least, that someone in the "e/acc" camp is trying to look like more people than they actually are. I'm not sure what the base rate for this sort of thing is, though.
Much of the reason my priors say the e/acc thing is organic is just my gestalt impression from being on Twitter while it was happening. Unfortunately, that's not a legible source of evidence for people who aren't me. I'll tell you what I do remember, though:
I'm not necessarily advocating for direct engagement! If engaging with this stuff won't decrease AI risk, then I don't want to engage; if it will, then I do. Some of these people and orgs are influential (Venkatesh Rao, HuggingFace), so unfortunately their opinions do matter. As nice as it would feel to ignore the haters, public opinion is in fact a strategic asset when it comes to actually implementing AI safety proposals at major labs.
I would expect you to be able to find these tweets, and hundreds more like them, no matter how good alignment's optics were. A lot of people use Twitter, and I could probably find similar tweets about Mother Teresa or Princess Diana. As such, showing this doesn't actually tell us all that much, TBH.
Practice rationalism on this. What predictions do you make, and what predictions conditional on whatever actions you're advocating? It feels a little like you're getting sucked into a status game by caring very much about who's saying what, rather than steelmanning the critiques and deciding whether members of the EA community (disclosure: I am not one; I'm not part of sneerclub either, but I do see the cult-like aspects of the bay-area subculture) should do anything differently. That is: should you behave differently, separately from whether you should participate in the signaling and public conversations around this kind of thing for status purposes?
Note also that the criticism is not purely wrong. "Revealed preferences say a lot" is a pretty compelling point.
In no particular order, here's a collection of Twitter screenshots of people attacking AI safety. A lot of them are poorly reasoned, and some are simply ad hominem. Still, these types of tweets are influential and widely circulated among AI capabilities researchers.
[Screenshots 1–20: images of the tweets described above, not reproduced here.]
Conclusions
I originally intended to end this post with a call to action, but we mustn't propose solutions immediately. In lieu of a specific proposal, I'll ask: can the optics of AI safety be improved?