While I'm currently working as a Community-Based Counselor and Tea Slinger, I would generally describe myself as a modern-day renaissance man: I've slaughtered pigs on my family farm and become a vegan, done HVAC work and academic research, and been a member of both the Republican and Democratic clubs at my university. I've sought to experience much in life, and on the horizon I wish to deepen my experiences within EA, having gone from a fellow to a facilitator to a prospective employee looking for work in the space, while simultaneously doing personal cause prioritization work. I'll include below a list of some of my deep interests, and would be happy to connect over any of these areas, specifically as they may intersect with EA.
Philosophy, Psychology, Music (have deep interests in all genres but especially electronic and indie), Politics (especially American), Drugs (mostly psychs), Gaming (mostly League these days), Cooking (have been a head chef), Photography, Meditation (specifically mindfulness).
How you can help me: I'm in the process right now of deciding which cause area to focus my future work on (nuclear, mental health, EA community building, politics, animal welfare, AI, and criminal justice reform), so any compelling reasons to go (or not to go) into any of these would be really helpful at this point.
How I can help others: While I can't really offer any expertise in EA-related things, I have deep knowledge of Philosophy, Psychology, and Meditation, and can potentially help with questions generally related to these disciplines. I would say the best thing I can offer is a strong desire to dive deeper into EA, preferably with others who are also interested.
2. What is the Overton window here? Otherwise I think I probably agree, but one question is, once this non-x-risk campaign is underway, how do you keep it on track and prevent value drift? Or do you not see that as a pressing worry?
3. Cool, will have to check that out.
4. Completely agree, and just wonder what the best way to promote less distancing is.
Yeah, I suppose I'm just trying to put myself in the shoes of the FHI people who coordinated this, and I feel like many comments here are a bit more lacking in compassion than I'd like, especially the more half-baked negative takes. I also agree that we want to put attention into detail and timing, but there is also the world in which too much of this leads to nothing getting done, and it's highly plausible to me that this idea had already been around long enough for that to be the case here.
Thanks for responding though! Much appreciated :)
The LessWrong comments here are generally quite brutal, and I think I disagree, which I'll try to outline very briefly below. But I think it may be more fruitful here to ask some questions I had, to break down the possible subpoints of disagreement as to the goodness of this letter.
I expected some negative reaction because I know that Elon is generally looked down upon by the EAs that I know, with some solid backing to those claims when it comes to AI given that he cofounded OpenAI. But with the immediate press attention it's getting, in combination with some heavy-hitting signatures (including Elon Musk, Stuart Russell, Steve Wozniak (Co-founder, Apple), Andrew Yang, Jaan Tallinn (Co-founder, Skype, CSER, FLI), Max Tegmark (President, FLI), and Tristan Harris (from The Social Dilemma), among many others), I kind of can't see the overall impact of this letter being net negative. At worst it seems mistimed and marred by technical issues, but at best it seems like one of the better calls to action (or global moratoriums, as Greg Colbourn put it) that could have happened, given AI's current presence in the news and much of the world's psyche.
But I'm not super certain of anything, and generally came away with a lot of questions. Here are a few:
How exactly do you come to "up to and including acts of war"? His writing here was concise because it was for TIME, which meant he probably couldn't caveat things in the way that protects him against EAs/Rationalists picking apart his individual claims bit by bit. But from what I understand of Yudkowsky, he doesn't necessarily seem, in spirit, to support an act of war here, largely I think for reasons similar to those you mention below for individual violence: the negative effects of this action may outweigh the positive and thus make it somewhat ineffective.