Re: taboos in EA, I think it would be good if somebody who downvoted this comment said why.
Open tolerance of the people involved with the status quo (and fear of alienating / making enemies of powerful groups) is a core part of current EA culture! Steve's top comment on this post is an example of enforcing/reiterating this norm.
It's an unwritten rule that seems very strongly enforced yet never explicitly acknowledged, much less discussed. People were shadow-blacklisted by CEA from the Covid documentary they funded for being too disrespectful in their speech about how governments have handled Covid. That fits what I'd consider a taboo: something any socially savvy person would pick up on and internalize if they were around it.
Maybe this norm of open tolerance is downstream of the implications of truly considering some people to be your adversaries (which you might do if you thought delaying AI development by even an hour was a considerable moral victory, as the OP seems to). Doing so does expose you to danger. I would point out that lc's post analogizes their relationship with AI researchers to Israel's relationship with Iran, and when I think of Israel's resistance to Iran, nonviolence is not the first thing that comes to mind.
So the first step to good outreach is not treating AI capabilities researchers as the enemy. We need to view them as our future allies, and gently win them over to our side by the force of good arguments that meet them where they're at, in a spirit of pedagogy and truth-seeking.
To this effect I have advocated that we should call it "Different Altruism" instead of "Effective Altruism", because by leading with the idea that a movement involves doing altruism better than the status quo, we are going to trigger and alienate people who are part of the status quo whom we could instead have won over by being friendly and gentle.
I often imagine a world where we had ended up with a less aggressive and impolite name attached to our arguments. I mean, think about how virality works: making every single AI researcher even slightly more resistant to engaging with your movement (by priming them to be defensive) is going to have a massive impact on the probability of ever reaching critical mass.
Thanks a lot for doing this and posting about your experience. I definitely think that nonviolent resistance is a weirdly neglected approach; "mainstream" EA certainly seems against it. I am glad you are getting results, and I'm not even that surprised.
You may be interested in the discussion here; I made a similar post after meeting yet another AI capabilities researcher at FTX's EA Fellowship (she was a guest, not a fellow): https://forum.effectivealtruism.org/posts/qjsWZJWcvj3ug5Xja/agrippa-s-shortform?commentId=SP7AQahEpy2PBr4XS
I'm interested in working on dying with dignity.
I actually feel calmer after reading this, thanks. It's nice to be frank.
For all the handwringing in the comments about whether somebody might find this post demotivating, I wonder if there are any such people. It seems to me that reframing a task from something that is not in your control (saving the world) to something that is (dying with personal dignity) is exactly the kind of reframing that people find much more motivating.
Related post: https://www.lesswrong.com/posts/ybQdaN3RGvC685DZX/the-emh-is-false-specific-strong-evidence
One relevant thing here is the baseline P(beats market) given [rationalist / smart] & [tries to beat the market]. In my own anecdotal dataset of about 15 people, the probability here is about 100%, and the amount of wealth among these people is also really high. Obvious selection effects are, of course, obvious. But EMH is just a heuristic, and you probably have access to stronger evidence.
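As a minimal sketch of the kind of update I mean (all numbers hypothetical, and the selection effects are ignored, so treat the result as an upper bound on how far you should move): start from an EMH-flavored Beta(1, 19) prior (mean 5%) that a given smart, motivated person beats the market, then condition on 15 successes out of 15 tries.

```python
# Hypothetical numbers throughout; a Beta-Binomial sketch, not a real estimate.

# EMH-flavored prior: a smart person who tries has ~5% chance of beating
# the market. Beta(1, 19) has mean 1 / (1 + 19) = 0.05.
prior_a, prior_b = 1, 19

# Anecdotal sample: 15 out of 15 people who tried succeeded.
# (Ignores selection effects, so this overstates the update.)
successes, failures = 15, 0

# Conjugate Beta-Binomial update: posterior is Beta(a + s, b + f).
post_a = prior_a + successes
post_b = prior_b + failures

posterior_mean = post_a / (post_a + post_b)
print(f"posterior mean P(beats market): {posterior_mean:.2f}")  # -> 0.46
```

Even with most of the evidence discounted for selection, the point stands that a low EMH-style prior is not robust to much personal data.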
I found this post persuasive, and only noticed after the fact that I wasn't clear on exactly what it had persuaded me of.
I want to affirm that this seems to me like something that should alarm you. To me, a big part of rationality is being resilient to this phenomenon, and a big part of successful rationality norms is banning the tools that produce it.
Great, thanks.
Recently I learned that Pixel phones actually contain TPUs. This is a good indicator of how much deep learning is being used in consumer devices (in particular, I think the camera uses it).