Other intellectual communities often become specialized in analyzing arguments only of a very specific type, and because AGI-risk arguments aren't of that type, their members can't easily engage with those arguments. For example:
...if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data. So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out.
So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.' The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety. It's a bad practice in virtually any theory of risk communication.
-- Tyler Cowen, Risks and Impact of Artificial Intelligence
is there a canonical source for "the argument for AGI ruin" somewhere, preferably laid out as an explicit argument with premises and a conclusion?
-- David Chalmers, Twitter
Is work already being done to reformulate AI-risk arguments for these communities?
IMO, Andrew Ng is the most important name that could have been there but isn't. Virtually everything I know about machine learning I learned from him, and I think there are many others for whom that is true.
Consider the following rhetorical question:
Ethical vegans are annoyed when people suggest their rhetoric hints at violence against factory farms and farmers. But even if ethical vegans don't advocate violence, it does seem like violence is the logical conclusion of their worldview - so why is it taboo?
Do we expect the answer to this to be any different for vegans than for AI-risk worriers?
Does that mean the current administration is finally taking AGI risk seriously or does that mean they aren't taking it seriously?
I noticed that Meta (Facebook) isn't mentioned as a participant. Is that because they weren't asked, or because they were asked but declined?
...there is hardly any mention about memorization on either LessWrong or EA Forum.
I'm curious how you came to believe this. IIRC, I first learned about spaced repetition from these forums over a decade ago and hovering over the Memory and Mnemonics and Spaced Repetition tags on this very post shows 13 and 67 other posts on those topics, respectively. In addition, searching for "Anki" specifically is currently returning ~800+ comments.
FWIW, if my kids were freshmen at a top college, I would advise them to continue schooling, but switch to CS and take every available AI-related course, if they hadn't already done so.
When I worked for a police department a decade ago, we used Zebra, not Zulu, for Z, and our phonetic alphabet started with Adam, Baker, Charles, etc.
Thanks, that's getting pretty close to what I'm asking for. Since posting the above, I've also found Katja Grace's Argument for AI x-risk from competent malign agents and Joseph Carlsmith's Is Power-Seeking AI an Existential Risk?, both of which seem like the kind of thing you could point an analytic philosopher at and ask them which premise they deny.
Any idea if something similar is being done to cater to economists (or other social scientists)?