It seems to me that there is some tension in the creed between (6), (9), and (11). On the one hand, we are supposed to affirm that "changes to one’s beliefs should generally also be probabilistic, rather than total", but on the other hand, we are using belief/lack of belief as a litmus test for inclusion in the group.
My prediction is that giving such population-level arguments when asked why they are by themselves is much less likely to result in being left alone (presumably the goal) than saying their parents said it's okay. So it would show lower levels of instrumental rationality, rather than demonstrate more agency.
There's nothing unjustified about appealing to your parents' authority. Parents are legally responsible for their children: they have literal (not epistemic) authority over them, although it's not absolute.
I think those are good lessons to learn from the episode, but it should be pointed out that Copernicus' model also required epicycles in order to achieve approximately the same predictive accuracy as the most widely used Ptolemaic systems. Sometimes later, Kepler-inspired, corrected versions of Copernicus' model are projected back into the past, making the history both less accurate and less interesting, but better able to fit a simplistic morality tale.
...I (mostly) trust them to just not do things like build an AI that acts like an invasive species...
What is the basis of this trust? Anecdotal impressions of a few that you know personally in the space, opinion polling data, something else?
I don't have a solution to this, but I have a question that might rule in or out an important class of solutions. The US spent about $75 billion in assistance to Ukraine. If both the US and EU pitched in an amount of similar size, that's $150 billion. There are about 2 million people in Gaza.
If you split the money evenly between each person and the country that was taking them in, how much of the population could you relocate? That is, Egypt gets $37,500 for allowing Yusuf in and Yusuf gets $37,500 for emigrating, Morocco gets $37,500 for allowing Fatima in and Fatima receives $37,500 for emigrating, etc... How many such pairings would that facilitate?
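For concreteness, here is the back-of-envelope arithmetic as a short sketch, assuming the $150 billion total and 2 million population figures above (both rough assumptions, not precise data):

```python
# Rough relocation-funding arithmetic, using the comment's assumed figures.
total_funding = 150e9    # $75B from the US + $75B from the EU (assumed)
population = 2_000_000   # approximate population of Gaza (assumed)

per_person = total_funding / population   # dollars available per person
emigrant_share = per_person / 2           # half to the person emigrating
host_share = per_person / 2               # half to the host country

print(f"Per person: ${per_person:,.0f}")
print(f"Split: ${emigrant_share:,.0f} to the emigrant, "
      f"${host_share:,.0f} to the host country")
# At this level of funding, every person in the assumed population of
# 2,000,000 could be paired with a host country.
```

On these assumptions the split comes out to $37,500 each, and the funding would cover the entire population rather than only a fraction of it; the real question is how many host countries would accept such pairings at that price.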
Thanks, that's getting pretty close to what I'm asking for. Since posting the above, I've also found Katja Grace's Argument for AI x-risk from competent malign agents and Joseph Carlsmith's Is Power-Seeking AI an Existential Risk?, both of which seem like the kind of thing you could point an analytic philosopher at and ask them which premise they deny.
Any idea if something similar is being done to cater to economists (or other social scientists)?
Other intellectual communities often become specialized in analyzing arguments only of a very specific type, and because AGI-risk arguments aren't of that type, their members can't easily engage with those arguments. For example:
...if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data. So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out.

So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.' The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety. It's a bad practice in virtually any theory of risk communication.

-- Tyler Cowen, Risks and Impact of Artificial Intelligence
is there a canonical source for "the argument for AGI ruin" somewhere, preferably laid out as an explicit argument with premises and a conclusion?

-- David Chalmers, Twitter
Is work already being done to reformulate AI-risk arguments for these communities?
IMO, Andrew Ng is the most important name that could have been there but isn't. Virtually everything I know about machine learning I learned from him, and I think there are many others for whom that is true.
Consider the following rhetorical question:
Ethical vegans are annoyed when people suggest their rhetoric hints at violence against factory farms and farmers. But even if ethical vegans don't advocate violence, it does seem like violence is the logical conclusion of their worldview - so why is it taboo?
Do we expect the answer to this to be any different for vegans than for AI-risk worriers?