One point that I tend to believe is true, but that I don't see raised much:
The straightforward argument (machines that are smarter than us might try to take over) is intuitively clear to many people, but for whatever reason many have developed memetic antibodies against it (e.g. talk of the AI bubble, "AI risk is all sci-fi", "technological progress is good", "we just won't program it that way", etc.).
In my personal experience, people I talk to who have a relatively basic education and aren't terminally online are much more intuitively concerned about AI than either academics or people in tech, since they haven't absorbed as much of the bad discourse.
(The other big reason people don't take the issue seriously is that they aren't feeling the AGI, but there's been less of that recently.)
Sorry if that was weirdly obscure. I was asking because the principal reason I go out of my way to avoid rain is that I'm worried my phone would get wet and potentially die (and I've been somewhat sad about having to forgo the experience of braving the rain at points). But it's possible that this is not a big issue with current devices (and maybe never was)!
One general point I've heard in this regard is that Japan's debt is mostly held by large Japanese companies and so carries much less risk for the government.
Do you carry a smartphone with you on those occasions?
I would not go so far as to say that simply having proximity to OpenAI is problematic, as long as the instructor manages to convey that their decision to associate with a leading AI company is controversial (or at least the more general point that many reasonable people find what the AI companies do to be highly irresponsible). I somewhat trust that this is handled reasonably in this case.
Overall, I think the worry here is less that people who might be only somewhat safety-pilled teach courses like this, and more that people who are very safety-pilled don't get the opportunity to do so (due to selection effects in CS/ML academia).
I don't have fully formed thoughts on this, but I think there's a reasonable point to be made that if we both grant AIs moral patienthood/rights and go about creating them at will without thinking it through very well, then we create a moral catastrophe one way or another.
I tentatively disagree with OP that the conclusion is that we should just flat-out not grant AIs moral weight (although I think this is a sensible default to fall back on as a modus operandi), but it also seems optimistic to assert that, if we did so, this wouldn't have some kind of horrendous implications for where we're headed and what's currently happening (I'm not saying it would, just that I don't know either way).
I think this would benefit from having examples (maybe just pointing at the top-level post/belief that was unpleasantly attacked, without calling out specific responses).
I almost fully agree.
In a strict sense, the people saying that scientists losing their privilege of doing research is obviously not that big a deal compared to all the progress are right, and I think Togelius is wrong to double down on this.
However, it clearly won't be only the small fraction of people who are scientists who'll be affected, and since scientists are relatively privileged, taking what happens to them as indicative of how the general population will fare paints a misleading picture.
I'd venture an uninformed guess that in 95% or so of these cases the problem isn't "taking ideas seriously" but rather people deferring proper judgement due to some emotional or social effect.