Having read more AI alarmist literature recently, as someone who strongly disagrees with that position, I think I've come up with a decent classification of alarmists based on the fallacies they commit.
There's the kind of alarmist who understands how machine learning works but commits the fallacy of assuming that data-gathering is easy and that intelligence alone is very valuable. The caricature of this position is something along the lines of "PAC learning basically proves that with enough computational resources AGI will take over the universe"....