George's Shortform

Having read more AI-alarmist literature recently, as someone who strongly disagrees with the alarmist position, I think I've come up with a decent classification of alarmists based on the fallacies they commit.

There's the kind of alarmist who understands how machine learning works but commits the fallacy of assuming that data-gathering is easy and that intelligence is very valuable. The caricature of this position is something along the lines of "PAC learning basically proves that with enough computational resources AGI will take over the universe"…
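For context on why that caricature is a non sequitur: what PAC learning actually delivers is a sample-complexity guarantee, i.e. how much *data* suffices to learn a hypothesis class to a given accuracy, which says nothing about computational resources translating into real-world capability. A minimal sketch of the standard bound for a finite hypothesis class in the realizable case (the function name and example numbers here are my own, for illustration):

```python
import math

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Samples sufficient to PAC-learn a finite hypothesis class H in the
    realizable case: m >= (1/epsilon) * (ln|H| + ln(1/delta)).
    epsilon = target error, delta = failure probability."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# e.g. |H| = 1024 hypotheses, 5% error, 95% confidence
print(pac_sample_bound(1024, 0.05, 0.05))  # → 199
```

Note that the bound is about labeled examples, not compute; more compute alone doesn't manufacture the data the guarantee depends on, which is exactly the gap this kind of alarmist glosses over.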

TAG (2mo): And the thing I said that isn't factually correct is... derived from fairy tales.

(This is arguably testable.)

[anonymous] (2mo): The only thing factually incorrect is your implied assumption that voting has anything to do with truth assessment here ;)

by George · 25th Oct 2019 · 60 comments