onslaught
Howdy y'all, my nom de plume 'round these parts is onslaught.
I'm really into futurism, GCR mitigation, science for good, broad moral circles, and ~consequentialist utilitarianism. I think building superintelligent systems is an inherently unsafe endeavor.
Naturally, there are also a lot of other causes and interests that...
I am a fan of Yudkowsky, and it was nice hearing him on Ezra Klein's show, but for my part the arguments didn't feel very tight in this one. Less tight than in IABED (which I thought was good, not great).
Ezra seems to contend that surely we have evidence that we can at least somewhat align current systems to at least basically what we usually want, most of the time. I think this is reasonable. He contends that maybe that level of "mostly works," along with the opportunity to gradually give feedback on and increment current systems, seems like it...