From the perspective of someone who mostly has no clue what they are talking about (that person being me), I don’t understand why people working in AI safety seem to think that a successful alignment solution (as in, one that stops everyone from being killed or tortured) is something that...
To be clear, when I describe MIRI as pessimistic, I’m mostly referring to the broad caricature of them that exists in my mind. I’m assuming their collective arguments can be broken down into: 1.) There is no plan or workable strategy which helps us survive, and that this...
Or, what stops someone else from building their own AGI 1-6 months later, one that doesn’t play nice and ends the world?
I’d like to preface this by saying that I am not an expert on AI by any means, nor am I involved in any research or studies relevant to ML. I have no insight into the technical or mathematical aspects of discussions about this technology, and...