I recently read *What do ML researchers think about AI in 2022?*
The surveyed probability of doom is sub-10%, which is high, but as I understand it, people like Eliezer Yudkowsky think we're more likely doomed than not.
I personally lean towards Yudkowsky's views, because:
- I don't believe human/evolution-selected minds have thinking power that a machine could not have
- I believe in the Orthogonality Thesis
(I think those two claims can be defended empirically)
- I think it is easier to make a non-aligned machine than an aligned one
(I believe current research strongly hints that this is true)
- I believe that more people are working on...
I liked this!
Isn’t this post an elaborate way of saying that today’s posteriors are tomorrow’s priors?
As in: all posteriors eventually get baked into the prior.
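That reading does match how sequential Bayesian updating works. Here's a minimal sketch (my own illustration, not from the post) using a Beta-Bernoulli coin model: feeding each day's posterior back in as the next day's prior lands on exactly the same distribution as one batch update on all the data.

```python
def update_beta(alpha, beta, flip):
    """Bayes update of a Beta(alpha, beta) prior on one coin flip (1 = heads)."""
    return (alpha + flip, beta + (1 - flip))

flips = [1, 0, 1, 1, 0, 1]  # hypothetical observations

# Sequential: each posterior becomes the next prior.
alpha, beta = 1.0, 1.0  # uniform Beta(1, 1) prior
for flip in flips:
    alpha, beta = update_beta(alpha, beta, flip)

# Batch: update once on all the data.
batch = (1.0 + sum(flips), 1.0 + len(flips) - sum(flips))

print((alpha, beta), batch)  # (5.0, 3.0) (5.0, 3.0) -- identical
```

The two answers agreeing is just the conjugacy of the Beta-Bernoulli pair, but the point generalizes: the posterior is a sufficient summary of everything seen so far, which is the precise sense in which today's posteriors are tomorrow's priors.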