When people debating AI x-risk on Twitter talk past each other, my impression is that a significant crux is whether or not each party buys the instrumental convergence argument.
I wouldn't be surprised if the supermajority of people who don't buy the idea simply haven't engaged with it enough, and I think it's common to have a negative gut reaction to high confidence about something so seemingly far-reaching. That said, I'm curious whether there are any strong arguments against it. I'm looking for something stronger than "that's a big claim, and I don't see any empirical proof."
I think orthogonality and instrumental convergence are mostly arguments for why the singleton scenario is scary. And in my experience, the singleton scenario is the biggest sticking point when talking with people who are skeptical of AI risk. One alternative is to talk about the rising-tide scenario: no single AI taking over everything, but AIs growing in economic and military importance across the board, while still sharing some human values and participating in the human economy. That leads to a world of, essentially, AI corporations that are too strong for us to overthrow and whose value systems are evolving in possibly non-human directions. That's plenty scary too.
What would be an example of a value that is clearly 'non-human'? AI power being used for 'random stuff' of the AIs' own volition?