To me, what would make the most sense is some sort of survey of expert opinion. This is a very difficult thing to form an opinion on from first principles, and looking at expert opinion is what we usually do in other scenarios.
For example, I have had Achilles tendinitis for about eight years now. Recently my doctor proposed PRP injections or stem cell therapy. To form an opinion on whether this is worth it, I could have looked at it from first principles, digging into the biochemistry, but that just doesn't seem very practical. It would require an amount of expertise that I don't have and would take an unreasonable amount of time to acquire, even for me, someone with a degree in neuroscience.
Instead, it made more sense to look at the opinions of experts, or, even more conveniently, to read a distillation of those expert opinions by someone I trust. I found such a distillation, and it allowed me to adopt some average of those expert opinions as my own.
But even if such a distillation of expert opinion existed for AI risk, there would still be some roadblocks. With AI, we're talking about something that sounds crazy, science-fiction-y, and naive. We're talking about robots having an intelligence explosion and ending up turning the universe into paperclips, and we're talking about this happening in something like 10-50 years. Even if there were a distillation of expert opinion saying that the experts consider this legit, it takes a special type of memetic immune disorder to believe something like this, and the "typical smart tech-savvy person" does not have it. Instead, they have "antibodies" protecting them from such beliefs, and, more broadly, from taking ideas seriously.
Well, maybe. Maybe if there truly were a large scientific consensus, that would be enough to fight off those antibodies and the "typical smart tech-savvy person" would "get it". But we currently don't have anything close to such a consensus. From what I understand, the consensus is quite narrow: it's mostly just AI safety researchers, and of course people who go into that as a career would hold such a belief, just like people who research underwater basket weaving believe that underwater basket weaving is extremely, extremely important. If you expand to AI professionals more generally, my understanding is that the fear is much smaller, and the same holds as you expand further to machine learning, software, STEM, and then smart people in other fields. If there were a strong consensus among, say, STEM professionals that AGI risk is an urgent problem, then I'd expect the "typical smart tech-savvy person" to adopt that belief as well. But we aren't at that point. Right now I think the consensus is too narrow to kill off the antibodies.
And that might be justifiable. I'm not sure. Personally, I have spent a long time in the rationality community and developed a very high level of epistemic trust in people in the community, especially the ones at the top, and this is what allows me to update hard toward believing that AI is a tremendously serious existential risk. If we imagine a different scenario where, say, astronomers believe that a black hole is going to swallow the entire universe sometime in the next 10-50 years, but physicists more generally aren't too concerned, and scientists and other smart people more broadly are basically unconcerned, well, I'm not sure what I'd think. Maybe the astronomers are taking things too far. Maybe the outsiders are being ignorant. I dunno. Given the magnitude of "destroy the universe" and my solid epistemic respect for astronomers, I'd probably have to look into it more closely, but I'm skeptical that the "typical smart tech-savvy person" would have that instinct.
The orthogonality thesis is already natural, accessible, and obvious: we know about highly intelligent sociopaths, the 'evil genius' trope, etc. The Sequences are flawed and dated in key respects concerning AI, such that fresh material is probably best.
Hurricanes and tsunamis don't think; humans do, so an actual AGI is much closer to a human than to a natural disaster (super obvious now: GPT-3, etc.).
If your model of AI comes from reading the Sequences, it was largely wrong when it was written and is now terribly out of date. The likely path to AI is reverse engineering the brain, as I (and many others) predicted based on the efficiency of the brain and the tractability of its universal learning algorithms, and as demonstrated by the enormous convergent success of deep learning.