"There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic ... But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people."
- Bill Gates
From "Africa Needs Aid, Not Flawed Theories"

One wonders where Bill Gates read that superintelligent AI could be (but, in his estimation, in fact isn't) a global catastrophic risk (GCR). It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking, Martin Rees, or possibly Bill Joy (see comments).

It seems that Bill is also something of a Bayesian with respect to global catastrophic risk:

"Even though we can't compute the odds for threats like bioterrorism or a pandemic, it's important to have the right people worrying about them and taking steps to minimize their likelihood and potential impact. On these issues, I am not impressed right now with the work being done by the U.S. and other governments."
This isn't Bayesianism; it's something closer to caring about expected utility. Not the same thing.
It might be the way one phrases Bayesianism in a popular article, where the aim is to argue for the object-level proposal rather than weaken the article by relying explicitly on Bayesianism.
How so? They seem disconnected. Bayesianism is an epistemological approach. There's nothing, for example, that would stop someone from being a Bayesian and a virtue ethicist, or a Bayesian with a deontology based on divine command theory.
True. I hadn't thought of that.
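A minimal sketch of the distinction being drawn in this exchange (my own formulation, not anything stated in the thread): Bayesianism is a rule for updating beliefs on evidence,

P(H|E) = P(E|H) P(H) / P(E)

while expected-utility reasoning is a decision rule for choosing actions given those beliefs,

EU(a) = Σ_s P(s|a) U(s)

One can accept the updating rule while rejecting (or never invoking) the decision rule, which is why being a Bayesian is compatible with virtue ethics or divine-command deontology, as noted above.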
Where does Kurzweil talk about AI x-risk? I read the whole of TSIN (The Singularity Is Near) and there is precisely one tiny paragraph in it about AI risk.
By the way, my memory fails me: what exactly does Joy say about AI risk? What is his angle? If I recall correctly, he cites the dangers of robots, not of superintelligence.
E.g., the word "superintelligence/superintelligent" appears only once in Bill Joy's famous essay "Why the Future Doesn't Need Us", and that in a Moravec quote. "Robot(ics)" appears 52 times.
Still, that doesn't tell me why Gates said "superintelligent computers" rather than "highly-evolved robots".
Give a superintelligence some actuators and it becomes a robot. A superintelligence without actuators is not much use to anyone.
An oracle built with solid-state hard drives and no cooling fans would not be of use to anyone?
The idea that superintelligent AI could be a GCR is also quite common in science fiction.
Or reading OvercomingBias (unlikely), or talking to someone who did (more likely) - my impression is that more people may have been in contact with the "Scary Idea" through Eliezer's writing than through that of the other people you list (except probably Kurzweil). Back when Eliezer was posting daily on OB, I'd see mentions of the blog from quite varied sources (all quite geeky).
Of course, still more people have been exposed to a form of the Scary Idea through the Terminator movies and other works of fiction.
Note that "rational optimism" seems rather opposed to an apocalyptic end of civilisation.
Cultural evolution apparently consists of things getting better - since better things are what selection favours.
"There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic ... But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people."
- Bill Gates
From
Africa Needs Aid, Not Flawed Theories
One wonders where Bill Gates read that superintelligent AI could be (but in his estimation, in fact isn't) a GCR. It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking or Martin Rees or possibly Bill Joy(See comments).
It seems that Bill is also something of a Bayesian with respect to global catastrophic risk:
"Even though we can't compute the odds for threats like bioterrorism or a pandemic, it's important to have the right people worrying about them and taking steps to minimize their likelihood and potential impact. On these issues, I am not impressed right now with the work being done by the U.S. and other governments."