Huh, I had never heard of this umbrella Effective Ventures Foundation before, let alone about its ability to muzzle individual speech.
Well, I am in the privileged position of being able to derive it from first principles, so it is "true" given certain rather mild assumptions about the way the universe works. These assumptions stem from some observations (the constancy of the speed of light, the observations behind Maxwell's equations, etc.) that lead to the relativistic free-particle Lagrangian, and they are confirmed by others (e.g. the decay of atmospheric cosmic-ray muons). So this is not an isolated belief, but an essential part of my model of the world. Without it the whole ontology falls apart, and so does the epistemology.
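For what it's worth, the muon check is just a couple of lines of arithmetic (the numbers below are approximate textbook values):

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \tau_\text{lab} = \gamma\,\tau_0$$

A cosmic-ray muon has a proper lifetime of $\tau_0 \approx 2.2\,\mu\text{s}$, so without time dilation it would travel only about $c\tau_0 \approx 660$ m before decaying, and essentially none should survive the roughly 15 km trip from the upper atmosphere. With a typical $\gamma \sim 20$ the decay length becomes $\gamma c \tau_0 \approx 13$ km, consistent with the substantial muon flux actually observed at sea level.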
Given that there is no known physical theory that allows deliberate time travel (rather than being stuck in a loop forever to begin with), I am confused as to how you can estimate its cost.
A more realistic and rational outcome: Alice is indeed an ass and it's not fun to be around her. Bob walks out and blocks her everywhere. Now, Dutch-book this!
It's a good start, but I don't think this is a reasonably exhaustive list, since I don't find myself on it :)
My position is closest to your number 3: "ASI will not want to take over or destroy the world." Mostly because "want" is a very anthropomorphic concept. The Orthogonality Thesis is not false but inapplicable, since AIs are so different from humans: they did not evolve to survive, they were designed to answer questions.
"It will be possible to coordinate to prevent any AI from being given deliberately dangerous instructions, and also any unintended consequences will not be that much of a problem"
I do not think it will be possible, and I expect some serious calamities from people intentionally or accidentally giving an AI "deliberately dangerous instructions". I just wouldn't expect it to result in the systematic extermination of all life on Earth, since the AI itself does not care the way humans do. Sure, it's a dangerous tool to wield, but it is not a malevolent one. Sort of 3-b-iv, but not quite.
But mostly the issue I see with doomerism is the Knightian uncertainty on any non-trivial time frame: there will be black swans in all directions, just as there have been lately (for example, no one expected the near-human-level LARPing that LLMs do, while not being in any way close to sentient agents).
To be clear, I expect the world to change quickly and maybe even unrecognizably in the next decade or two, with lots of calamities along the way, but the odds of complete "destruction of all value", as Zvi puts it, cannot be evaluated at this point with any confidence. The only way to get this confidence is to walk the walk. Pausing and being careful and deliberate about each step does not seem to make sense, at least not yet.
Yeah, that looks like a bizarre claim. I do not think there is any reason whatsoever to doubt your or Ben's integrity.