I think there might be some confusion between optimizing for an instrumental goal versus a higher-level one. Is maintaining good epistemics more relevant than working on the right topic? To me, the rigor of an inquiry seems secondary to choosing the right subject.
I also had to look it up and got interested in testing whether or how it could apply.
Here's an explanation of Bulverism that suggests a concrete logical form of the fallacy:
Here's a possible assignment for X and Y that tries to remain rather general:
Why would that be a fallacy? Whether an argument is true or false depends on its structure and content, not on its source (the genetic fallacy), and not on a property of the source that gets equated with being wrong (circular reasoning). Whether an argument for doom is true does not depend on who is arguing for it, and being traumatized does not automatically imply being wrong.
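To make that explicit (this is my own schematic rendering of the inference, not the assignment quoted from the post): writing $A(p, X)$ for "person $p$ asserts claim $X$" and $T(p, Y)$ for "person $p$ has trait $Y$", the criticized pattern is

$$A(p, X) \wedge T(p, Y) \;\not\vdash\; \neg X$$

No premise about the asserter $p$ can by itself establish that $X$ is false; that would require engaging with the content of $X$ and the evidence for it.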
Here's another possible assignment for X and Y that tries to be more concrete. To make this work, "Person 1" is also replaced by more than one person, now called "Group 1":
From looking at this, I think the post suggests a slightly stronger logical form that extends 3:
From this, I think one can see that it is not only Bulverism that makes the model a bit suspect; two additional aspects come into play:
"it's psychologically appealing to have a hypothesis that means you don't have to do any mundane work"
I don't doubt that something like inverse bike-shedding can be a driving force for some individuals to focus on the field of AI safety. I highly doubt that it explains why the field and its associated risk predictions exist in the first place, or that the field's validity should be questioned on such grounds, yet this seems to happen in the article, if I'm not entirely misreading it. From my point of view, there is already an overemphasis on psychological factors in the broader debate, and it would be desirable to get back to the object level, whether through theoretical or empirical research; both have their value. On this latter point we seem to reach partial agreement, even if by different paths.
Point addressed with an unnecessarily polemical tone:
It is alright to consider it. But I find it implausible that a wide range of accomplished researchers lay out arguments, collect data, interpret what has and hasn't been observed, and conclude that our current trajectory of AI development poses significant existential risk, potentially on short timelines, merely because a majority of them have a childhood trauma that blurs their epistemology on this particular issue but not on others where success criteria could already be observed.
I'm close to getting a postverbal trauma from having to observe all the mental gymnastics around the question of whether building a superintelligence without reliable methods to shape its behavior is actually dangerous. Yes, it is. No, that fact does not depend on whether Hinton, Bengio, Russell, Omohundro, Bostrom, Yudkowsky, et al. were held as babies.
Further context about the "recent advancements in the AI sector have resolved this issue" paragraph:
I assume they can't make a statement, and that their choice of next occupation will be the clearest signal they can, and will, send to the public.
He has a stance towards risk that is a necessary condition for becoming the CEO of a company like OpenAI, but that doesn't give you a high probability of building a safe ASI:
"The Algorithm" is in the hands of very few actors. This is the prime gear where "Evil People have figured it out, and hold The Power" isn't a fantasy. There would be many obvious improvements if it were in adult hands.