Writing this late at night, sorry for any errors or misunderstandings.
I think the cruxy bit is that you're assuming that the AIs are 200 IQ but still make a huge mistake.
You mainly suggest this error: they have incorrect beliefs about the difficulty of taking over and about their own abilities, and thus misestimate their chance of success.
You also hint at a second error: they are more risk-seeking and choose to attempt takeover despite knowing they have a low chance of success. This seems more likely to me, since I find it hard for an AI to be sufficiently smart and also SO wrong on the calculation. I could see it being slightly miscalibrated, but the variance in the "should I take over?" decision coming from its subjectively estimated likelihood of success seems smaller than the variance coming from how much value it stands to lose or gain if others take over.
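Here is a toy expected-value sketch of that intuition (every name and number below is mine and purely illustrative, not anything from your post): with one set of made-up payoffs, wiggling the estimated success probability across a plausible miscalibration range barely affects the decision, while changing how much the AI disvalues "someone else takes over" flips it outright.

```python
# Toy expected-value comparison; every number here is made up for illustration.
# Question: is the "attempt takeover?" decision more sensitive to modest
# miscalibration in the success probability p, or to the value the AI assigns
# to outcomes like "someone else takes over"?

def ev_attempt(p, v_success=100, v_failure=-10):
    """Expected value of attempting takeover with success probability p."""
    return p * v_success + (1 - p) * v_failure

def ev_wait(v_others_takeover, v_status_quo=0, p_others=0.5):
    """Expected value of waiting, given some chance that others take over."""
    return p_others * v_others_takeover + (1 - p_others) * v_status_quo

# Slight miscalibration in p does not change the decision...
for p in (0.05, 0.10, 0.15):
    attempt, wait = ev_attempt(p), ev_wait(v_others_takeover=-50)
    print(f"p={p:.2f}: attempt={attempt:+.1f}, wait={wait:+.1f}, attempt? {attempt > wait}")

# ...whereas the value assigned to "others take over" flips it easily.
for v_others in (10, -10, -50):
    attempt, wait = ev_attempt(0.10), ev_wait(v_others_takeover=v_others)
    print(f"v_others={v_others:+d}: attempt={attempt:+.1f}, wait={wait:+.1f}, attempt? {attempt > wait}")
```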
The question seems similar to asking "does the unilateralist's curse apply to AIs considering taking over the world?" In that framing, it depends a lot on how many actors there are.
Your question could also be thought of as asking how selection effects will influence which AGIs try to take over the world, where we might naively expect that the first AIs to try will be risk-seeking and highly overconfident about their own abilities. It's not clear how strong this selection effect actually is; as in the unilateralist's curse case, it matters whether there are 5 actors or 500, and with 500 it seems likely that factors like this do matter. My general prediction is that I doubt the particular selection effect "the overconfident AGIs will be the first ones to seriously attempt takeover" will be very strong, because there are forces pushing toward better-calibrated and smarter AIs (mainly that less well-calibrated AIs get selected against in the training situations I imagine, e.g. they lose a bunch of money on the stock market). But then again, that counter-selection may not turn out to hold.
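On the "how many actors" point, a quick Monte Carlo sketch (the noise model, threshold, and counts are all made-up assumptions of mine, not claims about real systems): if each AI forms a noisy subjective estimate of its true chance of success and attempts takeover whenever that estimate clears its go/no-go threshold, the chance that at least one actor attempts grows quickly with the number of actors, even when each individual is only mildly miscalibrated.

```python
import random

def p_at_least_one_attempts(n_actors, true_p=0.05, noise_sd=0.03,
                            threshold=0.10, trials=20_000):
    """Monte Carlo estimate of P(some actor's noisy estimate of its own
    success probability exceeds its go/no-go threshold).
    All parameter values are illustrative only."""
    hits = 0
    for _ in range(trials):
        for _ in range(n_actors):
            estimate = random.gauss(true_p, noise_sd)  # noisy self-assessment
            if estimate > threshold:
                hits += 1
                break
    return hits / trials

for n in (1, 5, 50, 500):
    print(f"{n:>3} actors: P(at least one attempts) ≈ {p_at_least_one_attempts(n):.3f}")
```

With these numbers a single actor almost never attempts, but at 500 actors someone's estimate clears the threshold nearly every time; how strongly that tail gets selected for is exactly what the calibration pressures above would determine.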
I would also toss into the mix that these AIs may be doing substantial recursive self-improvement (probably at more than 5x progress speed at that point), and may run into alignment problems of their own. This is where world-model uncertainty seems like a big deal, simply because the territory is genuinely hard here: it seems very hard to verify the alignment of AGI development plans. So I might predict: "conditional on a catastrophe caused by an AI making a world-modeling mistake, the AI was probably doing alignment research rather than taking over the world."