The Metaculus/Manifold questions for AGI aren't very helpful! The most popular questions focus on passing a Turing test, but this is likely to be a lagging indicator. As long as AI capabilities remain "spiky", an experienced interlocutor can zero in on whatever areas the AI is known to be weak in. Passing an adversarial Turing test with an expert judge thus requires an AI that is at human level even in its weakest domains, at which point its other domains will probably be wildly superhuman and it will be closer to ASI than AGI.
The current focus of the frontier labs is on automating coding and R&D, so the most important questions in my mind are "When can AI R&D be automated?" and "What effect will AI automation of R&D have on capabilities progress?" There are a couple of existing questions close to this in spirit, but their resolution details are odd and not well connected to the capabilities that actually matter, which makes the resulting predictions uninformative.
Is there a better question, or are there better resolution criteria, that would actually elicit interesting answers?