You can add options later.
Because only one resolution is possible? That's true for Multiple Choice markets, but not Set markets.
Also, Manifold has the option to make markets with a bunch of different possible resolutions displayed in one place (they're called Set markets in the market creation menu), maybe you should make one of those?
So you're going with Metaculus instead of Manifold? This is important, as AFAIK you can't just buy voting power on Metaculus.
I think that generally when people say "overconfident" they have a broader class of irrational beliefs in mind than "overly narrow confidence intervals around their beliefs", things like bias towards thinking well of yourself can be part of it too.
And maybe some are "overconfident" that early AGI will be helpful for solving future problems, but again this is just a mistake, not systemic overconfidence.
OK, but whatever the exact pattern of irrationality is, it clearly exists simultaneously with humans being competent enough to possibly cause x-risk. It seems plausible that AIs might share similar (or novel!) patterns of irrationality that contribute to x-risk probability while being orthogonal to alignment per se.
Maybe? No recent ones spring to mind.
No. The kind of intelligent agent that is scary is the kind that would notice its own overconfidence—after some small number of experiences being overconfident—and then work out how to correct for it.
I mean, the main source of current x-risk is that humans are agents who are capable enough to do dangerous things (like making AI) but too overconfident to notice that doing so is a bad idea, no?
There are lots of things that an ideal utility maximizer would do via means-end reasoning, that humans and animals do instead because [...]
Right. What you said in your comment seems pretty general -- any thoughts on what in particular leads to Approval Reward being a good thing for the brain to optimize? Spitballing, maybe it's because human life is a long iterated game so reputation ends up being the dominant factor in most situations and this might not be easily learned by a behaviorist reward function?
Hmmm, did you tick "You" under "who can add new options" in the creation menu? I just made a Set market myself and there's an immediately visible UI element to add new options above the existing answers.