Indeed!
But the tournaments only provide head-to-head scores for direct comparison with top human forecasting performance. ForecastBench, by contrast, has clear human baselines.
It would be helpful if the Metaculus tournament leaderboards also reported Brier scores, even if they would not be directly comparable to human scores since the humans make predictions on fewer questions.
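For context, the Brier score is just the mean squared error between probability forecasts and binary outcomes (lower is better). A minimal sketch, with hypothetical numbers:

```python
import numpy as np

def brier_score(forecasts: np.ndarray, outcomes: np.ndarray) -> float:
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return float(np.mean((forecasts - outcomes) ** 2))

# Hypothetical example: three binary questions.
forecasts = np.array([0.9, 0.3, 0.6])  # predicted probabilities of "yes"
outcomes = np.array([1, 0, 0])         # actual resolutions
print(brier_score(forecasts, outcomes))  # ~0.153
```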
Domain: Forecasting
Link: Forecasting AI Futures Resource Hub
Author(s): Alvin Ånestrand (self)
Type: directory
Why: A collection of resources for forecasting about AI, including a predictions database, related blogs and organizations, AI scenarios, interesting analyses, and more.
It's something of a complement to https://www.predictionmarketmap.com/, focused specifically on forecasting about AI.
I started out thinking the effects would be larger, but Agent-2-based rogue AIs (~human level at many tasks) are too large for the rogue population to grow beyond a few million instances at most.
Sure, some rogues may focus on building power bases of humans; it would be interesting to explore that further. The AI rights movement is somewhat like that.
Happy Amazing Breakthrough Day!
Good observation. The only questions that don't explicitly exclude it in the resolution criteria are "Will there be a massive catastrophe caused by AI before 2030?" and "Will an AI related disaster kill a million people or cause $1T of damage before 2070?", but I think the question creators mean a catastrophic event that is more directly caused by the AI, rather than just a reaction to AI being released.
Manifold questions are sometimes subjective in nature, which is a bit problematic.
I think my points argue more that control research might have higher expected value than some other approaches that don't address delegation at all or are much less tractable. But I agree: if slop is the major problem, then most current control research doesn't address it, though it's nice to see that this might change if Buck is right.
And my point about formal verification was to work around the slop problem by verifying the safety approach to a high degree of certainty. I don't know whether it's feasible, but some seem to think so. Why do you think it's a bad idea?
I can think of a few reasons someone might think AI Control research should receive very high priority, apart from what is mentioned in the post or in Buck's comment:
I agree with basically everything in the post but put enough probability on these points to think that control research has really high expected value anyway.
Interesting!
A couple of things came to mind that I was wondering whether you have considered.
It seems to me that when examining mutual information between two objects, there might be a lot of mutual information that an agent cannot use. For example, there is a lot of mutual information between my present self and me in 10 minutes, but most of it is information about myself that I am not aware of and cannot use for decision-making.
Also, if you examine an object that is fairly constant, wouldn't you get high mutual information between the object at different times, even though it is not very agentic? Can you differentiate autonomy from mere stability? A sketch of the worry follows below.
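To make that second point concrete, here is a minimal sketch (the joint distributions are toy assumptions, not from the post): a perfectly stable object already has maximal mutual information between its state at two different times.

```python
import numpy as np

def mutual_information(joint: np.ndarray) -> float:
    """I(X;Y) in bits, computed from a joint distribution p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x), shape (n, 1)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y), shape (1, n)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px @ py)[mask])).sum())

# A "stable rock": its state at time t+1 always equals its state at time t,
# so the joint distribution over (state_t, state_t+1) is diagonal.
rock = np.diag([0.25, 0.25, 0.25, 0.25])
print(mutual_information(rock))   # 2.0 bits -- maximal for 4 equally likely states

# A noisy object: next state independent of current state.
noisy = np.full((4, 4), 1 / 16)
print(mutual_information(noisy))  # 0.0 bits
```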
Thank you for sharing your thoughts! My responses:
(1) I believe most historical advocacy movements have required more time than we might have for AI safety. More comprehensive plans might speed things up. It might be valuable to examine what methods have worked for fast success in the past.
(2) Absolutely.
(3) Yeah, raising awareness seems like it might be a key part of most good plans.
(4) All paths leading to victory would be great, but I think even plans that would most likely fail are still valuable. They illuminate options and tie ultimate goals to concrete action. I find it very unlikely that failing plans are worse than no plans. Perhaps high standards for comprehensive plans might have contributed to the current shortage of plans. “Plans are worthless, but planning is everything.” Naturally I will aim for all-paths-lead-to-victory plans, but I won't be shy in putting ideas out there that don't live up to that standard.
(5) I don't currently have much influence, so the risk would be sacrificing inclusion in future conversations. I think it's worth the risk.
I would consider it a huge success if the ideas were filtered through other orgs, even if they just help make incremental progress. In general, I think the AI safety community might benefit from having comprehensive plans to discuss and critique and iterate on over time. It would be great if I could inspire more people to try.