I am a volunteer organizer with PauseAI and PauseAI US, a pro forecaster, and some other things that are currently much less important.
The risk of human extinction from artificial intelligence is a near-term threat. Time is short, p(doom) is high, and anyone can take simple, practical actions right now to help prevent the worst outcomes.
This is very unlikely to matter. All of these companies are trying to create something that could kill everyone, and none of these companies are working on safety in any way that could actually prevent that from happening.
If none of these companies had any kind of safety teams, the world would be slightly safer, because nothing of value in preventing human extinction would be lost, and it would be even easier to lobby to get them all shut down.
Wow. The "Ryazan Miracle" incident is almost unbelievable. It's hard for me to imagine one person making decisions that are so egregiously short-sighted, let alone a whole committee.
Larionov ordered almost all cattle to be slaughtered ('the women, and the children too' at that), and then promised more beef next year. What was going on in his mind? Was there so much pressure that he stopped caring whether he would even live that long? Did the incentives just bring forward the same kind of impulse that causes someone to steal half a paycheck's worth of money from the register, rather than just coming into work and getting paid?
Even posthumously, he was not stripped of his title of Hero of Socialist Labour.
...Ah. Most depressingly, maybe Larionov was smart after all.
the point of my original post was that there's a limit to how good you can get doing only that, without going out and gathering new information.
That is true. Human forecasters mostly don't do this, though, so an AI forecaster that did maximize cost-effective information-gathering could still gain an advantage from doing so. The cost of AI doing the gathering could also presumably drop below the cost of humans doing the gathering, which would create a strict advantage in both effective gathering of information and effective use of it.
Bots are already outperforming humans on some markets because of speed.
Markets, yes. Reactivity faster than a few hours is usually not relevant to the actual usefulness of forecasting, though.
I'd be shocked if AI were generally better than humans at forecasting in the next year or two.
That's the projection according to ForecastBench, anyway:
Forecasting is predicting, which in the limit requires general intelligence, so I don't think forecasting falls until everything falls.
LLMs certainly aren't narrow, and it's not clear that "general intelligence" is a well-defined concept. Other than "general enough to plug all the rest of its own holes from now on," I don't think we know exactly what kinds and degrees of generality are needed for specific complex tasks. AI has been far more jagged at the frontier than anyone expected, and AIs often perform very differently on two tasks that appear to require equally general intelligence.
It's an interesting point that some information isn't worth trying to gain. AI could still Pareto-dominate human pros, though, myself readily included.
I don't see why AI would need to participate in a real-money prediction market, or even a market at all. AI systems aren't motivated by money, and non-market prediction aggregators have fewer failure modes. The only cost would be the cost to run the models, which would eventually be extremely cheap per question compared to human pros. I think it would suffice to create basically an AI-only version of Metaculus, subsidized by businesses and governments that benefit from well-calibrated forecasts on a wide variety of topics (sans degenerate examples like sports predictions and "fun" questions).
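To make the aggregator idea slightly more concrete, here is a toy sketch (entirely my own illustration, not any existing system's API) of the simplest pooling step such a service might run on each binary question: average the models' probabilities in log-odds space. The function names and the example numbers are hypothetical; a real aggregator would also weight models by track record and calibration.

```python
import math

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pool_forecasts(probabilities: list[float]) -> float:
    """Pool several probability estimates by averaging their log-odds.

    One common, simple aggregation rule; weighting by each model's
    past accuracy would be a natural next step.
    """
    mean_log_odds = sum(logit(p) for p in probabilities) / len(probabilities)
    return inv_logit(mean_log_odds)

# Hypothetical probabilities from four AI forecasters on one binary question.
model_forecasts = [0.62, 0.70, 0.55, 0.66]
print(round(pool_forecasts(model_forecasts), 3))  # pooled consensus probability
```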
Daniel Kokotajlo briefly points out that private entities may want private forecasting services, to gain an edge over the competition. Sadly, I think it's likely that private AI forecasting farms would dominate, despite the massive overall cost savings if they pooled resources into a shared project.
Possibly as soon as the end of this year, we appear to be heading into an era where forecasting is no longer the domain of humans. The resulting epistemic miracle might not be widely adopted, and might not even be tooled toward the public good. I feel sad about this.
Why would anyone want a galaxy? I don't even want a very big house.
If all your friends have galaxies, do you all still get to live in the same city and play games and make each other laugh? If so, what are the galaxies for? If not... what are the galaxies for?
Hm. I found a Twitter thread on the topic, with some leads: https://x.com/GrantSlatton/status/1830302697125478630
I have undergone the exact same move, but I think my political beliefs are not sophisticated enough for me to identify a solid target to "believe already." My time on the right gave me some information that strongly falsified a few beliefs often bucketed with the left, even as I moved leftward, and that has tempered my confidence that continuing leftward would capture the things I expect to believe in the future.
Put another way, politics is multivariate / high dimensional. A clear trend in one specific dimension isn't meaningless, but is so lossy that I wouldn't be surprised if it stopped or apparently reversed slightly.
Adding some descriptors I have frequently used:
General-purpose AI Systems -- Unwieldy. Possibly overemphasizes their tool nature.
Digital Minds / Digital Brains -- Very accurate in some important ways, allergically disputed in others. Not technical.
Some further shots from the hip:
Broad AI -- Not narrow, without claiming full generality. Highly unspecific.
Digital Cognition Engines -- Anything with "engine" in it fails to acknowledge the system as being whole unto itself. Also, this is sci-fi name territory.
Cognition Manifolds -- Also sci-fi, but scratches an itch in my brain. I like this one a little too much and I am a little sad now that this isn't the accepted term.
Idiot disaster monkeys indeed. I still believe we as a species can make less fatal choices, even though many individual people in the AI industry are working very hard of their own free will to prove me wrong.
I wouldn't be so sure that e.g. Mark Kelly was implying that the President himself had given unlawful orders. (I am open to evidence that this is what was being implied, or that this actually occurred.) The boat double-tap incident in particular suggested that unlawful orders may have been given by someone in the chain of command. Minus any speculative or actual nth-order effects, I think it was a sensible time to remind service members not to follow unlawful orders.
And of course, the POTUS himself frequently declines to defer to laws that would constrain him, so the idea that he might give unlawful orders shouldn't be surprising to people in any given political camp.