I am a volunteer organizer with PauseAI and PauseAI US, a pro forecaster, and some other things that are currently much less important.
Human extinction from artificial intelligence is a near-term threat. Time is short, p(doom) is high, and anyone can take simple, practical actions right now to help prevent the worst outcomes.
It's an interesting point that some information isn't worth trying to gain. AI could still Pareto-dominate human pros, though, myself readily included.
I don't see why AI would need to participate in a real money prediction market, or even a market at all. AI systems aren't motivated by money, and non-market prediction aggregators have fewer failure modes. The only cost would be the cost to run the models, which would eventually be extremely cheap per question compared to human pros. I think it would suffice to create an AI-only version of basically Metaculus, subsidized by businesses and governments who benefit from well-calibrated forecasts on a wide variety of topics (sans the degenerate examples like sports predictions and "fun" questions).
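To gesture at how simple the non-market core of such a service could be, here is a minimal sketch in Python of the aggregation layer. This is entirely my own illustration, not an existing system; the function name and the example numbers are made up. It pools per-model probabilities in log-odds space, a standard trick that tends to beat plain averaging when the forecasters are reasonably calibrated.

```python
import math

def pool_forecasts(probs: list[float], trim: float = 0.1) -> float:
    """Aggregate several models' probabilities on one binary question.

    Pools in log-odds space, which is more extreme than a plain
    probability average and tends to be better calibrated for
    independent forecasters. Clips away 0/1 and optionally trims
    outlier forecasts for robustness.
    """
    clipped = sorted(min(max(p, 1e-4), 1 - 1e-4) for p in probs)
    k = int(len(clipped) * trim)
    kept = clipped[k:len(clipped) - k] if k else clipped
    mean_log_odds = sum(math.log(p / (1 - p)) for p in kept) / len(kept)
    return 1 / (1 + math.exp(-mean_log_odds))

# Five hypothetical model runs on one question:
print(pool_forecasts([0.62, 0.70, 0.55, 0.66, 0.91]))  # ~0.71
```

All the interesting engineering would sit upstream of this function, in producing the per-model forecasts cheaply; the aggregation itself is nearly free, which is part of why per-question costs could fall so far below human pros.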
Daniel Kokotajlo briefly points out that private entities may want private forecasting services, to gain an edge over the competition. Sadly, I think it's likely that private AI forecasting farms would dominate, despite the massive overall cost savings if they pooled resources into a shared project.
We appear to be heading, possibly as soon as the end of this year, into an era where forecasting is no longer the domain of humans. The resulting epistemic miracle might not be widely adopted, and might not even be tooled toward the public good. I feel sad about this.
Why would anyone want a galaxy? I don't even want a very big house.
If all your friends have galaxies, do you all still get to live in the same city and play games and make each other laugh? If so, what are the galaxies for? If not... what are the galaxies for?
Hm. I found a Twitter thread on the topic, with some leads: https://x.com/GrantSlatton/status/1830302697125478630
I have undergone the exact same move, but I think my political beliefs are not sophisticated enough for me to identify a solid target to "believe already." My time on the right gave me some pieces of information that strongly falsified a few beliefs often bucketed with the left, even as I moved leftward. That has helped me moderate my confidence that continuing leftward would capture the things I expect to believe in the future.
Put another way, politics is multivariate and high-dimensional. A clear trend along one specific dimension isn't meaningless, but the projection is so lossy that I wouldn't be surprised if the trend stalled or even appeared to reverse slightly.
Adding some descriptors I have frequently used:
General-purpose AI Systems -- Unwieldy. Possibly overemphasizes their tool nature.
Digital Minds / Digital Brains -- Very accurate in some important ways, allergically disputed in others. Not technical.
Some further shots from the hip:
Broad AI -- Not narrow, without claiming full generality. Highly unspecific.
Digital Cognition Engines -- Anything with "engine" in it fails to acknowledge the system as being whole unto itself. Also, this is sci-fi name territory.
Cognition Manifolds -- Also sci-fi, but scratches an itch in my brain. I like this one a little too much and I am a little sad now that this isn't the accepted term.
Idiot disaster monkeys indeed. I still believe we as a species can make less fatal choices, even though many individual people in the AI industry are working very hard of their own free will to prove me wrong.
My friends still frequently say "I have been a good Bing" because of my telling of this story ages ago.
It's not memory-holed as far as I can tell, but it's no longer the best example of most of the misalignment-related behaviors I want examples of.
I found this after a brief Wikipedia rabbit hole: a 1982 article covering the North American Computer Chess Championship. https://www.cgwmuseum.org/galleries/issues/softline_1.3.pdf
On the evening of the last round, there was some discussion amongst tournament participants about when or whether a computer program might become chess champion of the world. Monroe Newborn, programmer of McGill University's Ostrich, predicted it could happen within five years. Valvo thought it would be more like ten, and the Spracklens were betting on fifteen years. Thompson thought it would be more than twenty years before a program could be written that would beat all comers, and a few others said it would never happen.
The most widely held view, however, was that a computer program would become world champion by or shortly after the year 2000. Considering both the complexity of the game and the complexity of the human mind, that seems like a remarkably positive outlook on the future of computing.
Garry Kasparov believed as late as 1989 that machines would never completely best humans in chess, and thought he personally would never be beaten by a machine. https://www.chesshistory.com/winter/extra/kasparovinterviews.html
Question: Two top grandmasters have gone down to chess computers: Portisch against “Leonardo” and Larsen against “Deep Thought”. It is well known that you have strong views on this subject. Will a computer be world champion, one day?
Kasparov: Ridiculous! A machine will always remain a machine, that is to say a tool to help the player work and prepare. Never shall I be beaten by a machine! Never will a program be invented which surpasses human intelligence. And when I say intelligence, I also mean intuition and imagination. Can you see a machine writing a novel or poetry? Better still, can you imagine a machine conducting this interview instead of you? With me replying to its questions?
I was able to confirm that directly in the magazine here: https://escaleajeux.fr/?principal=/jeu/js_55?
The aesthetics of strategies of this shape are unattractive to most rationalists, since they rely on evoking tribalism. Rationalism instructs against tribalism as one of the first steps toward thinking well (as it should!), but when stoking tribalism in others is actually a winning strategy, the internalised moralism of non-tribalism can override the rational pursuit of winning in favor of the irrational pursuit of rationalism as its own end.
I think worlds in which we survive are likely ones in which "anger toward the outgroup" among the general public is mobilized as a blunt weapon against the pro-ASI-development memeplex. I think we are likely to see much more of this humanist angle in the coming year.
That is true. Human forecasters mostly don't do this, though, so an AI forecaster that did maximize cost-effective information-gathering could still gain an advantage from it. The cost of AI doing the gathering could also presumably drop below the cost of humans doing it, which would create a strict advantage in both the gathering of information and the use of it.
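To make "cost-effective information-gathering" concrete, here is a minimal sketch of the textbook value-of-information calculation for a binary question. This is my own illustration, not anything from the thread; the function name and example numbers are hypothetical. The rule is: gather a signal only when the forecaster's subjectively expected Brier-score improvement, priced at whatever a point of accuracy is worth, exceeds the cost of gathering it.

```python
def value_of_information(prior: float,
                         p_signal_given_yes: float,
                         p_signal_given_no: float) -> float:
    """Subjectively expected Brier improvement from one binary signal.

    The Brier score for forecast p on a yes/no question is
    (p - outcome)^2; a forecaster reporting their true belief p
    expects to score p * (1 - p). Observing the signal moves the
    forecast to the Bayesian posterior, which weakly lowers that
    expected score under the forecaster's own prior.
    """
    p_signal = prior * p_signal_given_yes + (1 - prior) * p_signal_given_no
    post_if_signal = prior * p_signal_given_yes / p_signal
    post_if_none = prior * (1 - p_signal_given_yes) / (1 - p_signal)

    before = prior * (1 - prior)
    after = (p_signal * post_if_signal * (1 - post_if_signal)
             + (1 - p_signal) * post_if_none * (1 - post_if_none))
    return before - after

# A 60% prior and a moderately informative signal:
print(value_of_information(0.6, 0.8, 0.3))  # 0.24 - 0.18 = 0.06
```

An AI forecaster could run this per question, per candidate source, and only pay for retrieval when the expected gain clears the cost; human pros rarely do this explicitly, which is the edge I'm pointing at.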
Markets, yes. Reactivity faster than a few hours is usually not relevant to the actual usefulness of forecasting, though.
That's the projection according to ForecastBench, anyway.
LLMs certainly aren't narrow, and it's not clear that "general intelligence" is a well-defined concept. Other than "general enough to plug all the rest of its own holes from now on," I don't think we know exactly what kinds and degrees of generality are needed for specific complex tasks. AI has been far more jagged at the frontier than anyone expected: on two tasks that appear to require equally general intelligence, AIs often perform very differently.