Economist.
The link points to a Facebook page telling me that I am about to leave Facebook. Is that intentional?
I agree that Nanobots are not a necessary part of AI takeover scenarios. However, I perceive them as a very illustrative example of "the AI is smart enough for plans that make resistance futile and make AI takeover fast" scenarios.
The word "typical" is probably misleading, sorry; most scenarios on LW do not include Nanobots. OTOH, LW is a place where such scenarios are at least taken seriously.
So p(scenario contains Nanobots|LW or rationality community is the place of discussion of the scenario) is probably not very high, but p(LW or rationality community is the place of discussion of the scenario|scenario contains Nanobots) probably is...?
Yes, people care about things that are expected to happen today rather than in 1,000 years or later. That is a problem that people fighting against climate change have been pointing out for a long time. At the same time, with respect to AI, my impression is that many people do not react to developments that will quickly have strong implications, while some others write a lot about caring about humanity's long-term future.
Thanks for the list! Yes, it is possible to imagine stories that involve a superintelligence.
I could not imagine a movie or successful story where everybody is killed by an AGI within seconds because it has prepared this in secrecy, nobody realized it, and nobody could do anything about it. It seems to lack a happy ending, and even a story.
However, I am glad to be corrected and will check the links; the stories will surely be interesting!
Gnargh. Of course someone has a counterexample. But I don't think that is the typical LW AGI warning scenario. However, this could become a "no true Scotsman" discussion...
I don't understand this question. Why would the answer to that question matter? (In your post, you write "If the answer is yes to all of the above, I’d be a little more skeptical.") Also, the "story" is not really popular. Outside of LessWrong discussions and a few other places, people seem to think that every expectation about the future that involves a superintelligent agentic AGI sounds like science fiction and therefore does not have to be taken seriously.
Actually, LessWrong AGI warnings don't sound like they could be the plot of a successful movie. In a movie, John Connor organizes humanity to fight against Skynet. That does not seem plausible with LW-typical nanobot scenarios.
Wouldn't way 2 likely create a new species unaligned with humans?
The goal would be to force forecasters to make internally consistent forecasts. That should reduce noise: firstly by reducing unintentional errors, secondly by cleaning up probabilities (by quasi-automatically adjusting the percentages of candidates who may previously have been considered low-but-relevant-probability candidates), and thirdly by crowding out forecasters who do not want to give consistent forecasts (which I assume correlates with low-quality forecasts). It should also make forecasts more legible and thus increase the demand for Metaculus.
Metaculus currently lists 20 people who could be elected US President ("This question will resolve as Yes for the person who wins the 2024 US presidential election, and No for all other options.", "Closes Nov 7, 2024"), and the sum of their probabilities is greater than 104%. Either this is not consistent, or I don't understand it; and, with all due modesty, if that is the reason for my confusion, then I think many people in the target audience will also be confused.
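To make concrete what "internally consistent" would mean here, a minimal sketch (my own illustration, not anything Metaculus actually does) that rescales the listed probabilities of mutually exclusive outcomes so they sum to 100%; the candidate names and numbers are hypothetical:

```python
def normalize(probabilities: dict[str, float]) -> dict[str, float]:
    """Rescale probabilities of mutually exclusive outcomes so they sum to 1."""
    total = sum(probabilities.values())
    return {outcome: p / total for outcome, p in probabilities.items()}

# Hypothetical example: listed probabilities sum to 104%.
raw = {"Candidate A": 0.45, "Candidate B": 0.40, "Candidate C": 0.19}
consistent = normalize(raw)
print(consistent)  # each entry scaled down by 1.04, so the sum is exactly 1.0
```

This is only the simplest possible cleanup rule; forcing forecasters to enter consistent numbers in the first place would presumably work differently.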