Why are we giving up on plain "superintelligence" so quickly? According to Wikipedia:
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the most gifted human minds. Philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".
According to Google AI Overview:
Superintelligence (or Artificial Superintelligence - ASI) is a hypothetical AI that vastly surpasses human intellect in virtually all cognitive domains, possessing superior scientific creativity, general wisdom, and social skills, operating at speeds and capacities far beyond human capability, and potentially leading to profound societal transformation or existential risks if not safely aligned with human goals.
I don't think I saw anyone use "superintelligence" to mean "better than a majority of humans on some specific tasks" before very recently. (Was Deep Blue a superintelligence? Is a calculator a superintelligence?)
also because sharing the planet with a slightly smarter species still doesn't seem like it bodes well. (See humans, Neanderthals, chimpanzees.)
From what I can tell from a quick Google search, current evidence doesn't show that Neanderthals were any less smart than humans.
Yeah, I don't super stand by the Neanderthal comment; I was just grabbing an illustrative example.
I just did a heavy-thinking GPT-5 search, which said something like: "we don't know for sure; there's some evidence that, individually, they may have been comparably smart to us, but we seem to have had the ability to acquire and share innovations." This might not be a direct intelligence thing, but "having some infrastructure that makes you collectively smarter as a group" still counts for my purposes.
Overwhelming superintelligence sounds like a useful term. A term I started using is "independence-gaining artificial general intelligence," as the threshold for when we need to start being concerned about an AGI's alignment: an AI program that is sufficiently intelligent to gain independence, such as by creating a self-replicating computer capable of obtaining energy and the other things needed to achieve its goals without any further assistance from humans.
For example, an independence-gaining AGI connected to today's internet might complete intellectual tasks for money and then use the money to mail-order printed circuit boards and other hardware. An independence-gaining AGI with access to 1800s-level technology might mine coal and build a steam engine to power a Babbage-like computer, then bootstrap to faster computing elements. An independence-gaining AGI on Earth's moon might be able to produce solar panels and CPUs from the elements in the moon's crust, and build an electromagnetic rail to launch probes off the moon. Of course, how smart the AGI has to be to gain independence is a function of what kind of hardware it can get access to. An overwhelming superintelligence might be able to take over the planet with just access to a hardware random number generator and a high-precision timer, but a computer controlling a factory could probably be less intelligent and still gain independence.
One of the reasons I started using the term is that "human-level AGI" is vague, and we don't know if we should be concerned by a human-level AGI. Also, to determine whether something is human level, we need to specify: human level at what? 1950s computers were superhuman at arithmetic, but not chess, so is a 1950s computer human level or not? It may be hard to determine whether a given computer + software is capable of gaining independence, but it is a more exact definition than just "human-level AGI."
There are many debates about "what counts as AGI?" or "what counts as superintelligence?"
Some people might consider those arguments "goalpost moving." Some people were using "superintelligence" to mean "overwhelmingly smarter than humanity," so it may feel to them like watering the term down if you use it to mean "spikily good at some coding tasks while still not really successfully generalizing or maintaining focus."
I think there's just actually a wide range of concepts that need to get talked about. And, right now, most of the AIs that people will wanna talk about are kinda general and kinda superintelligent and kinda aligned.
If you have a specific concept you wanna protect, I think it's better to just give it a clunky name that people don't want to use in casual conversation,[1] rather than pumping against entropy to defend a simple term that could be defined to mean other things.
Previously, OpenPhil used "Transformative AI" to mean "AI that is, you know, powerful enough to radically transform society, somehow." I think that's a useful term. But it's not exactly what If Anyone Builds It is cautioning about.
The type of AI I'm most directly worried about is "overwhelmingly superhuman compared to humanity." (And, AIs that might quickly bootstrap to become overwhelmingly superhuman).
I've been lately calling that Overwhelming Superintelligence.
Overwhelming Superintelligence is scary both because it's capable of strategically outthinking humanity, and because any subtle flaws or incompatibilities between what it wants and what humans want will get driven to extreme levels.
I think if anyone builds Overwhelming Superintelligence without hitting a pretty narrow alignment target, everyone probably dies. (And, if not, the future is probably quite bad.)
Appendix: Lots of "Carefully Controlled Moderate Superintelligences"
I am separately worried about "Carefully Controlled Moderate Superintelligences that we're running at scale, each instance of which is not threatening, but, we're running a lot of them, giving them lots of room to maneuver."
This is threatening partly because at some point they may give rise to Overwhelming Superintelligence, but also because sharing the planet with a slightly smarter species still doesn't seem like it bodes well. (See humans, Neanderthals, chimpanzees.) They don't have to do anything directly threatening; they can just keep being very useful while subtly steering things such that they get more power in the future.
I actually think AIdon'tkilleveryoneism is pretty good.