There are two distinct but compatible modes of superintelligence.

The first (arguably what we see right now with the latest NNs): super resource access, but not necessarily super skills. It operates no better, maybe even significantly worse, than a person who could read the entire Library of Congress, Wikipedia, and all open professional forums. But no person can do that, due to reading I/O speed and lifetime limitations, and that's what makes it "superintelligence". "Knowing more", the horizontal bar of the T-shaped person, etc. No pushing of boundaries, but a great ability to navigate within them.

The second: super skills, but not necessarily super resource access. You can do only X, but in X there neither is nor ever was anyone better. Pushing boundaries, inventing new styles and approaches, "doing better", the vertical bar of the T-shaped person, etc.

My question: is there already accepted terminology for these? I've always called them Wide and Tall myself (because of the T-shaped-person analogy, and also by association with strategy names).

I found a similar idea explored in a related post, but both Wide and Tall arguably fit the "quality" term there.


I reject this terminology; I think #2 is superintelligence and #1 is a different dimension.

Also, I would actually differentiate two kinds of #1. There's how much stuff the AI can reason about, which is generality (you can have a "narrow superintelligence" like a chess engine), and there's how much it knows, which is knowledge base / resource access. But I wouldn't call either of them (super)intelligence.

What would you call #1, then? It is certainly possible to achieve superhuman results using it alone. E.g., there have been problems in history that went unsolved because they required knowledge of a completely different area, and no human can have PhD-level knowledge of chemistry, biology, AND math while also happening to look at exactly the one problem requiring inputs from all three.

This isn't a problem for the AI, though: it may not be the best in each area, but if it has at least student-level knowledge of a hundred different topics, it can already achieve a lot just by combining them effectively.

I notice I fail to see a way to create a superhumanly Tall intelligence that would not quite quickly become very Wide. Either the Tall intelligence would be able to reach sideways and broaden itself, or it would be trivial to stack a whole bunch of Tall intelligences into a Tall phalanx and thus make it functionally Wide.

Wide artificial intelligence is simple, which is why we already have it. Making it Wider is easy, just time-consuming and full of legwork. Having a Taller-than-human intelligence would make the process much easier, because (to milk the metaphor dry) a Tall intelligence can see further and learn how to reach Wider.

AlphaGo is superhuman in Go and Go only. It will also be possible to make an AI that is very good at math but has no idea about the real world.

I agree; there is some innate "angle of repose" (continuing the tall/wide analogy) present in the structure of knowledge itself. The higher the concept we operate with, the more "base" knowledge is needed to support it. So the two axes aren't completely independent.

Mostly I was thinking about what to call these axes in conversation, so that it's clear what I'm talking about.

Going Durden: It might not be the best approach, but I've seen people use the term "Artificial Cleverness" for Wide-but-Short AI. Things like ChatGPT fit the bill perfectly: it is "clever" (quick but superficial at analyzing a broad set of data), but not "Tall" at all.

It sounds like the terms for humans would be generalist vs specialist.

Thank you! I'll add these for consideration. They're not exactly what I'm looking for, but close enough that it's difficult to put into words what they're missing.