Intelligence Is Not Magic, But Your Threshold For "Magic" Is Pretty Low

by Expertium
15th Jun 2025
2 min read
27 comments, sorted by top scoring
Viliam · 3mo

One person having an extreme level of a skill is already on the boundary of "business as usual". Having all of them at the same time would be a completely different game. (And on top of that, we can get the ability to do N of those things at the same time, or to do one of them N times faster, simply by using N times more resources.)

So when you imagine how powerful AI can get, you only have two options: either you believe that there is a specific human ability that the AI can never achieve, or it means that the AI can get insanely powerful.

As an intuition pump, imagine it as a superhero ability: the protagonist can copy abilities of any person he sees, and can use any number of those abilities in parallel. How powerful would the character get?

Cole Wyeth · 3mo

It seems that this model requires a lot of argumentation that is absent from the post and only implicit in your comment. Why should I imagine that AGI would have that ability? Are there any examples of very smart humans who simultaneously acquire multiple seemingly magical abilities? If so, and if AGI scales well past human level, it would certainly be quite dangerous. But that seems to assume most of the conclusion.

Explicitly, in the current paradigm this is mostly about training data, though I suppose that with sufficient integration that data will eventually become available. 

Anyway, I personally have little doubt that it is possible in principle to build very dangerous AGI. The question is really about the dynamics - how long will it take, how much will it cost, how centralized and agentic will it be, how long are the tails? 

It occurs to me that acquiring a few of these "magical" abilities is actually not super useful. I can replicate the helicopter one with a camera and the chess one by consulting an engine. Even if I could do those things secretly, i.e. cheat in chess tournaments, I would not suddenly become god emperor or anything. It actually wouldn't help me much.

5 isn’t that impressive without further context. And I’ve already said that the el chapo thing is probably more about preexisting connections and resources than intelligence. 

So, I’m cautious of updating on these arguments. 

Expertium · 3mo

Why should I imagine that AGI would have that ability?

Modern LLMs are already like that. They have expert or at least above-average knowledge in many domains simultaneously. They may not have developed "magical" abilities yet, but "AI that has lots of knowledge from a vast number of different domains" is something we already see. So I think "AI that has more than one magical ability" is a pretty straightforward extrapolation.

Btw, I think it's possible that even before AGI, LLMs will have at least 2 "magical" abilities:
- They're getting better at Geoguessr, so we could have a Rainbolt-level LLM in a few years; this seems like the most likely first "magical" ability IMO.
- Superhuman forecasting could be the next one, especially once LLMs become good at finding relevant news articles in real time.
- Identifying book authors from a single paragraph with 99% accuracy seems like something LLMs will be able to do (or maybe even already can), though I can't find a benchmark for that.
- Accurately guessing age from a short voice sample is something that machine learning algorithms can do, so with enough training data, LLMs could probably do it too.

Edmund Nelson · 3mo

I'll say this much:

Rainbolt-tier LLMs already exist: https://geobench.org/

AIs trained on Geoguessr are dramatically better than Rainbolt and have been for years.

Expertium · 3mo

Yes, I've seen that benchmark (I mean, I literally linked to it in my comment) and the video.

Regarding geobench specifically: the main leaderboard on that benchmark is essentially NMPZ (No Moving, Panning or Zooming). Gemini 2.5 Pro achieves an average score of 4085. That's certainly really good for NMPZ, but I don't think that's Rainbolt-tier. Rainbolt-tier is more like 4700-4800, if we want an LLM that has average-case performance equal to Rainbolt's best-case performance.

Also, LLMs can't do the "guess the country solely by pavement" thing like he can, so there's room for improvement.

John Huang · 3mo

We already deal with entities with theoretically limitless capabilities: they're called corporations, states, or organizations. Organizations are potentially ever-growing.

Of course if AI ever obtained superhuman abilities, the first place these abilities would be deployed is in a corporation or state. 

The great AI danger is a corporate danger. Wielding a corporation, the AI automatically obtains all the abilities of the individual humans making up a corporation, and AI can manipulate humanity the traditional way, through money. Any ability the AI lacks, well, AI can just hire the right people to fulfill that niche. 

If AI obtains state power, it will manipulate humanity through the other tradition, war and violence. 

Davidmanheim · 3mo

Organizations can't spawn copies for linear cost increases, can't run at faster than human speeds, and generally suck at project management due to incentives. LLM agent systems seem poised to be insanely more powerful.

Viliam · 3mo

Organizations can't even hire Elon Musk -- and if they could, they would gain at most 16 of his hours every day -- but if an AI obtains his skills, it will be able to simulate a thousand Elon Musks.

Adele Lopez · 3mo

Example 1: Trevor Rainbolt. There is an 8-minute-long video where he does seemingly impossible things, such as correctly guessing that a photo of nothing but literal blue sky was taken in Indonesia or guessing Jordan based only on pavement. He can also correctly identify the country after looking at a photo for 0.1 seconds.

To be clear, that video is heavily cherry-picked. His second channel is more representative of his true skill: https://www.youtube.com/@rainbolttwo/videos

1stuserhere · 3mo

Also, Rainbolt himself admits that this skill is largely down to deliberate practice. There are dozens of Geoguessr players better than him, though he is really good at NMPZ and at insta-sending the guess.

ChristianKl · 3mo

Example 4: Magnus Carlsen. Being good at chess is one thing. Being able to play 3 games against 3 people while blindfolded is a different thing. 

It isn't. To be good at chess, your brain needs to learn to represent game states in your head; to be good you need to read ahead many moves. That's the same skill you need to play blindfolded. People who have never tried to play blindfolded, or who are not good at chess, overestimate the amount of skill it takes. Some people who try playing blindfolded are surprised by how easy it is.

I would expect that explaining a complex situation of business politics to current o3-Pro and asking it "from a game theoretic perspective" what smart and maybe unconventional moves I can make in this situation, is already going to let you make political moves that are much stronger than what many people currently make.

Most of the time, humans are not really trying to use intelligence to make optimal, well-thought-out choices. You already get a huge leg up by actually trying to use intelligence to optimize all the choices you make.

Lukas Finnveden · 3mo

Example 2: Joaquín "El Chapo" Guzmán. He ran a drug empire while being imprisoned. Tell this to anyone who still believes that "boxing" a superintelligent AI is a good idea.

I think the relevant quote is: "While he was in prison, Guzmán's drug empire and cartel continued to operate unabated, run by his brother, Arturo Guzmán Loera, known as El Pollo, with Guzmán himself still considered a major international drug trafficker by Mexico and the U.S. even while he was behind bars. Associates brought him suitcases of cash to bribe prison workers and allow the drug lord to maintain his opulent lifestyle even in prison, with prison guards acting like his servants"

This seems to indicate less "running things" than what I initially thought this post was saying. It's impressive that the drug empire stayed loyal to him even while he was in prison, though.

Example 5: Chris Voss, an FBI negotiator. This is a much less well-known example, I learned it from o3, actually. Chris Voss has convinced two armed bank robbers to surrender (this isn't the only example in his career, of course) while only using a phone, no face-to-face interactions, so no opportunities to read facial expressions.

My (pretty uninformed) impression is that it's often rational for US hostage takers to surrender without violence, if they're fully surrounded, because the US police has a policy of not allowing them to trade hostages for escape, and violence will risk their own death and longer sentences. (Though maybe it's best to first negotiate for a reduced sentence?) If that's true, this is probably an example of someone convincing some pretty scary and unpredictable individuals to do the thing that's in their best self-interest, despite starting out in an adversarial situation, and while only talking over the phone. Impressive, to be sure, but it wouldn't feel very surprising that we have recorded examples of this even if persuasion ability plateaus pretty hard at some point.

Cole Wyeth · 3mo

What makes you think that those people were able to do those things because of high levels of intelligence? It seems to me that in most cases, the reported feat is probably driven by some capability / context combination that stretches the definition of intelligence to varying degrees. For instance I would guess that El Chapo pulled that off because he already had a lot of connections and money when he got to prison. The other examples seem to demonstrate that it is possible for a person to develop impressive capabilities in a restricted domain given enough experience. 

Mis-Understandings · 3mo

We are exactly worried about that, though. It is not that AGI will be intelligent (that is just the name), but that it can and probably will develop dangerous capabilities. Intelligence is the word we use to describe it, since it is associated with the ability to gain capability, but even if the AGI is sometimes kind of brute-force or dumb, that does not mean it cannot also have capabilities dangerous enough to beat us.

Cole Wyeth · 3mo

The post is an intuition pump for the idea that intelligence enables capabilities that look like "magic." 

It seems to me that all it really demonstrates is that some people have capabilities that look like magic, within domains where they are highly specialized to succeed. The only example that seems particularly dangerous (El Chapo) does not seem convincingly connected to intelligence. I am also not sure what the chess example is supposed to prove - we already have chess engines that can defeat multiple people at once blindfolded, including (presumably) Magnus Carlsen. Are those chess engines smarter than Magnus Carlsen? No.  

This kind of nitpick is important precisely because the argument is so vague and intuitive. It's pushing on a fuzzy abstraction (that intelligence is dangerous) in a way that seems convincing only if you've already accepted a certain model of intelligence. The detailed arguments don't seem to work.

The conclusion that AGI may be able to do things that seem like magic to us is probably right, but this post does not hold up to scrutiny as an intuition pump. 

Expertium · 3mo

The only example that seems particularly dangerous (El Chapo) does not seem convincingly connected to intelligence

I'd say "being able to navigate a highly complex network of agents, a lot of which are adversaries" counts as "intelligence". Well, one form of intelligence, at least.

Zack_M_Davis · 3mo

This point suggests alternative models for risks and opportunities from "AI". If deep learning applied to various narrow problems is a new source of various superhuman capabilities, that has a lot of implications for the future of the world, setting "AGI" aside.

Expertium · 3mo

Perhaps you think of intelligence as just raw IQ. I count persuasion as a part of intelligence. After all, if someone can't put two coherent sentences together, they won't be very persuasive. Obviously being able to solve math/logic problems and being persuasive are very different things, but again, I count both as "intelligence".

Of course, El Chapo had money (to bribe prison guards), which a "boxed" AGI won't have; that I agree with. But I disagree that it will make a big difference.

Cole Wyeth · 3mo

I intentionally avoided as much as possible the implication that intelligence is "only" raw IQ. But if intelligence is not on some kind of real-valued scale, what does any part of this post mean?

quetzal_rainbow · 3mo

If you want an example particularly connected to prisons, you can take the anarchist revolutionary Sergey Nechayev, who was able to propagandize his prison guards enough to connect with an outside terrorist cell. The only reason Nechayev didn't escape is that Narodnaya Volya was planning the assassination of the Tsar, and they didn't want an escape to interfere.

Mars_Will_Be_Ours · 3mo

I think that high levels of intelligence make it easier to develop capabilities similar to the ones discussed in 1 and 3-5, up to a point. (I agree that El Chapo should be discounted due to the porosity of Mexican prisons.) A being with an inherently high level of intelligence will be able to gather more information from events in their life and process that information more quickly, resulting in a faster rate of learning. Hence, a superintelligence will acquire capabilities similar to magic more quickly. Furthermore, the capability ceiling of a superintelligence will be higher than the capability ceiling of a human, so it will acquire magic-like capabilities impossible for humans to ever perform.

plex · 3mo

I think Critch's post on slow-motion videos as an intuition pump for x-risk points much more vividly to the extent we will be outclassed.

jmh · 3mo

It's an interesting post and on some levels seems both correct and, to me at least, somewhat common sense.

Still, I have a small tingle in the back of my head asking "is this magic really from intelligence, or something else?" Or perhaps from intelligence (perhaps not all that exceptional) plus something else. It seems like in a number of the cases we're presented with a somewhat narrow frame of the situation. If the magic is not highly correlated with, or better, largely a function of, intelligence, I wonder exactly how meaningful this is regarding ASI.

Bunthut · 3mo

Example 3: Stephen Wiltshire. He made a nineteen-foot-long drawing of New York City after flying on a helicopter for 20 minutes, and he got the number of windows and floors of all the buildings correct.

I think ~everyone understands that computers can do this. The "magical" part is doing it with a human brain, not doing it at all. Similarly, blindfolded chess is not more difficult than normal chess for computers. That may take a little knowledge to see. And "doing it faster" is again clear. So the threshold for magic you describe is not the one even the most naive people use for AI.

XelaP · 3mo

Can a computer do this? That is, take in the footage and output a drawing or a 3D model that accurate? I don't know what the SOTA of that sort of image processing is (and of course nowadays we have better ML models).

Bunthut · 3mo

A simple version of this is done for panoramic photos. If he looked at the city from a consistent direction throughout the flight, that's all that's needed. If the direction varied, it can't have varied a lot: he had to at least see the sides of the buildings he was drawing, if maybe from a different angle, and not all the buildings would have been parallel. That kind of rotation seems doable with current image transformers (and that's only necessary if the drawing has accurate angles even over long distances).

In any case, I don't think it matters to my argument if current ML can do it. All the parts that might be difficult for the computer are doable even for normal humans, and therefore not magical. The only thing that's added to the normal human skill here is perfect memory, which we know is easy for computers and have known for a long time.
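The rotation case mentioned above is the standard panorama model: a pure camera rotation induces a 3x3 homography H = K R K⁻¹ on the image plane, so pixels can be remapped between views with no 3D reconstruction at all. A minimal NumPy sketch of that remapping (the focal length and rotation angle are made-up illustration values, not from the comment):

```python
import numpy as np

# Assumed toy camera: focal length 1000 px, principal point at the origin.
f = 1000.0
theta = np.deg2rad(5.0)  # assumed small rotation about the vertical axis

K = np.array([[f, 0.0, 0.0],
              [0.0, f, 0.0],
              [0.0, 0.0, 1.0]])

# Rotation about the y-axis by theta.
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])

# Homography induced by a pure rotation: H = K R K^-1.
H = K @ R @ np.linalg.inv(K)

def warp_point(H, x, y):
    """Map an image point through the homography (homogeneous coordinates)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# The image center shifts horizontally by f*tan(theta), as expected for a pan.
print(warp_point(H, 0.0, 0.0))
```

This is exactly the warp panorama stitchers estimate from matched features between frames; it breaks down only when the camera translates relative to nearby objects, which is why a varying viewing direction over a long helicopter flight is the harder case.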

BryceStansfield · 3mo

"No matter how intelligent you are, you cannot override fundamental laws of physics."

Call me crazy, but I don't think this is necessarily true. While it's obviously true by definition that you can't break the fundamental laws of physics, the laws of physics almost certainly don't map neatly onto the mathematical symbol games a bunch of us monkeys derived to describe them. It would be wholly unshocking to me if, for example, something we view as "fundamental", like the laws of thermodynamics, turned out to be more an artifact of our models than a true facet of reality.

I definitely agree with the rest of your point, though. So long as humans aren't on the Pareto-optimal frontier of the energy/intelligence tradeoff (and I really doubt that's the case), you should expect intelligent machines to be strictly superior to us in all meaningful capabilities.


A while ago I saw a person in the comments to Scott Alexander's blog arguing that a superintelligent AI would not be able to do anything too weird and that "intelligence is not magic", hence it's Business As Usual.

Of course, in a purely technical sense, he's right. No matter how intelligent you are, you cannot override fundamental laws of physics. But people (myself included) have a fairly low threshold for what counts as "magic," to the point where other humans (not even AI) can surpass that threshold.

Example 1: Trevor Rainbolt. There is an 8-minute-long video where he does seemingly impossible things, such as correctly guessing that a photo of nothing but literal blue sky was taken in Indonesia or guessing Jordan based only on pavement. He can also correctly identify the country after looking at a photo for 0.1 seconds.

Example 2: Joaquín "El Chapo" Guzmán. He ran a drug empire while being imprisoned. Tell this to anyone who still believes that "boxing" a superintelligent AI is a good idea.

Example 3: Stephen Wiltshire. He made a nineteen-foot-long drawing of New York City after flying on a helicopter for 20 minutes, and he got the number of windows and floors of all the buildings correct.

Example 4: Magnus Carlsen. Being good at chess is one thing. Being able to play 3 games against 3 people while blindfolded is a different thing. And he also did it with 10 people. He can also memorize the positions of all pieces on the board in 2 seconds (to be fair, the pieces weren't arranged randomly, it was a snapshot from a famous game).

Example 5: Chris Voss, an FBI negotiator. This is a much less well-known example, I learned it from o3, actually. Chris Voss has convinced two armed bank robbers to surrender (this isn't the only example in his career, of course) while only using a phone, no face-to-face interactions, so no opportunities to read facial expressions. Imagine that you have to convince two dudes with guns who are about to get homicidal to just...chill. Using only a phone. And you succeed.
So if you think, "Pfft, what, AI will convince me to go from hostile to cooperative within minutes, after a little chit-chat?" well, yes, it just might.

Examples 2 and 5 are especially relevant in the context of controlling AI. So if you are surprised by these examples, you will be even more surprised by what a superintelligent AI can do.

Intelligence is not magic. But if even "natural" intelligence can get you this far, imagine what an AGI can do.

Mentioned in: AI #122: Paying The Market Price