I want to investigate with you whether my belief about intelligence is rational or irrational. Beware: it may turn out to be based on wishful thinking.

To put it crudely: I believe there should be a "cheat code" for high intelligence (problem-solving skills) for people.

Below is my evidence for this. For me this evidence is abstract (not based on very specific facts), but strong. Basically, all of it boils down to what I perceive as conflicts between everyday subjective experience and the idea of intelligence. Those conflicts are the "cruxes of my feelings" that push me toward the conclusion.

And if my opinion turns out to be rational enough (for common enough sets of priors), then I think it's important: it means there's a significant chance that people have a yet-unknown way to boost their "learning to learn" ability. Finding this way would be as important as learning rationality.


Conflict 1

99% of your experience is subjective experience and 1% is problem solving, because even when you're solving problems you have subjective experience about it.

For me, problem solving is like a drop in the ocean of experience. Like the rational numbers on the real number line: scattered everywhere, yet of measure zero.

Intelligence appears to be related to subjective experience (you may need to think hard, or do quite a bit of world modeling and self-reflection and whatnot, in order to get certain feelings). And at the same time there appears to be a large disconnect between subjective experience and intelligence. I find that strange.

Crazy analogy: if subjective experience were like mass and problem solving were like energy, there would be a neat equivalence between them (an E = mc² for minds).


Conflict 2

I think there's another conflict between subjective experience and intelligence, because of the self-defining properties of subjective experience.

Imagine you're solving a problem:

  • If you feel that a problem is hard, it's objectively hard inside of your subjective experience.
  • If you don't use all the available information to solve the problem, that's because this information just doesn't exist inside of your subjective experience.

In a way, having consciousness is like feeling that you're maximally smart. From the outside it looks like people solve problems with different efficiency, but from the inside everyone's using the most effective method available to them. And in this respect people's subjective experiences are similar.

So I think there's a big tension between the nature of consciousness and the idea of levels of intelligence. Is the subjective experience of a smarter person fundamentally equivalent to the experience of a way less smart person? If "no", can you become smarter by changing your subjective experience?

Vague analogy: the equivalence principle from general relativity.


Boundaries

There should be boundaries between different levels of intelligence. Approaching those boundaries should feel like something or have other effects, the same way approaching the limit of your strength feels like something and has other effects: if it didn't feel like anything and didn't have any other effects, there would be no limits (psychological or physical) to your strength.

I can imagine the effects of the boundary between your intelligence and the intelligence of an entity with a brain the size of a planet. For example, just the working memory it allocates for solving some "minor" problem could wipe out all of your long-term memories. Well, all of your synaptic weights.

But I can't imagine the feel and the effects of the boundary between your intelligence and the intelligence of Einstein. What would happen to you if you tried to approach Einstein's level, Einstein's ideas? People do learn those ideas, after all.

Note: the struggle of trying to solve a hard problem is likely NOT an example of the boundary between intelligence levels. Explaining why you can't solve a certain problem even by pushing yourself is way easier than explaining why there's a ceiling on your intelligence (if there is such a thing).


Abstract thinking

Over the course of our lives we form A LOT of meaningful connections in our brains: all kinds of memories, impressions, intuitions, ideas, plans, feelings. But those connections don't seem to be utilized in problem solving. Why is this so? I think that's strange. The usual answer is that problem solving is about abstract thinking, and those connections are not abstract.

But I don't think universal levels of abstraction exist. Different areas of math abstract things in different ways; when you invent new math and when you invent new physics you do different kinds of abstraction. I believe that any human uses concepts that are "latently" very abstract (at least as abstract as the concepts of calculus or of set theory/group theory). There's just no incentive to make those concepts explicitly abstract.

Rephrasing one of the previous points: problem solving is related to 1% of people's experience.

I don't think those are fair conditions in which to judge people's potential. Such conditions a priori give a person only a very small chance to develop their thinking skills. We live in fundamentally unfair conditions, and that's another suspicious thing about intelligence.

Note: when you think about an AI that can build specialized AIs, the idea of "abstract thinking" becomes even less clear. For example, some "player AI" could build a "Go genius" (AlphaGo) and a "chess genius" (AlphaZero) without ever developing abstract concepts applicable to both chess and Go. Sometimes the learning process is more abstract than the result of learning; sometimes they're equally abstract.


Abstractions (2)

I think, similar to "common sense", people have some "common level of abstraction". At that level all concepts are equally abstract and don't form a hierarchy. I think something like this is required in order to have human-level awareness. Human thinking is like the surface of a bubble: you can increase the surface of the bubble (adding more concepts, more tools), but conscious thinking exists only on the surface; it has no inner or outer layers.

Example 1. You can define natural numbers as nested empty sets. Does that mean the idea of nested empty sets is more abstract than the idea of natural numbers, outside of a specific mathematical context?

I think not. In fact it may even be less abstract, because it's more convoluted, and because it's only one of many approaches to natural numbers.
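
For concreteness, here is the standard construction being alluded to (the von Neumann encoding, a textbook fact rather than anything original to this post):

    0 = ∅
    1 = {∅} = {0}
    2 = {∅, {∅}} = {0, 1}
    n + 1 = n ∪ {n}

Each number is "built out of nothing", but you need a fair amount of set-theoretic machinery before this looks like counting at all.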

Example 2. You can take a simple naive definition of fractions, such as "parts of a whole". Or you can take a more abstract definition, such as "equivalence classes of ordered pairs of ring elements (with nonzero second component), under suitable operations". Is the latter definition more abstract outside of the mathematical context?

I think it's less abstract, because it's built from more specific concepts. I'm not even sure those two definitions describe the same thing: the philosophical idea of a fraction isn't the same as the mathematical concept.
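
For reference, the formal construction behind that second definition goes roughly like this (standard field-of-fractions algebra; this is my paraphrase, not the post's wording): take ordered pairs (a, b) with b ≠ 0, and set

    (a, b) ~ (c, d)  iff  a·d = b·c
    (a, b) + (c, d) = (a·d + b·c, b·d)
    (a, b) · (c, d) = (a·c, b·d)

A "fraction" is then an equivalence class of such pairs. Notice how many prior concepts (rings, equivalence classes, zero divisors) the definition quietly leans on.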

Example 3. Or consider the intuitive idea of distance versus the formal definition of a metric. Why should an idea that depends on more concepts and longer chains of reasoning count as more abstract?
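
For comparison, here is the standard formal definition: a metric on a set X is a function d : X × X → ℝ such that, for all x, y, z in X,

    d(x, y) ≥ 0, and d(x, y) = 0 iff x = y
    d(x, y) = d(y, x)
    d(x, z) ≤ d(x, y) + d(y, z)

Three short axioms, but each one presupposes sets, functions, and the real numbers, while the intuitive idea of distance presupposes none of these.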

When a white horse is not a horse

So, I believe levels of abstraction are relative, or even nonexistent at a certain level of thinking. And this is yet another suspicious thing about intelligence. We need a general relativity of general intelligence!


Properties of subjective experience

Do subjective experiences have properties?

Is the experience of eating food fundamentally different from the experience of loving a person?

If the space of subjective experiences is somewhat similar to a mathematical space, does this space have any interesting properties?

You can have thoughts about other thoughts (in many different ways). Can you have subjective experiences about subjective experiences? In what ways?

Does subjective experience itself contain knowledge?

If the answer is "yes", it would completely change our understanding of intelligence and its role.

If subjective experience contains knowledge, it means that knowledge exists even for sentient beings that live in extremely chaotic worlds, worlds where the concepts of physics and math and even logic don't exist. Isn't that crazy?

Understanding that knowledge could be cooler than understanding the physics of the Big Bang. I meant hotter.

And such knowledge would lead to a better theory than preference utilitarianism, because it would give us a more fundamental concept than "preference". This could help with aligning a conscious AGI: if caring about people is fundamentally different from making paperclips, that gives us some ideas for alignment.


Personal evidence

Sometimes I think "It's time to quit this ridiculous coping. The world is probably not so interesting/convenient for me". But then I remember other people. And I feel that my memory of them (their differences and similarities) does contain some special knowledge. I can't be sure, I'm not a professional, but I think math, physics, and statistics don't have a known/popular concept for describing those differences. Otherwise it would be known and applied (if the differences exist).

My favorite subjective experiences are these two:

Number 1: thinking about different things related to people (faces, writing styles, characters).

Number 2: thinking about different "worlds"/places/videogame levels (and magic realism paintings). People are important and interesting a priori, but I can't explain why the worlds/places are so interesting. Maybe because visual information is interesting, and the idea of a "place" gives this information a lot of meaning. The idea of a "place" is like bones for the raw meat of visual experience.

By the way, that's the reason I'm so curious about math and physics: those fields have so many concepts that feel interesting. Fields, spaces, frames of reference, functions, groups of symmetries. And they add "bones" (specific conditions/rules) to those vague concepts. However, those fields then become only about the hard bones, and I'm interested in the intersection of soft vague concepts and hard bones. Reasoning about unusual differences between people and places gives me this, but I feel like I don't get enough of it and am missing something. I feel like I haven't managed to hit the perfect proportion of bones and meat. Is it possible to hit that sweet spot? What knowledge does it contain? Can it change the way you think?

So, here's another crux of my belief: the biggest areas of intelligence, Math and Physics, don't feel like they cover everything. That's suspicious.

Books like Alice in Wonderland and Through the Looking-Glass show a little bit of what's possible when you combine fuzzy/magic stuff and logical rules. But I don't believe that's the limit. There should be some "math/logic magic realism". Recreational mathematics is not that. Philosophy (e.g. the Raven paradox) is sometimes like that, but not often enough.


P.S.: there's a classic short story, "Mimsy Were the Borogoves" by Henry Kuttner and C. L. Moore, included in The Science Fiction Hall of Fame as one of the best stories published before 1965. It's about toys that can change the way a child thinks.

I didn't mention IQ statistics (or child prodigies) in this post because for me the observations I described are more fundamental.

P.P.S.: I do believe that AGI is an existential threat and that it could solve problems better than humans. Yet I still wonder why people don't solve problems better, or why there aren't more areas of math and logic.


Happiness (bonus)

This part is not about evidence (hence "bonus"), unless you allow some forms of wishful thinking. And maybe I won't be able to articulate my feelings clearly.

When I think about preference utilitarianism, I think about this question:

How do you want to make people happy without making their experience meaningful?

People live their own stories. But for various reasons some stories end up not making sense, or end up devalued. A person may even get attached to an already broken story, locked in a contradiction. What are you gonna do with those stories, throw them out and grant people "A Brand New Story, this time around it will definitely make sense"™?

And one person's story may depend on destroying the meaning of someone else's story, because both stories can't be true at the same time. "But how can someone win if winning means that someone loses?"

Preference utilitarianism utopia reads like this to me:

If you're good and smart enough, you get a reward.

If you're a not-so-good-and-smart dirtbag, your brain gets updated into that of the nearest smart/good person. You're not a (very) bad person, you just don't make sense.

Is it fair to treat people's experience like that?

Of course, you can raise those questions from inside the preference utilitarianism framework. But I find it strange that those questions are not central to the theory. It says that a person's experience is the most important thing, yet the most important questions about people's experience are not so important to the theory. To me this is an intrinsic conflict, a bit of a contradiction.

If preferences are the only thing that exists, then:

  • True Hell exists and will always exist. And it is equivalent, in some sense, to Heaven (just not preferred). A meaningless story is fundamentally equivalent to a meaningful one. A meaningful story can always be turned into a meaningless one, and there's nothing greater that can't be destroyed or subverted.

And also that:

  • Subjective experience of an irredeemably bad person who directly hurts everyone around is fundamentally equivalent to the subjective experience of a good person.

Is it fair to all the hurt people? Or even to the "irredeemably bad" person, if they aren't 100% bad and would want to be good, but just can't jump out of their head 'cause "good" and "bad" are equivalent from the inside?

For me this is not a desirable conclusion. I want to know if another conclusion is possible.


How can you grant people's experience meaning and value using a system that assumes a fundamental equivalence between "good" and "evil"? It doesn't make sense right from the start.

Is the experience of a bad person fundamentally equivalent to the experience of a good person? Is the experience of a smarter person fundamentally equivalent to the experience of a way less smart person? For me this is another mystery of equivalence and a conflict.

I don't remember Flowers for Algernon well enough to recall how the book described experience at different levels of intelligence.


g factor and IQ (bonus)

This part is a "bonus" because I hadn't thought about this point before.

G_factor_(psychometrics)#Theories

I think the idea of a general intellectual ability that:

  • Helps you with all the most important problems
  • Can't be increased by learning
  • Significantly differs among people
  • (Optional) Can be detected by a simple/universal test
  • (Optional) Especially by a test that doesn't itself increase this ability and can't be made trivial by training

is just really weird, and it has some very weird implications about the nature of intelligence and the landscape of intellectual problems.


If you think that tests (like IQ tests) measure something real/important, it leads to some weird ideas, for example that solving Raven's matrices is the pinnacle of intellectual activity (there are other activities, but this is the most general and important thing: no other activity that you learn gives you anything fundamentally newer or greater). To avoid such weirdness you need to immediately assume some additional hypotheses:

  • The correlation between mental abilities can be broken (e.g. at high IQ).
  • The different problems that measure IQ are equivalent ("indifference of the indicator").
  • The set of test problems captures all the different "pieces" of g.

But I'm not sure those additional hypotheses get rid of the weirdness. If the correlation between mental abilities can be broken, then why doesn't learning break it; why does only high IQ break it? If the different problems that measure IQ are equivalent, then why does this class of equivalent problems matter so much, and why is it so fundamental? How is "indifference of the indicator" even possible? And if some set of problems captures different "pieces" of g, then why does this set capture all of the pieces?
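
To make the pattern concrete, here is a minimal toy simulation (my own illustration; the weights and sizes are arbitrary assumptions, not anything from the psychometrics literature). A single shared factor is enough to make otherwise independent test scores correlate:

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_tests = 10_000, 5

    # One shared "general" factor per person, plus independent test-specific noise.
    g = rng.normal(size=(n_people, 1))
    noise = rng.normal(size=(n_people, n_tests))

    # With these weights, roughly half of each score's variance comes from g.
    scores = 0.7 * g + 0.7 * noise

    # np.corrcoef treats rows as variables, hence the transpose.
    # Every off-diagonal correlation comes out near 0.5, purely because of g.
    print(np.corrcoef(scores.T).round(2))

The sketch only shows how a shared factor produces correlations; it says nothing about whether such a factor is real, which is exactly the question above.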

So, in a way, for me the evidence of the importance of IQ would mostly mean the evidence that our understanding of intelligence is deeply broken.

Pascal's mugging

Pascal's mugging

This may be another component of my belief/decision:

  • A world where the intelligence of different people isn't equal, and where subjective experience is useless and gives nothing (and leads to nothing), is lowkey not worth living in.
  • And such a world is unlikely to survive anyway. There are a lot of things that can kill "almost anyone". And if I understand correctly, we're definitely going to die from AGI soon.
  • Survival is tied to two unsolvable problems: "make people agree with each other" and "solve Alignment".

In a safer world, where our lives didn't depend on impossible problems, I would think I had some other choice; but in this world I think my only choice is to seek cracks in the intelligence ceiling.

Comments

I don't agree or disagree. You have interesting ideas, but they don't seem cohesive to me. I suggest giving them more thought, applying very specific hypotheses, and outlining arguments pro and contra.

Even if my ideas are vague, shouldn't rationality be applicable even at that stage? The idea of levels of intelligence (or hard intelligence ceilings) isn't very specific either. "Are there unexpected/easy ways to get smarter?" People should have some opinions about that even without my ideas. It's safe to assume Eliezer doesn't believe there's an unknown way to get smarter (or that it's easier to find such a way than to solve the Alignment problem).

My more specific hypotheses are related to guessing what such a way might be. But that's not what you meant, I think.

I might (ironically) not be smart enough to follow what your broader point is, but I do think there's something interesting here. Maybe in a later post, if you can clearly and narrowly hone in on your premise ("I believe there should be a "cheat code" for high intelligence (problem-solving skills) for people."), you would have something that would generate a very good conversation.

With regard to what you said about the g-factor and the correlation between mental abilities: this lack of correlation does not only happen among high-IQ individuals. For instance, people with autism spectrum disorder can often have average or superior IQ scores when it comes to perceptual reasoning, while their scores on other aspects of g are below average. The most extreme example of this is autistic savants.

No, the WISC-IV doesn’t underestimate the intelligence of children with autism. | Assessing Psyche, Engaging Gauss, Seeking Sophia (wordpress.com)

In this post I described the information I use to reach the conclusion. I'm afraid I don't know rationality well enough to make it clearer (or to investigate myself whether my belief is rational). So one of my later posts will likely be about some of my specific ideas.

About the g-factor. I can imagine a weak person who has an extremely strong leg. I would think that such a person isn't "generally" strong, because I already have an idea of what a generally strong (and above-average) person looks like.

But with IQ tests, I'm not starting from believing that they measure general intelligence. Maybe I don't even have a good idea of what a generally intelligent (and above-average) person should look like. So the fact that there are multiple different ways to break the correlation makes me doubt IQ more.