Learning is (Asymptotically) Computationally Inefficient, Choose Your Exponents Wisely

My actual claim is that learning becomes harder not just linearly or quadratically (e.g. when you have to spend, say, an extra hour on learning the same amount of new material compared to what you used to need), but exponentially (e.g. when you have to spend, say, twice as much time and effort as before on learning the same amount of new material).

This is an interesting claim, but I'm not sure that it matches my own subjective experience. Then again, I haven't dug deeply into math, so maybe it's more true there; it also seems to me that this could vary by field, with some fields being more "broad" while others are more "deep".

And looking around, e.g. this page suggests that at least four different kinds of formulas are used for modeling learning speed, apparently depending on the domain. The first one is the "diminishing returns" curve, which sounds similar to your model:

From the source: "This describes a situation where the task may be easy to learn and progression of learning is initially fast and rapid."

But it also includes graphs such as the s-curve (where initial progress is slow but then you have a breakthrough that lets you pick up more faster, until you reach a plateau) and the complex curve (with several plateaus and breakthroughs).

From the source: "This model is the most commonly cited learning curve and is known as the “S-curve” model.  It measures an individual who is new to a task. The bottom of the curve indicates slow learning as the learner works to master the skills required and takes more time to do so. The latter half of the curve indicates that the learner now takes less time to complete the task as they have become proficient in the skills required. Often the end of the curve begins to level off, indicating a plateau or new challenges."


From the source: "This model represents a more complex pattern of learning and reflects more extensive tracking."
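The curve families described above can be sketched with standard functional forms. The exact formulas the linked page uses may differ; a bounded-exponential for the "diminishing returns" curve and a logistic for the "S-curve" are common conventional choices, and the parameter values here are arbitrary:

```python
import math

def diminishing_returns(t, rate=0.5, ceiling=1.0):
    """Fast early progress that levels off: y = ceiling * (1 - e^(-rate*t))."""
    return ceiling * (1.0 - math.exp(-rate * t))

def s_curve(t, rate=1.0, midpoint=5.0, ceiling=1.0):
    """Slow start, breakthrough, then plateau: a logistic curve."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Diminishing returns: steepest at t=0, flattening afterwards.
print([round(diminishing_returns(t), 2) for t in range(0, 10, 2)])
# S-curve: slow start, rapid middle, plateau near the ceiling.
print([round(s_curve(t), 2) for t in range(0, 10, 2)])
```

The "complex" curve with several plateaus and breakthroughs could then be modeled as a sum of several such logistic curves with different midpoints.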

Why do I consider this apparent phenomenon of inefficient learning important? One example I see somewhat often is someone saying, "I believe in AI alignment research and I want to contribute directly, and while I'm not that great at math, I can put in the effort and get good." Sadly, that is not the case. Because learning is asymptotically inefficient, you will run out of time, money, and patience long before you get to the level where you can understand, let alone do, the relevant research. It's not a matter of having to work 10 times harder; it's a matter of having to take longer than the age of the universe, because your personal exponent eventually gets that much steeper than that of someone with a natural aptitude for math.

This claim seems to run counter to the occasionally encountered claim that people who are very talented at math may at some point be outperformed by less talented students: the very talented hit their wall later, without having picked up the patience and skills for dealing with it, whereas the less talented ones will by then be used to slow but steady progress. E.g. Terry Tao mentions it:

Of course, even if one dismisses the notion of genius, it is still the case that at any given point in time, some mathematicians are faster, more experienced, more knowledgeable, more efficient, more careful, or more creative than others. This does not imply, though, that only the “best” mathematicians should do mathematics; this is the common error of mistaking absolute advantage for comparative advantage. The number of interesting mathematical research areas and problems to work on is vast – far more than can be covered in detail just by the “best” mathematicians, and sometimes the set of tools or ideas that you have will find something that other good mathematicians have overlooked, especially given that even the greatest mathematicians still have weaknesses in some aspects of mathematical research. As long as you have education, interest, and a reasonable amount of talent, there will be some part of mathematics where you can make a solid and useful contribution. It might not be the most glamorous part of mathematics, but actually this tends to be a healthy thing; in many cases the mundane nuts-and-bolts of a subject turn out to actually be more important than any fancy applications. Also, it is necessary to “cut one’s teeth” on the non-glamorous parts of a field before one really has any chance at all to tackle the famous problems in the area; take a look at the early publications of any of today’s great mathematicians to see what I mean by this.

In some cases, an abundance of raw talent may end up (somewhat perversely) to actually be harmful for one’s long-term mathematical development; if solutions to problems come too easily, for instance, one may not put as much energy into working hard, asking dumb questions, or increasing one’s range, and thus may eventually cause one’s skills to stagnate. Also, if one is accustomed to easy success, one may not develop the patience necessary to deal with truly difficult problems (see also this talk by Peter Norvig for an analogous phenomenon in software engineering). Talent is important, of course; but how one develops and nurtures it is even more so.

What posts do you want written?

As a response to this request, I wrote something here.

What are some beautiful, rationalist artworks?

They certainly apply, but the formulation of the instrumental convergence thesis is very general, e.g. as stated in Bostrom's paper:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by many intelligent agents.

That only states that those instrumental values are likely to be pursued by many agents to some extent, depending on how useful they are for fulfilling the ultimate values of the agents. But there's nothing to say that it would be particularly useful for the goals of most humans to pursue them to the point of e.g. advancing space colonization.

Things are allowed to be good and bad at the same time

I was pointed to this great article that makes the same point with several additional examples, e.g.:

If you take suffering seriously (farmed chickens, children in poor countries, etc), you're in a lot of trouble—because there's a lot of suffering, and suffering is very important. So you should drop whatever you're doing, and start doing something about suffering. Or at the very least, you should donate money.

Same if you take politics seriously. Same if you take many other things seriously.

The easy solution is to say: "Those aren't that important". I've been doing that for years. "Actually, I don't care about chickens, or any other animals".

With synthesis, I have arrived at a much better solution:

Things are important
and I won't work on them
and it doesn't make me a bad person

This means that I can care about chickens now—because caring about chickens, or poor children, or anything, no longer compels me to start doing something to help them. This has amazing long-term implications:

  • I am more likely to help chickens in the future, because this is easier when I care;
  • I am more likely to spend time helping whoever I want, become good at it, level up at various skills like "execution", and if I decide to help chickens in the future, I will be more efficient at that.
Things are allowed to be good and bad at the same time

If you now had to make a decision on whether to take the job, how would you use this electrifying zap to help you make the decision?

My current feeling is that I'd probably take it. (The job example was fictional, as the actual cases where I've used this have been more personal in nature, but if I translate your question into those contexts and then translate the answer back, it comes out as "I'd take it.")

What are some beautiful, rationalist artworks?

While it's technically possible to have a preference that doesn't value things that can be made out of galaxies, it would be shocking if there is a statistically significant number of humans whose correct idealization has that property.

I have pretty broad uncertainty about whether "people's correct idealization" is a useful concept in this kind of context, and, assuming that it is, about what those idealizations would value. It seems to me that they might incorporate a fair amount of path dependence, with different, equally correct idealizations arriving at completely different ultimate outcomes.

which makes habryka's appeal to values relevant, where it would be a much weaker argument if we were only discussing aesthetic preference.

I tend to think that (like identities) aesthetics are something like cached judgements which combine values and strategies for achieving those values.

What are some beautiful, rationalist artworks?

I hesitated a little on whether to post this, given that it has been pointed out that the curvature of a ringworld wouldn't actually be this obvious from the inside, so posting a picture that depicts something physically impossible is in tension with the spirit of rationality.

Still, after almost posting this, then deleting it, then feeling like I wanted to post it anyway, I decided to just do it, as I feel that it captures a combination of joy and love of life co-existing with, and made possible by, science and rationality.
