• Abstractions are important.
  • Abstractions are functions that map a high-dimensional space to a low-dimensional space. They have more possible inputs than possible outputs, so abstractions have to shed some information (see the sketch after this list).
  • Abstractions filter out useless information, while keeping useful information.
  • In order for there to be such a thing as "useful information", there must be some goal(s) being pursued.
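
A minimal sketch of the first two bullets, in Python. Everything here is illustrative: the `Tree` fields, the `climbable` threshold, and the rhino-escape goal (borrowed from a comment below) are made up, not part of the post.

```python
from dataclasses import dataclass

@dataclass
class Tree:
    """A hypothetical high-dimensional description of one tree."""
    height_m: float
    leaf_count: int
    bark_texture: str

def climbable(tree: Tree) -> bool:
    """An abstraction: many distinct Tree values map to a single bit.

    If the goal is escaping a charging rhino, height is the useful
    information; leaf count and bark texture get shed.
    """
    return tree.height_m >= 3.0

# Two very different trees collapse to the same output, so the mapping
# cannot be inverted: information has been discarded.
oak = Tree(height_m=12.0, leaf_count=40_000, bark_texture="rough")
pine = Tree(height_m=5.0, leaf_count=9_000, bark_texture="smooth")
assert climbable(oak) == climbable(pine) == True
```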

You might argue that "abstraction" only means preserving information used to predict faraway observations from a given system. However, coarse modeling of distant objects is often a convergent subgoal for the kinds of organisms that Nature selects for.

The scout does not tell the general about the bluejays he saw. He reports the number of bombers in the enemy's hangar. Condensation of information always selects for goal-relevant information. Any condensation of information implies that the omitted information is less goal-relevant than the reported information; there is no abstraction without a goal.

27 comments

I think this is exactly right. Without some goal, purpose, or even just a norm to be applied, there's nothing to power knowing anything, since knowing is, at its ground, about picking and choosing what goes into what category in order to separate the world into things.

Probably it makes sense to emphasize that it's the selection of the abstraction that implies a goal, not the use of the abstraction. If an abstraction shows up in an optimised thing, that's evidence that whatever optimised it had a goal.

That's true. But do abstractions ever show up in non-optimized things? I can't think of a single example.

The set of things not influenced by any optimisation process is pretty small - so we'd probably have to be clearer in what counts as "non-optimized". (I'm also not sure I'd want to say that selection processes need to have a 'goal' exactly.)

It strikes me that the argument you're making might not say much about abstraction specifically - unless I'm missing something essential, it'd apply to any a-priori-unlikely configuration of information.

The set of things not influenced by any optimisation process is pretty small - so we'd probably have to be clearer in what counts as "non-optimized". (I'm also not sure I'd want to say that selection processes need to have a 'goal' exactly.)

Both good points. "Goal" isn't the best word for what selection processes move towards.

It strikes me that the argument you're making might not say much about abstraction specifically - unless I'm missing something essential, it'd apply to any a-priori-unlikely configuration of information.

Besides just being an unlikely configuration of information, abstractions destroy sensory information that did not previously have much of a bearing on actions that increased fitness (or is "selection stability" a better term?).

Abstraction is a compression algorithm for a computationally bounded agent; I don't see how it is related to a "goal", except insofar as a goal is just another abstraction, and they all have to work together for the agent to maintain a reasonably faithful internal map of the world.

Yes, abstraction is compression, but real-world abstractions (like trees, birds, etc.) are very lossy forms of compression. When performing lossy compression, you need to ask yourself what information you value.

When compressing images, for example, humans usually don't care about the values of the least-significant bits, so you can round all 8-bit RGB intensity values down to the nearest even number and save yourself 3 bits per pixel in exchange for a negligible degradation in subjective image quality. Humans not caring about the least-significant bit is useful information about your goal, which is to compress an image for someone to look at.
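
A rough sketch of that least-significant-bit trick, assuming pixels are 8-bit RGB triples (the function name is just for illustration):

```python
def drop_lsb(pixel: tuple) -> tuple:
    """Lossy compression: clear the least-significant bit of each 8-bit
    channel, i.e. round it down to the nearest even value.

    Saves 1 bit per channel (3 bits per RGB pixel), discarding exactly
    the information the viewer is assumed not to care about.
    """
    return tuple(channel & ~1 for channel in pixel)

# Two pixels that differed only in their low bits become identical,
# so the original values cannot be recovered.
assert drop_lsb((181, 52, 7)) == drop_lsb((180, 53, 6)) == (180, 52, 6)
```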

I think it's not a coincidence that the high-order bits are the ones that are preserved by more physical processes. Like, if you take two photos of the same thing, the high order bits are more likely to be the same than the low order bits. Or if you take a photo of a picture on a screen or printed out. Or if you dye two pieces of fabric in the same vat.

I'm not saying you couldn't get an agent that cared about the low-order bits and not the high-order bits, and if you did have such an agent maybe it would find abstractions that we wouldn't. But I don't think I'm being parochial when I say that would be a really weird agent.

The argument given by the OP seems valid to me ... that is the reason to believe that abstractions relate to goals.

Goals are not abstractions in the sense of compressions of an existing territory. When Kennedy asserted the goal of putting a man on the Moon, he was not representing something that was already true.

The scout does not tell the general about the bluejays he saw. He reports the number of bombers in the enemy's hangar.

Number of enemy bombers seems more relevant than number of bluejays for predicting most far-future variables to me? E.g. who will control the land, how much damage there will be to existing things in the area, etc. Maybe it's because I'm not an ornithologist, so I don't know anything about bluejays. But I'd think humans tend to be the dominant force in influencing an area, with bluejays exerting only negligible influence.

This implies that you care about things like "who owns the land", "are the buildings intact", et cetera. The information you care about leaks information about your values.

"Who owns the land" has influences on many far away variables, as those who own the land can implent policies about what to do with the land. Similarly, "Are the buildings intact?" has influences on many far away variables, because it determines whether the people who live on the land continue to live on the land, and people who live in a place are the ones who influence the place the most.

If I wanted to understand the long-term future of an area that was currently at war, I'd want to know the information relevant for who wins the war and how destructive the war is, as that has a lot of effects. Meanwhile I don't know of any major effects of bluejays.

there is no abstraction without a goal.

This isn't immediately obvious. What goal is necessary for 'trees' (in general) as opposed to individual trees?

There are lots of goals that are helped by having the abstraction "tree", like "run to the nearest tree and climb it in order to escape the charging rhino". My point was that the set of goals that are helped by having the abstraction "tree" is smaller than the set of all possible goals, so if we know that the abstraction "tree" is useful to you, we have more information about your goals.

Good point! Hadn't thought of it this way before, but totally agree.

Being a stickler for generalization, I could believe that for any naturally occurring abstraction there is a goal behind it, in a "no smoke without fire" kind of way. However, if you brute-force through all the possible ways to abstract, I am less sure that those variants that do not have natural occurrences have an associated goal. For example, what is the goal of an abstraction that lumps bombers and bluejays together?

I am less sure that those variants that do not have natural occurrences have an associated goal.

The abstractions that do not occur naturally do not prioritize fitness-relevant information. You could conceive of goals that they serve, but these goals are not obviously subgoals of fitness-maximization.

This seems tautological? If the military scout returns with reports of birds and ants, that is still an abstraction, but it isn't relevant to the goal (as those words are commonly used). You seem to be defining a goal in terms of what the abstraction retains.

You can make the claim that a useful abstraction must serve a purpose, without making the claim that all abstractions are useful.

If the military scout returns with a poem about nature, then yes, that's still an abstraction. The scout's abstraction prioritizes information that is useless to the general's goals, so we can guess that the scout's goals are not well aligned with the general's.

You seem to be defining a goal in terms of what the abstraction retains.

I'm not sure if it's possible to fully specify goals given abstractions. But for a system subject to some kind of optimization pressure, knowing an abstraction that the system uses is evidence that shifts probability mass within goal-space.

Perhaps the scout had a goal of "provide a list of wildlife".

The abstractions used (specific sound waves standing in for concepts of animals, grouping similar animals together under single headings etc.) are still orthogonal to that goal. They are in service to a narrow goal of "communicate the concept in my brain to yours" but that answer gets you a strike on the Family Feud prompt "Name a goal of a wilderness scout."

"Provide a list of wildlife" has subgoal "communicate the concept in my brain to yours" has subgoal "use specific sounds to represent animals". "Provide a list of wildlife" is not a subgoal of "win the war".

Of wildlife and winning the war:

The supply train is delayed by an enemy attack. Shoring up supplies might be achieved by:

  • taking resources from the enemy
  • hunting wildlife

Seemingly anything can be related to any goal, under some circumstance. "Provide a list of wildlife" might indicate whether there's anything (or a lot of things) that can potentially be hunted. It can also indicate whether the enemy can subsist off wildlife if they are good at hunting.

Yes, I spoke too strongly. In the weighted causal graph of subgoals, I would bet that "provide a list of wildlife" would be less relevant to the goal "win the war" than "report #bombers".

My point was less about weight, and more about the conditions that make it relevant. Yes, this might treat relevant/not-relevant as a binary, but it is an abstraction related to action, for example:

"Orders are about a focus*" (while someone scouting may act responsively to changing conditions). Arguably, scouting is open-ended - the scout knows what might be important (at least if they see it). How things are done in practice here might be worth looking into.

*I'm making this up. The point is, actions can also throw stuff out.

"Condensation of information always selects for goal-relevant information." To me this seems either not true, or it generalizes the concept of "goal-relevant" so broadly that it doesnt seem useful to me. If one is actively trying to create abstractions that are useful to achieving some goal then it is true. But the general case of losing information need not be towards some goal. For instance, it's easy to construct a lossy map that takes high dimensional data to low dimensional data, whether or not it's useful seems like a different issue.

One might say that they are interested in abstractions in the cases where they are useful. They might also make an empirical claim (or a stylistic choice) that thinking about abstractions in the framework of goal-directed actions will be a fruitful way to do AI, study the brain, etc. etc., but these are empirical claims that will be borne out by how well different research programs help us understand things, and are not statements of fact as far as I can tell.

You might also reply to this, "no, condensation of information without goal-relevance is just condensation of information, but it is not an abstraction," but then the claim that an abstraction only exists with goal-relevance seems tautological.

For instance, it's easy to construct a lossy map that takes high-dimensional data to low-dimensional data; whether or not it's useful seems like a different issue.

Yep. Most such maps are useless (to you) because the goals you have occupy a small fraction of the possible goals in goal-space.

You might also reply to this, "no, condensation of information without goal-relevance is just condensation of information, but it is not an abstraction," but then the claim that an abstraction only exists with goal-relevance seems tautological.

Nope, all condensation of information is abstraction. Different abstractions imply different regions of goal-space are more likely to contain your goals.