This is a special post for short-form writing by Alex_Altair. Only they can create top-level comments. Comments here also appear on the Shortform Page and All Posts page.

Here's my guess as to how the universality hypothesis a.k.a. natural abstractions will turn out. (This is not written to be particularly understandable.)

  1. At the very "bottom", or perceptual, level of the conceptual hierarchy, there will be a pretty straightforward, objective set of concepts. Think the first layer of CNNs in image processing, the neurons in the retina/V1, letter frequencies, or how to break text strings into words. There's some parameterization here, but the functional form will be clear (like having a basis of n vectors in R^n, where it (almost) doesn't matter which vectors you choose).
  2. For a few levels above that, it's much less clear to me that the concepts will be objective. Curve detectors may be universal, but the way they get combined is less obviously objective to me.
  3. This continues until we get to a middle level that I'd call "objects". I think it's clear that things like cats and trees are objective concepts. Sufficiently good language models will all share concepts that correspond to a bunch of words. This level exists largely because we live in a universe that tends to create objects, and on Earth, whose biosphere generates a bunch of mid-level complexity.
  4. Then there will be another series of layers that are less obvious. Partly, these levels are filled with whatever content is relevant to the system. If you study cats a lot, then there is a bunch of objectively discernible cat behavior, but knowing it isn't necessary for operating competently in the world. Rivers and waterfalls will be level-3 concepts, but the details of fluid dynamics live at this level.
  5. Somewhere around the top level of the conceptual hierarchy, I think there will be kind of a weird split. Some of the concepts up here will be profoundly objective; things like "and", mathematics, and the abstract concept of "object". Absolutely every competent system will have these. But then there will also be this other set of concepts that I would map onto "philosophy" or "worldview". Humans demonstrate that you can have vastly different versions of these very high-level concepts, given very similar data, each of which is in some sense a functional local optimum. If this also holds for AIs, then that seems very tricky.
  6. Actually, my guess is that there is also a basically objective top level of the conceptual hierarchy. Humans are capable of figuring it out, but most of them get it wrong. So sufficiently advanced AIs will converge on it, but it may be hard to interact with humans about it. Also, some humans' values may be defined in terms of their incorrect worldviews, leading to ontological crises with what the AIs are trying to do.
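The parenthetical in (1), that the choice of basis (almost) doesn't matter, can be sketched numerically. Here's a minimal demo, assuming nothing beyond NumPy and with all variable names purely illustrative: a linear readout fit on features in one basis and on the same features pushed through a random invertible change of basis makes identical predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # data expressed in one basis of R^5
y = rng.normal(size=100)        # arbitrary target
M = rng.normal(size=(5, 5))     # random (almost surely invertible) change of basis

# Least-squares readout in each representation.
w1, *_ = np.linalg.lstsq(X, y, rcond=None)
w2, *_ = np.linalg.lstsq(X @ M, y, rcond=None)

# Predictions agree even though the individual columns ("concepts") differ.
print(np.allclose(X @ w1, (X @ M) @ w2))  # True
```

The individual coordinates differ wildly between the two representations, but the subspace they span, and hence everything downstream of it, is the same.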

Totally baseless conjecture that I have not thought about for very long: chaos is identical to Turing completeness. Every dynamical system that demonstrates chaotic behavior is Turing complete (or at least implements an undecidable procedure).

Has anyone heard of an established connection here?
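For concreteness, a minimal sketch of the antecedent here — "chaotic behavior" in the ergodic-dynamics sense of sensitive dependence on initial conditions — using the standard logistic map at r = 4. This only illustrates what chaos means, of course; it says nothing about the conjecture itself.

```python
# Two trajectories of the logistic map x -> r*x*(1-x) at r=4, started
# 1e-10 apart. The positive Lyapunov exponent (ln 2) means the tiny
# initial separation grows to order 1 within a few dozen iterations.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10
sep = []
for _ in range(100):
    a, b = logistic(a), logistic(b)
    sep.append(abs(a - b))

print(f"initial separation: 1e-10, max separation: {max(sep):.3f}")
```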

You might look at Wolfram's work. One of the major themes of his CA classification project is that chaotic rulesets (in some sense, possibly not the rigorous ergodic-dynamics definition) are not Turing-complete; only CAs in an intermediate region between complexity and simplicity have ever been shown to be TC.
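A minimal sketch of the CAs in question, assuming only NumPy: Rule 30 is Wolfram's canonical Class 3 ("chaotic") rule, while Rule 110 is the Class 4 rule later proved Turing-complete by Matthew Cook. Running both from a single live cell shows the qualitative difference in texture; the code naturally doesn't decide anything about Turing-completeness.

```python
import numpy as np

def step(state, rule):
    """One synchronous update of an elementary CA with wraparound edges."""
    out = np.zeros_like(state)
    n = len(state)
    for i in range(n):
        # Read the neighborhood (left, self, right) as a 3-bit number,
        # then look up that bit of the 8-bit rule number.
        idx = 4 * state[i - 1] + 2 * state[i] + state[(i + 1) % n]
        out[i] = (rule >> idx) & 1
    return out

def run(rule, width=31, steps=15):
    state = np.zeros(width, dtype=int)
    state[width // 2] = 1  # single live cell in the middle
    rows = [state]
    for _ in range(steps):
        state = step(state, rule)
        rows.append(state)
    return np.array(rows)

for rule in (30, 110):
    print(f"Rule {rule}:")
    for row in run(rule):
        print("".join("#" if c else "." for c in row))
```

Rule 30's triangle fills with irregular, statistically random texture, while Rule 110 settles into a background of repeating tiles with localized structures — the "intermediate" regime the classification points at.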

Maybe you already thought of this, but it might be a nice project for someone to take the unfinished drafts you've published, talk to you, and then clean them up for you.  Apprentice/student kind of thing. (I'm not personally interested in this, though.)

I like that idea! I definitely welcome people to do that as practice in distillation/research, and to make their own polished posts of the content. (Although I'm not sure how interested I would be in having said person be mostly helping me get the posts "over the finish line".)