

Maybe, "try gaining skill somewhere with lower standards"?

Somehow I read "non-results" in the title and unthinkingly interpreted it as "we now have more data that says inositol does nothing". Maybe the title could be "still not enough data on inositol"?

I wonder if we couldn't convert this into some kind of community wiki, so that the people represented in it can provide endorsed representations of their own work, and so that the community as a whole can keep it updated as time goes on.

Obviously there's the problem where you don't want random people to be able to put illegitimate stuff on the list. But it's also hard to agree on a way to declare legitimacy.

...Maybe we could have a big post like lukeprog's old textbook post, where researchers can make top-level comments describing their own research? And then others can up- or down-vote the comments based on the perceived legitimacy of the research program?

Honestly this isn't that long, I might say to re-merge it with the main post. Normally I'm a huge proponent of breaking posts up smaller, but yours is literally trying to be an index, so breaking a piece off makes it harder to use.

Here's my guess as to how the universality hypothesis a.k.a. natural abstractions will turn out. (This is not written to be particularly understandable.)

  1. At the very "bottom", or perceptual level of the conceptual hierarchy, there will be a pretty straightforward objective set of concepts. Think the first layer of CNNs in image processing, the neurons in the retina/V1, letter frequencies, how to break text strings into words. There's some parameterization here, but the functional form will be clear (like having a basis of n vectors in R^n, but it (almost) doesn't matter which vectors).
  2. For a few levels above that, it's much less clear to me that the concepts will be objective. Curve detectors may be universal, but the way they get combined is less obviously objective to me.
  3. This continues until we get to a middle level that I'd call "objects". I think it's clear that things like cats and trees are objective concepts. Sufficiently good language models will all share concepts that correspond to a bunch of words. This level is very much due to the part where we live in this universe, which tends to create objects, and on earth, which has a biosphere with a bunch of mid-level complexity going on.
  4. Then there will be another series of layers that are less obvious. Partly these levels are filled with whatever content is relevant to the system. If you study cats a lot then there is a bunch of objectively discernible cat behavior. But it's not necessary to know that to operate in the world competently. Rivers and waterfalls will be level-3 concepts, but the details of fluid dynamics are in this level.
  5. Somewhere around the top level of the conceptual hierarchy, I think there will be kind of a weird split. Some of the concepts up here will be profoundly objective; things like "and", mathematics, and the abstract concept of "object". Absolutely every competent system will have these. But then there will also be this other set of concepts that I would map onto "philosophy" or "worldview". Humans demonstrate that you can have vastly different versions of these very high-level concepts, given very similar data, each of which is in some sense a functional local optimum. If this also holds for AIs, then that seems very tricky.
  6. Actually my guess is that there is also a basically objective top-level of the conceptual hierarchy. Humans are capable of figuring it out but most of them get it wrong. So sufficiently advanced AIs will converge on this, but it may be hard to interact with humans about it. Also, some humans' values may be defined in terms of their incorrect worldviews, leading to ontological crises with what the AIs are trying to do.
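The parenthetical in point 1 (a basis of n vectors in R^n where it almost doesn't matter which vectors) can be illustrated with a minimal sketch: any orthonormal basis of R^n, even a randomly chosen one, represents a vector losslessly, so the particular choice of basis vectors carries essentially no information.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.normal(size=n)  # some arbitrary "percept" vector in R^n

# A random orthonormal basis: the Q factor of a QR decomposition
# of a random matrix has orthonormal columns.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))

# Express x in the new basis, then reconstruct it.
coeffs = Q.T @ x           # coordinates of x in the basis {columns of Q}
x_reconstructed = Q @ coeffs

# Reconstruction is exact (up to floating point): the choice of basis
# didn't matter, only that it *was* a basis.
assert np.allclose(x, x_reconstructed)
```

Any two such bases are related by an orthogonal change of coordinates, which is the sense in which the parameterization exists but the functional form is fixed.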

Note that we are interested in people at all levels of seniority, including graduate students,


If I imagine being an undergraduate student who's interested, then this sentence leaves me unclear on whether I should fill it out.

I would love to try having dialogues with people about Agent Foundations! I'm on the vaguely-pro side, and want to have a better understanding of people on the vaguely-con side; either people who think it's not useful, or people who are confused about what it is and why we're doing it, etc.

I like this post for the way it illustrates how the probability distribution over blocks of strings changes as you increase block length.

Otherwise, I think the representation of other ideas and how they relate to it is not very accurate, and might mislead readers about the consensus among academics.

As an example, strings where the frequency of substrings converges to a uniform distribution are called "normal". The idea that this could be the definition of a random string was a big debate through the first half of the 20th century, as people tried to put probability theory on solid foundations. But you can have a fixed, deterministic program that generates normal strings! And so people generally rejected this idea as the definition of random. Algorithmic information theory uses the definition of Martin-Löf random, which is that an (infinite) string is random if it can't be compressed by any program (with a bunch of subtleties and distinctions in there).
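The deterministic-but-normal point can be seen concretely with Champernowne's string "123456789101112...", which is produced by a trivial program yet is known to be normal in base 10. A minimal sketch (checking only block length 1, i.e. single-digit frequencies):

```python
from collections import Counter

# Champernowne's string: concatenate the decimal integers in order.
# A trivial deterministic program generates it, yet its digit
# frequencies converge to uniform (it is normal in base 10) --
# so "normal" can't serve as the definition of "random".
s = "".join(str(i) for i in range(1, 10000))

counts = Counter(s)
freqs = {d: counts[d] / len(s) for d in "0123456789"}

# Every digit's frequency is already close to 1/10 at this finite prefix.
assert all(abs(f - 0.1) < 0.05 for f in freqs.values())
```

Extending the same check to longer blocks (pairs, triples, ...) shows the block distributions flattening toward uniform as well, which is exactly the property that fails to distinguish this computable string from a "truly random" one.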

  • Utility functions might already be the true name - after all, they do directly measure optimisation, while probability doesn't directly measure information.
  • The true name might have nothing to do with utility functions - Alex Altair has made the case that it should be defined in terms of preference orderings instead.

My vote here is for something between "Utility functions might already be the true name" and "The true name might have nothing to do with utility functions".

It sounds to me like you're chasing an intuition that is validly reflecting one of nature's joints, and that that joint is more or less already named by the concept of "utility function" (but where further research is useful).

And separately, I think there's another natural joint that I (and Yudkowsky and others) call "optimization", and this joint has nothing to do with utility functions. Or more accurately, maximizing a utility function is an instance of optimization, but has additional structure.
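The gap between preference orderings and utility functions can be made concrete with a small sketch (hypothetical outcomes, finite case): on a finite set, any total preference ordering is representable by a utility function, but many utility functions induce the same ordering, and they can disagree once you ask about lotteries. That extra cardinal structure is the "additional structure" beyond the bare ordering.

```python
# Hypothetical outcomes, ranked from best to worst.
outcomes_ranked = ["apple", "banana", "cherry"]

# Two different utility functions that both respect this ordering.
u1 = {o: float(len(outcomes_ranked) - i) for i, o in enumerate(outcomes_ranked)}
u2 = {o: 10.0 ** -i for i, o in enumerate(outcomes_ranked)}

def prefers(u, a, b):
    return u[a] > u[b]

# Both utility functions encode exactly the same preference ordering...
for a in outcomes_ranked:
    for b in outcomes_ranked:
        assert prefers(u1, a, b) == prefers(u2, a, b)

# ...but they disagree about a 50/50 lottery between the best and worst
# outcome versus the middle outcome for sure.
lottery_u1 = 0.5 * u1["apple"] + 0.5 * u1["cherry"]  # ties with banana under u1
lottery_u2 = 0.5 * u2["apple"] + 0.5 * u2["cherry"]  # beats banana under u2
assert lottery_u1 == u1["banana"]
assert lottery_u2 > u2["banana"]
```

So an ordering-based "true name" and a utility-based one really are different objects: the former is what both u1 and u2 share, the latter additionally fixes attitudes toward gambles.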
