If anyone is interested in joining a learning community around the ideas of active inference, the mission of https://www.activeinference.org/ is to educate people about these topics. There's a study group working through the 2022 active inference textbook by Parr, Friston, and Pezzulo. I'm in the 5th cohort, and it's been very useful for me.

> In theory, if humans and AIs aligned on their generative models (i.e., if there is methodological, scientific, and fact alignment), then goal alignment, even if sensible to talk about, will take care of itself: indeed, starting from the same "factual" beliefs, and using the same principles of epistemology, rationality, ethics, and science, people and AIs should in principle arrive at the same predictions and plans.

What about zero-sum games? If you took an agent, cloned it, then put both copies into a shared environment with only enough resources to support one agent, they would be forced to compete with one another. They both have the same "goals" per se, but they are not aligned with each other even though they are identical.
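
A minimal sketch of that intuition (the utility function and resource numbers are purely illustrative, not from the post): two copies of the same agent split one unit of an indivisible resource, so the sum of their utilities is fixed and any gain for one copy is exactly the other's loss.

```python
# Illustrative sketch: identical goals do not imply alignment when the
# environment is constant-sum. (All names and values here are hypothetical.)

def utility(resources_held: int) -> int:
    """The shared goal of both copies: hold as much of the resource as possible."""
    return resources_held

TOTAL_RESOURCES = 1  # only enough to support one agent

for share_a in range(TOTAL_RESOURCES + 1):
    share_b = TOTAL_RESOURCES - share_a
    u_a, u_b = utility(share_a), utility(share_b)
    print(f"A holds {share_a}, B holds {share_b}: u_A={u_a}, u_B={u_b}, total={u_a + u_b}")

# The total utility is constant across allocations, so the two copies are in a
# zero-sum (constant-sum) game with each other despite sharing the same
# generative model and the same utility function.
```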

> Markov blankets, to the best of my knowledge, have never been derived, either precisely or approximately, for physical systems

This paper does just that. It introduces a 'blanket index' by which any state space can be analyzed to see whether a Markov blanket assumption is suitable. Quoting MJD Ramstead's summary of the paper's results with respect to the Markov blanket assumption:

> We now know that, in the limit of increasing dimensionality, essentially all systems (both linear and nonlinear) will have Markov blankets, in the appropriate sense. That is, as both linear and nonlinear systems become increasingly high-dimensional, the probability of finding a Markov blanket between subsets approaches 1.
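
As a rough illustration of what such a check can look like in the Gaussian case (a generic sketch, not the paper's actual blanket-index construction): for a zero-mean Gaussian, internal states are conditionally independent of external states given the blanket states exactly when the internal-external block of the precision matrix is zero, so the size of that block gives a simple blanket-violation score.

```python
import numpy as np

def blanket_violation(precision, internal, blanket, external):
    """Crude score for how far a Gaussian system is from having an exact Markov blanket.

    For a zero-mean Gaussian with the given precision (inverse covariance) matrix,
    internal and external states are conditionally independent given the blanket
    states exactly when the internal-external block of the precision matrix is zero,
    so the norm of that block measures the violation. `blanket` is passed only to
    make the partition explicit; the test itself needs only the other two index sets.
    """
    return np.linalg.norm(precision[np.ix_(internal, external)])

# Toy 4-dimensional system: internal = {0}, blanket = {1, 2}, external = {3}.
prec_exact = np.array([
    [2.0, 0.5, 0.3, 0.0],   # internal state couples only to the blanket
    [0.5, 2.0, 0.2, 0.4],
    [0.3, 0.2, 2.0, 0.6],
    [0.0, 0.4, 0.6, 2.0],   # external state couples only to the blanket
])

prec_violated = prec_exact.copy()
prec_violated[0, 3] = prec_violated[3, 0] = 0.7  # direct internal-external coupling

print(blanket_violation(prec_exact, [0], [1, 2], [3]))     # 0.0 -> exact blanket
print(blanket_violation(prec_violated, [0], [1, 2], [3]))  # 0.7 -> no exact blanket
```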


> The assumption I find most problematic is that the environment is presumed to be at steady state.

Note that the assumption is that the environment is at a nonequilibrium steady state, not a heat-death-of-the-universe steady state. My reading is that this is an explicit assumption that probabilistic inference is possible.
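
For reference, the distinction can be stated in the standard Fokker-Planck form (a generic formulation, not a quote from the text under discussion): a nonequilibrium steady state has a stationary density but a persistent, nonzero probability current, whereas equilibrium additionally requires the current itself to vanish (detailed balance).

```latex
% Generic Fokker-Planck form: density p evolves under drift f and diffusion \Gamma.
\partial_t p(x,t) = -\nabla \cdot J(x,t),
\qquad
J(x,t) = f(x)\,p(x,t) - \Gamma \nabla p(x,t)

% A steady state only requires the density to stop changing, not the flux to vanish:
\text{NESS:}\quad \nabla \cdot J = 0,\ J \neq 0
\qquad \text{vs.} \qquad
\text{equilibrium:}\quad J = 0 \ \text{(detailed balance)}
```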