This paper seems relevant to various LW interests. It smells like The Second Law of Thermodynamics, and Engines of Cognition, but I haven't wrapped my head around either enough to say more than that. Abstract:

Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization, but no formal physical relationship between them has yet been established. Here, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we find can cause two defining behaviors of the human “cognitive niche”—tool use and social cooperation—to spontaneously emerge in simple physical systems. Our results suggest a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems.


I think any program designed to maximise some quantity within a simulated situation has the potential to solve some problems. It is interesting that when the quantity you choose to maximise is the entropy of the situation, some of the problems it solves are useful ones. But I don't think this is as significant for understanding the nature of, and reason for, intelligence in a universe with our particular set of physical laws as some are claiming.

Take, for example, Wissner-Gross' explanation of "tool use" in his video.

[Image: still from the video illustrating the tool-use demo]

Set a simulation going. See where the disks end up under the rules you set. THEN label the disks as things (a hand, a tool, and a piece of food) that make it plausible an intelligent creature would have wanted the disks to finish up in that particular end configuration.
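To make the contrast concrete, here is a minimal sketch of that kind of maximiser, in Python. To be clear, this is not the paper's algorithm (Wissner-Gross and Freer estimate causal path entropy over sampled trajectories of continuous dynamics); it just illustrates the "pick the move whose sampled futures are most spread out" loop, and the grid, horizon, and sample count are all invented:

```python
import math
import random
from collections import Counter

# Sketch only: a 1-D grid agent that always moves so as to maximise the
# Shannon entropy of where random futures could take it. This is NOT the
# paper's method; the grid, horizon, and sample count are made up.

GRID_MAX = 19        # positions 0..19, walls at both ends
HORIZON = 8          # how far ahead to sample
SAMPLES = 500        # random futures per candidate move

def step(pos, move):
    return min(max(pos + move, 0), GRID_MAX)

def future_entropy(pos):
    """Entropy (nats) of end positions of random walks started at pos."""
    ends = Counter()
    for _ in range(SAMPLES):
        p = pos
        for _ in range(HORIZON):
            p = step(p, random.choice((-1, 0, 1)))
        ends[p] += 1
    return -sum(n / SAMPLES * math.log(n / SAMPLES) for n in ends.values())

pos = 1                                  # start hemmed in by a wall
for _ in range(10):
    pos = step(pos, max((-1, 0, 1), key=lambda m: future_entropy(step(pos, m))))
print(pos)                               # drifts toward the open middle
```

Nothing in that loop knows about hands, tools, or food; any such labels only get attached to the disks after the fact, which is the point.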

If a creature were actually doing this, the intelligence would lie at least as much in selecting, in advance, which quantity to maximise in order to achieve a desired result as in carrying out the algorithm itself (and there's no evidence that this is actually how we implement anything like it in our heads).

There's also the matter that the universe isn't particularly efficient at maximising entropy. The statistical properties underlying thermodynamics create a ratchet effect: entropy tends to increase rather than decrease, so the universe will eventually end up at maximum entropy. But that is rather different from localised seeking behaviour that hunts out a maximum-entropy situation in order to solve a problem.
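To see what that ratchet does and doesn't buy you, here is a toy Ehrenfest-urn simulation (my framing, not anything from the paper): entropy drifts upward purely because the overwhelming majority of microstates sit near a 50/50 split, and no step ever "seeks" a high-entropy configuration:

```python
import math
import random

# Ehrenfest urn: N particles in two boxes; each tick one uniformly random
# particle hops to the other box. No step seeks entropy, yet the log-count
# of microstates drifts up, because states near a 50/50 split vastly
# outnumber lopsided ones. That is the whole "ratchet".

N = 100
left = N                     # start with everything in the left box

def log_multiplicity(k):
    """ln of the number of ways to have k of the N particles on the left."""
    return math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)

for t in range(2001):
    if random.random() < left / N:   # the chosen particle was on the left
        left -= 1
    else:
        left += 1
    if t % 500 == 0:
        print(t, left, round(log_multiplicity(left), 2))
# Rises on average but fluctuates down on individual steps -- a tendency,
# not a search.
```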

If I understand this right, they're not talking about entropy. They're talking about putting yourself in a position where you have more choices. I think a better word would be power.

This is an idea I've been attempting to promote since 2001.

There's a literature on the topic dating back almost 100 years. Here's me in 2009 on the topic.

Do you have the script of your talk? For some reason it's very hard to make out the phrases in the video.

The transcript is just below the video.

It smells like The Second Law of Thermodynamics, and Engines of Cognition, but I haven't wrapped my head around either enough to say more than that.

Both articles mention "entropy" - but I think that's about it.

Our research group and collaborators, foremost Daniel Polani, have been studying this for many years now. Polani calls an essentially identical concept empowerment. These guys are welcome to the party, and as former outsiders it's understandable (if not totally acceptable) that they wouldn't know about these piles of prior work.
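For readers who haven't met the term: empowerment is a channel capacity, the mutual information between an agent's action sequence and the sensor state it ends up in, maximised over action distributions. With deterministic dynamics this collapses to the log of the number of reachable states, which makes for an easy sketch (the grid world below is invented for illustration):

```python
import math

# Crude empowerment proxy. Polani's empowerment is a channel capacity;
# for the deterministic toy world below it collapses to log2 of how many
# states you can reach. The maze itself is invented.

ACTIONS = ((0, 1), (0, -1), (1, 0), (-1, 0))
WALLS = {(1, 1), (1, 2), (2, 1)}
SIZE = 5                               # a 5x5 grid

def step(state, action):
    x, y = state[0] + action[0], state[1] + action[1]
    if (x, y) in WALLS or not (0 <= x < SIZE and 0 <= y < SIZE):
        return state                   # blocked moves leave you in place
    return (x, y)

def empowerment(state, n):
    """log2 of the number of distinct states reachable in n steps."""
    frontier = {state}
    for _ in range(n):
        frontier = {step(s, a) for s in frontier for a in ACTIONS}
    return math.log2(len(frontier))

# An open cell supports more distinct futures than a walled-in corner:
print(empowerment((3, 3), 3), empowerment((0, 0), 3))
```

An agent that climbs this quantity puts itself where it has the most options, which matches the "power" reading suggested above.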

Another article on the paper, with comments from Wissner-Gross, including possible implications for AI Friendliness: http://io9.com/how-skynet-might-emerge-from-simple-physics-482402911.

Here's the accompanying article on the paper.

This is a utility function that says "maximize entropy in the environment". The sequence post says, to condense it into an inadequate analogy, "evidence is to thinking as entropy is to thermodynamics".

I'm not asserting this isn't a deep and important finding, but it reminds me of the NES-playing AI that plays games by taking whatever action maximizes some value in memory over a short time span. That works really well for some games, but it doesn't seem promising as a general NES player in the way a human can be (after countless hours of frustrating practice).
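For concreteness, something in the spirit of that greedy scheme looks like the sketch below. Tom Murphy's actual playfun is far more elaborate (it learns its objective, a lexicographic ordering over NES RAM, from a human playthrough); the toy "game" and its progress cell here are invented:

```python
import itertools

# Toy version of "press whatever maximizes a memory value over a short
# window": a one-line platformer whose `progress` cell only rises as you
# get further right, with pits you must jump over. Everything is invented.

BUTTONS = ("left", "right", "jump")
LOOKAHEAD = 4                # plan length, chosen arbitrarily
PITS = {3, 7}                # walking into these gets you nowhere

class ToyGame:
    def __init__(self, x=0, progress=0):
        self.x, self.progress = x, progress
    def copy(self):
        return ToyGame(self.x, self.progress)
    def step(self, button):
        nxt = self.x + {"left": -1, "right": 1, "jump": 2}[button]
        if nxt in PITS:      # blocked: the move is wasted
            return
        self.x = max(nxt, 0)
        self.progress = max(self.progress, self.x)

def best_first_button(game):
    """Score every short button sequence, keep the first press of the best."""
    def end_progress(plan):
        trial = game.copy()
        for b in plan:
            trial.step(b)
        return trial.progress
    return max(itertools.product(BUTTONS, repeat=LOOKAHEAD), key=end_progress)[0]

game = ToyGame()
for _ in range(12):
    game.step(best_first_button(game))
print(game.progress)         # the greedy player clears both pits
```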