God Help Us, Let’s Try To Understand Friston On Free Energy

5th Mar 2018


(Posting here rather than SSC because I wrote the whole comment in markdown before remembering that SSC doesn't support it).

We had a guest lecture from Friston last year and I cornered him afterwards to try to get some enlightenment (notes here). I also spent the next few days working through the literature, using a multi-armed bandit as a concrete problem (notes here).

Very few of the papers have concrete examples. Those that do often skip important parts of the math and use inconsistent/ambiguous notation. He doesn't seem to have released any of the code for his game-playing examples.

The various papers don't all even implement the same model - the free energy principle seems to be more a design principle than a specific model.

The wikipedia page doesn't explain much but at least uses consistent and reasonable notation.

Reinforcement learning or active inference has most of a worked model, and is the closest I've found to explaining how utility functions get encoded into meta-priors. It also contains:

When friends and colleagues first come across this conclusion, they invariably respond with; “but that means I should just close my eyes or head for a dark room and stay there”. In one sense this is absolutely right; and is a nice description of going to bed. However, this can only be sustained for a limited amount of time, because the world does not support, in the language of dynamical systems, stable fixed-point attractors. At some point you will experience surprising states (e.g., dehydration or hypoglycaemia). More formally, itinerant dynamics in the environment preclude simple solutions to avoiding surprise; the best one can do is to minimise surprise in the face of stochastic and chaotic sensory perturbations. In short, a necessary condition for an agent to exist is that it adopts a policy that minimizes surprise.

I am leaning towards 'the emperor has no clothes'. In support of this:

- Friston doesn't explain things well, but nobody else seems to have produced an accessible worked example either, even though many people claim to understand the theory and think that is important.
- Nobody seems to have used this to solve any novel problems, or even to solve well-understood trivial problems.
- I can't find any good mappings/comparisons to existing models. Are there priors that cannot be represented as utility functions, or vice versa? What explore/exploit tradeoffs do free-energy models lead to, or can they encode any given tradeoff?

At this point I'm unwilling to invest any further effort into the area, but I could be re-interested if someone were to produce a python notebook or similar with a working solution for some standard problem (eg multi-armed bandit).

The various papers don't all even implement the same model - the free energy principle seems to be more a design principle than a specific model.

Bingo. Friston trained as a physicist, and he wants the free-energy principle to be more like a physical law than a computer program. You can write basically any computer program that implements or supports variational inference, throw in some action states as variational parameters, and you've "implemented" the free-energy principle _in some way_.

Overall, the Principle is more of a domain-specific language than a single unified model, more like "supervised learning" than like "this 6-layer convnet I trained for neural style transfer."

Are there priors that cannot be represented as utility functions, or vice versa?

No. They're isomorphic, via the Complete Class Theorem. Any utility/cost function that grows sub-super-exponentially (i.e., for which Pascal's Mugging doesn't happen) can be expressed as a distribution, and used in the free-energy principle. You can get the intuition by thinking, "This goal specifies how often I want to see outcome X (P), versus its disjoint cousins Y and Z that I want to see such-or-so often (1-P)."

What explore/exploit tradeoffs do free-energy models lead to, or can they encode any given tradeoff?

This is actually one of the Very Good things about free-energy models: since free-energy is "Energy - Entropy", or "Exploit + Explore", cast in the same units (bits/nats from info theory), it theorizes a principled, prescriptive way to make the tradeoff, once you've specified how concentrated the probability mass is under the goals in the support set (and thus the multiplicative inverse of the exploit term's global optimum).

We ought to be able to use this to test the Principle empirically, I think.
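The "Energy - Entropy" decomposition can be sketched concretely for a discrete distribution (toy numbers and function names of my own, not Friston's notation):

```python
import math

def free_energy(q, p):
    """Variational free energy of a distribution q against a target p,
    written as Energy - Entropy.  For discrete q and p this is exactly
    KL(q || p): zero when q matches p, positive otherwise."""
    energy = -sum(qx * math.log(px) for qx, px in zip(q, p) if qx > 0)   # "Exploit"
    entropy = -sum(qx * math.log(qx) for qx in q if qx > 0)              # "Explore"
    return energy - entropy

p = [0.7, 0.2, 0.1]        # goal distribution: how concentrated the mass is
sharp = [1.0, 0.0, 0.0]    # pure exploitation: all mass on the mode
print(free_energy(p, p))      # 0.0 -- matching the goal exactly is optimal
print(free_energy(sharp, p))  # ~0.357 -- collapsing onto the mode pays an entropy cost
```

The point of the sketch: collapsing onto the single best outcome is *not* the free-energy minimum; the optimum matches the goal distribution's own spread, which is where the explore/exploit tradeoff gets fixed.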

(EDIT: Dear God, why was everything bold!?)

No. They're isomorphic, via the Complete Class Theorem. Any utility/cost function that grows sub-super-exponentially (ie: for which Pascal's Mugging doesn't happen) can be expressed as a distribution, and used in the free-energy principle. You can get the intuition by thinking, "This goal specifies how often I want to see outcome X (P), versus its disjoint cousins Y and Z that I want to see such-or-so often (1-P)."

Can you please link me to more on this? I was under the impression that Pascal's mugging happens for any utility function that grows at least as fast as the probabilities shrink, and the probabilities shrink exponentially for normal probability functions. (For example: In the toy model of the St. Petersburg problem, the utility function grows exactly as fast as the probability function shrinks, resulting in infinite expected utility for playing the game.)

Also: As I understand them, utility functions aren't of the form "I want to see X P often and Y 1-P often." They are more like "X has utility 200, Y has utility 150, Z has utility 24..." Maybe the form you are talking about is a special case of the form I am talking about, but I don't yet see how it could be the other way around. As I'm thinking of them, utility functions aren't about what you see at all. They are just about the world. The point is, I'm confused by your explanation & would love to read more about this.

Can you please link me to more on this? I was under the impression that pascal's mugging happens for any utility function that grows at least as fast as the probabilities shrink, and the probabilities shrink exponentially for normal probability functions. (For example: In the toy model of the St. Petersburg problem, the utility function grows exactly as fast as the probability function shrinks, resulting in infinite expected utility for playing the game.)

The Complete Class Theorem says that *bounded* cost/utility functions are isomorphic to posterior probabilities optimizing their expected values. In that sense, it's almost a trivial result.

In practice, this just means that we can exchange the two whenever we please: we can take a probability and get an entropy to minimize, or we can take a bounded utility/cost function and bung it through a Boltzmann Distribution.
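The "bung it through a Boltzmann distribution" direction is a one-liner (toy utilities of my own choosing):

```python
import math

def boltzmann(utilities, beta=1.0):
    """Turn a bounded utility function into a goal distribution:
    P(w) proportional to exp(beta * U(w))."""
    weights = [math.exp(beta * u) for u in utilities]
    z = sum(weights)
    return [w / z for w in weights]

# Hypothetical utilities for disjoint outcomes X, Y, Z (in nats):
goal = boltzmann([2.0, 1.0, 0.0])
print(goal)  # most of the probability mass lands on X, least on Z
```

Going the other way, log-probability ratios recover utility differences: log(P(X)/P(Y)) = beta * (U(X) - U(Y)).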

Also: As I understand them, utility functions aren't of the form "I want to see X P often and Y 1-P often." They are more like "X has utility 200, Y has utility 150, Z has utility 24..." Maybe the form you are talking about is a special case of the form I am talking about, but I don't yet see how it could be the other way around. As I'm thinking of them, utility functions aren't about what you see at all. They are just about the world. The point is, I'm confused by your explanation & would love to read more about this.

I was speaking loosely, so "I want to see X" can be taken as, "I want X to happen". The details remain an open research problem of how the brain (or probabilistic AI) can or should cash out, "X happens" into "here are all the things I expect to observe when X happens, and I use them to gather evidence for whether X has happened, and to control whether X happens and how often".

For a metaphor of why you'd have "probabilistic" utility functions, consider it as Bayesian uncertainty: "I have degree of belief P that X should happen, and degree of belief 1-P that something else should happen."

One of the deep philosophical differences is that both Fristonian neurosci and Tenenbaumian cocosci assume that stochasticity is "real enough for government work", and so there's no point in specifying "utility functions" over "states" of the world in which *all* variables are clamped to fully determined values. After all, you yourself as a physically implemented agent have to generate waste heat, so there's inevitably going to be some stochasticity (call it uncertainty that you're mathematically required to have) about whatever physical heat bath you dumped your own waste heat into.

(That was supposed to be a reference to Eliezer's writing on minds doing thermodynamic work (which free-energy minds absolutely do!), not a poop joke.)

Actually, here's a much simpler, more intuitive way to think about probabilistically specified goals.

Visualize a probability distribution as a heat map of the possibility space. Specifying a probabilistic goal then just says, "Here's where I want the heat to concentrate", and submitting it to active inference just uses the available inferential machinery to actually squeeze the heat into that exact concentration as best you can.

When our heat-map takes the form of "heat" over *dynamical trajectories*, possible "timelines" of something that can move, "squeezing the heat into your desired concentration" means exactly "squeezing the future towards desired regions". All you're changing is how you specify desired regions: from giving them an "absolute" value (that can actually undergo any linear transformation and be isomorphic) to giving them a purely "relative" value (relative to disjoint events in your sample space).

This is fine, because after all, it's not like you could really have an "infinite" desire for something finite-sized in the first place. If you choose to think of utilities in terms of money, the "goal probabilities" are just the relative prices you're willing to pay for a certain outcome: you start with odds, the number of apples you'll trade for an orange, and convert from odds to probabilities to get your numbers. It's just using "barter" among disjoint random events instead of "currency".
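The odds-to-probabilities conversion in the last paragraph is just normalization (toy prices of my own):

```python
def odds_to_probabilities(odds):
    """Convert 'barter' odds among disjoint outcomes into goal probabilities,
    e.g. valuing apples at 3 units and oranges at 1 unit."""
    total = sum(odds.values())
    return {outcome: price / total for outcome, price in odds.items()}

print(odds_to_probabilities({'apple': 3, 'orange': 1}))
# {'apple': 0.75, 'orange': 0.25}
```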

I'm confused so I'll comment a dumb question hoping my cognitive algorithms are sufficiently similar to other LW:ers, such that they'll be thinking but not writing this question.

"If I value apples at 3 units and oranges at 1 unit, I don't want a 75%/25% split. I only want apples, because they're better! (I have no diminishing returns.)"

Where does this reasoning go wrong?

>"If I value apples at 3 units and oranges at 1 unit, I don't want a 75%/25% split. I only want apples, because they're better! (I have no diminishing returns.)"

I think what I'd have to ask here is: if you only want apples, why are you spending your money on oranges? If you will not actually pay me 1 unit for an orange, why do you claim you value oranges at 1 unit?

Another construal: you value oranges at 1 orange per 1 unit because if I offer you a lottery over those and let you set the odds yourself, you will choose to set them to 50/50. You're indifferent to which one you receive, so you value them equally. We do the same trick with apples and find you value them at 3 units per 1 apple.

I now offer you a lottery between receiving 3 apples and 1 orange, and I'll let you pay 3 units to tilt the odds by one expected apple. Since the starting point was 1.5 expected apples and 0.5 expected oranges, and you insist you want only 3 expected apples and 0 expected oranges, I believe I can make you end up paying more than 3 units per apple now, despite our having established that as your "price".

The lesson is, I think, don't offer to pay finite amounts of money for outcomes you want literally zero of, as someone may in fact try to take you up on it.

That was much more informative than most of the papers. Did you learn this by parsing the papers or from another better source?

Honestly, I've just had to go back and forth banging my head on Friston's free-energy papers, non-Friston free-energy papers, *and* the ordinary variational inference literature -- for the past two years, prior to which I spent three years banging my head on the Josh Tenenbaum-y computational cog-sci literature and got used to seeing probabilistic models of cognition.

I'm now really fucking glad to be in a PhD program where I can actually use that knowledge.

Oh, and btw, everyone at MIRI was exactly as confused as Scott is when I presented a bunch of free-energy stuff to them last March.

Sorry for the bold, sometimes our editor does weird things with copy-paste and bolds everything you pasted. Working on a fix for that, but it’s an external library and that’s always a bit harder than fixing our code.

Re: the "when friends and colleagues first come across this conclusion..." quote:

A world where everybody's true desire is to rest in bed as much as possible, but where they grudgingly take the actions needed to stay alive and maintain homeostasis, seems both very imaginable, and also very different from what we observe.

Agreed. 'Rest in bed as much as possible but grudgingly take the actions needed to stay alive' sounds a lot like depression, but there exist non-depressed people who need explaining.

I wonder if the conversion from mathematics to language is causing problems somewhere. The prose description you are working with is 'take actions that minimize prediction error' but the actual model is 'take actions that minimize a complicated construct called free energy'. Sitting in a dark room certainly works for the former but I don't know how to calculate it for the latter.

In the paper I linked, the free energy minimizing trolleycar does not sit in the valley and do nothing to minimize prediction error. It moves to keep itself on the dynamic escape trajectory that it was trained with and so predicts itself achieving. So if we understood why that happens we might unravel the confusion.

>I wonder if the conversion from mathematics to language is causing problems somewhere. The prose description you are working with is 'take actions that minimize prediction error' but the actual model is 'take actions that minimize a complicated construct called free energy'. Sitting in a dark room certainly works for the former but I don't know how to calculate it for the latter.

There's absolutely trouble here. "Minimizing surprise" always means, to Friston, minimizing sensory surprise *under a generative model*: something like -ln p(o | m), the negative log-evidence of the observations o under the model m. The problem is that, of course, in the course of constructing this, you had to marginalize out all the interesting variables that make up your generative model, so you're really looking at -ln ∫ p(o, ϑ | m) dϑ or something similar.

Mistaking "surprise" in this context for the actual self-information of the *empirical* distribution of sense-data makes the whole thing fall apart.

>In the paper I linked, the free energy minimizing trolleycar does not sit in the valley and do nothing to minimize prediction error. It moves to keep itself on the dynamic escape trajectory that it was trained with and so predicts itself achieving. So if we understood why that happens we might unravel the confusion.

If you look closely, Friston's downright cheating in that paper. First he "immerses" his car in its "statistical bath" that teaches it where to go, with only perceptual inference allowed. Then he *turns off perceptual updating*, leaving only action as a means of resolving free-energy, and points out that thusly, the car tries to climb the mountain as active inference proceeds.

It would be interesting if anyone knows of historical examples where someone had a key insight, but nonetheless fulfilled your "emperor has no clothes" criteria.

Hi,

I now work in a lab allied to both the Friston branch of neuroscience, and the probabilistic modeling branch of computational cognitive science, so I now feel arrogant enough to comment fluently.

I’m gonna leave a bunch of comments over the day as I get the spare time to actually respond coherently to stuff.

The first thing is that we have to situate Friston’s work in its appropriate context of Marr’s Three Levels of cognitive analysis: computational (what’s the target?), algorithmic (how do we want to hit it?), and implementational (how do we make neural hardware do it?).

Friston’s work largely takes place at the algorithmic and implementational levels. He’s answering How questions, and then claiming that they answer the What questions. This is rather like unto, as often mentioned, formulating Hamiltonian Mechanics and saying, “I’ve solved physics by pointing out that you can write any physical system in terms of differential equations for its conserved quantities.” Well, now you have to actually write out a real physical system in those terms, don’t you? What you’ve invented is a rigorous *language* for talking about the things you aim to explain.

The free-energy principle should be thought of like the “supervised loss principle”: it just specifies what computational proxy you’re using for your real goal. It’s as rigorous as using probabilistic programming to model the mind (caveat: one of my advisers is a probabilistic programming expert).

Now, my seminar is about to start soon, so I’ll try to type up a really short step-by-step of how we get to active inference. Let’s assume the example where I want to eat my nice slice of pizza, and I’ll try to type something up about goals/motivations later on. Suffice to say, since “free-energy minimization” is like “supervised loss minimization” or “reward maximization”, it’s meaningless to say that motivation is specified in free-energy terms. Of course it *can be*: that’s a mathematical tautology. *Any* bounded utility/reward/cost function can be expressed as a probability, and therefore a free-energy — this is the Complete Class Theorem Friston always cites, and you can make it constructive using the Boltzmann Distribution (the simplest exponential family) for energy functions.

1) Firstly, free-energy is just the negative of the Evidence Lower Bound (ELBO) usually maximized in variational inference. You take a generative model p(h, d) (a model of the world whose posterior you want to approximate), and a q(h) (a model that approximates it), and you optimize the variational parameters (the parameters with no priors or conditional densities) of q by maximizing the ELBO, to get a good approximation to p(h | d) (probability of hypotheses, given data). This is normal and understandable and those of us who aren’t Friston do it all the time.
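In a tiny discrete example (toy numbers of my own), the ELBO is maximized exactly when the approximating distribution equals the true posterior, at which point it equals the log evidence:

```python
import math

# Toy joint p(h, d) for a fixed observed d, over two hypotheses:
p_joint = {'h0': 0.3, 'h1': 0.2}
p_d = sum(p_joint.values())                       # evidence p(d) = 0.5
posterior = {h: pj / p_d for h, pj in p_joint.items()}

def elbo(q):
    """ELBO = E_q[log p(h, d)] - E_q[log q(h)]; free energy is its negative."""
    return sum(q[h] * (math.log(p_joint[h]) - math.log(q[h]))
               for h in q if q[h] > 0)

print(elbo(posterior), math.log(p_d))  # equal: the bound is tight at q = posterior
print(elbo({'h0': 0.5, 'h1': 0.5}))   # any other q gives a strictly lower bound
```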

2) Now you add some variables to p: the body’s *proprioceptive* states, its sense of where your bones are and what your muscles are doing. You add a body-state variable b, with some conditional p(d | b) to show how other senses depend on body position. This is already really helpful for pure prediction, because it helps you factor out random noise or physical forces acting on your body from your sensory predictions to arrive at a coherent picture of the world *outside* your body. You now have p(h, b, d).

3) For having new variables in the posterior, p(h, b | d), you now need some new variables in q. Here’s where we get the interesting insight of active inference: if the old p(h | d) was approximated as q(h), we can now expand to q(h) q(a), where a is a motor-command parameter. Instead of inferring a parameter that approximates the proprioceptive state, we infer a parameter that can “compromise” with it: the actual body b moves to accommodate a as much as possible, while a also adjusts itself to kinda suit what the body actually did.

Here’s the part where I’m really simplifying what stuff does, to use more of a planning as inference explanation than “pure” active inference. I could talk about “pure” active inference, but it’s too fucking complicated and badly-written to get a useful intuition. Friston’s “pure” active inference papers often give models that would have very different empirical content from each-other, but which all get optimized using variational inference, so he kinda pretends they’re all the same. Unfortunately, this is something most people in neuroscience or cognitive science do to simplify models enough to fit one experiment well, instead of having to invent a cognitive architecture that might fit all experiments badly.

4) So now, if I set a goal by clamping some variables in p (or by imposing “goal” priors on them, clamping them to within some range of values with noise), I can’t really just optimize q to fit the new clamped model. q is really q(h) q(a), and q(a) has to approximate the body state b. Instead, I can only optimize q to fit the clamped model by actually acting. Actually doing so reaches a “Bayes-optimal” compromise between my current bodily state and really moving. Once q already carries a good dynamical model (through time) of how my body and senses move (trajectories through time), changing a as a function of time lets me move as I please, even assuming my actual movements may be noisy with respect to my motor commands.

That’s really all “active inference” is: variational inference with body position as a generative parameter, and motor commands as the variational parameter approximating it. You set motor commands to get the body position you want, then body position changes noisily based on motor commands. This keeps getting done until the ELBO is maximized/free-energy minimized, and now I’m eating the pizza (as a process over time).
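A minimal 1-D sketch of that loop (my own toy construction with a quadratic goal prior and noiseless proprioception, nothing like Friston's full models): the goal prior says the sensed position should be near 5.0, and the only way to reduce the prediction error is to adjust the motor command, which moves the body.

```python
goal = 5.0          # mean of the "goal prior" over sensed position
position = 0.0      # actual body state
command = 0.0       # variational/motor parameter approximating the goal
rate = 0.2

for _ in range(50):
    sensed = position              # noiseless proprioception, for simplicity
    error = sensed - goal          # prediction error under the goal prior
    command -= rate * error        # descend the (quadratic) free energy
    position = command             # body follows the motor command

print(round(position, 3))  # converges toward the goal, ~5.0
```

Perception never updates the goal here; action absorbs all the error, which is the "active" half of active inference.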

Ok, now a post on motivation, affect, and emotion: attempting to explain sex, money, and pizza. Then I’ll try a post on some of my own theories/ideas regarding some stuff. Together, I’m hoping these two posts address the Dark Room Problem in a sufficient way. HEY SCOTT, you’ll want to read this, because I’m going to link a paper giving a better explanation of depression than I think Friston posits.

The following ideas come from one of my advisers who studies emotion. I may bungle it, because our class on the embodied neuroscience of this stuff hasn’t gotten too far.

The core of “emotion” is really this thing we call *core affect*, and it’s actually the core job of the brain, any biological brain, at all. This is: regulate the states of the internal organs (particularly the sympathetic and parasympathetic nervous systems) to keep the viscera functioning well and the organism “doing its job” (survival and reproduction).

What is “its job”? Well, that’s where we actually get programmed-in, innate “priors” that express goals. Her idea is, evolution endows organisms with some nice idea of what internal organ states are good, in terms of valence (goodness/badness) and arousal (preparedness for action or inaction, potentially: emphasis on the sympathetic or parasympathetic nervous system’s regulatory functions). You can think of arousal and sympathetic/parasympathetic as composing a spectrum between the counterposed poles of “fight or flight” and “rest, digest, reproduce”. Spending time in an arousal state affects your internal physiology, so it then affects valence. We now get one of the really useful, interesting empirical predictions to fall right out: young and healthy people like spending time in high-arousal states, while older or less healthy people prefer low-arousal states. That is, even provided you’re in a pleasurable state, young people will prefer more active pleasures (sports, video gaming, sex) while old people will prefer passive pleasures (sitting on the porch with a drink yelling at children). Since this is all physiology, basically everything impacts it: what you eat, how you socialize, how often you mate.

The brain is thus a specialized organ with a specific job: to proactively, predictively regulate those internal states (allostasis), because reactively regulating them (homeostasis) doesn’t work as well. Note that the brain now has its own metabolic demands and arousal/relaxation spectrum, giving rise to bounded rationality in the brain’s Bayesian modeling and feelings like boredom or mental tiredness. The brain’s regulation of the internal organs proceeds via closed-loop predictive control, which can be made really accurate and computationally efficient. We observe anatomically that the interoceptive (internal perception) and visceromotor (exactly what it says on the tin) networks in the brain are at the “core”, seemingly at the “highest level” of the predictive model, and basically control almost everything else in the name of keeping your physiology in the states prescribed as positive by evolution as useful proxies for survival and reproduction.

Get this wrong, however, and the brain-body system can wind up in an accidental *positive* feedback that moves it over to a new equilibrium of consistently negative valence with either consistent high arousal (anxiety) or consistent low arousal (depression). Depression and anxiety thus result from the brain continually getting the impression that the body is in shitty, low-energy, low-activity states, and then sending internal motor commands designed to correct the problem, which actually, due to brain miscalibration, make it worse. You sleep too much, you eat too much or too little, you don’t go outside, you misattribute negative valence to your friends when it’s actually your job, etc. Things like a healthy diet, exercise, and sunlight can try to bring the body closer to genuinely optimal physiological states, which helps it yell at the brain that actually you’re healthy now and it should stop fucking shit up by misallocating physiological resources.

“Emotions” wind up being something vaguely like your “mood” (your core affect system’s assessment of your internal physiology’s valence and arousal) combined with a causal “appraisal” done by the brain using sensory data, combined with a physiological and external plan of action issued by the brain.

You’re not motivated to sit in a Dark Room because the “predictions” that your motor systems care about are internal, physiological hyperparameters which can only be revised to a very limited extent, or which can be interpreted as some form of reinforcement signalling. You go into a Dark Room and your external (exteroceptive, in neuro-speak) senses have really low surprise, but your internal senses and internal motor systems are yelling that your organs say shit’s fucked up. Since your organs say shit’s fucked up, “surprise” is now very high, and you need to go change your external sensory and motor variables to deal with that shit.

Note that you can *sometimes* seek out calming, boring external sensory states, because your brain has demanded a lot from your metabolism and physiology lately, so it’s “out of energy” and you need to “relax your mind”.

Pizza becomes positively valenced when you are hungry, especially if you’re low on fats and glucose. Sex becomes most salient when your parasympathetic nervous system is dominant: your body believes that it’s safe, and the resources available for action can now be devoted to reproduction over survival.

Note that the actual physiological details here could, once again, be very crude approximations of the truth or straight-up wrong, because our class just hasn’t gotten far enough to really hammer everything in.

Scott writes on tumblr:

I don’t think I even understand the most basic point about how a probability distribution equals a utility function. What’s the probability distribution equal to “maximize paperclips”? Is it “state of the world with lots of paperclips - 100%, state of the world with no paperclips, 0%”? How do you assign probability to states of the world with 5, 10, or 200 paperclips?

I know nothing about this discussion, but this one is easy:

The utility function U(w) corresponds to the distribution P(w) ∝ exp(U(w)).

(i.e. P(w) = exp(U(w)) / Z, where Z is a meaningless number we choose to make the total probability add up to 1.)

Without math: every time you add one paperclip to a possible world, you make it 10% more likely. On this perspective, there is a difference between kind of wanting paperclips and really wanting paperclips--if you really want paperclips, adding one paperclip to the world makes it twice as likely. This determines how you trade off paperclips vs. other kinds of surprise.

Maximizing expected log probability under this distribution is exactly the same as maximizing the expectation of U.
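Paul's paperclip numbers can be checked directly (world sizes chosen arbitrarily): "one more paperclip makes a world 10% more likely" corresponds to U(n) = n · log(1.1), i.e. P(n) ∝ 1.1^n.

```python
# Worlds distinguished only by their paperclip count:
worlds = [0, 5, 10, 200]
weights = [1.1 ** n for n in worlds]   # unnormalized P(n) = 1.1^n
z = sum(weights)
probs = [w / z for w in weights]

# Adding one paperclip multiplies the unnormalized probability by 1.1:
print((1.1 ** 11) / (1.1 ** 10))  # ~1.1
print(probs)                      # almost all mass on the 200-paperclip world
```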

You can combine the exp(U(w)) term with other facts you know about the world by multiplying them (and then adjusting the normalization constant appropriately).

A very similar formulation is often used in inverse reinforcement learning (MaxEnt IRL).

Another part of the picture that isn't complicated is that the exact same algorithms can be used for probabilistic inference (finding good explanations for the data) and planning (finding a plan that achieves some goal). In fact this connection is useful and people in AI sometimes exploit it. It's a bit deeper than it sounds but not that deep. See planning as inference, which Eli mentions above. It seems worth understanding this simple idea before trying to understand some extremely confusing pile of ideas.
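The simplest discrete instance of planning as inference (toy numbers of my own): condition on a "success" variable and infer the action by Bayes, P(a | success) ∝ P(success | a) P(a).

```python
prior = {'left': 0.5, 'right': 0.5}        # prior over actions
p_success = {'left': 0.2, 'right': 0.7}    # model of how well each action works

# Bayes: condition on success, infer the action that "explains" it.
unnorm = {a: prior[a] * p_success[a] for a in prior}
z = sum(unnorm.values())
plan = {a: v / z for a, v in unnorm.items()}
print(plan)  # inference concentrates on the action most likely to succeed
```

The same inference machinery that explains data is here "explaining" a desired outcome, which is the whole trick.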

Another important distinction: there are two different algorithms one might describe as "minimizing prediction error:"

I think the more natural one is algorithm A: you adjust your beliefs to minimize prediction error (after translating your preferences into "optimistic beliefs"). Then you act according to your beliefs about how you will act. This is equivalent to independently forming beliefs and then acting to get what you want, it's just an implementation detail.

There is a much more complicated family of algorithms, call them algorithm B, where you actually plan in order to change the observations you'll make in the future, with the goal of minimizing prediction error. This is the version that would cause you to e.g. go read a textbook, or lock yourself in a dark room. This version is algorithmically way more complicated to implement, even though it maybe sounds simpler. It also has all kinds of weird implications and it's not easy to see how to turn it into something that isn't obviously wrong.

Regardless of which view you prefer, it seems important to recognize the difference between the two. In particular, evidence for us using algorithm A shouldn't be interpreted as evidence that we use algorithm B.

It sounds like Friston intends algorithm B. This version is pretty different from anything that researchers in AI use, and I'm pretty skeptical (based on observations of humans and the surface implausibility of the story rather than any knowledge about the area).

Paul, this is very helpful! Finally I understand what this "active inference" stuff is about. I wonder whether there were any significant *theoretical* results about these methods since Rawlik et al 2012?

The utility function U(w) corresponds to the distribution P(w)∝exp(U(w)).

Not so fast.

Keep in mind that the utility function is defined up to an arbitrary positive affine transformation, while the softmax distribution is invariant only up to shifts: P(w) ∝ exp(β U(w)) will be a different distribution depending on the inverse temperature β (the higher, the more peaked the distribution will be on the mode), while in von Neumann–Morgenstern theory of utility, U(w) and a U(w) + b represent the same preferences for any positive a.
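A quick check of this non-invariance (utilities chosen arbitrarily): a positive affine transformation preserves the vNM ranking but changes the softmax distribution.

```python
import math

def softmax(us, beta=1.0):
    """P(w) proportional to exp(beta * U(w))."""
    ws = [math.exp(beta * u) for u in us]
    z = sum(ws)
    return [w / z for w in ws]

u = [3.0, 1.0, 0.0]
scaled = [2 * x + 7 for x in u]   # same vNM preferences (positive affine map)

# The ranking is unchanged, but the distributions differ:
# scaling the utilities acts exactly like lowering the temperature.
print(softmax(u)[0], softmax(scaled)[0])
```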

Maximizing expected log probability under this distribution is exactly the same as maximizing the expectation of U.

It's not exactly the same.

Let's assume that there are two possible world states: 0 and 1, and two available actions: action A puts the world in state 0 with 99% probability, while action B puts the world in state 0 with 50% probability.

Let

I’ve been trying to delve deeper into predictive processing theories of the brain, and I keep coming across Karl Friston’s work on “free energy”.

At first I felt bad for not understanding this. Then I realized I wasn’t alone. There’s an entire not-understanding-Karl-Friston internet fandom, complete with its own parody Twitter account and Markov blanket memes.

From the journal Neuropsychoanalysis (which based on its name I predict is a center of expertise in not understanding things):

Normally this is the point at which I give up and say “screw it”. But almost all the most interesting neuroscience of the past decade involves this guy in one way or another. He’s the most-cited living neuroscientist, invented large parts of modern brain imaging, and received the prestigious Golden Brain Award for excellence in neuroscience, which is somehow a real thing. His short essay Am I Autistic – An Intellectual Autobiography, written in a weirdly lucid style and describing hijinks like deriving the Schrodinger equation for fun in school, is as consistent with genius as anything I’ve ever read.

As for free energy, it’s been dubbed “a unified brain theory” (Friston 2010), a key through which “nearly every aspect of [brain] anatomy and physiology starts to make sense” (Friston 2009), “[the source of] the ability of biological systems to resist a natural tendency to disorder” (Friston 2012), an explanation of how life “inevitably and emergently” arose from the primordial soup (Friston 2013), and “a real life version of Isaac Asimov’s psychohistory” (description here of Allen 2018).

I continue to hope some science journalist takes up the mantle of explaining this comprehensively. Until that happens, I’ve been working to gather as many perspectives as I can, to talk to the few neuroscientists who claim to even partially understand what’s going on, and to piece together a partial understanding. I am not at all the right person to do this, and this is not an attempt to get a gears-level understanding – just the kind of pop-science-journalism understanding that gives us a slight summary-level idea of what’s going on. My ulterior motive is to get to the point where I can understand Friston’s recent explanation of depression, relevant to my interests as a psychiatrist.

Sources include Dr. Alianna Maren’s How To Read Karl Friston (In The Original Greek), Wilson and Golonka’s Free Energy: How the F*ck Does That Work, Ecologically?, Alius Magazine’s interview with Friston, Observing Ideas, and the ominously named Wo’s Weblog.

From these I get the impression that part of the problem is that “free energy” is a complicated concept being used in a lot of different ways.

First, free energy is a specific mathematical term in certain Bayesian equations. I’m getting this from here, which goes into much more detail about the math than I can manage. What I’ve managed to extract: Bayes’ theorem, as always, is the mathematical rule for determining how much to weigh evidence. The brain is sometimes called a Bayesian machine, because it has to create a coherent picture of the world by weighing all the different data it gets – everything from millions of photoreceptors’ worth of vision, to millions of cochlear receptors’ worth of hearing, to all the other senses, to logical reasoning, to past experience, and so on. But actually using Bayes on all this data quickly gets computationally intractable.

Free energy is a quantity used in “variational Bayesian methods”, a specific computationally tractable way of approximating Bayes’ Theorem. Under this interpretation, Friston is claiming that the brain uses this Bayes-approximation algorithm. Minimizing the free energy quantity in this algorithm is equivalent-ish to trying to minimize prediction error, trying to minimize the amount you’re surprised by the world around you, and trying to maximize accuracy of mental models. This sounds in line with standard predictive processing theories. Under this interpretation, the brain implements predictive processing through free energy minimization.
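For the mathematically curious, here is a minimal sketch of what "minimizing free energy" means in the variational-Bayes sense. The model is a toy Gaussian I picked for illustration (not anything from Friston's papers): a latent quantity mu with prior N(0, 1), an observation x drawn as N(mu, 1), and a Gaussian approximate posterior q(mu) = N(m, s2). Minimizing the free energy of q recovers the exact Bayesian posterior.

```python
import math

# Toy model: mu ~ N(0, 1), x ~ N(mu, 1); we observe x = 2.
# For q(mu) = N(m, s2), the variational free energy has a closed form:
#   F(m, s2) = KL(q || prior) - E_q[log p(x | mu)]
# and its minimum is the exact posterior, N(1, 0.5).
x = 2.0

def free_energy(m, s2):
    kl_prior = 0.5 * (s2 + m ** 2 - 1.0 - math.log(s2))  # KL(N(m,s2) || N(0,1))
    expected_nll = 0.5 * (math.log(2 * math.pi) + (x - m) ** 2 + s2)
    return kl_prior + expected_nll

# A crude grid search stands in for the gradient descent that a brain
# (or a variational inference library) would actually do.
best = min(((m / 100, s2 / 100)
            for m in range(-300, 301)
            for s2 in range(1, 200)),
           key=lambda q: free_energy(*q))
print(best)  # (1.0, 0.5): the exact posterior mean and variance
```

The point of the trick is that F is computable without ever evaluating the intractable exact posterior, which is why variational methods scale where raw Bayes doesn't.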

Second, free energy minimization is an algorithm-agnostic way of saying you’re trying to approximate Bayes as accurately as possible. This comes from the same source as above. It also ends up equivalent-ish to all those other things like trying to be correct in your understanding of the world, and to standard predictive processing.

Third, free energy minimization is a claim that the fundamental psychological drive is the reduction of uncertainty. I get this claim from the Alius interview, where Friston says:

The discovery that the only human motive is uncertainty-reduction might come as a surprise to humans who feel motivated by things like money, power, sex, friendship, or altruism. But the neuroscientist I talked to about this says I am not misinterpreting the interview. The claim really is that uncertainty-reduction is the only game in town.

In a sense, it must be true that there is only one human motivation. After all, if you’re Paris of Troy, getting offered the choice between power, fame, and sex – then some mental module must convert these to a common currency so it can decide which is most attractive. If that currency is, I dunno, dopamine in the striatum, then in some reductive sense, the only human motivation is increasing striatal dopamine (don’t philosophize at me, I know this is a stupid way of framing things, but you know what I mean). Then the only weird thing about the free energy formulation is identifying the common currency with uncertainty-minimization, which is some specific thing that already has another meaning.

I think the claim (briefly mentioned eg here) is that your brain hacks eg the hunger drive by “predicting” that your mouth is full of delicious food. Then, when your mouth is not full of delicious food, it’s a “prediction error”, it sets off all sorts of alarm bells, and your brain’s predictive machinery is confused and uncertain. The only way to “resolve” this “uncertainty” is to bring reality into line with the prediction and actually fill your mouth with delicious food. On the one hand, there is a lot of basic neuroscience research that suggests something like this is going on. On the other, Wo’s writes about this further:

It’s tempting to throw this out entirely. But part of me does feel like there’s a weird connection between curiosity and every other drive. For example, sex seems like it should be pretty basic and curiosity-resistant. But how often do people say that they’re attracted to someone “because he’s mysterious”? And what about the Coolidge Effect (known in the polyamory community as “new relationship energy”)? After a while with the same partner, sex and romance lose their magic – only to reappear if the animal/person hooks up with a new partner. Doesn’t this point to some kind of connection between sexuality and curiosity?

What about the typical complaint of porn addicts – that they start off watching softcore porn, find after a while that it’s no longer titillating, move on to harder porn, and eventually have to get into really perverted stuff just to feel anything at all? Is this a sort of uncertainty reduction?

The only problem is that this is a really specific kind of uncertainty reduction. Why should “uncertainty about what it would be like to be in a relationship with that particular attractive person” be so much more compelling than “uncertainty about what the middle letter of the Bible is”, a question which almost no one feels the slightest inclination to resolve? The interviewers ask Friston something sort of similar, referring to some experiments where people are happiest not when given easy things with no uncertainty, nor confusing things with unresolvable uncertainty, but puzzles – things that seem confusing at first, but actually have a lot of hidden order within them. They ask Friston whether he might want to switch teams to support a u-shaped theory where people like being in the middle between too little uncertainty and too much uncertainty. Friston… does not want to switch teams.

The only thing at all I am able to gather from this paragraph – besides the fact that apparently Karl Friston cites himself in conversation – is the Schmidhuber reference, which is actually really helpful. Schmidhuber is the guy behind eg the Formal Theory Of Fun & Creativity Explains Science, Art, Music, Humor, in which all of these are some form of taking a seemingly complex domain (in the mathematical sense of complexity) and reducing it to something simple (discovering a hidden order that makes it more compressible). I think Friston might be trying to hint that free energy minimization works in a Schmidhuberian sense, where it applies to learning things that suddenly make large parts of our experience more comprehensible at once, rather than just “Here are some numbers: 1, 5, 7, 21 – now you have less uncertainty over what numbers I was about to tell you, isn’t that great?”

I agree this is one of life’s great joys, though maybe me and Karl Friston are not a 100% typical subset of humanity here. Also, I have trouble figuring out how to conceptualize other human drives, like sex, as this same kind of complexity-reduction joy.

One more concern here – a lot of the things I read about this equivocate between “model accuracy maximization” and “surprise minimization”. These end up really differently. Model accuracy maximization sounds like curiosity – you go out and explore as much of the world as possible to get a model that precisely matches reality. Surprise minimization sounds like locking yourself in a dark room with no stimuli, then predicting that you will be in a dark room with no stimuli, and never being surprised when your prediction turns out to be right. I understand Friston has written about the so-called “dark room problem”, but I haven’t had a chance to look into it as much as I should, and I can’t find anything that takes one or the other horn of the equivocation and says “definitely this one”.

Fourth, okay, all of this is pretty neat, but how does it explain all biological systems? How does it explain abiogenesis? And when do we get to the real-world version of psychohistory? In his Alius interview, Friston writes:

How do the wood lice have anything to do with any of the rest of this?

As best I can understand (and I’m drawing from here and here again), this is an ultimate meaning of “free energy” which is sort of like a formalization of homeostasis. It goes like this: consider a probability distribution of all the states an organism can be in. For example, your body can be at (90 degrees F, heart rate 10), (90 degrees F, heart rate 70), (98 degrees F, heart rate 10), (98 degrees F, heart rate 70), or any of a trillion other different combinations of possible parameters. But in fact, living systems successfully restrict themselves to tiny fractions of this space – if you go too far away from (98 degrees F, heart rate 70), you die. So you have two probability distributions – the maximum-entropy one where you could have any combination of heart rate and body temperature, and the one your body is aiming for with a life-compatible combination of heart rate and body temperature. Whenever you have a system trying to convert one probability distribution into another probability distribution, you can think of it as doing Bayesian work and following free energy principles. So free energy seems to be something like just a formal explanation of how certain systems display goal-directed behavior, without having to bring in an anthropomorphic or teleological concept of “goal-directedness”.
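The two probability distributions in that picture can be made concrete with a toy calculation (states and numbers entirely invented for illustration): a living organism occupies a much lower-entropy distribution over its physiological states than the maximum-entropy alternative, and states far from the setpoint are exactly the high-surprisal ones.

```python
import math

# Hypothetical discretized body-temperature states in °F (numbers invented).
states = [80, 85, 90, 95, 98, 101, 105, 110]

# Maximum-entropy distribution: any state is equally likely.
uniform = [1 / len(states)] * len(states)

# The distribution a living organism actually occupies: sharply peaked
# near the 98 °F setpoint.
alive = [0.001, 0.002, 0.007, 0.06, 0.85, 0.06, 0.015, 0.005]

def entropy(p):
    return -sum(v * math.log(v) for v in p if v > 0)

def surprisal(p, i):
    """-log p(state i): the 'surprise' of finding yourself in that state."""
    return -math.log(p[i])

print(entropy(uniform))                    # ≈ 2.08 nats: could be anywhere
print(entropy(alive))                      # ≈ 0.62 nats: confined near 98 °F
print(surprisal(alive, states.index(98)))  # ≈ 0.16: the expected state
print(surprisal(alive, states.index(80)))  # ≈ 6.91: far from setpoint ("you die")
```

On this reading, "staying alive" just is keeping your actual state distribution close to the peaked one – which is the homeostasis-as-surprise-minimization claim in its least mysterious form.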

Friston mentions many times that free energy is “almost tautological”, and one of the neuroscientists I talked to who claimed to half-understand it said it should be viewed more as an elegant way of looking at things than as a scientific theory per se. From the Alius interview:

So we haven’t got a real-life version of Asimov’s psychohistory, is what you’re saying?

But also:

So maybe the free energy principle is the unification of predictive coding of internal models, with the “action in the world is just another form of prediction” thesis mentioned above? I guess I thought that was part of the standard predictive coding story, but maybe I’m wrong?

Overall, the best I can do here is this: the free energy principle seems like an attempt to unify perception, cognition, homeostasis, and action. “Free energy” is a mathematical concept that represents the failure of some things to match other things they’re supposed to be predicting.

The brain tries to minimize its free energy with respect to the world, ie minimize the difference between its models and reality. Sometimes it does that by updating its models of the world. Other times it does that by changing the world to better match its models.

Perception and cognition are both attempts to create accurate models that match the world, thus minimizing free energy.

Homeostasis and action are both attempts to make reality match mental models. Action tries to get the organism’s external state to match a mental model. Homeostasis tries to get the organism’s internal state to match a mental model. Since even bacteria are doing something homeostasis-like, all life shares the principle of being free energy minimizers.

So life isn’t doing four things – perceiving, thinking, acting, and maintaining homeostasis. It’s really just doing one thing – minimizing free energy – in four different ways – with the particular way it implements this in any given situation depending on which free energy minimization opportunities are most convenient. Or something. All of this might be a useful thing to know, or it might just be a cool philosophical way of looking at things, I’m still not sure.
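The "one thing, four ways" picture can be caricatured in a few lines (the dynamics and coefficients here are entirely my invention, a cartoon rather than anyone's actual model): a single error-reduction loop in which the belief moves toward the world (perception), the belief is also pulled toward a built-in preference (the model's homeostatic prior), and action pushes the world toward the belief.

```python
world = 10.0       # actual room temperature (°F off from comfy, say)
belief = 20.0      # the model's predicted temperature
preference = 20.0  # homeostatic setpoint, baked into the model as a prior

for _ in range(50):
    # Perception: update the belief toward the evidence...
    belief += 0.3 * (world - belief)
    # ...while the belief stays pulled toward the built-in preference...
    belief += 0.3 * (preference - belief)
    # ...and action changes the world to match the (preference-laden) belief.
    world += 0.3 * (belief - world)

# Prediction error gets minimized partly by updating the model and
# partly by changing the world; both settle near the setpoint.
print(round(world, 2), round(belief, 2))
```

Because the preference is wired in as a prior the belief can never fully abandon, the loop resolves its residual error by acting on the world rather than by pure model revision – which is the sense in which action and homeostasis are "just more prediction".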

Or something like this? Maybe? Somebody please help?

Discussion question for those of you on the subreddit – if the free energy principle were right, would it disprove the orthogonality thesis? Might it be impossible to design a working brain with any goal besides free energy reduction? Would anything – even a paperclip maximizer – have to start by minimizing uncertainty, and then add paperclip maximization in later as a hack? Would it change anything if it did?