Eli Sennesh
So we could quibble over the details of Friston 2009, *buuuuut*...
I don't find it useful to take Friston at 110% of his word. I find it more useful to read him like I read all other cognitive modelers: as establishing a language and a set of techniques whose scientific rigor he demonstrates via their application to novel experiments and known data.
He's no more an absolute gold-standard than, say, Dennett, but his techniques have a certain theoretical elegance in terms of positing that the brain is built out of very few, very efficient core mechanisms, applied to abundant embodied training data, instead of very many mechanisms with relatively little training or processing power for each one.
Rather than quibble over him, I think that this morning in the shower I got what he means on a slightly deeper level, and now I seriously want to write a parody entitled, "So You Want to Write a Friston Paper".
Oh hey, so that's the original KL control paper. Saved!
Oh, I wasn't really trying at all to talk about what prediction-error minimization "really does" there, more to point out that it changes radically depending on your modeling assumptions.
The "distal causes" bit is also something I really want to find the time and expertise to formalize. There are studies of causal judgements grounding moral responsibility of agents and I'd really like to see if we can use the notion of distal causation to generalize from there to how people learn causal models that capture action-affordances.
>But this definitely seems like the better website to talk to Eli Sennesh on :)
Somewhat honored, though I'm not sure we've met before :-).
I'm mostly posting here by now, because I'm... somewhat disappointed by people saying things like, "it's bullshit" or "the mathematical parts of this model are pulled directly from the posterior".
IMHO, there's a lot to the strictly neuroscientific, biological aspects of the free-energy theory, and it integrates well with physics (good prediction resists disorder, "Thermodynamics of Prediction") and with evolution (predictive regulation being the unique contribution of the brain).
Mathematically, well, I'm sure that a purely theoretical probabilist or analyst can pick everything up quickly.
Computationally and psychologically, it's a hot mess....
>I wonder if the conversion from mathematics to language is causing problems somewhere. The prose description you are working with is 'take actions that minimize prediction error' but the actual model is 'take actions that minimize a complicated construct called free energy'. Sitting in a dark room certainly works for the former but I don't know how to calculate it for the latter.
There's absolutely trouble here. "Minimizing surprise" always means, to Friston, minimizing sensory surprise under a generative model: −log p(o | m), the negative log marginal likelihood of the sensory observations o under the model m. The problem is that, of course, in the course of constructing this, you had to marginalize out all the interesting variables that make...
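To make that concrete, here's a toy discrete model (my own illustration, not anything from Friston's papers): surprise is the negative log marginal likelihood, computing it means marginalizing out the latent cause z, and variational free energy upper-bounds it for any guessed posterior q.

```python
import math

# Toy generative model: latent cause z in {0, 1}, observation o in {0, 1}.
p_z = {0: 0.5, 1: 0.5}                       # prior over latent causes
p_o_given_z = {0: {0: 0.9, 1: 0.1},          # p(o | z = 0)
               1: {0: 0.2, 1: 0.8}}          # p(o | z = 1)

def surprise(o):
    """-log p(o | m): the interesting latent z is marginalized out."""
    p_o = sum(p_z[z] * p_o_given_z[z][o] for z in p_z)
    return -math.log(p_o)

def free_energy(o, q):
    """F = E_q[log q(z) - log p(o, z)], an upper bound on surprise."""
    return sum(q[z] * (math.log(q[z]) - math.log(p_z[z] * p_o_given_z[z][o]))
               for z in q if q[z] > 0)

o = 1
print(surprise(o))                           # exact surprise
print(free_energy(o, {0: 0.5, 1: 0.5}))     # loose bound with a bad q
print(free_energy(o, {0: 0.1, 1: 0.9}))     # tighter bound, q near the posterior
```

The bound tightens as q approaches the true posterior p(z | o), which here is about (0.11, 0.89); that's the sense in which inference "minimizes free energy" as a tractable stand-in for minimizing surprise directly.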
Ok, now a post on motivation, affect, and emotion: attempting to explain sex, money, and pizza. Then I’ll try a post on some of my own theories/ideas regarding some stuff. Together, I’m hoping these two posts address the Dark Room Problem in a sufficient way. HEY SCOTT, you’ll want to read this, because I’m going to link a paper giving a better explanation of depression than I think Friston posits.
The following ideas come from one of my advisers who studies emotion. I may bungle it, because our class on the embodied neuroscience of this stuff hasn’t gotten too far.
The core of...
Ok, now the post where I go into my own theory on how to avoid the Dark Room Problem, even without physiological goals.
The brain isn’t just configured to learn any old predictive or causal model of the world. It has to learn the distal causes of its sensory stimuli: the ones that reliably cause the same thing, over and over again, which can be modeled in a tractable way.
If I see a sandwich (which I do right now, it's lunchtime), one of the important causes is that photons are bouncing off the sandwich, hitting my eyes, and stimulating my retina. However, most photons don't make...
Hi,
I now work in a lab allied to both the Friston branch of neuroscience and the probabilistic modeling branch of computational cognitive science, so I now feel arrogant enough to comment even more fluently.
I’m gonna leave a bunch of comments over the day as I get the spare time to actually respond coherently to stuff.
The first thing is that we have to situate Friston’s work in its appropriate context of Marr’s Three Levels of cognitive analysis: computational (what’s the target?), algorithmic (how do we want to hit it?), and implementational (how do we make neural hardware do it?).
Friston’s work largely takes place at the algorithmic and implementational levels....
Actually, here's a much simpler, more intuitive way to think about probabilistically specified goals.
Visualize a probability distribution as a heat map of the possibility space. Specifying a probabilistic goal then just says, "Here's where I want the heat to concentrate", and submitting it to active inference just uses the available inferential machinery to actually squeeze the heat into that exact concentration as best you can.
When our heat-map takes the form of "heat" over dynamical trajectories, possible "timelines" of something that can move, "squeezing the heat into your desired concentration" means exactly "squeezing the future towards desired regions". All you're changing is how you specify desired regions: from giving them an...
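As a minimal sketch of this picture (my own toy example, with made-up outcome labels and probabilities, in the spirit of KL control rather than any specific paper's code): specify the goal as a distribution over outcomes, then pick the action whose predicted outcome distribution is closest, in KL divergence, to that goal "heat map".

```python
import math

def kl(p, q):
    """KL(p || q) over a shared discrete outcome space."""
    return sum(p[x] * math.log(p[x] / q[x]) for x in p if p[x] > 0)

# Desired "heat": concentrate on 'warm', leave a little mass elsewhere.
goal = {"cold": 0.01, "warm": 0.9, "hot": 0.09}

# Predicted outcome distributions under each available action (assumed).
predictions = {
    "do_nothing": {"cold": 0.70, "warm": 0.20, "hot": 0.10},
    "heat_a_bit": {"cold": 0.10, "warm": 0.80, "hot": 0.10},
    "heat_a_lot": {"cold": 0.01, "warm": 0.30, "hot": 0.69},
}

# The action that best squeezes the predicted "heat" toward the goal.
best = min(predictions, key=lambda a: kl(predictions[a], goal))
print(best)  # → "heat_a_bit"
```

Note there's no separate reward function here: the goal distribution itself plays that role, which is exactly the reframing the comment above describes.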
>"If I value apples at 3 units and oranges at 1 unit, I don't want at 75%/25% split. I only want apples, because they're better! (I have no diminishing returns.)"
I think what I'd have to ask here is: if you only want apples, why are you spending your money on oranges? If you will not actually pay me 1 unit for an orange, why do you claim you value oranges at 1 unit?
Another construal: you value oranges at 1 orange per 1 unit because if I offer you a lottery over those and let you set the odds yourself, you will choose to set them to 50/50. You're...