Hear ye, dear peoples of the internets: I was born a physicist, I am paid to be a theoretical ecologist, and I spend much of my free time in a science commune in the French Pyrenees.

Thanks for your thoughts and for the link! I definitely agree that we are very far from practical category-inspired improvements at this stage; I simply wonder whether something fundamentally as simple and novel as differential equations is waiting around the corner, and whether we are taking a very circuitous route toward it through very deep metamathematics. (Baez's Rosetta Stone paper and the work of Abramsky and Coecke on quantum logic have convinced me that we need something like "not being in a Cartesian category" to account for notions like context and meaning; but that quantum stuff is only one step removed from the most Cartesian classical logic/physics, and we probably need to go to the other extreme to find a different kind of simplicity.)

Sorry for the late reply! Do you mind sharing a ref for Critch's new work? I have tried to find something about boundaries but was unsuccessful.

As for the historical accident, I would situate it more around the 17th century, when the theory of mechanics was roughly as advanced as that of agency. I don't feel that goals and values require much more advanced math, only math as new as differential calculus was at the time. 

Though we now have many pieces that seem to aim in the right direction (variational calculus in general, John Baez and colleagues' investigations of blackboxing via category theory...), it seems more by chance than by concerted, literature-wide effort. But I do hope to build on these pieces.

The problem I have with all these candidates is that they treat all social information as quantitative, a single number like "status"; they have no semantic content. Praise/shame and reward/punishment do not specify what they are inculcating, so they are just as likely to push people to invent tactical means of avoiding the shaming or punishment itself.

As for the quote, sorry, there was an ambiguity: I meant that rituals as a phenomenon are present and important in every culture. But it is my contention that I, a completely atheistic professional scientist in a very secular and non-patriotic country, spend as much time on rituals as a Hasidic rabbi does, though mine have nothing to do with patriotism or religion. I will soon write a post to clarify that point!

I have the hardest time imagining a conceptual link between p-zombies and predictive processing, but if you don't like it, you don't like it, I guess! 

Personally, the ambiguity between belief and action in this framework is the only half-reasonable explanation I have encountered so far for why the study of values and rituals is so hopelessly confused at a basic conceptual level (far more so than even your typical social science question).

Much appreciated! I am working on a few "case studies" but I should probably add one or a few here already.

You homed in exactly on the point where I have theoretical doubts (I need to better think through predictive theories and what they really imply) but I can tell you where I stand as of now. 

My current idea to resolve this (and I will amend the main text, either to commit to this or to at least avoid contradictory phrasing) is to invoke multiagent models of the mind:

  • An agent must indeed have immutable goals to function as an agent
  • Our mind, on the other hand, is probably better modelled not as an agent but as an agora of agents with all sorts of different goals (the usual picture is a competition or a market, but why not cooperation and other interactions as well)
  • This agora needs to pretend that it is a single agent in order to actually act sometimes. Thus, mind-wide goals are immutable for the duration of an "agentic burst", for as long as a given agent is singled out at the agora -- which could be the duration of a single gesture for very low-level goals, or the typical time span of a coherent self-image for the most high-level ones. 
  • The way that mind-wide goals are changed is not by modifying an agent, but by (1) adding another agent to the agora, typically a predictive model of other people in a certain setting, and (2) providing evidence that this one is a better model of "myself", at least in the current situation.
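As a purely illustrative toy (every name, goal, and number below is invented for the sketch, not part of any real model), the agora picture above might look something like this: goals are never mutated in place; instead a new agent is added and wins the fit-to-evidence comparison.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    goal: str    # immutable for the agent's lifetime
    fit: float   # how well this agent currently predicts "myself"

def agentic_burst(agora):
    """Single out the agent that best models 'myself' right now;
    its goal is mind-wide and held fixed for the burst's duration."""
    return max(agora, key=lambda a: a.fit)

# Initial agora: no agent's goal ever changes.
agora = [Agent("habit", "comfort", 0.6),
         Agent("career self", "status", 0.5)]
assert agentic_burst(agora).goal == "comfort"

# "Changing" a mind-wide goal = (1) adding a new agent, e.g. a predictive
# model of other people, and (2) evidence raising its fit above the rest.
agora.append(Agent("peer model", "craftsmanship", 0.8))
assert agentic_burst(agora).goal == "craftsmanship"
```

The point of the sketch is only that goal change is modelled as selection among fixed-goal agents, never as mutation of a single agent.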

As for biological drives, I'll concede that the word "all" is probably untrue and I will retract it, though I do mean "the overwhelming majority, as soon as the cultural learning machine kicks in". This may be an overcorrection in response to sociobiology (which was itself an overcorrection in response to blank-slate cultural relativism), but I want to try to commit to this and see how far it goes!

Thanks a lot for the suggestion! I do not know anything about this tradition and I would be very happy to learn about it, especially from a perspective that could generate analyses such as the one you paraphrase here.

Your paraphrase from Schmemann resonates a lot with my understanding of Sperber's argument in Rethinking Symbolism, so you may enjoy that book. He devotes the first part of the book to deconstructing the assumption that symbolism signifies like a language, i.e., as you put it, that "symbolic action must relate in some obviously analogical or didactic way to the thing being represented". He then offers an alternative theory which I find elegant.

I take it as a good sign that this generated a response, even if that response is "what the heck" (at the very least, rest assured this is a non-smoking endeavor).

I'll rewrite the post a bit within a few days to address your comments and kithpendragon's -- that was a big part of why I wanted to put it on LessWrong: to have some incentive to rectify loose language and loose thinking.

Some clarifications already:

Here is a person doing something. What would you need to observe, to decide whether you are or are not looking at an example of the category you name "ritual"? What would you be telling me about it, by telling me that it is or is not a "ritual"?

I tried to leave some space for readers to build their own sense of it, but to summarize, I would claim that an action can be both ritual and non-ritual: 

  • non-ritual inasmuch as it is a tactical means to an end, trying to achieve a goal (everything that relates to decision theory)
  • ritual inasmuch as it serves to instill or maintain a goal (make it "real" and felt, not just verbally known) in the mind of the person performing it (everything that decision theory cannot really represent, i.e. changes of preferences)

Some actions are much more ritual than not; others are almost entirely non-ritual. Displays on social media have a non-ritual dimension in that they might allow one to gain some practical benefits, along the lines of the classic decision-theoretic arguments about social signalling and all that; but I want to claim, and that will be the subject of another post, that they are primarily ritual actions.

I also want to claim that this definition of "ritual" is not just a weird repurposing of the word: any set of actions that serves to instill a goal will have to have many of the peculiarities and quirks that we assign to e.g. religious rituals, and any religious ritual analyzed in this way makes more sense than if analyzed some other way. The burden of proof is mine, of course.

What does it mean to "hold a goal as true"? A goal is not a proposition that can be true or false. Neither is a desire.

I agree I could write this more carefully. The point is that in theories like predictive processing (or my understanding of them), there is not much difference between perceiving/knowing and acting; both are part of the same predictive system, with the difference that goals are encoded as beliefs that cannot be challenged, at least for as long as the action is ongoing. As I mention, action is a particular form of suspension of disbelief.

To quote from the Slate Star Codex review of Surfing Uncertainty:

It’s predicting action, which causes the action to happen.

This part is almost funny. Remember, the brain really hates prediction error and does its best to minimize it. With failed predictions about eg vision, there’s not much you can do except change your models and try to predict better next time. But with predictions about proprioceptive sense data (ie your sense of where your joints are), there’s an easy way to resolve prediction error: just move your joints so they match the prediction. So (and I’m asserting this, but see Chapters 4 and 5 of the book to hear the scientific case for this position) if you want to lift your arm, your brain just predicts really really strongly that your arm has been lifted, and then lets the lower levels’ drive to minimize prediction error do the rest.

Under this model, the “prediction” of a movement isn’t just the idle thought that a movement might occur, it’s the actual motor program.

These theories try to be all-encompassing, covering everything from simple motor behavior all the way up to cognition according to the same principles, so a visceral desire or a conscious goal is just an extension of the same idea.
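A minimal toy sketch of the "action as unchallengeable prediction" idea, assuming nothing beyond simple error-driven updating (the update rate, the angles, and the function name are all invented for illustration): the "belief" is held fixed, so the only way the system can reduce prediction error is to change the body state.

```python
def act_by_prediction(belief, state, rate=0.5, steps=20):
    """Minimize prediction error while the belief (the goal) is held fixed:
    the state, not the belief, is what gets updated."""
    for _ in range(steps):
        error = belief - state
        state += rate * error   # 'move the joints to match the prediction'
    return state

arm_angle = 0.0          # current proprioceptive state
predicted_angle = 90.0   # unchallengeable 'belief' that the arm is lifted
final = act_by_prediction(predicted_angle, arm_angle)
assert abs(final - 90.0) < 1e-3   # the body converged to the prediction
```

Perception would be the same loop with the roles reversed (the belief updated to match incoming data); encoding a goal as a belief that refuses to update is what turns the loop into a motor program.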

I will try to incorporate a clearer presentation of this point as it is quite key.