Here's an extension of what you said in terms of dullness and sharpness within attention-based practices. (Partly to check that I understand.)
Dullness = subcriticality, i.e. cascading that sits some distance below the criticality line
Monkey mind = supercriticality, i.e. cascading above the criticality line (activates for whatever shows up)
If we look at the 10 stages of TMI (the 9-stage Elephant path), the progression goes something like: distracted mind -> subcriticality (stages 2-3) -> practices to increase cascading in the brain (stages 4-5) -> practices for attention to calibrate around the criticality line (stages 6-10)
Also this is why the tip to meet your meditation freshly wherever it is appearing is important because it is a criticality tuning process that is different for everyone?
(I very much like this way of thinking about this, nice!)
Also this is why the tip to meet your meditation freshly wherever it is appearing is important because it is a criticality tuning process…?
Yep.
I am rather confused.
What am I missing or misunderstanding here?
This is indeed confusing, because I was writing about dynamical order/disorder, which is different from thermodynamic order/disorder.
Sub/supercriticality isn't just about order vs entropy (in the thermodynamic sense). For example, thermodynamic noise (which is about entropy) in a metal has high disorder but is also subcritical. Sub/supercriticality is about gain and coupling. Supercritical systems are often chaotic, but this is not a definitional characteristic—the chaotic behavior is downstream of the gain. A linear amplifier, for example, is supercritical but not chaotic.
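To make that distinction concrete, here's a minimal numerical sketch (a toy illustration, not anything from the paper): a linear map with gain above 1 has a positive Lyapunov exponent, so perturbations grow exponentially, but its behavior is just runaway growth; the logistic map stays bounded and is genuinely chaotic.

```python
import numpy as np

# Toy illustration: "gain" and "chaos" are different properties.

def lyapunov_linear(g):
    # For the linear map x[t+1] = g * x[t] the derivative is g everywhere,
    # so the largest Lyapunov exponent is exactly log|g|.
    return np.log(abs(g))

def lyapunov_logistic(r=3.9, n=100_000, x0=0.1234):
    # Average of log|f'(x[t])| along a trajectory of x[t+1] = r*x[t]*(1 - x[t]).
    x, acc = x0, 0.0
    for _ in range(n):
        acc += np.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n

print(lyapunov_linear(1.5))   # ~0.41 > 0: "supercritical" gain, but the map just blows up
print(lyapunov_logistic())    # ~0.5 > 0 on a *bounded* trajectory: that's chaos
```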
It is possible for jhana to decrease entropy while going in the direction of criticality, because these are different axes.
(I don't think anything I said assumed you were referring to thermodynamic order/disorder.)
It sounds as if some of your definitions may want adjusting.
Dynamical systems can be described on a continuum with ordered on one end and disordered on the other end. [...] A disordered system has chaotic, turbulent, or equivalent behavior. [...] Systems more disordered than the critical point can be described as supercritical. Systems less disordered than the critical point can be described as subcritical.
Doesn't all of this explicitly say that moving in the sub->super direction means becoming more disordered, which means becoming more chaotic?
Perhaps what you actually mean to say is of the following form?
Dynamical systems can be described on a continuum whose actual definition is too complicated to give here but in some situations can be handwavily approximated to "less versus more sensitive to small changes", which in turn can in some situations be handwavily approximated to "more ordered versus more disordered".
A particular point along that continuum goes by the name of "criticality", and the dynamics of a critical system are often particularly interesting; in particular, they maximize a quantity called complexity which is a measure of entropy expressed across a variety of time scales. Systems on the less-sensitive/more-ordered side of criticality are called subcritical and systems on the more-sensitive/less-ordered side are called supercritical.
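(One standard way to make "less versus more sensitive to small changes" precise is the largest Lyapunov exponent: for two trajectories started a tiny distance $|\delta x(0)|$ apart, $\lambda = \lim_{t\to\infty} \frac{1}{t} \ln \frac{|\delta x(t)|}{|\delta x(0)|}$. Then $\lambda < 0$ means perturbations die away, $\lambda > 0$ means they grow exponentially, and the criticality story puts the interesting regime at $\lambda \approx 0$. I offer this only as the textbook dynamical-systems notion; how the complex-systems people map it onto "subcritical/supercritical" is exactly what I'm unsure about.)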
(Is there actually a proper term for the thing that increases as you move from subcritical to supercritical? I keep finding that I need ugly circumlocutions for want of one.)
And then the situation described in the article (where a certain change, in this case from mindfulness to jhana, moves in the sub-to-super direction -- which would normally mean more sensitivity, hence more tendency to chaos in the mathematical sense, hence typically more disorder -- but somehow also involves a reduction in chaoticity) could be explained by this system not having the usual relationship between the sub-to-super parameter and chaoticity.
But I think I'm still confused, because (as I mentioned before) the article very much doesn't present that combination as somehow an unusual one. It says that jhana is characterized by a smaller max Lyapunov exponent, hence less chaoticity ... but isn't Lyapunov exponent much the same thing as you're calling "gain"? Wouldn't we normally expect reducing the Lyapunov exponent to move in the direction of subcriticality? Or am I, indeed, just still confused? The article says "Jhana decreases brain chaoticity relative to mindfulness, indicating brain dynamics closer to criticality" (italics mine), which to me seems like they're saying that in general we should expect closer-to-criticality dynamics to come along with less chaos, which is the exact opposite of what it feels like we should expect.
I've had a bit of a look for a nice clear explanation of the actual mathematics here, but it seems that there are (1) things about dynamical systems generally, written by mathematicians, which talk about e.g., subcritical or supercritical bifurcations and have nice clean definitions for those, and (2) things about Complex Systems, often specifically about brains, which talk about whole systems being "subcritical" or "critical" or "supercritical" but never seem to give actual explicit definitions of the things they are talking about. Probably I have just not found the right things to read.
Thanks for getting into the details here. I'm brand new to this field of mathematics and this conversation is helping me get a much better handle on what's going on.
[Disclaimer: I am relying very heavily on ChatGPT to work my way through this stuff. I'm mostly using it to learn the math, sort through research papers and check my writing for errors. (Ironically, the reason my writings here contain mistakes is because I'm mostly writing it myself rather than letting the AI take over.) I just want to be upfront about this; I get the impression that you're using LLM-assisted research much less—if at all.]
I don't disagree with your blockquote rewrite in any substantive way applicable to the special case of biological neural networks.
You didn't use thermodynamic entropy anywhere. Personally, I come from a physics background, so my understanding of signal processing—especially in the context of physical systems—uses a lot of thermodynamic metaphors. Consequently, I end up thinking in mixed metaphors, which is bad. To fix this problem, I'm going to stop using the term "entropy" in this thread. (Perhaps I should stop using the word "chaotic" too.)
(Is there actually a proper term for the thing that increases as you move from subcritical to supercritical? I keep finding that I need ugly circumlocutions for want of one.)
Universally? No. But if I were to rewrite this post I would use "gain", since it works fine.
but isn't Lyapunov exponent much the same thing as you're calling "gain"?…
Yes.
While "gain" can indeed be handwaved into Lyapunov exponent, jhana isn't just about gain. It's also about noise, which is an orthogonal axis.
What I think is going on is that there are two important factors: noise and gain. Jhana increases gain but decreases noise. In this way a jhanic state is more "ordered" in the lower-noise sense. Jhana is closer to critical because it has higher gain. It is also more sensitive in the dynamical-systems sense: small perturbations can get amplified into large-scale patterns.
Consider a leftover warhead from WWII. There are two things that could make it explode. One is if the bomb is sensitive (higher gain). The other is if the whole room is shaking (higher noise).
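A minimal numerical sketch of the two-knob picture (a toy model, not anything from the paper): a noisy linear recurrence x[t+1] = gain * x[t] + noise * eta[t], where the gain controls how close the system sits to the instability at gain = 1 and the noise controls how hard it is being shaken. The two knobs can be turned independently.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(gain, noise, n=50_000):
    """Toy AR(1) system: x[t+1] = gain * x[t] + noise * eta[t]."""
    x = np.zeros(n)
    eta = rng.standard_normal(n)
    for t in range(n - 1):
        x[t + 1] = gain * x[t] + noise * eta[t]
    return x

def autocorr_time(x, maxlag=500):
    """Rough integrated autocorrelation time: how long perturbations persist."""
    x = x - x.mean()
    var = (x * x).mean()
    rho = [(x[:-k] * x[k:]).mean() / var for k in range(1, maxlag)]
    return 1 + 2 * sum(rho)

a = simulate(gain=0.90, noise=1.0)   # moderate gain, lots of noise
b = simulate(gain=0.99, noise=0.2)   # higher gain (closer to critical), less noise

print(round(a.std(), 2), round(autocorr_time(a)))   # bigger raw fluctuations, short memory
print(round(b.std(), 2), round(autocorr_time(b)))   # quieter signal, much longer-lived correlations
```

In this toy picture the "jhana-like" setting would be the second one: quieter (less noise) and yet closer to the critical point (higher gain), with small perturbations persisting and propagating for much longer.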
(2) things about Complex Systems…never seem to give actual explicit definitions of the things they are talking about. Probably I have just not found the right things to read.
The original paper that led me down this rabbit hole in the first place used the DFA exponent and the fE/I ratio.
PS: This is the first time you've commented on my posts where I don't want to crawl into a cave and die. My writing is improving! 🎉 I still need to do a re-write of this article that credits you at the end, but at least I won't have to throw the entire thing away.
I'm brand new to this field of mathematics
Me too, mostly. I took an undergraduate course on dynamical systems many years ago but I've forgotten most of what was in it and in any case it seems like this complex-systems stuff uses the language of dynamical systems but not always in ways I can see how to connect with the mathematics I kinda-sorta know.
I get the impression that you're using LLM-assisted research much less -- if at all
I make almost no use of LLMs. (I am not at all claiming that this is a good thing, just validating your impression :-).)
jhana isn't just about gain. It's also about noise
If we're thinking about the brain as a dynamical system, how is this noise being represented? Maybe as arising from inputs coming in from outside. If jhana reduces sensitivity to those (which might fit with "pronounced self-reported sensory fading", as described in the article) then that could reduce the overall amount of noise in the system.
But I still can't quite make sense of this. (1) I haven't read the article closely but it doesn't look like it attributes their observations about jhana to reduced effects of noise. (2) The article specifically claims that jhana is associated with a lower max Lyapunov exponent -- that's the basis for its claim of "reduced chaoticity". Doesn't that mean, in your terms, that the article is claiming that jhana puts the brain in a state where the "gain" is lower, not higher?
The original paper that led me down this rabbit hole
Thanks -- I'll take a look. At first glance it seems to be very specifically about brains; what I'd really like to find is something that explains the general principles in terms that in principle I could apply to domains other than brains, and with enough precision and explicitness that I can see how to do mathematics to it.
The DFA exponent and so-called "fE/I" are both properties, if I am understanding correctly, of arbitrary time series (and the hope is that when the time series is derived from a dynamical system it tells you something interesting about the structure of that system). That's good, in that they are nice and general and well defined and I can understand what they are. But if we're talking about properties of a dynamical system rather than of some set of signals captured from it, I'd like to understand what properties are in question. Handwavily I understand that we're looking at something along the lines of "coefficient in an exponential dependence" where <0 means things decay and >0 means things explode and interesting stuff might happen at 0. (And presumably that exponential dependence arises from something like a differential equation where again we're looking at something like the eigenvalues in the matrix you get by linearizing the d.e.) But I don't get the impression that people talking about subcriticality and supercriticality are actually working with concrete precisely-specified mathematical systems for which they could define those terms precisely; it seems (perhaps unfairly) more as if they are defining "supercritical" to mean something like "if we go looking for instabilities or exponential divergences, we can find things that look like that" and "subcritical" to mean the reverse, and it's all kinda phenomenological, looking at the outputs of the system rather than at the system itself.
Which may very well be the best one can do with a brain, but it's all a bit frustrating when trying to understand exactly what's going on.
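For concreteness, the DFA exponent at least is something I can see how to compute from an arbitrary time series. Here's a rough sketch of the standard algorithm (a toy implementation, purely illustrative and surely not identical to whatever pipeline the paper actually used):

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """Detrended Fluctuation Analysis of a 1-D time series.

    Returns the DFA exponent alpha: ~0.5 for white noise, ~1.0 for 1/f-like
    ("scale-free") signals, ~1.5 for Brownian motion.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                      # integrate the signal
    if scales is None:
        scales = np.unique(np.logspace(2, np.log10(len(x) // 4), 20).astype(int))
    fluct = []
    for s in scales:
        n_win = len(y) // s
        segments = y[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluct.append(np.mean(rms))
    # The DFA exponent is the slope of log F(s) against log s.
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(0)
print(dfa_exponent(rng.standard_normal(20_000)))             # ~0.5: white noise
print(dfa_exponent(np.cumsum(rng.standard_normal(20_000))))  # ~1.5: random walk
```

White noise comes out near 0.5, a random walk near 1.5, and values in between get read as long-range temporal correlations, which is the "scale-free" property the EEG papers lean on. But that's still a property of a signal, not an explicit statement about the underlying dynamical system, which is exactly the frustration above.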
This is the first time you've commented on my posts where I don't want to crawl into a cave and die.
Ouch!
I was going to say "I hope that indicates only that you feel very bad when someone points out issues with what you've written, rather than that I am incredibly tactless" ... but maybe it's actually better overall for one person to be very tactless than for one person to be painfully sensitive to criticism. Anyway, to whatever extent your past pain is the result of my tactlessness, I'm sorry.
[Terminology note: "samatha", "jhana", "insight", "stream entry", "homunculus" and "non-local time" are technical jargon defined in Cyberbuddhist Jargon 1.0]
To understand how meditation affects the brain from an outside (neuroscientific) vantage point, it is necessary to understand criticality. Criticality comes from the mathematical study of dynamical systems. Dynamical systems are systems in which a point moves through a state space over time according to a fixed rule. Dynamical systems can be described on a continuum with ordered on one end and disordered on the other end.
On the threshold between ordered and disordered is the critical point. Systems more disordered than the critical point can be described as supercritical. Systems less disordered than the critical point can be described as subcritical. Systems at the critical point maximize complexity, which is a measure of entropy expressed across a variety of time scales.
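If you want a concrete toy model to hang those words on, the simplest is a branching process (this is just an illustrative sketch, not brain data): each active unit triggers, on average, sigma new units, and an "avalanche" is the total activity set off by one initial unit. Sigma below 1 is subcritical, sigma equal to 1 is critical, sigma above 1 is supercritical.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, max_size=100_000):
    """One avalanche in a branching process: every active unit triggers
    a Poisson(sigma) number of new units. sigma is the branching ratio."""
    active, total = 1, 1
    while active and total < max_size:
        active = rng.poisson(sigma, size=active).sum()
        total += active
    return total

for sigma in (0.8, 1.0, 1.2):   # subcritical, critical, supercritical
    sizes = [avalanche_size(sigma) for _ in range(2000)]
    print(sigma, np.mean(sizes), np.max(sizes))

# sigma < 1: avalanches fizzle out quickly (activity is damped)
# sigma = 1: avalanche sizes are heavy-tailed, spanning many scales
# sigma > 1: many avalanches run away until something stops them (here, the size cap)
```

At the critical point the avalanche sizes span every scale, which is the sort of scale-spanning behavior the definition of complexity above is pointing at.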
With that mathematical terminology out of the way, let's get into the neuroscience.
EEG scans have shown that the human brain exhibits scale-free temporal statistics and behavior, which suggests it is operating near criticality. The current theory is that resting-state brain networks hover around criticality. Focused attention tasks temporarily drive the brain more subcritical. Strong emotional states, creative tasks and psychedelics temporarily drive the brain more supercritical. Quiet alertness requires near-criticality.
Why do these different tasks rely on different network dynamics? Well, if you want to pay stable attention to something then your brain's network activity needs to be stabilized, which means it should be in a relatively subcritical mode. If you want your brain to think in new ways then it should be open to many different possibilities, which means it should be in a relatively supercritical mode. And if you want to notice the finest sensory signals coming in, then your brain should be in a relatively critical mode because that's where small signals propagate best across time scales.
Note the "relatively" qualifiers. Remember when I said that the resting brain operates near criticality? To be more precise, it actually operates at subcriticality. The brain going too far in the supercritical direction can cause effects like seizures, psychosis, or psilocybin-associated behavior. These behaviors are…maladaptive (though vision quests can produce behavioral improvements as an aftereffect).
If you're a meditator, then the phrases "focused attention" and "quiet alertness" probably got your attention. That's because samatha (jhanic) meditation is all about focused attention and Zen (insight-ish) meditation is all about quiet alertness.
What happens when we look for connections between meditation and criticality-related measures? Deep jhana reduces chaoticity and moves dynamics toward criticality.
The fact that meditation reduces chaoticity should be no surprise to anyone who has calmed their mind by sitting quietly and paying attention to the breath. The fact that insight meditation nudges the dynamics toward criticality should be unsurprising to anyone who has experienced stream entry. And the fact that insight meditation moves the brain in the direction of supercriticality should be no surprise to anyone who has experienced vipassana sickness, especially if you have experienced a meditation-related psychotic break.
What's really cool about the Criticality Theory of Meditation is that it provides a mathematical foundation for understanding how things like the homunculus and non-local time get dissolved by insight practice. These are just attractors. If your network activity moves toward supercriticality, then the attractors disappear. This is how psychedelics cause temporary ego death (by temporarily making your neural activity more chaotic) and how Zen causes permanent ego death (by permanently moving your set point in the direction of supercriticality).
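Here's a deliberately cartoonish sketch of what "attractors dissolving" means (a toy random recurrent network, not a model of a real brain): with low gain the activity settles into a stable attractor; push the gain past the critical value and that attractor is destabilized, so the activity wanders instead of settling.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200
W = rng.standard_normal((N, N)) / np.sqrt(N)   # random recurrent weights

def run(gain, steps=500):
    """Toy recurrent network: x[t+1] = tanh(gain * W @ x[t])."""
    x = 0.5 * rng.standard_normal(N)
    for _ in range(steps):
        x = np.tanh(gain * W @ x)
    return x

# Low gain (subcritical side): activity collapses into a stable attractor.
print(np.abs(run(gain=0.8)).mean())   # ~0
# High gain (supercritical side): the attractor is destabilized and the
# activity keeps wandering irregularly instead of settling.
print(np.abs(run(gain=1.6)).mean())   # ~0.5, and it never settles down
```

Nothing changes between the two runs except the gain; the same network goes from "has a stable attractor" to "doesn't". That is the structural analogy being drawn here with the homunculus, ego death, and insight practice.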