Hi. I'm Gareth McCaughan. I've been a consistent reader and occasional commenter since the Overcoming Bias days. My LW username is "gjm" (not "Gjm" despite the wiki software's preference for that capitalization). Elsewhere I generally go by one of "g", "gjm", or "gjm11". The URL listed here is for my website and blog, neither of which has been substantially updated for several years. I live near Cambridge (UK) and work for Hewlett-Packard (who acquired the company that acquired what remained of the small company I used to work for, after they were acquired by someone else). My business cards say "mathematician" but in practice my work is a mixture of simulation, data analysis, algorithm design, software development, problem-solving, and whatever random engineering no one else is doing. I am married and have a daughter born in mid-2006. The best way to contact me is by email: firstname dot lastname at pobox dot com. I am happy to be emailed out of the blue by interesting people. If you are an LW regular you are probably an interesting person in the relevant sense even if you think you aren't.
If you're wondering why some of my very old posts and comments are at surprisingly negative scores, it's because for some time I was the favourite target of old-LW's resident neoreactionary troll, sockpuppeteer and mass-downvoter.
Is it actually true that egoism, in the sense of "some degree of egoism" or "at least mildly egoistic", has a bad reputation?
My impression is that (1) almost everyone cares at least a bit about random other people but cares a lot more about themself, and (2) almost everyone is aware of #1 and doesn't see it as particularly bad. If you call someone altruistic you aren't generally claiming that they give no higher priority to their own interests than to others', only that they care more about others relative to themselves than is usual.
I agree that it's more broadly accepted when companies care scarcely at all about the interests of others -- we largely accept corporate behaviour that would be universally regarded as sociopathic if done by an individual -- and that this is a bad thing.
I hope I will return to this when I have time to read it properly and think about it properly, but for now I'll just drop in two things at the meta-level: (1) I don't know how comprehensible I'd have found something more in your usual concise style, but the above certainly seems nice and clear so it seems like you probably made a good choice. (2) I'm glad to hear that I'm perfectly tactful but now I'm worried about a different issue, namely that maybe I never say anything unless I have something mean^H^H^H^Hcritical to say, which I'm aware is the exact opposite of what generations of parents have been teaching their children to do :-). (I definitely do lean in that direction, and I'm somewhat prepared to defend it in that offering hopefully-informative criticism is arguably more useful than offering compliments, but it's still probably suboptimal.)
I'm brand new to this field of mathematics
Me too, mostly. I took an undergraduate course on dynamical systems many years ago but I've forgotten most of what was in it and in any case it seems like this complex-systems stuff uses the language of dynamical systems but not always in ways I can see how to connect with the mathematics I kinda-sorta know.
I get the impression that you're using LLM-assisted research much less -- if at all
I make almost no use of LLMs. (I am not at all claiming that this is a good thing, just validating your impression :-).)
jhana isn't just about gain. It's also about noise
If we're thinking about the brain as a dynamical system, how is this noise being represented? Maybe as arising from inputs coming in from outside. If jhana reduces sensitivity to those (which might fit with "pronounced self-reported sensory fading", as described in the article) then that could reduce the overall amount of noise in the system.
But I still can't quite make sense of this. (1) I haven't read the article closely but it doesn't look like it attributes their observations about jhana to reduced effects of noise. (2) The article specifically claims that jhana is associated with a lower max Lyapunov exponent -- that's the basis for its claim of "reduced chaoticity". Doesn't that mean, in your terms, that the article is claiming that jhana puts the brain in a state where the "gain" is lower, not higher?
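(To make concrete for myself what a lower max Lyapunov exponent cashes out to, here's a toy calculation with the logistic map -- nothing to do with brains or with however the paper estimates it from EEG, and the parameter values are just ones I picked for illustration. For a 1-d map the exponent is the long-run average of log |f'(x)| along a trajectory, i.e. how fast nearby trajectories separate, so a smaller value means perturbations get amplified less -- which is what I'd have thought "lower gain" meant.)

```python
# Toy illustration (not from the paper): estimate the max Lyapunov
# exponent of the logistic map x_{n+1} = r*x*(1-x) as the average of
# ln|f'(x_n)| = ln|r*(1 - 2*x_n)| along a long trajectory.
import math

def max_lyapunov(r, x0=0.4, n_transient=1_000, n_samples=100_000):
    x = x0
    for _ in range(n_transient):      # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_samples):
        deriv = abs(r * (1 - 2 * x))
        total += math.log(max(deriv, 1e-300))   # guard against log(0)
        x = r * x * (1 - x)
    return total / n_samples

for r in (3.5, 3.9):
    print(f"r = {r}: estimated max Lyapunov exponent ~ {max_lyapunov(r):+.3f}")
```

This should come out clearly negative at r = 3.5 (the map settles into a periodic orbit, so perturbations die away) and clearly positive at r = 3.9 (chaotic, so they grow).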
The original paper that led me down this rabbit hole
Thanks -- I'll take a look. At first glance it seems to be very specifically about brains; what I'd really like to find is something that explains the general principles in terms that in principle I could apply to domains other than brains, and with enough precision and explicitness that I can see how to do mathematics to it.
The DFA exponent and so-called "fE/I" are both properties, if I am understanding correctly, of arbitrary time series (and the hope is that when the time series is derived from a dynamical system it tells you something interesting about the structure of that system). That's good, in that they are nice and general and well defined and I can understand what they are. But if we're talking about properties of a dynamical system rather than of some set of signals captured from it, I'd like to understand what properties are in question. Handwavily I understand that we're looking at something along the lines of "coefficient in an exponential dependence" where <0 means things decay and >0 means things explode and interesting stuff might happen at 0. (And presumably that exponential dependence arises from something like a differential equation where again we're looking at something like the eigenvalues in the matrix you get by linearizing the d.e.) But I don't get the impression that people talking about subcriticality and supercriticality are actually working with concrete precisely-specified mathematical systems for which they could define those terms precisely; it seems (perhaps unfairly) more as if they are defining "supercritical" to mean something like "if we go looking for instabilities or exponential divergences, we can find things that look like that" and "subcritical" to mean the reverse, and it's all kinda phenomenological, looking at the outputs of the system rather than at the system itself.
Which may very well be the best one can do with a brain, but it's all a bit frustrating when trying to understand exactly what's going on.
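For my own benefit, here's the eigenvalue picture I was gesturing at, written out; this is just the standard textbook linearization around a fixed point, not anything taken from the paper:

```latex
\[
  \dot{x} = f(x), \qquad f(x^{*}) = 0, \qquad \delta(t) := x(t) - x^{*},
\]
\[
  \dot{\delta} \approx J\,\delta \quad\text{with } J = Df(x^{*})
  \qquad\Longrightarrow\qquad \delta(t) \approx e^{Jt}\,\delta(0),
\]
\[
  \text{so along each eigendirection } \delta \sim e^{\lambda t}:\quad
  \operatorname{Re}\lambda < 0 \text{ decays}, \quad
  \operatorname{Re}\lambda > 0 \text{ blows up}, \quad
  \operatorname{Re}\lambda = 0 \text{ is the marginal case.}
\]
```

The trouble, of course, is that for a brain nobody hands you f, which I suppose is exactly why everything gets estimated from recorded time series instead.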
This is the first time you've commented on my posts where I don't want to crawl into a cave and die.
Ouch!
I was going to say "I hope that indicates only that you feel very bad when someone points out issues with what you've written, rather than that I am incredibly tactless" ... but maybe it's actually better overall for one person to be very tactless than for one person to be painfully sensitive to criticism. Anyway, to whatever extent your past pain is the result of my tactlessness, I'm sorry.
(I don't think anything I said assumed you were referring to thermodynamic order/disorder.)
It sounds as if some of your definitions may want adjusting.
Dynamical systems can be described on a continuum with ordered on one end and disordered on the other end. [...] A disordered system has chaotic, turbulent, or equivalent behavior. [...] Systems more disordered than the critical point can be described as supercritical. Systems less disordered than the critical point can be described as subcritical.
Doesn't all of this explicitly say that moving in the sub->super direction means becoming more disordered, which means becoming more chaotic?
Perhaps what you actually mean to say is something of the following form?
Dynamical systems can be described on a continuum whose actual definition is too complicated to give here but in some situations can be handwavily approximated as "less versus more sensitive to small changes", which in turn can in some situations be handwavily approximated as "more ordered versus more disordered".
A particular point along that continuum goes by the name of "criticality", and the dynamics of a critical system are often particularly interesting; in particular, they maximize a quantity called complexity which is a measure of entropy expressed across a variety of time scales. Systems on the less-sensitive/more-ordered side of criticality are called subcritical and systems on the more-sensitive/less-ordered side are called supercritical.
(Is there actually a proper term for the thing that increases as you move from subcritical to supercritical? I keep finding that I need ugly circumlocutions for want of one.)
And then the situation described in the article (where a certain change, in this case from mindfulness to jhana, moves in the sub-to-super direction -- which would normally mean more sensitivity, hence more tendency to chaos in the mathematical sense, hence typically more disorder -- but somehow also involves a reduction in chaoticity) could be explained by this system not having the usual relationship between the sub-to-super parameter and chaoticity.
But I think I'm still confused, because (as I mentioned before) the article very much doesn't present that combination as somehow an unusual one. It says that jhana is characterized by a smaller max Lyapunov exponent, hence less chaoticity ... but isn't Lyapunov exponent much the same thing as you're calling "gain"? Wouldn't we normally expect reducing the Lyapunov exponent to move in the direction of subcriticality? Or am I, indeed, just still confused? The article says "Jhana decreases brain chaoticity relative to mindfulness, indicating brain dynamics closer to criticality" (italics mine), which to me seems like they're saying that in general we should expect closer-to-criticality dynamics to come along with less chaos, which is the exact opposite of what it feels like we should expect.
I've had a bit of a look for a nice clear explanation of the actual mathematics here, but it seems that there are (1) things about dynamical systems generally, written by mathematicians, which talk about, e.g., subcritical or supercritical bifurcations and have nice clean definitions for those (I've put an example of what I mean at the end of this comment), and (2) things about Complex Systems, often specifically about brains, which talk about whole systems being "subcritical" or "critical" or "supercritical" but never seem to give actual explicit definitions of the things they are talking about. Probably I have just not found the right things to read.
I am rather confused.
What am I missing or misunderstanding here?
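For concreteness, here is the kind of clean definition I mean under (1): the textbook normal forms for the pitchfork bifurcation, where "supercritical" and "subcritical" describe a specific bifurcation of a specific equation (standard bifurcation-theory material, nothing from the article):

```latex
\[
  \dot{x} = r x - x^{3} \qquad \text{(supercritical pitchfork: for } r > 0
  \text{ the new branches } x^{*} = \pm\sqrt{r} \text{ are stable)}
\]
\[
  \dot{x} = r x + x^{3} \qquad \text{(subcritical pitchfork: for } r < 0
  \text{ the branches } x^{*} = \pm\sqrt{-r} \text{ are unstable)}
\]
```

That's a classification of one bifurcation in one explicit equation, which feels rather different from labelling the state of a whole brain "subcritical" or "supercritical"; presumably there's a precise bridge between the two usages, but I haven't found where it's written down.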
Noted. But it seems to me that if the trajectory was excessively altruistic -> obnoxious Objectivist -> something reasonable, it's pretty plausible that without reading Rand you might just have gone straight from "excessively altruistic" to "something reasonable".
(But of course you may well have a better sense of that having been through it from the inside.)
Does Lewis really advocate for extreme altruism, as such? Of course he advocates for Christianity, and some versions of Christianity advocate extreme altruism, but Lewis's sort was mostly pretty moderate.
This has very little to do with the actual high-level topic at issue, but it's something I've seen elsewhere in rationalist discourse and I recently realised that I think it's probably nonsense.
I still think a lot of you all need to sit down with Atlas Shrugged to get nudged in a usefully more selfish direction.
I am pretty sure it scarcely ever happens that someone who is too altruistic reads Atlas Shrugged and comes away with their altruism moderated a bit, or that someone who is too selfish reads, er, the Communist Manifesto or the Sermon on the Mount or something[1], and comes away with their selfishness moderated a bit.
[1] I don't know whether it's Highly Significant somehow that I can't come up with a good symmetrical example of something advocating for extreme altruism as AS advocates for extreme selfishness.
I think what actually happens is that (usually) they say to themselves "wow, that was a load of pernicious nonsense, I resent having wasted my time reading it, and will now be even more zealous in opposing that sort of thing" and if anything have their original position reinforced, or (occasionally) they feel like the scales have fallen from their eyes and become a full-blown Objectivist or Marxist or Christian or whatever.
If I thought altruism was bullshit and everyone ought to be a Randian egoist then I might be all for giving copies of Atlas Shrugged to very altruistic people. But if what I wanted was more-moderately-altruistic people, I don't think that would be a good strategy.
I should in fairness say that I don't have any actual evidence for what happens when extreme altruists read Atlas Shrugged. Maybe (either in general, or specifically when they are rationalist extreme altruists) they do tend to emerge with their views moderated. But I don't think it's the way I'd bet.
I think the information actually conveyed by this "unreasonably effective writing advice" is the fact that such-and-such a section of what you wrote prompts that question, and I suspect that saying "this bit isn't clear" would be almost as effective as asking "what did you mean here?" and then saying "well, write that then".
(It's like the old joke about the consultant whose invoice charges $1 for hitting the machine with a wrench and $9,999 for knowing where to hit it.)
So far as I can tell, the most plausible way for the universe to be deterministic is something along the lines of "many worlds" where Reality is a vast superposition of what-look-to-us-like-realities, and if the future of AI is determined what that means is more like "15% of the future has AI destroying all human value, 10% has AI ushering in a utopia for humans, 20% has it producing a mundane dystopia where all the power and wealth is in a few not-very-benevolent hands, 20% has it improving the world in mundane ways, and 35% has it fizzling out and never making much more change than it already has done" than like "it's already determined that AI will/won't kill us all".
(For the avoidance of doubt, those percentages are not serious attempts at estimating the probabilities. Maybe some of them are more like 0.01% or 99.99%.)