I suspect there’s a basic reason why futility claims are often successful in therapy/coaching: by claiming (and succeeding in convincing the client) that something can’t be changed, you reduce the client’s shame about not having changed the thing. Now the client is without shame, which is a state of mind that makes change a priori easier; and focusing the change on aspects the client didn’t fail on in the past further increases the chance of succeeding, since there’s no evidence of not succeeding on those aspects.
However, I also really care about truth, and so I really dislike such futility claims.
Immutability: Either the given property is truly entirely fixed, and cannot be changed at all.
Big difference between "cannot be changed at all" and "the distribution is fixed, but with day-to-day variation."
Would you say that fixed distributions with day-to-day variation are a common phenomenon? Of course, it depends on where we sample from, but intuitively I would guess that "most things" that have variation can also be influenced. Then again, "most things" is not very meaningful without cleaner definitions of all the terms.
Maybe instead of "truly entirely fixed", I should say something like "truly resistant to targeted intervention".
I am not sure if immutability or optimality is the main thing behind people giving you advice in these cases. It could be that:
Optimality: Or the property can be changed, but only for the worse, i.e. it is already so close to the optimum (or marginal improvements are so costly) that no further improvements are practical.
It may be the case that expressions of futility for individual improvement capture the fact that many properties can indeed be improved in isolation, but the local improvement leads to a regionally/globally sub-optimal result.
However, for instances of group expressions of futility, I anticipate that this is basically another manifestation of the coordination problem.
This is something we all need to hear from time to time. That feeling of futility can be such a weight. I think not only should we personally acknowledge that we can accomplish more than we give ourselves credit for, but we should help others feel that way. Set people up for success, rack up wins and build momentum. The act of doing has benefits beyond the finished product, and the lack of doing does harm beyond the opportunity cost.
This easily leads to the impression that “retention is bad everywhere”, because all people hear from other group organizers are complaints about low retention. But this not only involves some reporting bias – groups with better retention rates usually just don’t talk about it much, as it’s not a problem for them.
Implied narrative is that we don't hear about successful groups, which is obviously false. Alternative model: most groups, products, etc just don't have much demand/have too much competition. Group founders don't want to just achieve "growth," they want a very specific kind of growth that fits their vision for the group they set out to found. What makes you think there's typically a way to keep the failing group the same on the important traits while improving retention? And if such strategies exist in theory, why do you think that any given group founder should expect they can put them into practice?
Implied narrative is that we don't hear about successful groups, which is obviously false.
I wasn't meaning to equate "low retention" with "not successful". I've also heard organizers of groups I'd deem "successful" complain about retention being lower than they'd like. Of course there's a strong correlation here (and "failing" groups are much more likely to be affected by and complain about low retention), but still, I've never heard a group explicitly claim that they're happy with their retention rate (although I'm sure such groups exist). The topic just asymmetrically comes up for groups who are unhappy about it.
What makes you think there's typically a way to keep the failing group the same on the important traits while improving retention? And if such strategies exist in theory, why do you think that any given group founder should expect they can put them into practice?
Basically the two criteria I mentioned: retention clearly is not fixed, as you can easily think of strategies to make it worse. So, is there any reason to assume that what a random group is doing is close to optimal wrt retention, particularly if they have not invested much effort into the question before? It may indeed involve trade-offs, some of which may be more acceptable to the group than others. But there are so many degrees of freedom, from what types of events you run, to what crowd you attract with your public communication, to what venue you meet in, to how you treat new (and old) people, to how much jargon you use, to how you end your events. To me, it would be very surprising if the group were acting optimally on all these dimensions by default, with no valuable trade-offs lying around that would increase retention without significantly compromising other traits.
This can particularly make sense in cases where we have already invested a lot of effort into something. But if we haven’t – as is the case to varying degrees in these examples – then it would, typically, be really surprising if we just ended up close to the optimum by default.
Who is "we?" You, personally? All society? Your ancestral lineage going back to LUCA? Selection effects, cultural transmission of knowledge, and instinct all provide ways activities can be optimized without conscious personal effort. In many domains, assuming approximate optimality by default should absolutely be your baseline assumption. And then there's the metalevel to consider, on which your default assumptions about approximate optimality for any domain you might consider are also optimized by default. Perhaps your prior should be that your optimality assumptions are roughly optimal, then reason from that starting point! If not, why not?
Who is "we?" You, personally? All society? Your ancestral lineage going back to LUCA?
Well, depends on the case. When speaking of a person's productivity or sleep, it's primarily the person. When speaking of information flow within a company, it's the company. When speaking of the education system within a country (or whatever the most suitable legislative level is), it's those who have built the education system in its current form.
But cultural and evolutionary influences are indeed an important point. It may well be that sleep tends to be close to optimal for most people for such reasons. But even then: if there are easy ways to make it worse, it may at the very least be worth checking that you aren't accidentally doing these preventable things (such as exposing yourself to bright displays in the evening, or consuming caffeine in the afternoon/evening).
Perhaps your prior should be that your optimality assumptions are roughly optimal, then reason from that starting point! If not, why not?
I agree I haven't really argued in the post for why and when this shouldn't be the case. A slightly weaker form of what I'm claiming in the post may just be: it's worth checking if optimality is actually plausible in any given case. And then it doesn't matter that much which prior you're starting from. Maybe you assume your intuition about optimality is usually right, but it can still be worth checking individual cases rather than following the gut instinct of "this thing is probably optimal because that's what my intuition says and hence I won't bother trying to improve it".
The question of how many things are optimal, and how well calibrated your intuition is, really comes down to the underlying distributions, and, in this context, to what types of things any given person typically has (and might notice) futility assumptions about. What I was getting at in the post is basically some form of "instead of dismissing some thing as futile-to-improve directly, maybe catch yourself and occasionally spend a few seconds thinking about whether this is really plausible". I think the cost of that action is really low[1], even if it turns out that 90% of things of this type you encounter happen to be already optimal (and I don't think that's what people will find!).
The cost may end up being higher if this causes you to waste time on trying to improve things that end up being futile or optimal already. But that's imho beyond this post. I'm not talking about how to accurately evaluate these things, just that our snap judgments are not perfect, and we should catch ourselves when applying them carelessly.
…or the “it doesn’t make a difference anyway” fallacy.
I once had a coaching call on some generic productivity topic along the lines of “I’m not getting done as much as I’d like to”. My hope was that we might identify ways for me to become more productive and get more done. The coach, however, very quickly narrowed in on figuring out what I typically work on in order to eliminate the least valuable things – also a good idea for sure, but this approach seemed a bit disappointing to me. I had the impression I already had a good selection of high-value things, and really only wanted to do more of them, rather than dropping some in favor of others. When I asked about this, he seemed to have a strong conviction that “getting more done” is futile – you can’t just do that, or if so, then not sustainably. Instead, you should always focus on doing the right things.
Now, I think there is some wisdom in that. And perhaps it even was a good strategy in my case. However, I still believe there’s a bit of a fallacy involved in his assessment: the assumption that some malleable quantity is somehow unimprovable. That how much I can get done is somehow constant, or that trying to change it is not worth the effort.
It’s what I like to call futility illusions, and I think they’re pretty common.
To name two more examples that I’ve encountered before:
The recurring theme in all these examples is that someone has a strong belief that some particular quantity is basically fixed and you can’t realistically improve it.
But the assumption that there’s no way to improve upon a given quantity is often a rather bold one, because it implies one of two things:
Condition 1 seems to be false in almost all cases of interest. Looking at our three examples, we can at least always find obvious ways to make them worse:
So, clearly, none of these metrics are immutable, the quantities can be changed. This leaves the second condition: if they can be changed, and yet you assume they can’t be improved, then this means that they have some upper bound, and we are very close to it (or that we’re so deep into diminishing return land already that further improvements are not practically achievable). This can particularly make sense in cases where we have already invested a lot of effort into something. But if we haven’t – as is the case to varying degrees in these examples – then it would, typically, be really surprising if we just ended up close to the optimum by default.
The optimality assumption at least often appears more reasonable than immutability. But typically, when I encounter futility arguments in the wild, the people making them don’t inquire about optimality beforehand, they seem to just assume it for whatever reason.
Let’s take the group retention example: I have no actual data on this, but I’m sure the retention rate of, say, rationality meetup groups varies a lot. Let’s, for instance, suppose it’s a skewed distribution that for almost all groups ranges from 5% to 50%, with the average around 20% or so (for some sensible operationalization of “retention”).
And maybe the organizers from the lower half of this distribution tend to complain to their peers about their group’s retention rate being frustratingly low. This easily leads to the impression that “retention is bad everywhere”, because all people hear from other group organizers are complaints about low retention. But this not only involves some reporting bias – groups with better retention rates usually just don’t talk about it much, as it’s not a problem for them. What’s more, even among those that do complain, the retention rate may still vary by a factor of more than 3! So, in this case, it seems very likely to me that there are ways to improve retention for many such groups, it’s just not immediately obvious how to do so.[2]
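To make the “factor of more than 3” claim a bit more concrete, here is a minimal sketch in Python. The Beta(2, 8) shape is my own toy choice to roughly match the hypothetical numbers above (mean around 20%, most mass between roughly 5% and 45%), not actual data on meetup groups:

```python
import numpy as np

# Toy illustration (my own parameterization, not data from the post):
# a right-skewed Beta(2, 8) distribution has mean 0.20 and puts most of
# its mass roughly between 5% and 45% retention.
rng = np.random.default_rng(0)
retention = rng.beta(2, 8, size=100_000)

p5, p95 = np.percentile(retention, [5, 95])
print(f"all groups:       mean = {retention.mean():.2f}, "
      f"5th-95th pct = {p5:.2f}-{p95:.2f}")

# The "complainers": groups with below-average retention.
below_avg = retention[retention < retention.mean()]
lo, hi = np.percentile(below_avg, [5, 95])
print(f"below-avg groups: 5th-95th pct = {lo:.2f}-{hi:.2f}, "
      f"spread factor = {hi / lo:.1f}x")
```

The exact factor depends on the assumed shape, but for most right-skewed distributions with these rough bounds, even the below-average groups differ from each other by several-fold. That’s the point: the complainers are not all stuck at the same floor.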
Well, some things may be truly futile (or optimal!) after all, at least given our current state of knowledge and technology, such as:
But at the other end, there are many things that may seem futile at first, but upon closer inspection probably don’t fulfill the conditions of immutability or optimality:
It’s probably the case that futility is usually earned: many things naturally become more futile the more maxed out they get, such as in the case of data compression or solar panel efficiency – after decades of work and innovation, we’re possibly closing in on fundamental limits to a degree that further major improvements seem unlikely (or impossible).
For individuals, it naturally differs how much effort they’ve already invested into their sleep, or productivity, or expected lifespan. But if you haven’t put meaningful effort into some malleable quantity, then it’s often unlikely you just happen to be close to the optimum by default.
Perhaps a reasonable "5-second version" of this post is something like:
Whenever you suspect (or somebody claims) that some desirable property cannot be improved further, think briefly about a) whether that property can be changed at all (e.g., can you think of easy ways to make it worse?), and b) if it can be changed, whether there is really reason to assume it’s already close to its optimum. If it can be changed and is not close to its optimum, then arguing about its futility may be misguided.
There are definitely cases where further improvements are futile or not worth the effort. But before cutting conversations short or dismissing ideas due to assumptions of futility, we should make sure we’re not just falling for a fallacy.
While I'm at it, I'd like to mention that "responsibility" is a tricky term anyway.
Yeah, okay, maybe my argument hinges a bit on fake data. But come on, do you really think retention is about equal in every group, and the group’s culture and behavior have no meaningful influence?