It just came to my mind that these are things I tend to think of under the heading of "considerateness" rather than "kindness".

I guess I'd agree. Maybe I was anchored a bit here by the existing term "computational kindness". :)

Fair point. Maybe if I knew you personally I would take you to be the kind of person who doesn't need such careful communication, and hence I would not act that way. But even besides that, one could make the point that your wondering about my communication style is still a better outcome than somebody else being put into an uncomfortable situation against their will.

I should also note I generally have less confidence in my proposed mitigation strategies than in the phenomena themselves. 

Thanks for the example! It reminds me of how I once was a very active Duolingo user, but then they published an update that changed the color scheme. Suddenly the Duolingo interface was brighter and lower-contrast, which just gave me a headache. At that point I basically instantly stopped using the app, as I found no setting to change it back to higher contrast. It's not quite the same of course, but probably also something that would surprise some product designers -- "if people want to learn a language, surely something as banal as brightening up the font color a bit would not make them stop using our app".

Another operationalization for the mental model behind this post: let's assume we have two people, Zero-Zoe and Nonzero-Nadia. They are employed by two big sports clubs and are responsible for the living and training conditions of the athletes. Zero-Zoe bases her decisions strictly on studies that found significant results (and had no failed replications). Nonzero-Nadia lets herself be informed by studies in a similar manner, but also takes priors into account for decisions that have little scientific backing. Following a "causality is everywhere and effects are (almost) never truly 0" world view, she goes for many speculative but cheap interventions that are (if indeed non-zero) more likely to be beneficial than detrimental.

One view is that Nonzero-Nadia is wasting her time on too many inconsequential considerations, and so will overall do a worse job than Zero-Zoe, as she's distracted from where the real benefits can be found.

Another view, and the one I find more likely, is that Nonzero-Nadia can overall achieve better results (in expectation), because she too will follow the most important scientific findings, but on top of that will capture all kinds of small positive effects that Zero-Zoe is missing out on.

(A third view would of course be "it doesn't make any difference at all and they will achieve completely identical results in expectation", but come on, even an "a non-negligible subset of effect sizes is indeed 0"-person would not make that prediction, right?)
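To make that concrete, here is a toy Monte Carlo sketch of the two policies (every number in it is invented for illustration, not taken from any study): both managers apply the same few large, proven interventions, but Nonzero-Nadia additionally stacks many cheap interventions with tiny, mostly-beneficial expected effects.

```python
import numpy as np

rng = np.random.default_rng(0)

def season_outcome(n_speculative, n_sims=100_000):
    # Both managers apply the same 3 well-established interventions.
    established = 3 * 1.0
    # Speculative cheap interventions: tiny effects, beneficial with
    # probability 0.9, detrimental with probability 0.1 (invented numbers).
    signs = rng.choice([1.0, -1.0], p=[0.9, 0.1], size=(n_sims, n_speculative))
    sizes = rng.exponential(scale=0.02, size=(n_sims, n_speculative))
    return established + (signs * sizes).sum(axis=1)

zoe = season_outcome(n_speculative=0)     # ignores everything unproven
nadia = season_outcome(n_speculative=50)  # stacks 50 cheap long-shots
print(f"Zero-Zoe:      {zoe.mean():.3f}")
print(f"Nonzero-Nadia: {nadia.mean():.3f}")
# Expected gap: 50 * (0.9 - 0.1) * 0.02 = +0.8 in these made-up units.
```

Under these assumptions Nadia comes out ahead by roughly 0.8 units in expectation; whether the real world looks like this is of course exactly what the two views above disagree about.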

You're right of course - in the quoted part I link to the Wikipedia article for "almost surely" (as the analogous opposite case of "almost 0"), so yes, it can indeed happen that the effect is exactly 0, but on a continuum of numbers this is so extremely rare that it doesn't make much sense to highlight that particular hypothesis.
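For what it's worth, this is just the standard measure-theoretic point (my formalization, not from the original comments): if your prior over the effect size θ has a continuous density p, then the point hypothesis θ = 0 has probability zero without being impossible:

```latex
P(\theta = 0) = \int_0^0 p(\theta)\,d\theta = 0,
\qquad\text{while}\qquad
P(|\theta| < \varepsilon) > 0 \quad \text{for every } \varepsilon > 0.
```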

For many such questions it's indeed impossible to say. But I think there are also many - particularly the types of questions we humans tend to ask - where you have reasons to assume that the causal connections collectively point in one direction, even if you can't measure it.

Let's take the question of whether improving air quality at someone's home improves their recovery time after exercise. I'd say that this is very likely. But I'd also be a bit surprised if studies were able to show such an effect, because it's probably small, and it's probably hard to measure precisely. Improving air quality is simply an intervention that is generally "good": it will have small but positive effects on all kinds of properties of our lives, and negative effects on far fewer. And if we accept that the effect on exercise recovery will not be zero, then I'd say there's a chance of something like 90% that this effect will be beneficial rather than detrimental.
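As a back-of-the-envelope version of that last claim (the 90% figure and the symmetry of effect sizes are my illustrative assumptions): if the effect is equally large in either direction but beneficial with probability 0.9, its expectation is clearly positive:

```latex
\mathbb{E}[\text{effect}] = 0.9\,(+\varepsilon) + 0.1\,(-\varepsilon) = 0.8\,\varepsilon > 0.
```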

Similarly, with many interventions that are supposed to affect human behavior, one relevant question that is often answerable is whether the intervention increases or reduces friction. And if we expect no other causal effect to dominate that one, then the effect on friction will often predict the overall outcome of the intervention.

A basic operationalization of "causality is everywhere" is "if we ran an RCT on some effect with sufficiently many subjects, we'd always reach statistical significance" - which is an empirical claim that I think is true in "almost" all cases, even for "if I clap today, will it change the temperature in Tokyo tomorrow?". I think I get what you mean by "if causality is everywhere, it is nowhere" (similar to "a theory that can explain everything has no predictive power"), but my "causality is everywhere" claim is a factual claim about the world that is at least in principle verifiable/falsifiable.
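This operationalization matches the textbook power calculation: for any true standardized effect d ≠ 0 there is a finite sample size that makes significance likely; it just grows like 1/d². A minimal sketch using the two-sample z-approximation (the effect sizes, including the "clap in Tokyo" one, are of course invented):

```python
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.8):
    """Approximate per-group n for a two-sided two-sample z-test
    to detect a standardized effect size d with the given power."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * (z_alpha + z_power) ** 2 / d ** 2

for d in [0.5, 0.05, 0.0005]:  # medium, tiny, "clap in Tokyo"-tiny
    print(f"d = {d:>6}: ~{n_per_group(d):,.0f} subjects per group")
```

So "causality is everywhere" cashes out as: significance is always purchasable with enough subjects, even when the required n is astronomically impractical.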

Of course "two things are causally connected" is not at all the same as "the causal connection is relevant and we should measure it / utilize it / whatever". My basic point is that assuming that something has no causal connection is almost always wrong. Maybe this happens to yield appropriate results, because the effect is indeed so small that you can simply act as if there was no causal connection. But I also believe that the "I believe X and Y have no causal connection at all" world view leads to many errors in judgment, and makes us overlook many relevant effects as well.

Indeed, I fully agree with this. Yet when deciding that something is so small that it's not relevant, it's (in my view anyway) important to be mindful of that decision, and to be transparent about your "relevance threshold", as other people may disagree about where it should lie.

Personally I think it's perfectly fine for people to consciously say "the effect size of this is likely so close to 0 we can ignore it" rather than "there is no effect", because the former may well be completely true, while the latter hints at a level of ignorance that leaves the door for conceptual mistakes wide open.
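A tiny sketch of that distinction (the threshold value is hypothetical and domain-specific, which is exactly why it should be stated explicitly):

```python
RELEVANCE_THRESHOLD = 0.01  # my made-up cutoff; others may reasonably disagree

def practically_ignorable(expected_effect):
    """A claim about magnitude ('too small to matter here'),
    deliberately distinct from a claim about existence ('is zero')."""
    return abs(expected_effect) < RELEVANCE_THRESHOLD
```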

Just to note I wrote a separate post focusing on pretty much that last point:


Interesting, I hadn't heard of this! I haven't fully grasped the "No evidence for nudging after adjusting for publication bias" study yet, but at first glance it looks to me like evidence for small effect sizes rather than for no effect at all? Generally, when people say "nudging doesn't work", this can mean a lot of things, from "there's no effect at all" to "there often is an effect, but it's not very large, and it's not worth it to focus on this in policy debates", to "it has a significant effect, but it will never solve a problem fully because it only affects the behavior of a minority of subjects".

There's also this article making some similar points, overall defending the effectiveness of nudging while also pushing for more nuance in the debate. It cites one very large study in particular that showed significant effects while avoiding publication bias (emphasis mine):

The study was unique because these organizations had provided access to the full universe of their trials—not just ones selected for publication. Across 165 trials testing 349 interventions, reaching more than 24 million people, the analysis shows a clear, positive effect from the interventions. On average, the projects produced an average improvement of 8.1 percent on a range of policy outcomes. The authors call this “sizable and highly statistically significant,” and point out that the studies had better statistical power than comparable academic studies. So real-world interventions do have an effect, independent of publication bias.
(...)
We can start to see the bigger problem here. We have a simplistic and binary “works” versus “does not work” debate. But this is based on lumping together a massive range of different things under the “nudge” label, and then attaching a single effect size to that label.

Personally I have a very strong prior that nudging must have an effect > 0 - it would just be extremely surprising to me if the effect of an intervention that clearly points in one direction were exactly 0. This may, however, still be compatible with the effects in many cases being too small to be worth putting the spotlight on; I suspect it just strongly depends on the individual case and intervention.
