It was published this evening. Here is a link to the letter, and here is the announcement on Twitter.
Yes, here: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=3gMWA8PjoCnzsS7bB
Zoe - I don’t know if you want me to respond to you or not, and there’s a lot I need to consider, but I do want to say that I’m so, so sorry. However this turns out for me or Leverage, I think it was good that you wrote this essay and spoke out about your experience.
It’s going to take me a while to figure out everything that went wrong, and what I did wrong, because clearly something really bad happened to you, and it is in some way my fault. In terms of what went wrong on the project, one throughline I can see was arrogance, especially my arrogance, which affected so many of the choices we made. We dismissed a lot of the actually useful advice and tools and methods from more typical sources, and it seems that blocking out society made room for extreme and harmful narratives that should have been tempered by a lot more reality. It’s terrible that you felt like your funding, or ability to rest, or take time off, or choose how to interact with your own mind were compromised by Leverage’s narratives, including my own. I totally did not expect this, or the negative effects you experienced after leaving, though maybe I would have, had I not narrowed my attention and basically gotten way too stuck in theoryland.
I agree with you that we shouldn’t skip steps. I’ve updated accordingly. Again I’m truly sorry. I really wanted your experience on the project to be good.
Hi everyone. I wanted to post a note to say first, I find it distressing and am deeply sorry that anyone had such bad experiences. I did not want or intend this at all.
I know these events can be confusing or very taxing for all the people involved, and that includes me. They draw a lot of attention, both from those with deep interest in the matter, where there may be very high stakes, and from onlookers with lower context or less interest in the situation. To hopefully reduce some of the uncertainty and stress, I wanted to share how I will respond.
My current plan (after writing this note) is to post a comment about the above-linked post. I have to think about what to write, but I can say now that it will be brief and positive. I’m not planning to push back or defend. I think the post is basically honest and took incredible courage to write. It deserves to be read.
Separately, I’m going to write a letter in my role as Executive Director of Leverage Research on the topic of harms from our previous psychology research and the structure of the organization.
It may be useful to address the Leverage/Rationality relation or the Leverage/EA relation as well, but discussion of that might distract us from what is most important right now.
Author of the post here. I edited the post by:
(1) adding an introduction — for context, and to make the example in Part I less abrupt
(2) editing the last section — the original version was centered on my conversations with Rationalists in 2011-2014; I changed it to be a more general discussion, so as to broaden the post's applicability and make the post more accessible
Good point. I think they are prima facie orthogonal. Empirically, though, my current take is that many deep psychological distortions affect attention in a way that makes trying to manage them primarily on short time scales extremely difficult compared to managing them on longer time scales.
Imagine, for instance, that you have an underlying resignation that causes your S1 to put 5x as much search power into generating plausible failure scenarios as into generating plausible success scenarios. This might be really hard to detect on the 5-second level, especially if you don't have a good estimate of the actual prevalence of plausible failure or success scenarios (or of their prevalence as accessible by your own style of thinking). But on longer time scales, you can notice your thinking bending too pessimistic and start to investigate why. That might then turn up the resignation.
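As a toy illustration of why such a bias could be invisible moment-to-moment yet obvious in aggregate, here is a hypothetical sketch (the 5:1 ratio, window sizes, and equal underlying prevalence are assumptions for illustration, not claims about real cognition):

```python
import random

random.seed(0)

# Hypothetical: S1 generates failure scenarios 5x as often as success
# scenarios, even though in this toy world both are equally prevalent.
BIAS = 5  # failure scenarios generated per success scenario

def generate_thoughts(n):
    """Return n scenario labels drawn with the biased 5:1 ratio."""
    return random.choices(["failure", "success"], weights=[BIAS, 1], k=n)

# A "5-second" window yields only a handful of thoughts; the ratio is noisy
# and a mostly-failure sample is easy to rationalize.
short_window = generate_thoughts(5)

# A longer time scale aggregates thousands of thoughts; the skew is unmistakable.
long_window = generate_thoughts(10_000)
failure_rate = long_window.count("failure") / len(long_window)

print(f"short window: {short_window}")
print(f"long-run failure share: {failure_rate:.2f}")  # ~0.83 vs an unbiased 0.50
```

The point the sketch makes concrete: a 5:1 generation bias barely constrains what any five-thought window looks like, but over a long enough record the deviation from the unbiased 50% baseline is large and stable, which is what makes the longer time scale the natural place to catch it.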
I think I'm willing to concede that there is something of an empirical question about what works best for truth-seeking, as much as that feels like a dangerous statement to acknowledge. Though it seems true, it also feels like the kind of point that people who try to get you to commit bad epistemic moves like to raise.
There's a tricky balance to maintain here. On one hand, we don't want to commit bad epistemic moves. On the other hand, refusing to acknowledge that a question is empirical, when presented with evidence that it is, is itself a bad epistemic move.
With epistemic dangers, I think there is a choice between "confront" and "evade". Both are dangerous. Confronting the danger might harm you epistemically, and is frequently the wrong idea — like "confronting" radiation. But evading the danger might harm you epistemically, and is also frequently wrong — like "evading" a treatable illness. Ultimately, whether to confront or evade is an empirical question.
Allowing questions of motivation to factor into one's truth-seeking process feels most perilous to me, mostly because it seems too easy to claim that one's motivation will be adversely affected in order to justify any desired behavior. I don't deny that certain moves might destroy motivation, but the risks of allowing such a fear to justify changing behavior seem much worse. Granted, that's an empirical claim I'm making.
One good test here might be: Is a person willing to take hits to their morale for the sake of acquiring the truth? If a person is unwilling to take hits to their morale, they are unlikely to be wisely managing their morale and epistemics, and instead trading off too hard against their epistemics. Another good test might be: If the person avoids useful behavior X in order to maintain their motivation, do they have a plan to get to a state where they won't have to avoid behavior X forever? If not, that might be a cause for concern.
I currently think we are in a world where a lot of discussion of near-guesses, mildly informed conjectures, probably-wrong speculation, and so forth is extremely helpful, at least in contexts where one is trying to discover new truths.
My primary solution to this has been (1) epistemic tagging, including coarse-grained/qualitative tags, plus (2) a study of what the different tags actually amount to empirically. So person X can say something and tag it as "probably wrong, just an idea", and you can know that when person X uses that tag, the idea is, e.g., usually correct or usually very illuminating. Then over time you can try to get people to sync up on the use of tags and an understanding of what the tags mean.
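A minimal sketch of what that calibration step could look like in practice. Everything here is hypothetical: the tag names, the record format, and the example history are illustrative assumptions, not an existing system:

```python
from collections import defaultdict

# Hypothetical record of tagged claims and how each later turned out.
# Each entry: (speaker, epistemic_tag, turned_out_correct)
history = [
    ("alice", "probably wrong, just an idea", True),
    ("alice", "probably wrong, just an idea", True),
    ("alice", "probably wrong, just an idea", False),
    ("bob",   "probably wrong, just an idea", False),
    ("bob",   "confident",                    True),
]

def tag_calibration(records):
    """Empirical accuracy of each (speaker, tag) pair."""
    counts = defaultdict(lambda: [0, 0])  # (speaker, tag) -> [correct, total]
    for speaker, tag, correct in records:
        counts[(speaker, tag)][1] += 1
        if correct:
            counts[(speaker, tag)][0] += 1
    return {key: hits / total for key, (hits, total) in counts.items()}

calibration = tag_calibration(history)

# In this toy history, Alice's "probably wrong" ideas are right 2/3 of the
# time -- worth real attention despite the modest tag.
print(calibration[("alice", "probably wrong, just an idea")])
```

The design choice the sketch reflects: calibration is tracked per (speaker, tag) pair rather than per tag alone, since the whole point is that the same coarse tag can mean very different things coming from different people, and syncing up on tag usage starts from seeing those differences.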
In cases where it looks like people will irrationally update on a proposition even with appropriate tags, it might be better not to discuss that proposition (or to discuss it in a smaller, safer group) until it has achieved adequately good epistemic status.
Hi everyone! For those who don’t know me, I’m Geoff Anders. I’ve been the leader of a community adjacent to the rationalist community for many years, a community centered around my research organization Leverage Research. I engaged mostly with the rationalist community in 2011-2014. I visited SingInst in March 2011, taught at the Rationality Boot Camp in June and July 2011, attended the July 2012 CFAR workshop, and then was a guest instructor at CFAR from 2012-2014.
For the past many years, I’ve been primarily focused on research. Leverage has now undergone a large change, and as part of that I’m switching to substantially more public engagement. I’m planning to write up a retrospective on the first eight and a half years of Leverage’s work and put that on my personal blog.
In the meantime, I thought it would be good to start engaging with people more, and the rationalist community and LessWrong seemed like a good place to start. As part of my own pursuit of truth, I’ve developed methods, techniques, and attitudes that could be thought of as an approach to “rationality”. These techniques, methods, etc., differ from those I’ve seen promulgated by rationalists, so hopefully there’s room for a good discussion, and maybe we can bridge some inferential distance :).
Also, I’m mindful that I’m coming in from a different intellectual culture, so please let me know if I accidentally violate any community norms; it’s not intentional.
Here are instructions for setting up the defaults the way some people have found helpful:
a. Click the white background to create a box.
b. Click a box and drag to create an arrow.
c. Click an existing box to select it. Once selected, click and drag to move it.
d. Double-click an existing box to edit its label.