Geoff_Anders
Geoff_Anders has not written any posts yet.
Yes, here: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=3gMWA8PjoCnzsS7bB
Zoe - I don’t know if you want me to respond to you or not, and there’s a lot I need to consider, but I do want to say that I’m so, so sorry. However this turns out for me or Leverage, I think it was good that you wrote this essay and spoke out about your experience.
It’s going to take me a while to figure out everything that went wrong, and what I did wrong, because clearly something really bad happened to you, and it is in some way my fault. In terms of what went wrong on the project, one throughline I can see was arrogance, especially my arrogance, which...
Hi everyone. I wanted to post a note to say, first, that I find it distressing, and am deeply sorry, that anyone had such bad experiences. I did not want or intend this at all.
I know these events can be confusing and very taxing for all the people involved, and that includes me. They draw a lot of attention, both from those with a deep interest in the matter, for whom the stakes may be very high, and from onlookers with less context or less interest in the situation. In the hope of reducing some of the uncertainty and stress, I wanted to share how I will respond.
My current plan (after writing this note) is to post...
Author of the post here. I edited the post by:
(1) adding an introduction — for context, and to make the example in Part I less abrupt
(2) editing the last section — the original version was centered on my conversations with Rationalists in 2011-2014; I changed it to be a more general discussion, so as to broaden the post's applicability and make the post more accessible
Good point. I think they are prima facie orthogonal. Empirically, though, my current take is that many deep psychological distortions affect attention in ways that make them far harder to manage on short time scales than on longer ones.
Imagine, for instance, that you have an underlying resignation that causes your S1 to put 5x as much search power into generating plausible failure scenarios as into plausible success scenarios. This might be really hard to detect at the 5-second level, especially if you don't have a good estimate of the actual prevalence of plausible failure and success scenarios (or of that prevalence as accessible by your own style of thinking). But on longer time scales, you can notice yourself skewing too pessimistic and start to investigate why. That investigation might then turn up the resignation.
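To make the aggregation point concrete, here is a toy simulation (a minimal sketch; the 5x bias, the 50/50 base rate, and all names are illustrative assumptions, not anything from the comment above):

```python
import random

# Toy model of the bias described above: a "scenario generator" that
# allocates 5x as much search power to failure scenarios as to success
# scenarios, even though both kinds are equally prevalent underneath.

def generate_scenario(bias=5.0):
    """Return 'failure' or 'success'; bias skews sampling toward failure."""
    return "failure" if random.random() < bias / (bias + 1) else "success"

# At the 5-second level: a single draw tells you almost nothing.
print(generate_scenario())  # one scenario; the skew is invisible here

# On a longer time scale: the aggregate ratio exposes the distortion.
draws = [generate_scenario() for _ in range(10_000)]
failure_rate = draws.count("failure") / len(draws)
print(f"failure rate ~ {failure_rate:.2f}")  # ~0.83, not the 0.50 base rate
```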
I think I'm willing to concede that there is something of an empirical question about what works best for truth-seeking, as much as that feels like a dangerous statement to acknowledge. Though it seems true, it also feels like the kind of thing that people who try to get you to commit bad epistemic moves like to raise [1].
There's a tricky balance to maintain here. On one hand, we don't want to commit bad epistemic moves. On the other hand, failing to acknowledge the empirical basis of something when the evidence of its being empirical is presented is itself a bad epistemic move.
With epistemic dangers, I think there is a choice between "confront" and "evade". Both are...
I currently think we are in a world where a lot of discussion of near-guesses, mildly informed conjectures, probably-wrong speculation, and so forth is extremely helpful, at least in contexts where one is trying to discover new truths.
My primary solution to this has been (1) epistemic tagging, including coarse-grained/qualitative tags, plus (2) a study of what the different tags actually amount to empirically. So person X can say something and tag it as "probably wrong, just an idea", and you can know that when person X uses that tag, the idea is, e.g., usually correct or usually very illuminating. Then over time you can try to get people to sync up on the use of tags and an understanding of what the tags mean.
In cases where it looks like people irrationally update on a proposition, even with appropriate tags, it might be better not to discuss that proposition (or to discuss it only in a smaller, safer group) until it has achieved adequately good epistemic status.
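As a concrete illustration of the tagging-plus-calibration idea, here is a minimal sketch (all names, tags, and data are hypothetical; this is one possible implementation, not any actual system):

```python
from collections import defaultdict

# Each claim carries an author and a qualitative epistemic tag. We track
# how claims under each (author, tag) pair actually resolved, so the
# tag's empirical meaning for that author emerges over time.

outcomes = defaultdict(list)  # (author, tag) -> list of True/False resolutions

def record(author: str, tag: str, turned_out_correct: bool) -> None:
    """Log how a tagged claim eventually resolved."""
    outcomes[(author, tag)].append(turned_out_correct)

def tag_reliability(author: str, tag: str):
    """Empirical hit rate for this author's use of this tag, if any data."""
    results = outcomes.get((author, tag))
    return sum(results) / len(results) if results else None

# Usage: person X tags ideas "probably wrong, just an idea", yet they
# resolve correct surprisingly often, which is what the tag then means.
record("X", "probably wrong, just an idea", True)
record("X", "probably wrong, just an idea", True)
record("X", "probably wrong, just an idea", False)
print(tag_reliability("X", "probably wrong, just an idea"))  # 0.666...
```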
Hi everyone! For those who don’t know me, I’m Geoff Anders. I’ve been the leader of a community adjacent to the rationalist community for many years, a community centered around my research organization, Leverage Research. I engaged mostly with the rationalist community in 2011-2014. I visited SingInst in March 2011, taught at the Rationality Boot Camp in June and July 2011, attended the July 2012 CFAR workshop, and then was a guest instructor at CFAR from 2012 to 2014.
For the past several years, I’ve been primarily focused on research. Leverage has now undergone a large change, and as part of that I’m switching to substantially more public engagement. I’m planning to write up a...
Here are instructions for setting up the defaults the way some people have found helpful:
It was published this evening. Here is a link to the letter, and here is the announcement on Twitter.