Geoff_Anders

Zoe Curzi's Experience with Leverage Research

It was published this evening. Here is a link to the letter, and here is the announcement on Twitter.

Zoe Curzi's Experience with Leverage Research

Yes, here: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=3gMWA8PjoCnzsS7bB

Zoe Curzi's Experience with Leverage Research

Zoe - I don’t know if you want me to respond to you or not, and there’s a lot I need to consider, but I do want to say that I’m so, so sorry. However this turns out for me or Leverage, I think it was good that you wrote this essay and spoke out about your experience.

It’s going to take me a while to figure out everything that went wrong, and what I did wrong, because clearly something really bad happened to you, and it is in some way my fault. In terms of what went wrong on the project, one throughline I can see was arrogance, especially my arrogance, which affected so many of the choices we made. We dismissed a lot of the actually useful advice, tools, and methods from more typical sources, and it seems that blocking out society made room for extreme and harmful narratives that should have been tempered by a lot more reality. It’s terrible that you felt like your funding, your ability to rest or take time off, and your choice of how to interact with your own mind were compromised by Leverage’s narratives, including my own. I totally did not expect this, or the negative effects you experienced after leaving, though maybe I would have, had I not narrowed my attention and basically gotten way too stuck in theoryland.

I agree with you that we shouldn’t skip steps. I’ve updated accordingly. Again I’m truly sorry. I really wanted your experience on the project to be good.

Zoe Curzi's Experience with Leverage Research

Hi everyone. I wanted to post a note to say first, I find it distressing and am deeply sorry that anyone had such bad experiences. I did not want or intend this at all.

I know these events can be confusing or very taxing for all the people involved, and that includes me. They draw a lot of attention, both from those with a deep interest in the matter, for whom the stakes may be very high, and from onlookers with less context or less interest in the situation. In the hope of reducing some of the uncertainty and stress, I wanted to share how I will respond.

My current plan (after writing this note) is to post a comment about the above-linked post. I have to think about what to write, but I can say now that it will be brief and positive. I’m not planning to push back or defend. I think the post is basically honest and took incredible courage to write. It deserves to be read.

Separately, I’m going to write a letter in my role as Executive Director of Leverage Research on the topic of harms from our previous psychology research and the structure of the organization.

It may be useful to address the Leverage/Rationality relation or the Leverage/EA relation as well, but discussion of that might distract us from what is most important right now.

Rationality, Levels of Intervention, and Empiricism

Author of the post here. I edited the post by:

(1) adding an introduction — for context, and to make the example in Part I less abrupt

(2) editing the last section — the original version was centered on my conversations with Rationalists in 2011-2014; I changed it to a more general discussion, so as to broaden its applicability and make it more accessible

Rationality, Levels of Intervention, and Empiricism

Good point. I think they are prima facie orthogonal. Empirically, though, my current take is that many deep psychological distortions affect attention in ways that make managing them primarily on short time scales extremely difficult, compared to managing them on longer time scales.

Imagine, for instance, that you have an underlying resignation that causes your S1 to put 5x the search power into generating plausible failure scenarios as into generating plausible success scenarios. This might be really hard to detect on the 5-second level, especially if you don't have a good estimate of the actual prevalence of plausible failure and success scenarios (or at least of their prevalence as accessible by your own style of thinking). But on longer time scales, you can notice yourself skewing too pessimistic and start to investigate why. That might then turn up the resignation.
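
To see why the long-run view helps, here is a toy numerical illustration; this is my own construction, and the 1:1 true baseline, the 5/6 sampling probability, and the sample sizes are all assumptions made for the example:

```python
import random

random.seed(0)

# Toy model: plausible failure and success scenarios are actually equally
# prevalent (a 1:1 baseline), but resignation gives S1 5x the search power
# on failures, so each generated scenario is a failure with probability 5/6.
P_FAILURE = 5 / 6

def failures_among(n):
    """Count failure scenarios among n S1-generated scenarios."""
    return sum(random.random() < P_FAILURE for _ in range(n))

# Five-second level: a handful of scenarios. Mostly-failures is easy to
# read as "that's just what the situation looks like".
print(f"5 scenarios: {failures_among(5)} failures")

# Longer time scale: hundreds of scenarios. A roughly 5:1 skew against a
# 1:1 baseline is hard to miss, which is the cue to investigate why.
n = 300
k = failures_among(n)
print(f"{n} scenarios: {k} failures ({k / n:.0%} vs. a 50% baseline)")
```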

Rationality, Levels of Intervention, and Empiricism

I think I'm willing to concede that there is something of an empirical question about what works best for truth-seeking, as much as that feels like a dangerous statement to acknowledge. Though seemingly true, it feels like the sort of thing that people who try to get you to commit bad epistemic moves like to raise [1].

There's a tricky balance to maintain here. On one hand, we don't want to commit bad epistemic moves. On the other hand, failing to acknowledge the empirical basis of something when the evidence of its being empirical is presented is itself a bad epistemic move.

With epistemic dangers, I think there is a choice between "confront" and "evade". Both are dangerous. Confronting the danger might harm you epistemically, and is frequently the wrong idea — like "confronting" radiation. But evading the danger might harm you epistemically, and is also frequently wrong — like "evading" a treatable illness. Ultimately, whether to confront or evade is an empirical question.

Allowing questions of motivation to factor into one's truth-seeking process feels most perilous to me, mostly because it seems too easy to claim that one's motivation will be adversely affected in order to justify any desired behavior. I don't deny that certain moves might destroy motivation, but the risks of allowing such a fear to justify changes in behavior seem much worse. Granted, that's an empirical claim I'm making.

One good test here might be: Is a person willing to take hits to their morale for the sake of acquiring the truth? If a person is unwilling to take hits to their morale, they are unlikely to be wisely managing their morale and epistemics, and instead trading off too hard against their epistemics. Another good test might be: If the person avoids useful behavior X in order to maintain their motivation, do they have a plan to get to a state where they won't have to avoid behavior X forever? If not, that might be a cause for concern.

Rationality, Levels of Intervention, and Empiricism

I currently think we are in a world where a lot of discussion of near-guesses, mildly informed conjectures, probably-wrong speculation, and so forth is extremely helpful, at least in contexts where one is trying to discover new truths.

My primary solution to this has been (1) epistemic tagging, including coarse-grained/qualitative tags, plus (2) a study of what the different tags actually amount to empirically. So person X can say something and tag it as "probably wrong, just an idea", and you can know that when person X uses that tag, the idea is, e.g., usually correct or usually very illuminating. Then over time you can try to get people to sync up on the use of tags and an understanding of what the tags mean.
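
For concreteness, here is a minimal sketch of what the empirical side of this might look like, i.e., measuring what a tag actually amounts to when a given person uses it. This is just an illustration; the record format, names, and outcomes below are all hypothetical:

```python
from collections import defaultdict

# Hypothetical records: (author, tag, claim, whether it later proved correct).
claims = [
    ("person_x", "probably wrong, just an idea", "claim A", True),
    ("person_x", "probably wrong, just an idea", "claim B", True),
    ("person_x", "probably wrong, just an idea", "claim C", False),
    ("person_y", "probably wrong, just an idea", "claim D", False),
]

def tag_calibration(records):
    """Empirical accuracy of each (author, tag) pair."""
    counts = defaultdict(lambda: [0, 0])  # (author, tag) -> [correct, total]
    for author, tag, _claim, proved_correct in records:
        counts[(author, tag)][1] += 1
        counts[(author, tag)][0] += proved_correct
    return {key: correct / total for key, (correct, total) in counts.items()}

for (author, tag), accuracy in tag_calibration(claims).items():
    print(f"{author} tagged {tag!r}: {accuracy:.0%} borne out")
```

Syncing up on tags then amounts to agreeing on, and periodically re-checking, these per-person accuracies.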

In cases where it looks like people irrationally update on a proposition, even with appropriate tags, it might be better to not discuss that proposition (or discuss in a smaller, safer group) until it has achieved adequately good epistemic status.

Open & Welcome Thread - September 2019

Hi everyone! For those who don’t know me, I’m Geoff Anders. I’ve been the leader of a community adjacent to the rationalist community for many years, a community centered around my research organization Leverage Research. I engaged mostly with the rationalist community in 2011-2014. I visited SingInst in March 2011, taught at the Rationality Boot Camp in June and July 2011, attended the July 2012 CFAR workshop, and then was a guest instructor at CFAR from 2012-2014.

For the past many years, I’ve been primarily focused on research. Leverage has now undergone a large change, and as part of that I’m switching to substantially more public engagement. I’m planning to write up a retrospective on the first eight and a half years of Leverage’s work and put that on my personal blog.

In the meantime, I thought it would be good to start engaging with people more, and that the rationalist community and LessWrong would be a good place to start. As part of my own pursuit of truth, I’ve developed methods, techniques, and attitudes that could be thought of as an approach to “rationality”. These techniques, methods, etc., differ from those I’ve seen promulgated by rationalists, so hopefully there’s room for a good discussion, and maybe we can bridge some inferential distance :).

Also, I’m mindful that I’m coming in from a different intellectual culture, so please let me know if I accidentally violate any community norms; it’s not intentional.

Best causal/dependency diagram software for fluid capture?

Here are instructions for setting up the defaults the way some people have found helpful:

  1. Open yEd.
  2. Create a new document.
  3. Click the white background; a small yellow square should appear on the canvas.
  4. Click the small yellow square so as to select it.
  5. Click and drag one of the corners of the yellow square to resize it. Make it the default size you'd like your text boxes to be. You will be able to change this later.
  6. Make sure the yellow square is still selected.
  7. Look at the menu in the lower right. It is called "Properties View". It will show you information about the yellow square.
  8. Click the small yellow square in the menu next to the words "Fill Color".
  9. Select the color white for the Fill Color.
  10. Lower in the menu, under "Label", there is an item called "Placement". Find it. Change Placement to "Internal" and "Center".
  11. Right below Placement in the menu is "Size". Find it. Change Size to "Fit Node Width".
  12. Right below Size is "Configuration". Find it. Change Configuration to "Cropping".
  13. Right below Configuration is "Alignment". Find it. Ensure that Alignment is "Center".
  14. In the upper toolbar, click "File" then "Preferences".
  15. A menu will come up. Click the "Editor" tab.
  16. You will see a list of checkboxes. "Edit Label on Create Node" will be unchecked. Check it.
  17. Click Apply.
  18. In the upper toolbar, click "Edit" then "Manage Palette".
  19. A menu will come up. In the upper left there will be a button called "New Section". Click it.
  20. Name the new section after yourself.
  21. Verify that the new section has been created by locating it in the right-hand list of "Displayed Palette Selections".
  22. Close the Palette Manager menu.
  23. Double-click your white textbox to edit its label.
  24. Put in something suitably generic to indicate a default textbox. I use "[text]" (without the quotes).
  25. Select your white textbox. Be sure that you have selected it, but are not now editing the label.
  26. Right-click the white textbox. A menu will appear.
  27. On the menu, mouse over "Add to Palette", then select the palette you named after yourself.
  28. On the right-hand side of the screen, there will be a menu at the top called "Palette". Find it.
  29. Scroll through the palettes in the Palette menu until you find the palette you named after yourself. Expand it.
  30. You will see your white textbox in the palette you have named after yourself. Click it to select it.
  31. Right-click the white textbox in the palette. Select "Use as Default".
  32. To check that you have done everything properly, click on the white background canvas. Did it create a white textbox like your original, and then automatically allow you to edit the label? If so, you're done.

Then:

  a. Click the white background to create a box.
  b. Click a box and drag to create an arrow.
  c. Click an already existing box to select it. Once selected, click and drag to move it.
  d. Double-click an already existing box to edit its label.
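
Relatedly, if you'd rather generate diagrams programmatically and only arrange them in yEd: yEd's native file format is GraphML, so a script can emit a skeleton graph for yEd to open and style. Here is a minimal sketch (the node ids and file name are placeholders, and this is plain GraphML without yEd's own y: styling extensions, which yEd adds on save):

```python
# Write a two-node causal diagram ("cause" -> "effect") as plain GraphML.
# yEd opens standard GraphML and applies its own default styling to it.
GRAPHML = """<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <graph id="G" edgedefault="directed">
    <node id="cause"/>
    <node id="effect"/>
    <edge id="e0" source="cause" target="effect"/>
  </graph>
</graphml>
"""

with open("diagram.graphml", "w", encoding="utf-8") as f:
    f.write(GRAPHML)
```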

Enjoy!
