Geoff_Anders

Comments

Rationality and Levels of Intervention

Good point. I think they are prima facie orthogonal. Empirically, though, my current take is that many deep psychological distortions affect attention in a way that makes managing them primarily on short time scales much more difficult than managing them on longer time scales.

Imagine, for instance, that you have an underlying resignation that causes your S1 to put 5x the search power into generating plausible failure scenarios as into generating plausible success scenarios. This might be really hard to detect on the 5-second level, especially if you don't have a good estimate of the actual prevalence of plausible failure and success scenarios (or at least of their prevalence as accessible to your own style of thinking). But on longer time scales, you can notice yourself skewing too pessimistic and start to investigate why. That might then turn up the resignation.
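To make the 5x picture concrete, here is a toy simulation (entirely illustrative; the numbers and names are mine, only the 5x figure comes from the point above). It shows how a biased scenario generator looks unremarkable in any single short brainstorm but produces a clearly pessimistic track record in aggregate:

```python
import random

# Toy assumption: failure scenarios surface 5x as readily as success
# scenarios, even though both are equally available "in the world".
FAILURE_BIAS = 5.0
P_FAILURE = FAILURE_BIAS / (FAILURE_BIAS + 1.0)

def brainstorm(n_scenarios):
    """One short sitting: count the (failure, success) scenarios that surface."""
    failures = sum(random.random() < P_FAILURE for _ in range(n_scenarios))
    return failures, n_scenarios - failures

# On the 5-second level, one sitting gives a tiny sample that is easy
# to rationalize away ("I guess the plan really is risky").
print(brainstorm(4))

# Over a longer time scale, the aggregate ratio converges toward 5:1,
# and the pessimistic skew becomes hard to miss.
sittings = [brainstorm(4) for _ in range(250)]
failures = sum(f for f, _ in sittings)
successes = sum(s for _, s in sittings)
print(failures / max(successes, 1))  # ~5.0
```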

Rationality and Levels of Intervention

I think I'm willing to concede that there is something of an empirical question about what works best for truth-seeking, as much as that feels like a dangerous statement to acknowledge. Though it seems true, it also feels like the sort of thing that people who try to get you to commit bad epistemic moves like to raise [1].

There's a tricky balance to maintain here. On one hand, we don't want to commit bad epistemic moves. On the other hand, failing to acknowledge the empirical basis of something when the evidence of its being empirical is presented is itself a bad epistemic move.

With epistemic dangers, I think there is a choice between "confront" and "evade". Both are dangerous. Confronting the danger might harm you epistemically, and is frequently the wrong idea — like "confronting" radiation. But evading the danger might harm you epistemically, and is also frequently wrong — like "evading" a treatable illness. Ultimately, whether to confront or evade is an empirical question.

Allowing questions of motivation to factor into one's truth-seeking process feels the most perilous to me, mostly because it seems too easy to claim that one's motivation will be adversely affected in order to justify any desired behavior. I don't deny that certain moves might destroy motivation, but the risks of allowing such a fear to justify changes in behavior seem much worse. Granted, that's an empirical claim I'm making.

One good test here might be: Is a person willing to take hits to their morale for the sake of acquiring the truth? If a person is unwilling to take hits to their morale, they are unlikely to be wisely managing the tradeoff between morale and epistemics, and are probably trading off too hard against their epistemics. Another good test might be: If the person avoids useful behavior X in order to maintain their motivation, do they have a plan for getting to a state where they won't have to avoid behavior X forever? If not, that might be cause for concern.

Rationality and Levels of Intervention

I currently think we are in a world where a lot of discussion of near-guesses, mildly informed conjectures, probably-wrong speculation, and so forth is extremely helpful, at least in contexts where one is trying to discover new truths.

My primary solution to this has been (1) epistemic tagging, including coarse-grained/qualitative tags, plus (2) a study of what the different tags actually amount to empirically. So person X can say something and tag it as "probably wrong, just an idea", and you can know that when person X uses that tag, the idea is, e.g., usually correct or usually very illuminating. Then over time you can try to get people to sync up on the use of tags and an understanding of what the tags mean.
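As a minimal sketch of the bookkeeping this implies (the names and structure here are hypothetical, not an existing tool): track, for each (person, tag) pair, how often claims with that tag later held up, and read a person's tags through their empirical track record:

```python
from collections import defaultdict

class TagCalibration:
    """Track how often each person's epistemic tags pan out."""

    def __init__(self):
        # (person, tag) -> [claims that held up, claims resolved]
        self.records = defaultdict(lambda: [0, 0])

    def record(self, person, tag, held_up):
        stats = self.records[(person, tag)]
        stats[0] += int(held_up)
        stats[1] += 1

    def reliability(self, person, tag):
        """Empirical hit rate for this person's use of this tag, if any."""
        hits, total = self.records[(person, tag)]
        return hits / total if total else None

log = TagCalibration()
for held_up in (True, True, False):
    log.record("person_x", "probably wrong, just an idea", held_up)

# What "probably wrong, just an idea" actually means when person X says it:
print(log.reliability("person_x", "probably wrong, just an idea"))  # ~0.67
```

Syncing up on tags then amounts to comparing these empirical rates across people.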

In cases where it looks like people will irrationally update on a proposition even with appropriate tags, it might be better not to discuss that proposition (or to discuss it only in a smaller, safer group) until it has achieved an adequately good epistemic status.

Open & Welcome Thread - September 2019

Hi everyone! For those who don’t know me, I’m Geoff Anders. I’ve been the leader of a community adjacent to the rationalist community for many years, a community centered around my research organization, Leverage Research. I engaged with the rationalist community mostly in 2011-2014. I visited SingInst in March 2011, taught at the Rationality Boot Camp in June and July 2011, attended the July 2012 CFAR workshop, and then was a guest instructor at CFAR from 2012 to 2014.

For many years now, I’ve been primarily focused on research. Leverage has now undergone a large change, and as part of that I’m switching to substantially more public engagement. I’m planning to write up a retrospective on the first eight and a half years of Leverage’s work and post it on my personal blog.

In the meantime, I thought it would be good to start engaging with people more, and the rationalist community and LessWrong seemed like a good place to start. As part of my own pursuit of truth, I’ve developed methods, techniques, and attitudes that could be thought of as an approach to “rationality”. These methods and techniques differ from those I’ve seen promulgated by rationalists, so hopefully there’s room for a good discussion, and maybe we can bridge some inferential distance :).

Also, I’m mindful that I’m coming in from a different intellectual culture, so please let me know if I accidentally violate any community norms; it’s not intentional.

Best causal/dependency diagram software for fluid capture?

Here are instructions for setting up the defaults the way some people have found helpful:

  1. Open yEd.
  2. Create a new document.
  3. Click the white background; a small yellow square should appear on the canvas.
  4. Click the small yellow square so as to select it.
  5. Click and drag one of the corners of the yellow square to resize it. Make it the default size you'd like your text boxes to be. You will be able to change this later.
  6. Make sure the yellow square is still selected.
  7. Look at the menu in the lower right. It is called "Properties View". It will show you information about the yellow square.
  8. Click the small yellow square in the menu next to the words "Fill Color".
  9. Select the color white for the Fill Color.
  10. Lower in the menu, under "Label", there is an item called "Placement". Find it. Change Placement to "Internal" and "Center".
  11. Right below Placement in the menu is "Size". Find it. Change Size to "Fit Node Width".
  12. Right below Size is "Configuration". Find it. Change Configuration to "Cropping".
  13. Right below Configuration is "Alignment". Find it. Ensure that Alignment is "Center".
  14. In the upper toolbar, click "File" then "Preferences".
  15. A menu will come up. Click the "Editor" tab.
  16. You will see a list of checkboxes. "Edit Label on Create Node" will be unchecked. Check it.
  17. Click Apply.
  18. In the upper toolbar, click "Edit" then "Manage Palette".
  19. A menu will come up. In the upper left there will be a button called "New Section". Click it.
  20. Name the new section after yourself.
  21. Verify that the new section has been created by locating it in the righthand list of "Displayed Palette Sections".
  22. Close the Palette Manager menu.
  23. Double-click your white textbox to edit its label.
  24. Put in something suitably generic to indicate a default textbox. I use "[text]" (without the quotes).
  25. Select your white textbox. Be sure that you have selected it, but are not now editing the label.
  26. Right click the white textbox. A menu will appear.
  27. On the menu, mouse over "Add to Palette", then select the palette you named after yourself.
  28. On the righthand side of the screen, there will be a menu at the top called "Palette". Find it.
  29. Scroll through the palettes in the Palette menu until you find the palette you named after yourself. Expand it.
  30. You will see your white textbox in the palette you have named after yourself. Click it to select it.
  31. Right click the white textbox in the palette. Select "Use as Default".
  32. To check that you have done everything properly, click on the white background canvas. Did it create a white textbox like your original, and then automatically allow you to edit the label? If so, you're done.

Then, to use it:

  a. Click the white background to create a box.
  b. Click a box and drag to create an arrow.
  c. Click an existing box to select it. Once selected, click and drag to move it.
  d. Double-click an existing box to edit its label.
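For reference, a node configured this way corresponds roughly to the following fragment when you save the document as GraphML. This is a sketch from memory of yEd's GraphML dialect; the key id and some attribute names may differ in your version, so treat the GUI steps above as authoritative:

```xml
<node id="n0">
  <data key="d0"> <!-- whichever key yEd registers for node graphics -->
    <y:ShapeNode>
      <y:Fill color="#FFFFFF"/> <!-- step 9: white fill -->
      <y:NodeLabel modelName="internal" modelPosition="c"
                   autoSizePolicy="node_width"
                   configuration="CroppingLabel"
                   alignment="center">[text]</y:NodeLabel> <!-- steps 10-13, 24 -->
      <y:Shape type="rectangle"/>
    </y:ShapeNode>
  </data>
</node>
```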

Enjoy!

A Critique of Leverage Research's Connection Theory

For at least two years prior to January 2009, I procrastinated 1-3 hours a day reading random internet news sites. After I created my first CT chart, I made the following prediction: "If I design a way to gain information about the world that does not involve reading internet news sites and that does not alter my way of achieving my other intrinsic goods, then I will stop spending time reading these internet news sites." The "does not alter my way of achieving my other intrinsic goods" clause was unpacked. It included: "does not alter my way of gaining social acceptance", "does not alter my relationships with my family members", etc. The specifics were unpacked there as well.

This prediction was falsifiable: it would have failed if I had kept reading internet news sites. It was also bold: cogsci folk and good psychologists alike would have predicted no change in my internet news reading behavior. And it was successful: after implementing the recommendation in January 2009, I stopped procrastinating as predicted. Now, of course there are multiple explanations for the success of the prediction, including "CT is true" and "you just used your willpower". Nevertheless, this is an example of a falsifiable, bold, successful prediction.

A Critique of Leverage Research's Connection Theory

If I recall correctly, I was saying that I didn't know how to use CT to predict simple things of the form "Xs will always Y" or "Xs will Y at rate Z", where X and Y refer to simple observables like "human", "blush", etc. It would be great if I could do this, but unfortunately I can't.

Instead, what I can do is use the CT charting procedure to generate a CT chart for someone and then use CT to derive predictions from the chart. This yields predictions of the form "if a person with chart X does Y, Z will occur". These predictions frequently do not overlap with what existing cognitive science would have one expect.

The way I could have evidence in favor of CT would be if I had created CT charts using the CT procedure, used CT to derive predictions from the charts, and then tested the predictions. And I've done this.

On Leverage Research's plan for an optimal world

Connection Theory is not the main thing that we do. It's one of seven main projects. I would estimate that about 15% of our current effort goes directly into CT right now. It's true that having a superior understanding of the human mind is an important part of our plan, and it's true that CT is the main theory we're currently looking at. So that is one reason people are focusing on it. But it's also one of the better-developed parts of our website right now. So that's probably another reason.

Introducing Leverage Research

I can usually do any type of work. Sometimes it becomes harder for me to write detailed documents in the last couple hours of my day.

On Leverage Research's plan for an optimal world

We've tried to fill in step 3 quite a bit. Check out the plan and also our backup plan. We're definitely open to suggestions for ways to improve, especially places where the connection between the steps is the most tenuous.