ahartell

Comments

On clinging

Indeed, before dismissing it entirely, one would presumably want an account of why it features so prominently in our mental and social lives.

One aspect of this seems to be that clinging is a mechanism by which a portion of the network maintains its own activation. Given evolutionary dynamics, it's unsurprising to see widespread greediness and self-recommendation among neurons/neural structures (cf. Neurons Gone Wild).

Endorsed.

In addition to safety and contact, another dynamic was that I was generally not S1 expecting much value to come out of Dragon Army, so chafing more within the system seemed like pain, effort, and time spent for little expected gain.

Stag hunts, anyone?

Edit: Though, I will note that it can be hard to find the space between "I'm damaging the group by excluding my optimization power from the process" and "I'm being a Red Knight here and should be game for whatever the commander decides." It may seem like the obvious split is "expressive in discussion and game in the field," but discussion time is actually really valuable. So it seems like the actual thing is "be game until the cost to you becomes great enough that something needs to change." But if you lower the bar for what counts as a misfit worth raising, it becomes intractable to deal with everyone's needs. And then you have to figure out whether a recent failure was a sign that things are seriously broken or just a sign that you need to Be Better in some operationalized, "doable" way. When do you bring up the problem? It's hard.

Many of the Dragons who stepped into the role of the Ghost for a time did so softly and gradually, and it never felt like this level of absence was Notably Different from the previous level, in a paradox-of-the-heap sort of way. Set a bar, and set a gradient around that bar, and stay in contact.

As the person who fell most heavily into this role, the above resonates a lot. Below are some other thoughts on my experience.


I had the sense early on that I wasn't getting very much value out of group activities, and felt not very connected to the house. In this way I think "Black Knight"-style considerations were major contributors to my Ghost behavior. Competing commitments and general depression were also relevant. I didn't really feel like there was much the house could do to help me with that, but I don't know whether that's true. If it weren't for the Black Knight dynamic, I think I would have prioritized DA over other commitments, but depression may have been sufficient for me to end up as a Ghost anyway.

Not Getting Value Out of Group Activities

The things that the whole house can do (or even a large subset) are unlikely to be on the capability frontier of the individual in an area of serious interest for that individual. Everyone needs to be able to do the thing, and there will be more variance in skill in areas that are a major focus of some but not all of the group. Programming ability is an example.

Because of something like this, DA group activities rarely felt like they were on a growth-edge that I cared about. In particular, group exercise usually felt costly with little benefit, and I never managed to get EE to be especially valuable for me. Social things like our weekly house dinner (a substantial fraction of Dragon Army hours) felt less fun or less growthy than the likely alternatives, but I probably put unusually low value on this kind of bonding.

Now when I imagine a group that is striving for excellence, it seems like there are two ways it can work:

1) The members share a common major project and can work together towards that goal. Here it makes sense for the group to ask for a high time commitment from its members, since time put towards the group directly advances a major goal of the individual.

2) The members have different goals. In this case it seems like the group should ask for a smaller time commitment. Members can mutually draw inspiration from each other and can coordinate when there is a shared goal, but generally the group should offer affordances, not impose requirements.

Counter-evidence: I think I would have gotten a lot of value out of covering the bases on dimensions I care about. Exercise was supposed to do this, and would do it along Duncan's version of the "capable well-rounded human" dimension. We discussed doing something like this for rationality skills, but we didn't follow through.

In this case, all members share a common goal of reaching a minimum bar in some area. Still, this can be boring for those who are already above the bar, and for me this sort of "catching up"/"covering the bases" is much less exciting than pushing forward on a main area of interest. (Which means group-time still ends up as less-fun-than-the-alternative by default.)

There were experiments intended to incentivize Dragons to do solo work on things they considered high priority, but my impression was that there was little encouragement/accountability/useful structure. Things I was originally excited about turned into homework I had to do for DA.

Contra double crux

[These don't seem like cruxes to me, but are places where our models differ.]

[...]

a crux for some belief B is another belief C such that, if one changed one's mind about C, one would change one's mind about B.

[...]

A double crux is a particular case where two people disagree over B and have the same crux, albeit going in opposite directions. Say Xenia believes B (because she believes C) and Yevgeny disbelieves B (because he does not believe C); then if Xenia stopped believing C, she would stop believing B (and thus agree with Yevgeny), and vice versa.

[...]

Across most reasonable people on most recondite topics, 'cruxes' are rare, and 'double cruxes' (roughly) exponentially rarer.

It seems like your model might be missing a class of double cruxes:

It doesn't have to be the case that, if my interlocutor and I drew up belief maps, we would both find a load-bearing belief C about which we disagree. Rather, it's often the case that my interlocutor has some 'crucial' argument or belief which isn't on my radar at all, but would indeed change my mind about B if I were convinced it were true. In another framing, I have an implicit crux for most beliefs that there is no extremely strong argument/evidence to the contrary, which can match up against any load-bearing belief the other person has. In this light, it seems to me that one should not be very surprised to find double cruxes pretty regularly.

Further, even when you have a belief map where the main belief rests on many small pieces of evidence, it is usually possible to move up a level of abstraction and summarize all of that evidence in a higher-level claim, which can serve as a crux. This does not address your point about relatively unimportant shifts around 49%/51%, but in practice it seems like a meaningful point.

Tensions in Truthseeking

[Note: This comment seems pretty pedantic in retrospect. Posting anyway to gauge reception, and because I'd still prefer clarity.]

On honest businesses, I'd expect successful ones to involve overconfidence on average because of winner's curse.

I'm having trouble understanding this application of winner's curse.

Are you saying something like the following:

  1. People put in more resources and generally try harder when they estimate a higher chance of success. (Analogous to people bidding more in an auction when they estimate a higher value.)

  2. These actions increase the chance of success, so overconfident people are overrepresented among successes.

  3. This overrepresentation holds even if the "true chance of success" is the main factor. Overconfidence of founders just needs to shift the distribution of successes a bit, for "successful ones to involve overconfidence on average".
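The selection effect in steps 1–3 can be sketched with a toy simulation. This is my own construction, not a model from your comment: each founder has a true base chance of success `p` and a confidence error `e` (positive = overconfident); effort scales with perceived chance, and effort nudges the real chance. The parameter names and weights are illustrative assumptions.

```python
import random

random.seed(0)

def simulate(n=100_000, effort_weight=0.2):
    """Return the mean confidence error among successful founders.

    Toy model (illustrative assumptions, not a calibrated claim):
    - p: true base chance of success, uniform on [0, 0.6]
    - e: confidence error, uniform on [-0.2, 0.2]
    - perceived chance = p + e (clipped to [0, 1]); effort tracks this
    - actual chance = p + effort_weight * perceived (clipped to [0, 1])
    """
    errors_of_successes = []
    for _ in range(n):
        p = random.uniform(0.0, 0.6)
        e = random.uniform(-0.2, 0.2)
        perceived = min(max(p + e, 0.0), 1.0)
        actual = min(p + effort_weight * perceived, 1.0)
        if random.random() < actual:
            errors_of_successes.append(e)
    return sum(errors_of_successes) / len(errors_of_successes)

# Mean error among successes comes out positive: even though true base
# chance dominates, conditioning on success skews toward overconfidence.
print(simulate())
```

The point of the sketch is just step 3: overconfidence only needs to shift the success distribution a little for the successful population to be overconfident on average.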

First, this seems weird to me because I got the impression that you were arguing against overconfidence being useful.

Second, are you implying that successful businesses have on average "overpaid" for their successes in effort/resources? That is central to my understanding of winner's curse, but maybe not yours.

Sorry if I'm totally missing your point.

CFAR workshop with new instructors in Seattle, 6/7-6/11

This was a test comment for something very important.

[This comment is no longer endorsed by its author]
Lesswrong 2016 Survey

Just finished. I'm sure my calibration was terrible though.

August 2015 Media Thread

Hello Internet is a fun "two guys talking" podcast made by two popular YouTubers, including CGPGray, the guy who made this great video about the future of automation and employment. Low (almost no) informational content, but really enjoyable, and CGPGray will often say things that make it sound as if he's read at least some of LessWrong/Overcoming Bias. At the very least he's a transhumanist.
