My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

In the last two years, CFAR hasn't done much outward-facing work at all, due to COVID, and so has neither been a MIRI funnel nor definitively not a MIRI funnel.

Yes, but I would predict that we won't be the same sort of MIRI funnel going forward. This is because MIRI used to have specific research programs that it needed to hire for, and it was sponsoring AIRCS (covering direct expenses plus loaning us some researchers to help run the thing) in order to recruit for that. Those research programs have been discontinued, and so AIRCS won't be so much of a thing anymore.

This, not just COVID, has been the main reason there's been no AIRCS since vaccines became available.

I, and I would guess some others at CFAR, am interested in running AIRCS-like programs going forward, especially if there are groups that want to help us pay the direct expenses for those programs and/or researchers that want to collaborate with us on such programs. (Message me if you're reading this and in one of those categories.) But it'll be less MIRI-specific this time, since there isn't that recruiting angle.

Also, more broadly, CFAR has adopted different structures for organizing ourselves internally. We are bigger now into "if you work for CFAR, or are a graduate of our instructor training program, and you have a 'telos' that you're on fire to do, you can probably do it with CFAR's venue/dollars/collaborations of some sort" (we're calling this "platform CFAR"; Elizabeth Garrett invented it and set it up maybe about a year ago, can't remember exactly). We're also into doing hourly rather than salaried work in general (so we don't feel an obligation to fill time with some imagined "supposed to do CFAR-like activity" vagueness, so that we can be mentally free), and into taking more care not to have me or anyone else speak for others at CFAR or organize people into a common imagined narrative one must pretend to believe, but rather into letting people do what we each believe in, and trying to engage each other where sensible. Which makes it a bit harder to know what CFAR will be doing going forward, and also leaves me thinking it'll have a bit more variety in it. Probably.

Zoe Curzi's Experience with Leverage Research

Thanks! I would love follow-up on LW to the twitch stream, if anyone wants to. There were a lot of really interesting things said in the text chat that we didn't manage to engage with, for example. Unfortunately the recording was lost, which is a shame because IMO it was a great conversation.

TekhneMakre writes:

This suggests, to me, a (totally conjectural!) story where [Geoff] got into an escalating narrative cold war with the rationality community: first he perceives (possibly correctly) that the community rejects him…

This seems right to me.

Anna says there were, in the early 2010s, rumors that Leverage was trying to fundraise from "other people's donors", and that Leverage/Geoff was trying to recruit, whether ideologically or employfully, employees of other EA/rationality orgs.

Yes. My present view is that Geoff’s reaching out to donors here was legit, and my and others’ complaints were not; donors should be able to hear all the pitches, and it’s messed up to think of “person reached out to donor X to describe a thingy X might want to donate to” as a territorial infringement.

This seems to me like an example of me and others escalating the “narrative cold war” that you mention.

[Geoff] seemed to talk in long, apparently low content sentences with lots of hemming and hawing and attention to appearance…

I noticed some of this, though less than I might’ve predicted from the background context in which Geoff was, as you note, talking to 50 people, believing himself to be recorded, and in an overall social context in which a community he has long been in a “narrative cold war” with (under your hypothesis, and mine) was in the midst of trying to decide whether to something-like scapegoat him.

I appreciate both that you mentioned your perception (brought it into text rather than subtext, where we can reason about it, and can try to be conscious of all the things together), and that you’re trying to figure out how to incentivize and not disincentivize Geoff’s choice to do the video (which IMO shared a bunch of good info).

I’d like to zoom in on an example that IMO demonstrates that the causes of the “hemming and hawing” are sometimes (probably experience-backed) mistrust of the rationalist community as a [context that is willing to hear and fairly evaluate his actual evidence], rather than, say, desire for the truth to be hidden:

At one point toward the end of the twitch, Geoff was responding to a question about how we got from a pretty cooperative state in ~2013 to where we are now, and said something kinda like "… I'm trying to figure out how to say this without sounding like I'm being unfair to your side of things," or something, and I was like "maybe just don't, and I or others can disagree if we think you're wrong," and then he sort of went "okay, if you're asking for it" and stopped hemming and hawing and told a simple and direct story about how, in the early days of 2011-2014, Leverage did a bunch of things to try to get specific collaborations going that would benefit particular other groups (THINK, the original EA leaders gathering in the Leverage house in 2013, the 2014 retreat + summit, a book launch party for 'Our Final Invention' co-run with SingInst, some general queries about what kind of collaborations folks might want, early attempts to merge with SingInst and with 80k), and how he would've been interested in and receptive to other bids for common projects if I or others had brought him some. And I was like "yes, that matches my memory and perception; I remember you and Leverage seeming unusually interested in getting specific collaborations or common projects going that might support your goals + other groups' goals at once, more so than other groups were, and trying to support cooperation in this way," and he seemed surprised that I would acknowledge this.

So, I think part of the trouble is that Geoff didn’t have positive expectations of us as a context in which to truth-seek together.

One partial contributor to this expectation of Geoff's, I would guess, is the pattern via which (in my perception) the rationalist community sometimes decides people's epistemics/etc. are "different and bad" and then distances from them, punishes those who don't act as though we need to distance from them, etc., often in a manner that can seem kinda drastic and all-or-nothing, rather than docking points proportional to what it indicates about a person's likely future ability to share useful thoughts, in a milder-mannered fashion. For example, during a panel discussion at the (Leverage-run) 2014 EA Summit, in front of 200 people, I asked Geoff aloud whether he in fact thought that sticking a pole through someone's head (a la Phineas Gage) would have no effect on their cognition except via their sense-perception. Geoff answered "yes", as I expected since he'd previously mentioned this view. And… there was a whole bunch of reaction. E.g., Habryka, in the twitch chat, mentioned having been interning with Leverage at the time of that panel conversation, and said "[that bit of panel conversation] caused me nightmares… because I was interning at Leverage at the time, and it made me feel very alienated from my environment. And felt like some kind of common ground was pulled out from under me."

I for many years often refrained from sharing some of the positive views/data/etc. I had about Leverage, for fear of being [judged or something] for it. (TBC, I had both positive and negative views, and some error bars. But Leverage looked to me like well-meaning people who were trying a hard-core something that might turn out cool, and who were developing interesting techniques and models via psychological research, and I mostly refrained from saying this because I was cowardly about it in response to social pressure. … in addition to my usual practice of sometimes refraining from sharing some of my hesitations about the place, as about most places, in a flinchy attempt to avoid conflict.)

I didn't hear anything that strongly confirms or denies adversarial hypotheses like "Geoff was fairly actively doing something pretty distortiony in Leverage that caused harm, and is sort of hiding this by downplaying / redirecting attention / etc.".

My guess is that he was and is at least partially doing some of this, in addition to making an earnest (and better than I'd expected on generic-across-people priors) effort to share true things. Re: the past dynamics, I and IMO others were also doing actively distortionary stuff, and I think Geoff's choices, and mine and others', need to be understood together, as similar responses to a common landscape.

As I mentioned in the twitch that alas didn't get recorded, in ~2008-2014, ish, somehow a lot of different EA and rationality and AI risk groups felt like allies and members of a common substantive community, at least in my perception (including my perception of the social context that I imagined lots of other people were in). And later on, most seemed to me to kinda give up on most of the others, opting still for a social surface of cooperation/harmony, but without any deep hope in anyone else of the sort that might support building common infrastructure, really working out any substantive disagreements (with tools of truth-seeking rather than only truce-seeking/surface-harmony-preservation, etc.). (With some of the together-ness getting larger over time in the early years, and then with things drifting apart again.) I'm really interested in whether that transition matches others' perceptions, and, if so, what y'all think the causes were. IMO it was partly about what I've been calling "narrative addiction" and "narrative pyramid schemes," which needs elaboration rather than just a set of phrases (I tried this a bit in the lost twitch video), but I need to go now so may try it later.

Zoe Curzi's Experience with Leverage Research

Alas, no. I'm pretty bummed about it, because I thought the conversation was rather good, but Geoff pushed the "save recording" button after it was started and that didn't work.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Thank you. I disagree with "... relishes 'breaking' others", and probably some other points, but a bunch of this seems really right and like content I haven't seen written up elsewhere. Do share more if you have it. I'm also curious where you got this stuff from.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

CFAR staff retreats often involve circling. Our last one, a couple weeks ago, had this, though as an optional evening thing that some but not most took part in.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Basically no. Can't say a plain "no," but can say "basically no." I'm not willing to give details on this one. I'm somehow fretting on this one, asking if "basically no" is true from all vantage points (it isn't, but it's true from most), looking for a phrase similar to that but slightly weaker, considering e.g. "mostly no", but something stronger is true. I think this'll be the last thing I say in this thread about this topic.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I think some of it has got to be that it's somehow easier to talk about CFAR/MIRI, rather than a sheer number of people thing. I think Leverage is somehow unusually hard to talk about, such that maybe we should figure out how to be extraordinarily kind/compassionate/gentle to anyone attempting it, or something.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Yes; I want to acknowledge that there was a large cost here. (I wasn't sure, from just the comment threads; but I just talked to a couple people who said they'd been thinking of writing up some observations about Leverage but had been distracted by this.)

I am personally really grateful for a bunch of the stuff in this post and its comment thread. But I hope the Leverage discussion really does get returned to, and I'll try to lend some momentum that way. Hope some others do too, insofar as some can find ways to actually help people put things together or talk.

Zoe Curzi's Experience with Leverage Research

Yep. I hope this isn’t bad to do, but I am doing it.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong.

Actually, that was true for the last few years (with an ambiguous in-between time during COVID), but it is not true now. Partly because MIRI abandoned the research direction we'd most been trying to help them recruit for. CFAR will be choosing its own paths going forward more.
