Vaniver

Zoe Curzi's Experience with Leverage Research

I'm sort of surprised that you'd interpret that as a mistake. It seems to me like Eliezer is running a probabilistic strategy, which has both type I and type II errors, and so a 'mistake' is something like "setting the level wrong to get a bad balance of errors" instead of "the strategy encountered an error in this instance." But also I don't have the sense that Eliezer was making an error.

Zoe Curzi's Experience with Leverage Research

Geoff describes being harmed by some sort of initial rejection by the rationality/EA community (around 2011? 2010?).

One of the interesting things about that timeframe is that a lot of the stuff is online; here's the 2012 discussion (Jan 9th, Jan 10th, Sep 19th), for example. (I tried to find his earliest comment that I remembered, but I don't think it was with the Geoff_Anders account or it wasn't on LessWrong; I think it was before Leverage got started, and people responded pretty skeptically then also?)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I think most of LW believes we should not risk ostracizing a group (with respect to the rest of the world) that might save the world, by publicizing a few broken eggs. If that's the case, much discussion is completely moot. I personally kinda think that the world's best shot is the one where MIRI/CFAR type orgs don't break so many eggs. And I think transparency is the only realistic mechanism for course correction. 

FWIW, I (former MIRI employee and current LW admin) saw a draft of this post before it was published, and told jessicata that I thought she should publish it, roughly because of that belief in transparency / ethical treatment of people.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

they randomly make big errors

I think it's important that the errors are not random; I think you mean something more like "they make large opaque errors."

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Were you criticized for socializing with people outside MIRI/CFAR, especially with "rival groups"?

As a datapoint, while working at MIRI I started dating someone working at OpenAI, and never felt any pressure from MIRI people to drop the relationship (and he was welcomed at the MIRI events that we did, and so on), despite the view in Eliezer's tweets discussed here being a pretty widespread belief at MIRI. (He wasn't one of the founders, and I think people at MIRI saw a clear difference between "founding OpenAI" and "working at OpenAI given that it was founded", so idk if they would agree with the frame that OpenAI was a 'rival group'.)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I believe Anthropic doesn't expect its employees to be in the office every day, but I think this is more pandemic-related than it is a deliberate organizational design choice; my guess is that most Anthropic employees will be in the office a year from now.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

On the other side of it, why do people seem TOO DETERMINED to turn him into a scapegoat? Most of you don't sound like you really know him at all.

A blogger I read sometimes talks about his experience with lung cancer (decades ago), where people would ask his wife "so, he smoked, right?" and his wife would say "nope" and then they would look unsettled. He attributed it to something like "people want to feel like all health issues are deserved, and so their being good / in control will protect them." A world where people sometimes get lung cancer without having pressed the "give me lung cancer" button is scarier than the world where the only way to get it is by pressing the button.

I think there's something here where people are projecting all of the potential harm onto Michael, in a way that's sort of fair from a 'driving their actions' perspective (if they're worried about the effects of talking to him, maybe they shouldn't talk to him), but that really doesn't own the degree to which the effects they're worried about are caused by their own instability or by the them-Michael dynamic.

[A thing Anna and I discussed recently is, roughly, the tension between "telling the truth" and "not destabilizing the current regime"; I think it's easy to read this as a core disagreement about whether or not it's better to see the ways in which the organizations surrounding you are ___, with Michael being thought of as some sort of pole for the "tell the truth, even if everything falls apart" principle.]

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Somehow this reminds me of the time I did a Tarot reading for someone, whose only previous experience had been Brent Dill doing a Tarot reading, and they were... sort of shocked at the difference. (I prefer three card layouts with a simple context where both people think carefully about what each of the cards could mean; I've never seen his, but the impression I got was way more showmanship.)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Note that there's an important distinction between "corporate management" and "corporate employment"--the thing where you say "yeesh, I'm glad I'm not a manager at Google" is substantially different from the thing where you say "yeesh, I'm glad I'm not a programmer at Google", and the audience here has many more programmers than managers.

[And also Vanessa's experience matches my impressions, tho I've spent less time in industry.]

[EDIT: I also thought it was clear that you meant this more as a "this is what MIRI was like" than "MIRI was unusually bad", but I also think this means you're open to nostalgebraist's objection, that you're ordering things pretty differently from how people might naively order them.]

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I mean, I also do things that I would consider 'art' that I think are distinct from rationality. But just like I wouldn't really consider 'meditation' an art project instead of 'inner work' or 'learning how to think' or w/e, I wouldn't really consider Circling an art project instead of those things.
