KPier

Comments

KPier · 357

I have been in touch with around a half dozen former OpenAI employees whom I spoke to before former employees were released, and all of them later informed me they were released. They were not in any identifiable reference class such that I'd expect OpenAI to have been able to selectively release them while not releasing most people. I have also been in touch with many other former employees since they were released who confirmed this. I have not heard from anyone who wasn't released, and I think it is reasonably likely I would have heard from them anonymously on Signal. Also, not releasing a bunch of people after saying they would seems like an enormously unpopular, hard-to-keep-secret, and not very advantageous move for OpenAI, which is already taking a lot of flak for this. I also have a model of how people choose whether or not to make public statements under which it's extremely unsurprising that most people would not choose to do so.

I would indeed guess that all of the people you listed have been released, if they were even subject to such agreements in the first place, which I do not know (and the fact that Geoffrey Irving was not offered such an agreement is some basis to think they were not uniformly imposed during some of the relevant time periods, imo).

KPier · 645

(This is Kelsey Piper). I am quite confident the contract has been widely retracted. The overwhelming majority of people who received an email did not make an immediate public comment. I am unaware of any people who signed the agreement after 2019 and did not receive the email, outside cases where the nondisparagement agreement was mutual (which includes Sutskever and likely also Anthropic leadership). In every case I am aware of, people who signed before 2019 did not reliably receive an email but were reliably able to get released if they emailed OpenAI HR. 

If you signed such an agreement and have not been released, you can of course contact me on Signal: 303 261 2769. 
 

KPier · 587

Cross-posting from the EA Forum:

It could be that I am misreading or misunderstanding these screenshots, but having read through them a couple of times trying to parse what happened, here's what I came away with:

On December 15, Alice states that she'd had very little to eat all day and that she'd repeatedly tried and failed to find a way to order takeout to their location, and asks whether people will go to Burger King and get her an Impossible Burger, which in the linked screenshots they decline to do because they don't want to get fast food. She asks again about Burger King and is told it's inconvenient to get there. Instead, they go to a different restaurant and offer to get her something from there. Alice looks at the menu online and sees that there are no vegan options. Drew confirms that 'they have some salads' but nothing else for her. She assures him that it's fine to not get her anything.


It seems completely reasonable that Alice remembers this as 'she was barely eating, and no one in the house was willing to go out and get her vegan food' - after all, the end result of all of those message exchanges was that no food was obtained for Alice and her requests for Burger King were repeatedly deflected with 'we are down to get anything that isn't fast food' and 'we are down to go anywhere within a 12 min drive' and 'our only criteria is decent vibe + not fast food', after which she fails to find a restaurant meeting those (I note, kind of restrictive if not in a highly dense area) criteria, and they go somewhere without vegan options and don't get her anything to eat.

It also seems totally reasonable that no one at Nonlinear understood there was a problem. Alice's language throughout emphasizes how she'll be fine, it's no big deal, she's so grateful that they tried (even though they failed and she didn't get any food out of the 12/15 trip, if I understand correctly). I do not think that these exchanges depict the people at Nonlinear as being cruel, insane, or unusual as people. But it doesn't seem to me that Alice is lying in remembering this as 'she had covid, was barely eating, told people she was barely eating, and they declined to pick up Burger King for her because they didn't want to go to a fast food restaurant, and instead gave her very limiting criteria and went somewhere that didn't have any options she could eat'.

On December 16th it does look like they successfully purchased food for her. 

My big takeaway from these exchanges is not that the Nonlinear team are heartless or insane people, but that this degree of professional and personal entanglement and dependence, in a foreign country, with a young person, is simply a recipe for disaster. Alice's needs in the 12/15 chat logs are acutely not being met. She's hungry, she's sick, she conveys that she has barely eaten, she evidently really wants someone to go to BK and get an impossible burger for her, but (speculatively) because of this professional/personal entanglement, she lobbies for this only by asking a few times why they ruled out Burger King, and ultimately doesn't protest when they instead go somewhere without food she can eat, assuring them it's completely fine. This is also how I relate to my coworkers, tbh - but luckily, I don't live with them and exclusively socialize with them and depend on them completely when sick!!

Given my experience talking with people about strongly emotional events, I am inclined towards the interpretation where Alice remembers the 15th with acute distress and remembers it as 'not getting her needs met despite trying quite hard to do so', and the Nonlinear team remembers that they went out of their way that week to get Alice food - which, based on the logs from the 16th, is clearly true! But I don't think I'd call Alice a liar based on reading this, because she did express that she'd barely eaten and did apologetically request that they go somewhere she could get vegan food (with BK the only option she'd been able to find), only for them to refuse BK because of the vibes/inconvenience.

Answer by KPier · 40

We celebrate the May date because May is a good time for a holiday (not close to other major holidays, good weather in our part of the world) and December is very close to the date of Solstice and also close to Christmas, Thanksgiving, etc. 

KPier · 750

I appreciate this post. I get the sense that the author is trying to do something incredibly complicated and is aware of exactly how hard it is, and the post does it as well as it can be done. 

I want to try to contribute by describing a characteristic thing I've noticed from people who I later realized were doing a lot of frame control on me: 

Comments like 'almost no one is actually trying but you, you're actually trying', 'most people don't actually want to hear this, and I'm hoping you're different', 'I can only tell you this if you want to hear it', 'it feels like you're already getting it, no one gets that far on their own', 'almost everyone is too locked into the system to actually listen to what I'm about to say', 'I've been wanting to find the right person to say this to, but no one wants to listen, but I think you might actually be ready to hear it': the common thread is that you, the listener, are special, and the speaker is the person who gets to recognize you as special, and the proof of your specialness is that you're going to try/going to listen/going to hear them out/not going to instantly jump to conclusions.

Counterexamples: 'you're the only Political Affiliation X I've ever found worth listening to' does not at all seem to come from the same kinds of motivations as the above. Some people have said '[x writing] demonstrated a rare ability to Actually Get It' and weren't doing weird manipulative shit at all; in fact, I think the people who said it publicly have in every case just been sincere/being nice/recommending a thinker they think highly of. The frame-control people all said it privately or semiprivately, possibly because that way they can reuse the compliment on lots of people, or possibly I'm just overgeneralizing from a small number of data points.

KPier · 20

Were the positive tests from the same batch/purchased all together?

KPier · 20

And same question for a positive test: if you get a positive and then retest and get a negative, do you have a sense of how much of an overall update you should make? I've been treating that as 'well, it was probably a false positive then', but multiplying the two updates together would imply it's probably legit?
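(To make that arithmetic concrete, here is a minimal sketch of the 'multiply the two updates' calculation, assuming the two tests' errors are independent; the sensitivity, specificity, and prior below are made up purely for illustration, not real test characteristics.)

```python
# Minimal sketch: combining test results with Bayes' rule, ASSUMING the tests'
# errors are independent - the very assumption correlation would break.
# Sensitivity, specificity, and the prior are illustrative numbers only.

def posterior(prior, results, sensitivity=0.80, specificity=0.995):
    """P(infected) after a sequence of results (True = positive, False = negative)."""
    odds = prior / (1 - prior)
    for positive in results:
        if positive:
            odds *= sensitivity / (1 - specificity)   # likelihood ratio of a positive
        else:
            odds *= (1 - sensitivity) / specificity   # likelihood ratio of a negative
    return odds / (1 + odds)

prior = 0.05
print(posterior(prior, [True]))         # one positive: ~0.89
print(posterior(prior, [True, False]))  # positive then negative: ~0.63
```

Under that independence assumption the negative only partly cancels the positive, which is the 'probably legit' conclusion; if the two tests' errors are correlated (same batch, same swabbing technique), the second result carries less evidence than this calculation implies.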

KPier · 40

Are test errors going to be highly correlated? If you take two tests (either of the same type or of different types) and both come back negative, how much of an update is the second test?

KPier · 180

Given your described desiderata, I would think that a slightly more rural location along the coast of California ought to be up there. Large properties in Orinda are not that expensive (there are gorgeous 16-30 acre lots for about $1 million on Zillow right now), and right now, for better and for worse, the Bay is the locus of the rationalist and EA communities and of the tech industry; convincing people to move to a pastoral retreat an hour from the city everyone already lives in is a much easier sell and smoother transition than convincing them to move across the country. (I recognize that MIRI is doing this in part because it thinks it's bad for the Bay to be that locus, but I think the Bay community already has at least four distinctive subcommunities with different values and norms and priorities, and a campus in more-rural California could form a distinctive one while not disrupting all existing social bonds.) I know Bay zoning is notorious, but that's much less true as soon as you're out of the Bay proper, and all of those properties emphasize in the listings that you have total flexibility about what to build on the land. Other nearby properties are often also for sale.

I worry that if MIRI moves to a place with no local rationalists or rationalist-inclined people, they'll be less likely to make new friends and more likely to become very insular, as the people who valued their non-MIRI relationships most fall away; it seems like a huge advantage if a move is either to a place with a preexisting rationalist community or doesn't require severing ties with the current ones.

The big downside of this, to my mind, would be fire, and it's a substantial downside, but on the whole I anticipate-success much more strongly for a rural-California enclave than for the locations you describe. (Disclaimer: this may be because I have strong roots in the Bay and am not personally likely to move.)

KPier · 160

That is, of course, consistent with it being net neutral to give people money which they spend on school fees, if the mechanism here is 'there are X good jobs, all of which go to people who've had formal education, but formal education adds no value here'. In that scenario it's in each individual's interest to send their kid to school, but all of the kids being sent to school does not net improve anything.

It seems kind of unlikely to me that primary school teaches nothing - even just teaching English and basic literacy and numeracy seems really valuable - but if it does teach nothing, that wouldn't make this woman irrational, though it would mean cash transfers spent on schooling are poorly spent overall.
