Austin (of Manifund) and Linch (of EA Funds) explore our disagreements on OpenAI. Austin is approximately an apologist for OpenAI, and for-profit/tech companies overall, while Linch takes a more hardline stance.

Neither of us are experts on OpenAI or have insider information; we’re just hashing out our different worldviews in a casual setting. If you listen or follow along, let us know what you think!

Topics covered:
  • 00:33 OpenAI's contradictions
  • 02:44 NDAs and tech vs rationalist culture
  • 13:51 Is there enough criticism of OpenAI? Too much?
  • 17:15 Superalignment's 20% compute promise
  • 21:50 The EA departures from OpenAI
  • 26:10 ScarJo
  • 34:41 Should AGI be pursued by a for-profit company?
  • 46:10 A hypothetical AGI "Manhattan Project"
  • 51:45 Government vs company services
  • 54:08 Is OpenAI democratic?

Note that this podcast was recorded May 22, before Kelsey Piper's exposé on NDAs; subsequent NDA walkbacks by OpenAI; and reports that Sky was indeed commissioned before the ScarJo outreach.

Link with video & transcripts: https://share.descript.com/view/1N6kvfXFLr9

1: OpenAI

 

Austin: [00:00:00] So we're here today to talk about OpenAI, a subject that Linch and I have talked about in some contexts before.

I think I, coming from a more tech startup background, have views that are maybe a little bit more aligned with OpenAI, versus what I think of as the standard EA, AI safety, maybe doomer-ish views, which maybe Linch represents a little bit more of.

So I'm excited to have a chance to hash this out with you today, Linch.

Linch: Yeah, happy to. I think if anything, I'm probably more negative about OpenAI than the average EA person, like, for example, the OpenPhil cluster of viewpoints. Interesting.

[00:00:33] OpenAsteroidImpact & OpenAI's contradictions

 

Austin: And when you created Open Asteroid Impact, right?

This is like a satirical thing, trying to get people to pay more attention to OpenAI and ways that it could go badly. And I was working with you on that. I think in my mind, I didn't even think of this as, "Oh, this is because we want to draw attention to things OpenAI is doing badly."

Rather, it just seemed funny to me.

Linch: Yeah. Yeah. I think badly is one conception of it. I think contradiction is another conception. I was trying to highlight the ways in which I thought the public messaging, or at least the public messaging as understood by people I know, conflicted with a lot of the actions.

Austin: Yeah, maybe that's a great place to start actually.

With the overall sense of messaging being contradictory, can you say a bit more about that?

Linch: Yeah. These things are not necessarily super obvious, but I think there's definitely a very weird needle being threaded, or like a very odd thing.

So one example, and we talk about this in Open Asteroid Impact, is that they say a lot of the value they brought is bringing attention to AI or AI risk by making really splashy AI demos, and basically building the thing that's potentially on the ramp to being dangerous.

I think that's just an odd approach. I think it's a pretty unusual approach. I couldn't think of many historical precedents for doing something like this.

Austin: That's not necessarily contradictory though. When you say contradictory, I think that usually implies they're saying one thing and doing another, or saying two different things at the same time.

Linch: Sure. I guess talking about openness is a pretty odd thing a lot of the time. From the software perspective, they're obviously not really open source, they don't open source most of their work. But also, open means many things, right? From an open society perspective, you might think that people are free to talk about what they want to, or they're able to voice opinions.

I know many people in the AI safety crowd are worried about technological capabilities leaks, or other specific technical things that are dangerous to say, and that's why you might need NDAs or other ways to cover that. But it seems, as we now know, though I think it was pretty obvious before, they don't say a lot of things, period.

There's a lot of stuff that they could talk about that employees basically can't. And that's pretty odd, I think.

[00:02:44] NDAs and tech vs rationalist culture

 

Austin: Yeah. So I guess jumping to one of the topics of the day, the NDAs. My high-level thought is that I feel like people in this area are making a mountain out of a molehill, so to speak.

It seems unlikely in my head that the NDAs are significantly restricting speech. And I have more of a view that's like, oh, these are relatively standard. Maybe there's a few ways in which they're not that standard, but I especially don't think there was a lot of ill intent, like trying to suppress disparagement or disclosure of things beyond what is

roughly typical in Silicon Valley. And yeah, I guess I have some kind of model that if there were important things that the people who have recently left OpenAI would have wanted to say, they would be saying those things. And it's not that they feel, oh, I have signed this contract and as a result can't say these things. I don't think the signing of the contract makes a huge difference to their, yeah,

internal sense of whether it's worth blowing the whistle on bad behavior at OpenAI. So I just think there's too much of a conspiratorial view on how much these NDAs actually affect the speech and actions of the people who have left, or people who are still at OpenAI.

Linch: Yeah. So for context, for people who don't know this: there's NDAs, which are non-disclosure agreements.

Those are fairly common in industry. And there's non-disparagement agreements, which are somewhat common but not super common, where it's not that you can't share important information or trade secrets or anything, you just can't say bad things about a company. And my understanding is many, if not most, ex-OpenAI employees have signed a non-disparagement agreement, which is much more significant to us.

And those are lifelong. So if you sign it, legally you're not supposed to ever criticize OpenAI, which is pretty big. And third, my understanding is that these agreements were signed under the fear of losing equity that they already owned, which makes it a much higher bar than I think is common for other cases I've heard of.

Austin: Yeah. So thanks for that context. I think one other thing that I wanted to draw attention to was that somewhat recently, I think there was a kerfuffle, maybe a small discussion on LessWrong perhaps, about Wave, which is a company that does mobile payments in Africa.

[00:05:00] They're known for being pretty EA affiliated; I think many of their top employees are familiar with it. Wave mentioned they also have, I believe, non-disparagement agreements for people who leave. And this caused some clash of cultures, where I think the Wave founder Lincoln Quirk was like, oh, this is a pretty normal thing.

And then I think Oli Habryka of LessWrong was like, no, this is changing speech norms and very much against rationalist culture. And I also took the tech, maybe the Lincoln Quirk, line here, which is: this seems pretty fine and normal. And I guess to expand on that a little bit, I do think the non-disparagement agreement is there for some reason, mostly that when you have an employee leave, you don't want to get into a lot of ugly press battles over he-said-she-said things like that. And I think there's a degree of reasonableness basically in these kinds of contracts, which is, if you say things that are generally factually true and important to say, something like,

yeah, I don't know, this top exec at OpenAI had an affair with one of their interns and this is a big scandal, I don't think this is the kind of thing where OpenAI would then go after you using the non-disparagement clause, right? So again, I'm modeling society, I think, as generally reasonable, and the people at OpenAI as having a sense of which things the disparagement agreement can cover. And I'm modeling, meanwhile, the rationalist EA crowd as taking these things too literally. If you look at the literal text of the contract, maybe it says, oh, you can't say anything bad about this.

But literally, if you tried to do this and OpenAI tried to go after you with the contract, I think they would lose in the court of public opinion, and they understand this. And everyone understands this.

Linch: It's just obviously a chilling effect though. If you're on the fence about whether to say something or not, you're just not going to say it.

And in aggregate, it's often the case that you have an elephant and everybody sees one piece of the elephant, but they don't see all of it. And especially in cases where there's bad behavior but no smoking gun, it's just pretty common for it to be very hard to know without all the pieces of information out there.

Austin: Yeah, but I really think the non-disparagement agreement is such a tiny portion of the reasons people don't speak up. I think people don't say things for, yeah, mostly things like wanting to be nice, or not thinking that it is worth raising, or things like that.

And not

Linch: One of the most powerful people in Silicon Valley may not like you and may go after you, your future employers. There's a ton of stuff like that. This is one piece of the angle that we see; we don't know all the information we don't see.

Austin: Yeah. Again, that seems kind of conspiratorial.

I guess I don't know Sam Altman, for example, in person. And I know some people say things like, oh, he could be vindictive, or some people are concerned about retribution or things like that. Maybe I'm just much more of an optimist, or I believe in people in general, Sam Bankman-Fried, sorry, Sam Altman, and Sam Bankman-Fried included.

There were some other things said about SBF, that he could be vindictive, and people were afraid to say things. And I think all this is roughly BS, it wasn't that, or,

I guess I don't want to put words in people's mouths, maybe, or put intentions in people's heads or whatever. Possibly they actually believe this, but I just, I don't know, think that actually there wouldn't have been important retribution or something like that, or

Linch: yeah.

Yeah. A lot of people seem to believe this about Sam Bankman-Fried. The former CEO of FTX US, I think, for example, had a bunch of specific things, like he was served by lawyers and stuff like that. Sued by lawyers? Served. Served, I see. I think. Sorry, that's not exactly it.

There was internal disagreement about FTX US policy. And then instead of resolving it the way that you might expect two executives to hash it out, Bankman-Fried had his lawyer send a specific thing to him. And that's just par for the course, I think, for at least some powerful people who are on the 90th percentile of ruthlessness or sociopathy amongst CEOs.

Yeah, I definitely do want to make it clear that my opinion of Sam is not that he's being a typical CEO here. Like I do think he's, sorry, Sam Altman. Yeah. Yeah.

Austin: Yeah. I definitely think there are many ways in which Sam Altman is not a typical CEO, but there are many ways in which every CEO is not a typical CEO.

And I guess, thus far, I have not been particularly concerned by Sam Altman specifically, his actions. And there was a point here, as we're talking about the NDA stuff: I think the way OpenAI and Sam responded to this publicly on Twitter, it was like, this was a mistake, or especially the worst parts, the clawback of vested equity things, were not intended to be a thing.

And they haven't actually done it. And they're trying to fix the contracts or whatever, or make things right with their employees. And I think that's a reasonable fix. I think one thing you might say is, oh, they only do this in response to large public outcry or something,

and it's evidence of bad behavior, that they need to be policed more. But I guess I wouldn't take that line. And I would also take the view that there's a hundred things to get right when you are an executive at a company, and oftentimes this kind of minor detail you don't pay that much attention to; your lawyers tell you this is a standard non-disparagement agreement.

I'm like, cool. Okay.

Linch: [00:10:00] I really doubt this is what his lawyers dreamed up without somebody on the exec team. Probably him, but at least somebody, yeah.

Austin: And I guess I haven't read the text of this non-disparagement agreement very closely, or...

None of us have. Oh, is the non-disparagement agreement itself private, or?

Linch: Yeah. None of us has. Yeah.

Austin: The whole self-sealing NDA thing. But to use a closer example, I think Lincoln Quirk, the CEO of Wave, also had this non-disparagement agreement in their contracts. And do you model him as having the same amount of... I really think it's just, oh, you're filling out a template.

Like I do this a lot for Manifold contracts. I just go to a website, there's a template, I check some boxes. And I think in the moment I'm thinking, should I make them sign a non-disclosure agreement? Yeah, sure, this seems better for the company, it seems safer.

I don't see why I wouldn't do this. And I just click that and move on. And that was not even a considered decision.

Linch: Yeah. If this was the type of thing that 90 percent of companies do... I really think that there just isn't a case. In fact, for the Lincoln Quirk thing, I think I had a dialogue, one of those LessWrong dialogues, with Ruby where I was like, this thing's bad, but it's very reasonable from different cultural norms.

You shouldn't really expect it to be held up to the rationalist ones. But I think a lifelong non-disparagement agreement, with your vested equity being clawed back, as far as I could tell, I don't know about this, it seems very unusual. And it's also silencing. And I don't know, this is just all pretty unusual.

And I don't know how unusual it is, because obviously they're all self-silencing; not a lot of people have come out saying, oh, this is really common, lots of companies do it. From what I've heard, yeah, almost none actually do this.

Austin: You don't think they get credit for walking it back, or being like... I think Sam actually was like, yeah, this was my mistake, an oversight, and this is my fault, I am in charge, and they're going to try to fix it.

I'm like,

Linch: I think morally they don't, because this is like pragmatically probably one of the better options they could take. They do get like some competency points.

Austin: Yeah. Yeah. I don't know. I guess I'm just occupying a very different view, which is, oh, these people are probably pretty good-hearted in general.

Yeah. And they're trying to... they have a million things to manage, and this one was a big issue. Whoops. They're gonna try to fix that, and they're gonna move on and try to do

Linch: Other things, yeah. So the details of their walkback are also interesting, right? They didn't say that they will nullify the previous agreements, or nullify the specific non-disparagement condition in the previous ones, or anything like that, right?

They said they will talk to you, is their response, if you have a disagreement with it, which is, you know, an odd thing. And Kelsey, and I think others, but especially Kelsey, I'm sure he's seen Kelsey's comments, have pushed them multiple times, both in official communication and on Twitter, to just say that they will not go after people who disparage the company

after signing the original agreement, or to retract the original agreement. And just silence from them so far.

Austin: Yeah. I guess one thing I am keeping my eyes out on is the Daniel Kokotajlo prediction market. There's a Manifold market on whether he'll get his equity back.

I think, for listeners, he posted on the EA Forum that he's given up equity equivalent to about 80% of his net worth, which means, if he currently had, like,

Linch: his future net worth.

Austin: Yeah, his future net worth. Yes. Yeah. If he currently had, let's say, a million dollars, he possibly gave up $4 million, yeah,

worth of OpenAI equity, to be able to not sign that non-disparagement agreement.

Linch: Yeah.

Austin: And I think a lot of people probably rightfully lauded him, like, oh, this is a good thing to do, this is a very brave maneuver. But OpenAI putting its money where its mouth is would be reinstating that kind of equity for people like Daniel.

We'll see how that turns out. That will be, I think, at least some evidence to me, like, oh yeah, they are trying to do the right thing and this was a misstep.

[00:13:51] Is there enough criticism of OpenAI? Too much?

 

Linch: Yeah. Yeah. I think that would be some evidence to me. I think also if there was, how do I put this?

If more people start criticizing OpenAI, and there's just not much more out there afterwards when it feels safe, then obviously that feels better to me. I feel like there's probably just a lot of pressure to not criticize OpenAI for various reasons. Like, I know within EA it's pretty common for people to want to get jobs within labs and stuff, and often they just don't criticize the labs as much as they could because of that, especially technical safety researchers.

Austin: I think that's a thing that probably happens to some people. I feel the opposite though. I feel like in places like the EA Forum, there's too much nitpicking or trying to find flaws with OpenAI in a way that's not very charitable. And I think there might be the opposite effect, where you gain more status points or something within the community if you are seen as being hard-headed or pushing for the right thing or something like that. And I don't think these cancel each other out.

I think these are probably very different people actually: the researchers who are staying silent because they hope to work at OpenAI, versus the people [00:15:00] who are more in the doomer camp and trying to push against it. But yeah,

Linch: I think it's roughly that the more informed people choose to be silent.

And less informed people just see from a distance that this whole thing is really fucked and foggy, with lots of black ink.

Austin: Do you have examples of people who you think of as like particularly informed who are like notably not saying much?

Linch: Almost all of the ex-OpenAI employees.

Austin: Like the ones who left and started Anthropic, and also the ones who have left recently? Or I guess, yeah, both.

Linch: Both camps. Like, you'd expect more of the Anthropic people to at least have said something by now; most likely the split was acrimonious. So those are the obvious ones, I think. Let me see.

Yeah, I think I also know people who work at safety nonprofits who say less than they probably should.

Austin: Yeah,

Linch: Again, that said, I think if anything critique of Anthropic is even more hard, like even more

Austin: hard to come by? Like critique of

Linch: Anthropic? Yeah. People are even more self-silencing within EA circles, even more self-silencing in critiques of Anthropic than of OpenAI, would be my guess.

Austin: Yeah, I think that's plausible. I don't know. Again, I am operating from some other model where people aren't saying much because there isn't much to say, is my best guess. There may be some kind of small things that didn't quite seem right that they could put out there, but I dunno. I'm thinking of my recent leaving of Manifold, for example, where you could imagine somebody could look at that and see my public statements and think there's gotta be a bigger story than this, or there's gotta be bitter, acrimonious things that Austin isn't saying. I don't really think there is, I don't know.

I think that

Linch: You're one of the most patently transparent people out there. And you also have criticized them when you left and stuff, and that seems fine. I don't know. I think it just looks like a pretty different situation from the outside.

Austin: Sure, but I know Jan Leike, I hope I'm pronouncing his name right.

He did have this Twitter thread that was a critique of the way... do you think he has many things that he's not saying as well, or?

Linch: I think he has fewer things that he's not saying, would be my guess. But there's probably standard NDA things that cover some of the things he can't say.

[00:17:15] Superalignment's 20% compute promise

 

Linch: Like, it's known he hasn't publicly said much about the compute stuff, like going back on the compute agreements. It's only through press leaks, which may have been from him or from other people, that we have more details about the compute stuff. I don't know. There's probably other things like that.

Austin: Yeah. The compute stuff is interesting because that is a concrete example. OpenAI committed 20 percent of the training compute up to the point when Superalignment was launched

Linch: to

Austin: the Superalignment team.

Linch: Yeah.

Austin: It roughly seems like they didn't get that compute, or maybe OpenAI didn't quite honor the agreement.

I imagine in the heads of the people who are managing OpenAI, they were like, oh, we promised this, but we can't deliver all of it right now. Maybe next year, maybe the year after, or something like that. So maybe it doesn't feel like a broken promise to them, but it definitely seems a bit of a broken promise, or at least insofar as the Superalignment team is saying, oh, we didn't get enough compute, I probably mostly take them at their word. I do feel like compute is tricky though, because compute is basically like money, or like influence in some sense, in this world of OpenAI, I imagine.

It is one of the most scarce resources that they have, and deciding where to spend your compute is, I don't know how to express it, it's like allocating your headcount, allocating your, yeah, it's one of the most

Linch: important things. That's why it does sound like a strong promise, an incredible one. Like, a different way to put it: if a country, or, I don't know, if a politician says we're gonna put 20 percent of money into Social Security or Medicare or something, and they end up giving like 1 percent, that obviously is a broken promise, right?

And money is a very important thing for the US government.

Austin: Yeah, I agree. It seems a bit like a broken promise, but looking at the Superalignment team case, right? At some point, the people you promised it to all decided to leave relatively early on; Superalignment has only been around for a year or so, is my understanding.

And I assume this promise of 20 percent of the compute was not like, oh, and we will give this to you right now even if we need to stop all of what OpenAI is currently operating on and just do Superalignment stuff, right? I don't know. I do feel basically like this is one of the less good things, one of the clear broken promises, perhaps, but still I see lots of ways in which there's room for the two teams to talk to each other, the Superalignment team within the OpenAI team,

Linch: and

Austin: both sides feel like they're like doing their best, but

Not quite getting to the right thing.

Linch: Yeah, I think they didn't break the explicit wording of the promise, which I think was something like 20 percent of compute secured to date, over the next four years. One possible interpretation is that it's 20 percent of that year's annual quota per year, so up to 80 percent of that year's quota in total, running through 2026 or so.

That's one possible interpretation. Yeah. And that would in fact be the cheapest way for OpenAI to honor that commitment, given exponential trends [00:20:00] and things getting cheaper. But that is not how most people in the public interpreted it. That is not how I assume people within Superalignment interpreted it.

Austin: I guess I would have interpreted it as 5 percent per year for each of those four years, or something like that, if it's 20 percent over four years.

Linch: That's not how it works though. It's not 20% of the annual quota.

Austin: I know, but they had fixed the amount of compute that they promised.

20%. So I think a rough, very reasonable thing was to dole it out equally over the four years' time. That's probably, yeah, what I would've expected, that would

Linch: be like 20%. So to be clear, that would be 20% of that amount from that year, spread over four years, right? So then 5% a year.

Austin: Or I'm thinking

Linch: Compute is not annualized; by default compute is like continuous. Usually we think of literal GPUs, and 20 percent of GPUs being reserved for that time. And then as you get exponential trends, as a percentage of the total stock of GPUs that gets less and less. But it's not like 20 percent of that running for one year, and then that's split into 5 percent over four years.

Austin: I realized this is a thing I actually don't know very much about. I had assumed that it was like, there are a certain number of floating point operations that have been used to train all of OpenAI's models up to this point, and one fifth of that moving forward. And maybe that is like a billion floating point operations, but not floating point operations per second.

Just total operations. That was my understanding, but I haven't, I don't know what the agreement is, I haven't looked into this. It's a thing I guess we could

Linch: research. I haven't thought about it that way, but that also makes sense. Yeah. Yeah, I was thinking of the physical thing, but I guess, yeah, it's pretty reasonable to define compute as the, yeah, FLOPs.

Yeah, the flop.

Austin: Okay. Yeah, I guess in any case, this is getting into the weeds somewhat, but yeah. Overall, I think this is an important point: it seems like if the Superalignment team didn't get the compute that they thought they were owed based on this promise, that is a broken promise in terms of OpenAI trying to help align AGI.
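To make the compute arithmetic being debated here concrete, here is a rough back-of-envelope sketch. Every number in it is a hypothetical placeholder, not an actual OpenAI figure, and the two "readings" are just the interpretations discussed above.

```python
# Back-of-envelope sketch of two ways to read "20% of compute, over four years".
# All numbers are made up for illustration; none are actual OpenAI figures.

launch_year_quota = 1.0   # hypothetical annual compute around when Superalignment launched
annual_growth = 3.0       # hypothetical year-over-year growth in total compute
years = 4

# Cheapest reading: 20% of the launch-year quota, each year, for four years
# (i.e., about 80% of that single year's quota in total).
cheap_total = 0.20 * launch_year_quota * years

# More expansive reading: 20% of each (growing) year's quota.
expansive_total = sum(0.20 * launch_year_quota * annual_growth**t for t in range(years))

print(f"cheap reading total:     {cheap_total:.1f} units")
print(f"expansive reading total: {expansive_total:.1f} units")
# With any meaningful growth, the fixed launch-year reading is several times smaller,
# which is why it is the cheapest way to honor the literal wording.
```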

[00:21:50] The EA departures from OpenAI

 

Austin: Cool. I think another point, and we've already started talking about this, is the impact of all the people leaving OpenAI lately. I forget who it started with, but at this point there's Ilya and Jan, the two leads of Superalignment. And a bunch of others: Daniel Kokotajlo, William Saunders, Leopold Aschenbrenner.

A few other people, I think.

Linch: Yeah.

Austin: Who are kinda somewhat tied into the EA space. Other people who are roughly in my social circles, or friends of friends, or something like that.

Linch: Yeah.

Austin: Who, yeah, have suddenly left in the last week or so, or have been leaving over the span of the last couple of months.

Linch: Yep.

Austin: What do you make of this?

Linch: Yeah, for the world, pretty neutral. I assume this is already baked in: they couldn't get anything done at OpenAI, or couldn't get much done at OpenAI, so they left. Maybe they were pushed out, maybe they weren't.

It was pretty voluntary, but yeah, hopefully

like, the best case scenario here is that Ilya and Jan weren't very good at politics and weren't able to do the thing, but deep in their hearts Sam and Greg and the rest really do care about safety and would work on safety with somebody, with a group of people, who are better at politics.

And maybe John Schulman, I think, is the new person. Maybe he would be that person. Maybe they'll find somebody else. That's the best case scenario, where most of it was just political ability within the company. The worst case scenario is of course that most of the safety work was ultimately just trying to do some safety washing in a way that's not super blatant.

And no matter who's in charge there, like it's just not gonna get much done.

Austin: Yeah. Maybe I'm narrativizing this a little bit too much, but I do roughly feel like it is a sad thing that once upon a time, or not even that long ago, there was at least somewhat of a good faith effort, it seems, to have people working in the same company and talking to each other and sharing ideas there.

And now that split is much more visible, something like the split between AI safety and OpenAI, or EA and OpenAI, or something like that.

Linch: Yeah. Yeah. I think one of the points that came out was that, I don't know about AI safety, but for EA and OpenAI, I think that split was somewhat engineered from OpenAI's side.

They didn't like being depicted as an EA-associated company, for various reasons, including FTX and stuff. So it's understandable why they feel that way. And it also makes sense: they aren't much of an EA company, so it makes sense that they don't want to be associated as one.

Austin: Yeah, this kind of ties into another issue, which is something around many places wanting to turn away the EA brand at this point, viewing it as negative, which I think is a whole other discussion, but we won't try to go too deep into that. I do think, I think it's...

I feel like this can't have been that big of a factor.

What gives you a sense that the brand of EA being negative, like a negative asset, was a major contribution to OpenAI kind of separating from these recent people?

Linch: 5 percent maybe 3%. I'm not sure.

Austin: Sorry, you think about 3 percent of the rationale for that was... yeah. Okay. So maybe then [00:25:00] we're pretty aligned. I don't think it was that big of a deciding factor. That sounds right.

Linch: Yeah. According to, I think, the New York Times reporting, there were internal discussions within OpenAI of how to distance themselves from EA.

But yeah, that was just from senior people. But I don't know, like you said, a startup CEO works on a hundred things; it probably wasn't that big a priority. Sure.

Austin: Yeah, I guess maybe I'm just very conflict averse or something, but I like it very much when people get together and talk and work things out, and don't like it much when things are more like, oh, clear lines in the sand.

We're going to get into a fight. And this building e/acc versus EA narrative, it seems like it's going to worsen the quality of the discourse and make a lot of people mad and get a lot of clicks for not very good reasons. Things like that.

Linch: Yeah, I see that. Yeah, all else equal, I would also rather be in a world where people are actually talking to each other, understanding each other's points. I notice myself being less smart, or less capable of clear reasoning, when more tribal impulses take me.

That's not great, I like to be smart and capable of good reasoning.

[00:26:10] ScarJo

 

Austin: I guess moving on from this, then, into the third, I think, topic du jour: Scarlett Johansson's... what is it? Not sure how to pronounce these names. Yeah. ScarJo is how I've been seeing her, the easier way.

Yeah. I think it's

Linch: Johansson. Yeah. Johansson. Yeah. I don't really

Austin: know. Was this a thing that you also had strong feelings about with regards to OpenAI? I think a lot of people are painting OpenAI choosing to use Scarlett Johansson's voice, or a voice-alike, in Sky as a broken promise, or another example of them not being able to be trusted.

And this once again seemed to me to be a bit like catastrophizing, or a bit like making a mountain out of a molehill. Interesting.

Linch: I don't think broken promise is quite the framing I would have. I don't think they promised much to her. I think there's a few things. I think impersonating people is kinda sus, but I'm not sure, I could be convinced otherwise here. I think that's one thing. And I'm saying, I don't know for sure, but it seems like a pretty odd coincidence if it's a coincidence.

Austin: Yeah. I listened to a very short clip of Sky versus Scarlett Johansson's voice in Her, and they seem similar, but similar in the way that two women coming from similar backgrounds might have similar voices.

They might seem similar, especially through the lens of a somewhat robotic screen filter or something like that, in the case of Sky's voice. And

Linch: Yeah, I think also the model has probably iterated multiple times over time. So it would be hard to find the ground truth, or, I would probably want to look at a lot of clips to know, and to try to pinpoint the exact time.

Scarlett claims that her friends said it sounded really similar as well.

Austin: Yeah. So there's one question, which is around whether they're telling the truth or lying, and that would be a big one. I think probably one that we may never find out: whether Sam Altman or the people working on the Sky voice intentionally

tried to make this similar to the ScarJo voice. I really hope we do find out, but it might be pretty hard. Yeah. Yeah. I think absent that, it just reinforces your beliefs: if you were suspicious of the OpenAI people, you'd be like, oh, this is another example of them doing sketchy things, especially breaking a promise, or again, not a promise, but infringing on ScarJo's rights or something like that.

Linch: Yeah.

Austin: And you said earlier that you weren't so sure about: is this kind of impersonation even bad? Yeah, for one, it's not exactly impersonation. I would typically think of impersonation as pretending to be a person, and this is something more like the whatchamacallit, the Midjourney artist issue.

Do artists, and voice actors, and I guess writers, have a right to their style, to the way they draw their work, to the way they sound in this case, to the way they write? And I guess also, some of my sympathy for OpenAI is, I don't really know that I believe that creators should have this right.

This is possibly a very controversial take. And I know in many places, especially in video game development, for example, many video game artists are strongly of the belief that all this AI-generated art stuff is really bad. And you'll see things like Steam reviews, where if a game uses AI-generated art, it'll get downvoted to oblivion, because people are like, no, we want to support the artists,

and not pitch in on the side of the corporations shilling AI. I kinda take the opposite side, I take the corporation side, or the AI side, in this. And maybe this is a part of what gives me more sympathy for the, again, OpenAI camp over the ScarJo camp.

Linch: Yeah. Yeah,

Kinda, yeah, I don't have strong opinions here. I feel like I don't have very good intuitions here, and this feels like the type of thing where, I don't know, hopefully reasoned discourse and public opinion and some form of public philosophy, and people trying to express their preferences honestly, ideally people who aren't super economically or socially incentivized to say things...

Hopefully we'll get to a point where this is the type of thing that society can agree [00:30:00] on. I don't really feel like this type of thing has a clear truth of the matter, so to speak, absent social context, the way that murder is probably bad in almost all contexts.

Austin: Sure. It's one of those issues that is probably gonna get more salience over time, and the legal system and society around it need to think a little bit more about what copyright means, and what an artist's right to their likeness looks like, as those things become much easier to generate and produce.

Linch: Yeah.

I do feel, at some level, a stronger intuition that if somebody writes in my style, that's flattering. But if somebody tries to talk in my style or act like me, even if they disclaim in loud letters that they're not me, that feels, I don't know,

off. Creepy.

Austin: Interesting. There's a thing where parody, for example, is protected under US law, at least. Aside from that, if somebody tries to use my voice, I guess I don't have a very strong association with my own voice. I never thought of my voice as core to my identity, but I can see how maybe somebody like ScarJo cares a lot more about that.

That is part of her life's work, how she makes her living.

Linch: And it's also relevant that, I don't know, many of us have read "Reasons and Persons" or similar work. Many of us within EA don't tie that much stuff to our personal identity, in a way that I think is probably almost the opposite for actors. I think normal people are somewhere in between, and professional actors are definitely on the extreme end of tying a lot to their personal identity.

For various reasons, including that it's very professionally relevant to them.

Austin: Yeah. And I almost wonder if this is one place where our intuitions, where we see an actor, they are a monkey the way we're a monkey, and we like the monkeys, lead us a little bit astray in terms of what should be the right thing to do from an overall social, economic perspective or something like that,

where we think, yeah, the actors deserve their privacy rights, like image and likeness rights.

Linch: Yeah.

Austin: Whereas in the opposite world, where anyone could make deepfakes of anyone, there would just be much more creative media, and people, like consumers, would be a lot more satisfied.

I think there might be something like that. I'm just thinking about this for the first time. This is not a very, like,

Linch: well

Austin: reasoned argument,

Linch: I think. I

Austin: mean,

Linch: I guess we'll see. I guess I'm not convinced that you get that much better art because of this. Especially, yeah, if you exclude parody and stuff. But I don't know.

So if you think about art or science, and I think some of it is like discovery, it does feel weird that the first person to discover a style gets to really own it fully. Yeah. But I think a lot of copyright doesn't look like this, right? People own their own works, not the broad idea or style.

You can't own the hero's journey or anything. And I think, I don't know, tying your identity to your voice, it's a pretty reasonable thing historically to do; it doesn't seem that extreme. That's one thing. And then, separate from the deception point, I'm still really weirded out by OpenAI saying that they can't single the voice actors out because of privacy.

That, speaking of privacy, that does feel a bit odd to me.

Austin: Yeah, definitely. If I were OpenAI, I think I would be asking the voice actress who voiced Sky to give up those privacy rights, maybe pay them extra money and be like, can we tell people this was you?

And it'd probably be good for this person's career,

Linch: yeah. Yeah. On priors, like how frequent is it that actors don't want to be credited for their work? Yeah. Come on. It just feels very odd. And

Austin: I like

Linch: OpenAI managed to find the few actors in the world who happen to really care about their privacy, the same way that their ex-employees just care so much about privacy that they can't complain about OpenAI.

I, yeah I, I dunno man.

Austin: I mean, it would definitely make sense to me as just a pure business move. Probably as a business, OpenAI wants people to think of Sky as Sky, as opposed to as X person's voice. Yeah. It's

Linch: But except for this current situation, it's much lower liability. If one of them does something shitty, or speaks out against OpenAI, people don't want to identify this as Sky speaking out against OpenAI. Yeah, or where they do something shitty, like they get a DUI, they don't want that. Yeah. Obviously, there's lots of reasons why you don't want that.

Austin: Sure. But yeah, I guess then there's some thinking of: are they, again, lying about it being a privacy thing? Could they have relatively easily gotten that unlocked and are just choosing not to do so because it would be bad? Is it actually secretly Scarlett Johansson's voice in disguise?

I think these are all like,

Linch: Yeah, or like the famous Scarlett Johansson impersonator that I hired.

Austin: Yeah. And I guess I just like, once again don't really take the like, conspiratorial view on these things. I just feel like, Oh yeah, let's like, maybe some of these things.

I don't know.

Linch: Yeah. Maybe it sounds conspiratorial, but I feel like this is just straightforwardly an amoral business doing amoral business things. If they could get away with it, they would.

[00:34:41] Should AGI be pursued by a for-profit company?

 

Austin: Okay. Maybe moving back away from the stuff going on today and just bringing it back out to a more overall question.

How do you feel about OpenAI's approach, like, overall to date?

Linch: Yeah. I think it's better than some worlds; I don't think it's among the worst [00:35:00] possible worlds out there. I think somebody in that situation, Sam Altman as a person, personality aside, a company in the position of OpenAI, can in fact do a lot worse, can in fact be a lot worse. I think they do talk about safety quite a bit.

They try to be reasonable a lot of the time. They, you know, spend some, not a lot, but some resources on public goods and safety. They talked about it from the beginning. They care about near-term ethics issues and spend non-trivial resources on that, and so forth.

In that position, I think they do a fair number of reasonable things. I talked about this with you, and I briefly mentioned this on the EA Forum: I think that putting themselves in that position is somewhat blameworthy. That is, I think,

Austin: as a, like a for profit corporation

Linch: trying to deal with AGI.

Austin: And alternatively, they could have founded this, as they did originally found it, as a nonprofit. But you could have also imagined it as a government-sponsored initiative, or an academic initiative, or something else. Yeah.

Linch: A lab within a company, or not doing AI stuff at all.

There's many options.

Austin: Yeah. I think that was a very interesting point when you brought that up, that they had different options, or one could have imagined trying to push to create AGI not inside of a for-profit company,

Linch: specifically a for profit company dedicated to AGI. Yes, dedicated to AGI.

Austin: And I guess this is where, in my world, which is very much influenced by tech startups being very good, I feel the path of a startup doing a thing is just so much more likely to succeed or execute well on a thing, compared to the government, quote unquote, doing a thing, or an academic group or a bunch of researchers doing a thing.

That's on one hand. On the other hand are the fundraising considerations, where OpenAI started actually as a nonprofit. And famously, they decided to switch over to more of a for-profit model and started raising billions of dollars when they realized, okay, actually to make AGI work, we just need the funding, and as a nonprofit we're not going to get enough in donations to be able to pay the cost of all this compute.

So we are going to have to raise venture-backed funding from for-profit investors. Do you, what would the alternative have been, maybe try to shut down?

Linch: Yeah. If it, if you can't do something morally, then don't do it at all.

It's the obvious thing. Yeah, as a nonprofit, don't think of yourselves as the people building AGI; be one of the groups of people helping to make AGI safe. There's lots of stuff you could do. They could become a policy nonprofit, they could do safety research the way that, like, Redwood does.

There's just a lot of options. Or, yeah, get acquired.

Austin: Yeah. I feel like these are all pretty unimpressive slash very bad alternatives in some way. I guess I...

Linch: I care more about the thing being good than the thing being impressive, which I think is a big difference between my internal ethos and that of Silicon Valley.

I think Oppenheimer was not a morality tale about Oppenheimer being a great person whom everybody should aspire to be like.

Austin: Yeah, I hear that. I think that trying to be good is pretty important, probably more important than being impressive. But at the same time, I think one thing the altruistic mindset, the good mindset, kind of undervalues is just the degree of uncertainty in the world. You can't get to, I think, really good outcomes just by trying to do good things.

You actually need feedback loops. You actually need the signal that comes from running a thing at larger and larger scales. And there's the classic question of where a lot of the good in the world has come from, and libertarian-pilled people tend to think that a lot of why we are happier and richer and more prosperous as human beings

is because of capitalism, or the liberal world order or something, the ability for people to start things with capital and grow those and scale those up, and that creates most of the good in the world. I think this is pretty compelling. I think many people agree with this take already.

Yeah. Yeah. And then there's a specific sub-question of, for the issue of AGI, should you use the same engine of capitalism to create this thing? Or should you try to do something else? And I guess I feel like most of the alternatives you named would not do a very good job of getting to AGI.

And that being the case, some other for-profit might do that job better. And I guess if you're sitting in Sam's position, or the founders of OpenAI's position, maybe the very reasonable thing is, okay, we will be the ones to do it instead.

Linch: I guess like for large scale scientific projects.

I don't really buy this

Austin: Okay,

Linch: Like, I feel like there are many large-scale scientific projects that are not built by a company that's dedicated to doing them.

Austin: I think once upon a time, yes, but I think the track record of these things lately has been quite bad. Lately, for example, I think of SpaceX as a much more [00:40:00] impressive achievement in getting things into space than NASA, where once upon a time I was like, oh yeah, the

space shuttle system got astronauts to the moon.

Cool. But now, not so much. It's not

Linch: true. I don't know, there's Mars rovers and there's far-flung space things and deep space probes and stuff that all come from not-SpaceX. Yeah, I think that all exists. SpaceX is mostly scaling up the thing that we

know how to do, and doing it a lot cheaper, and a bunch of other things that are good.

Austin: But then the doing it a lot cheaper part is actually quite important. Oh, I agree. Yeah. Cheapness enables a qualitative shift in what is possible, and enables things like Starlink.

And I think it will enable sustainable Mars travel or something, which is the goal of the SpaceX launches, followed by Starship development.

Linch: Yeah.

Austin: Yeah. So I don't know how closely you follow the space. I'm not the most ardent SpaceX fan, but I have a really strong sense that a company in this case was a much better way of working on the scientific achievement of, at least, reusable rockets and things like that.

I think there are maybe things that the government seems to do better than for-profits. I was going to say something like fusion, but actually I'm also not sure on this point. Historically, a lot of the research into fusion reactors or things like that was government funded.

I know Sam Altman has invested in companies like Helion, and I don't know enough about the state of progress here to see which of these is more impressive or will be.

Linch: Yeah. Yeah, that's fair. I think a lot of basic research is still funded by the government, for instance. Government-funded things in academia eventually spin out once the science is established.

Austin: I definitely agree with that, but for the basic research, to me it's not very clear that what AGI requires, especially if the scaling hypothesis is true, looks like basic research, versus what it requires to scale it: engineering work, stuff that seems to progress much faster inside of large companies.

Yeah,

Linch: So my guess is that if you wanted to build AGI within a government lab, it would be more expensive. Maybe it's 10x more expensive, maybe it's 100x more expensive. Okay. I'm just like, this is such an important thing that it's worth paying the cost to do it well, in a way that I would not say for most other things.

I would not say this for spaceships. I would not say this for fusion probably.

Austin: Yeah. And I think being very expensive has other costs: time to deployment, time until we're able to use the thing. But also, it just doesn't strike me that if the government had thrown 100x as much money and people at the goal of making AGI, the resulting AGI would be more aligned, or something like that, or better by our lights.

It's very non-obvious. It could be the case, but it's not obviously the case.

Linch: I'd be interested in looking at the empirical record of how often disasters happen from more government-y things versus non-government-y things. My sense is that, especially for governments within democratic regimes, I would guess that the governments are safer.

Certainly you could get into communist territory and it'd be bad, but yeah, I think I just have the intuition that government-run things are in fact safer, because they sacrifice other things for safety. And this is even more true if you control for competency of people.

Austin: I think this is where safety becomes too broad of a concept to be very useful, right? There's one kind of safety, which is, do people die or something like that as a result of the work here? But I don't think this is the useful kind of safety when it comes to AI safety.

It is something a lot more complicated and weird, like, will this thing that we are summoning out of the ether hold our beliefs and act the way we want it to? And I wouldn't expect much of a difference, for that kind of, for the do-humans-die-as-a-result-of-AI kind of safety, from GPT-4, GPT-5 level work, whether that happens at OpenAI or at some government.

Linch: Yeah. I don't think there'll necessarily be a huge difference, but there are a couple of reasons to think that there might be some. One is that alignment research might look less like mech interp stuff or the scaling hypothesis, and look a bit more like fundamental research, or somewhere in between. Even mech interp feels a bit less like something where you could just throw compute at it and make the thing happen. Although maybe you could, especially at higher levels of capabilities; I wouldn't necessarily bet on it, at least.

The other thing is just, are you willing to make the thing stop? Are you willing to slow down if things look scary? And my claim is that the incentives for a for-profit company that's focused on AI are basically the worst possible ones here.

Not literally the worst. You could have two countries that are really racing and in a hot war against each other, maybe. Even then, I don't know, we have nuclear treaties and we've had de-escalation before. But a company voluntarily shutting down its main product because of speculative claims, that just seems insane, that just seems really hard. You have to be extremely honorable,

either risk-averse or a really honorable and well-meaning person. And I guess I don't think of any of the AI labs as that. In fact, very few are that. It just seems like a really weird and insane incentive structure to place yourself in.

Austin: Yeah, I can see that. I [00:45:00] guess if I try to think of OpenAI as, like, a global oil company or something like that, where they have this particular profit incentive to produce a lot of oil, and as a result they do things like lobby Congress or try to spread disinformation about what climate change is going to look like,

then that would make me, that makes me a little bit more sympathetic to the idea that, oh, maybe the for-profit structure for OpenAI is not that good.

Linch: Yeah, I think that's the main thing I want to emphasize. But then a secondary point is that that's the main thing they're doing.

It's not like they're Microsoft or Google or some other company that has a bunch of product lines. It's not like asking Google to cancel their, I don't know, the YouTube line, because it's going poorly and is bad for people. It's like asking Google to cancel the ads, or asking Facebook to cancel social media.

Just like a very high bar to ask for a pause or a slow down.

Austin: It's a good point. But I feel like the societal, external response to this kind of thing, like regulation and protests and things like that, will probably do a reasonable job of keeping things in line. And I think this particular structure, a company trying to do a thing with a very clear mission and a profit incentive,

[00:46:10] A hypothetical AGI "Manhattan Project"

 

Austin: And insofar as there are negative externalities, that is handled by other aspects of society designed to try and control for that. That seems pretty okay, compared to some hypothetical where this was a Manhattan Project style initiative where lots of government people hired the smartest researchers to work for the government instead.

That doesn't seem better, I guess.

Linch: Yeah. Why not? You have a Manhattan-style project that people do because people think AGI is going to be really good for the world, and lots of people have mixed opinions. Unlike the actual Manhattan Project, this one was not done because of the Nazis; it's done either because the U.S. is way ahead of China, or because they figured out enough international agreements such that any other possible contenders also have their arm in the pie. You had this, and, I don't know, civil society is talking about the risk, people debate this regularly, it's a topic of conversation among intellectuals, things keep progressing, and is that really... is there no way to stop it?

Austin: I think the structure of the Manhattan Project was much more responsible to the U.S. government, or maybe to a few specific generals and people inside the U.S. government.

Linch: Yes.

Austin: And that feels like an incentive and reporting structure much less likely to be broadly good for society than what OpenAI has, which is, yes, responsible to its shareholders, but also to its customers and its employees and a lot more of society.

I also feel like, famously, the Manhattan Project was very secretive; they didn't tell very many people what they were doing. I think OpenAI as it is set up, and the way most corporations are set up, is much more open in the sense of letting people know what they're doing and trying to appeal to public opinion, in a way that I think government projects don't have to.

Linch: I guess the version of a Manhattan Project for AI I'm imagining is one that's much more public...

Austin: Much more open than OpenAI is today?

Linch: Yeah, like, one that's responsive to civil society. It's not going to be a secret line in the government's budget like it was under war; it's going to be pretty clear that we're spending X dollars on this thing. If it's a trillion dollars or more of spending, it's going to be the type of thing that's covered in presidential debates.

It's the type of thing that intellectuals in nearby areas talk about regularly. There are going to be papers in Science, there are going to be position papers. As for specific details: if it turns out there are algorithmic breakthroughs that are scary, rather than just scale, then specific things will be hidden. But a lot of the details are going to be public.

Austin: I guess,

Linch: It might look more like Apollo, for example. Sure, I concede that the American public did not want Apollo to happen.

Austin: I feel like, and I'm not sure if this is just a bias because I'm, again, closer to this Silicon Valley world, but I have much more of a sense of how to get my own preferences, and I think a lot of other people's preferences, embedded into the way OpenAI currently operates.

Compared to if it were some government initiative like Apollo, right? If Apollo is doing something I think is very dangerous, what do I do? Write a letter to somebody, maybe, is the first thing I think of.

Versus with OpenAI, you go to Twitter. You can try to do a public campaign. You can change how you pay for things. Being a consumer of this thing very much has an effect.

The touchpoints between me, and I think most of society, and a company feel much better developed than the touchpoints between a government program and society.

Linch: That just seems really wrong to me. For OpenAI specifically, I think the underlying reason is that they're not that far away from our social group. If I really want to complain about OpenAI, if I have a very specific technical proposal that I would like them to implement, I could just find somebody and ask them to do it.

And that's probably true for you as well, and many people in the audience. But this is not normal civil society. This is not true for the majority of people, who aren't...

Austin: Bay

Linch: Area tech adjacent.

Austin: Maybe, if you think about it, it's like Walmart versus the Kansas [00:50:00] City government or something like that.

Yeah. I think

Linch: If you're somebody in Kansas, do you have a better way to change Walmart's behavior, or Kansas City's behavior?

Austin: I'm not sure. I do feel like Walmart is a lot more responsive to your requests or something like that. If you're unhappy, you can talk to somebody at Walmart, and a Walmart store manager is going to listen to you and tell you what they can do about it.

That kind of thing.

Linch: Yeah, Walmart is a very consumer-focused company. Sure. But suppose I don't like what Palantir does. Okay, maybe that's not a good example because they really partner with the government. Blackwater, then, which is a military contractor, or not exactly a military contractor, but, it's not like there are laws stopping them from being scary.

What do I do?

Austin: Yeah, I guess it's hard for me to know because I don't really know what Blackwater is, but I agree that there are many companies in this reference class. But OpenAI does not seem to be in this reference class. They seem to be in the try-to-put-a-thing-in-front-of-many-people

Linch: kind of

Austin: reference class.

And that, I think, gives them a lot more direct incentive to do right by their customers. I think that's actually a pretty important point, and I'm developing this idea right now just as we're talking, but I actually think it's pretty important: insofar as they're trying to create this AI that serves the world, they have a very strong incentive towards keeping their users happy and providing them a valuable service. That's a very powerful engine of corporations, and it does not exist nearly as much with things like NASA or the Manhattan Project.

Linch: Yeah, the Manhattan Project especially. But yeah, interesting.

I do agree that you get a lot more feedback from reality, more data, more ways to see how customers want your product to work.

[00:51:45] Government vs company services

 

Linch: How much of that is a government-versus-company thing? Would you say that, like, Fox, okay, maybe that's not a good example, CNN, say, do you think that's more aligned than the BBC, for example?

Okay.

Would you say that?

Austin: Oh, sorry, do I think CNN is more aligned than the BBC? Yeah, I don't even know very much about what the differences are. The BBC is a British broadcasting company.

CNN is an American one. They're both relatively neutral in terms of the political spectrum.

Linch: Yeah. And BBC is owned by the British government and CNN is not.

Austin: Yeah, I have very little context on that. Sure, sure.

Linch: Yeah, that's fair. That's fair. What are examples we might both be familiar with? NPR, any chance?

Maybe media is not a good example. I don't know. Um, PG&E versus if the government were running it.

Austin: Doesn't the government run PG&E? I actually don't really know. They seem very close to a state monopoly. I guess I agree that the service is not very good or something like that.

Linch: Yeah, I don't know. UPS versus USPS.

Austin: Sure. Or like Amazon delivery versus UPS or something like that. Or

Linch: Amazon delivery runs on UPS and USPS

Austin: Among many others; I think Amazon has its own drivers and things like that. I guess I generally feel a higher level of quality of service from Amazon's private delivery service than from the public US Mail or something like that.

But Okay.

Linch: Yeah, I feel like they're similar. Sorry, not Amazon specifically, because it's trying to do a different thing, but in terms of trying to get one of my packages from point A to point B, I'm not seeing a big difference between USPS and UPS. I think I agree with that.

Yeah, but maybe that's not fair. I know that SOEs, state-owned enterprises, in general do somewhat worse. This is true in China at least, and I assume it's true across the world. But yeah, that's exactly it, though. I guess the thing that matters to me is: if I'm not a consumer, can I get something to stop?

And it feels like I have more leverage over government things than corporation things. One reason is that governments are more powerful, but we're kind of conditioning against that if we actually have AGI.

Austin: Cool. I think we're running at about an hour or so. Did you have any other things you wanted to chat about on the OpenAI side? Things that you thought are pretty important to your worldview of how you think about OpenAI that you didn't get a chance to express?

[00:54:08] Is OpenAI democratic?

 

Linch: I guess one thing, and I don't know how strongly I feel this because I'm confused about how much I care about democracy and stuff. But I do think there's an important sense in which OpenAI is just, sorry, the mission of creating AGI, one of the most powerful technologies of all time, and doing who knows what with it, that just feels very anti-democratic.

Austin: What about it feels anti-democratic?

Linch: I guess a lot of people did not consent to this danger being imposed on them, assuming you have non-trivial pDoom, which my understanding is OpenAI leadership has at least publicly said they do.

Austin: Yeah, I agree that they have non-trivial pDoom. I again think that they expect their working on it lowers pDoom compared to [00:55:00] them not working on it, compared to some other competitor working on it. Or...

Linch: Yeah,

Austin: I guess, they'll do the main thing, or like...

Linch: Yeah, I think making that decision on behalf of humanity feels a bit odd. Maybe that's the best choice they could make; I'm not unsympathetic to situations where you've just got to do the thing and wait for other people to catch up. But it does feel like, as a society, we should be asking questions or something.

Austin: Maybe. Many things about OpenAI seem quite democratic to me. The way they are opening up access to ChatGPT to many people is the kind of practical democracy that matters a lot. Anyone can use the cheaper versions of the models for free in a way that is helpful to their own lives.

Linch: Yeah,

Austin: That kind of giving things to lots of people and trying to help them is, I don't know if it's democratic exactly, but it's along the vision of trying to help many people, as opposed to concentrating the good things that come out of AI in a few people, the people who pay a lot for it, or the people in charge of it.

Linch: Yeah. I think that's a framing of morality that many EAs have, or it's a framing of altruism or justice, sure. It's not a framing of participation. It's not a framing of giving people the agency to choose whether things happen.

Austin: I think giving people AI in their pockets is a kind of... democracy can mean one kind of thing, the ability to vote for legislation or vote for people to enact legislation. But I think the spirit of democracy goes beyond that. It's something like giving people the ability to flourish and pursue happiness, and I think they're doing quite well in that regard.

On the question of whether the process by which OpenAI makes their decisions is democratic: again, going back to the fact that many people are consumers of OpenAI, they pay OpenAI money for a service that is provided. There's a kind of democracy there. More correctly, it's a kind of plutocracy, maybe; the people with the money in this ecosystem influence the decisions of OpenAI.

But I think practically it is not just who is paying; their free users probably matter a lot to them too. OpenAI as an entity is trying to serve these people well, because in the future some of them might actually turn into paying customers, that kind of thing.

Linch: Yeah, I think this is the type of experiment where, if you don't have massive negative externalities, or massive externalities period, then I'm happy with it. This is just part of being a good civil society. Democracy isn't everything; freedom is the underlying thing that matters more than democracy, and having lots of companies choose to do their own thing, serve their customers, and so forth enhances a lot of freedom.

I think the question does become, once you're talking about percentage points of doom, or honestly much, much less, whether it's still the same type of thing where you really want free exploration of ideas. And I don't know, maybe you do, maybe that's the safest path forward in some light or perspective, the most freedom-preserving path forward.

Austin: Yeah. The standard way by which democracy will have its say in AI is that our elected leaders in Sacramento or Washington are going to sit down, think hard, and talk to lobbyists paid for by many sides of the equation, to figure out what kind of legislation we should enact on how the development of AGI happens.

And I guess OpenAI could be more democratic if they literally conducted polls or surveys of many people in the US and around the world about how they feel. Are you thinking that's the kind of thing that's missing from what a responsible steward of AI would be doing?

Linch: I don't think that, within the incentive structure they've picked for themselves, they're being super irresponsible. I think there have been some resources put into trying to figure out how to govern AI; obviously not enough, but they have many constraints. They are thinking about how to get more inputs and so forth.

I think that's a benefit. Some of the things they do are weird, like a bunch of the lobbying things: saying that they care a lot about safety but then secretly lobbying against it. They tried pretty hard to lobby for an LLM exception in the EU AI laws, for example. Stuff like that is weird, but overall I'm not mad; I don't think the position they put themselves in is massively problematic. I think it's more just: as a civil society, do we want this type of thing to happen? Are we excited about individual companies making choices about things that might doom us all? And obviously it's probabilistic, right?

There's a tiny probability that microplastics will be really bad for humanity or whatever. Sure, that should not be enough to stop plastics companies from refining petroleum and stuff, at least not with the information we have.

Maybe we should put a bit more resources into researching it or something. But at anywhere close to, say, car-accident levels of risk of killing off humanity, I don't know, it feels odd that civil society is not putting more pressure and maybe rethinking whether we want this to be done at [01:00:00] all by private companies.

Austin: Yeah, I do expect this kind of pressure to just continue to ramp up over time as people start seeing what AI is capable of doing and tying it to their very normal takes on how bad it would be if very smart AI walked the earth. Yeah.

Other than that, I guess the critique of OpenAI as somewhat anti-democratic doesn't really resonate with me. But I can see that, yeah, there's something in this reference class, maybe of, oh, a nuclear weapon, but much more dangerous, that maybe should be a thing that democratic processes have some kind of control over, deciding

Linch: how it happens, and whether it should happen at all, and when it happens, and stuff like that.

Austin: ...are, yeah, the drivers in the space, as opposed to right now, where they're looking to play more like a check on the space. Yeah. Cool. Cool.

I think that's a good place to end it. Thanks. That was a great conversation. Yeah. Thanks for chatting.

Linch: Yeah. Hope the five people, or however many people view this, like it.

Austin: You five people, you are awesome. Yeah. Awesome.

Linch: Yeah, yeah. Some of my favorite people, I'm sure.

25 comments

I will push back on "democratic", in the sense I think Linch is pushing the term, being actually all that good a property for cosmically important orgs. See Bryan Caplan's The Myth of the Rational Voter, and the literature around social-desirability bias, for reasons why, which I'm sure Linch is familiar with, but I notice is not mentioned.

I also claim that most catastrophes through both recent and long-ago history have been caused by governments, not just in the trivial sense, but also if we normalize by amount of stuff done. A good example everyone right now should be familiar with is qualified immunity, and the effects it has on irresponsible policing. The fact is we usually hold our companies to much higher standards than our governments (or do we just have more control over the incentives of our companies than of our governments?). It is also strange that the example Linch gives for a bad company is Blackwater, which while bad, is... about par-for-the-course when it comes to CIA projects.

I note too the America-centric bias with all of these examples & comparisons. Maybe the American government is just too incompetent compared to others, and we should instead embed the project within France or Norway.

There's a general narrative that basic research is best done in government/academia, but is this true? The academia end seems possibly true: in the 20th century, most discoveries were made by academics. But there was also a significant contribution by folks at research labs started by monopolies of the period (most notably Bell Laboratories). Though this seems like the kind of thing which could turn out to be false going forward, as our universities become more bloated, and we kill off our monopolies. But in either case, I don't know why Linch thinks quality basic research will be done by the government. People like bringing up the Apollo program & Manhattan Project, but both of those were quality projects due to their applied research, not their basic research, which was all laid down ahead of time. I'm not saying it doesn't happen, but does anyone have good case studies? CERN comes to mind, but of course for projects that just require governments to throw massive amounts of money at a problem, government does well. AGI is plausibly like this, but alignment is not (though more money would be nice).

Government also tends to go slow, which I think is the strongest argument in favor of doing AGI inside a government. But also, man, I don't trust government to implement an alignment solution if such a solution is invented during the intervening time. I'm imagining trying to convince a stick-in-the-ass bureaucrat fancying himself a scientist-philosopher, whose only contribution to the project was politicking at a few important senators to thereby get himself enough authority to stand in the way of anyone changing anything about the project, who thinks he knows the solution to alignment when he is in fact wrong, and that the project should use so-and-so proven strategy, or such-and-such ensemble approach instead. Maybe a cynical picture, but one I'd bet resonates with those working to improve government processes.

I'd be interested to hear how Austin has updated regarding Sam's trustworthiness over the past few days.

Hello! Sorry for missing this comment the first time around :)

I will push back on "democratic", in the sense I think Linch is pushing the term, being actually all that good a property for cosmically important orgs. See Bryan Caplan's The Myth of the Rational Voter, and the literature around social-desirability bias, for reasons why, which I'm sure Linch is familiar with, but I notice is not mentioned.

I definitely think this is a reasonable criticism. I think my overall response is the fairly trite Churchill quote "Democracy is the worst form of government, except all the others that have been tried." I think broadly 

a) monopoly on force is good and a historical advancement in many ways

b) (liberal, democratic) governments have done an okay job with the enormous responsibility that we have handed them. 

c) I don't think corporate actors should be given this much power

d) I think I want to separate out considerations of individuals' moral goodness from what the incentives and institutions point someone towards.

di) I do think the typical OpenAI employees' values are closer to mine, and they're more competent than typical Americans, or typical gov't bureaucrats

dii) OTOH I think the US gov't has many checks and balances that private companies do not have (I think Leopold made a similar point in his most recent podcast). 

relatedly:

The fact is we usually hold our companies to much higher standards than our governments

To the extent this is true, I think it's because companies have many external checks on them (eg customers, competition, the government). I don't think I'd be comfortable with corporate actors' internal checks and balances (employees, their boards, etc) being nearly as strong as gov'ts' internal checks.

e) I agree with you that democratic governments are heavily flawed. I just think it's hard (far from impossible!) to do better, and I'm very skeptical that cosmically important organizations ought to be at what I facetiously refer to as the "forefront of corporate governance innovation." While experiments in policy/governance innovation are very useful and necessary, I think we want to minimize the number of variables that could go wrong on our first few critical tries at doing something both cosmically important and very difficult. Governments in general, and the USG in particular, have been much more battle-tested re: handling important life and death situations, in a way that AI companies very much have not been.
---

I note too the America-centric bias with all of these examples & comparisons. Maybe the American government is just too incompetent compared to others, and we should instead embed the project within France or Norway.

I think my own preferred option is an intergovernmental operation like CERN, ruled by the UN Security Council or NATO or something. I have relatively little hope that the USG will let this happen however. And I have even less hope -- vanishingly little -- that the USG will be okay with a non-US governmental project in a more "competent" country like Norway or Singapore. 

But if we wave aside the impracticality concerns, I'd also be worried about whether it's strategically wise to locate an AGI project in a smaller/more "competent" government that's less battle-tested than the US. On the object-level, I'd be very worried about information security concerns, where most of the smaller/more peacetime-competent governments might just not be robust to targeted hacks and cooption attempts (social and otherwise). On the meta-level, the lack of past experience with extreme outside pressure means we should be wary of them repeating their peacetime success "when shit hits the ceiling", even if we can't trace an exact causal mechanism for why. 

most catastrophes through both recent and long-ago history have been caused by governments

 

Interesting lens! Though I'm not sure if this is fair -- the largest things that are done tend to get done through governments, whether those things are good or bad. If you blame catastrophes like Mao's famine or Hitler's genocide on governments, you should also credit things like slavery abolition and vaccination and general decline of violence in civilized society to governments too.

I'd be interested to hear how Austin has updated regarding Sam's trustworthiness over the past few days.

Hm I feel like a bunch of people have updated majorly negatively, but I haven't -- only small amounts. I think he eg gets credit for the ScarJo thing. I am mostly withholding judgement, though; now that the NDAs have been dropped, curious to see what comes to light (if nothing does, that would be more positive credit towards Sam, and some validation to my point that NDAs were not really concealing much).

Wait, to be clear, are you saying that you think it would be to Sam's credit to learn that he forced employees to sign NDAs by straightforwardly lying to them about their legal obligations, using extremely adversarial time pressure tactics and making very intense but vague threats? 

This behavior seems really obviously indefensible.

I don't have a strong take on the ScarJo thing. I don't really see how it would be to his credit, my guess is he straightforwardly lied about his intention to make the voice sound like ScarJo, but that's of course very hard to verify, and it wouldn't be a big deal either way IMO.

kave:

Austin is saying absence of evidence is evidence of absence (in the absence of a preempting filter)

Sure, but the evidence is about the filtering that has occurred and how the filtering was conducted, not about what the filters were hiding. Threatening someone with violence to not insult you is bad independently of whether they had anything to insult you about.

kave:

What does "the evidence is about" mean? I don't think there's one piece of evidence, and I think evidence is normally relevant to multiple latent variables.

I agree that the fact there was filtering and how it was conducted is bad evidence. On the other hand, "now that the NDAs have been dropped, curious to see what comes to light (if nothing does, that would be more positive credit towards Sam [...])" seems to be talking about how the lack of something to insult Sam about is positive evidence about Sam. I don't think it's very strong evidence, fwiw, but noting that it is positive evidence seems pretty clearly distinct from saying "it would be to Sam's credit to learn that he forced employees to sign NDAs by [bad stuff]"

Sure, but Austin answered the fully general question of "how [have you] updated regarding Sam's trustworthiness over the past few days[?]" with "I haven't updated majorly negatively", in a generic tone. 

When I say "the evidence is about the filtering" I am saying "the thing that seems like the obvious update would be about would be the filtering, not what the filtering was hiding". 

I agree that one can keep a separate ledger, but to not make a major negative update on Sam in the aggregate based on the information that was released requires IMO either that one already knew about such things and had the information integrated (which would presumably result in a low opinion of Sam's conduct) or a distinct lack of moral compass (or third, a positive update that happened to mostly cancel out the negative update, though I think it would be confusing to communicate that via saying "I [updated] only small amounts"). 

kave:

I have some feeling that this back-and-forth is bad or a waste of something, but I just don't see how

[Austin is saying that] it would be to Sam's credit to learn that he forced employees to sign NDAs by straightforwardly lying to them about their legal obligations, using extremely adversarial time pressure tactics and making very intense but vague threats?

is at all a plausible interpretation, or anything like a necessary implication, of what Austin wrote.

kave:

OK, perhaps you are saying what I would phrase as "are you saying it's not greatly to Sam's discredit if he forced employees to sign ...?".

Ah, sure. I didn't mean to say much about implying a large positive update here, and mostly intended to say "are you saying it's not to any kind of substantial discredit here?".

Interesting lens! Though I'm not sure if this is fair -- the largest things that are done tend to get done through governments, whether those things are good or bad. If you blame catastrophes like Mao's famine or Hitler's genocide on governments, you should also credit things like slavery abolition and vaccination and general decline of violence in civilized society to governments too.

I do mostly[1] credit such things to governments, but the argument is about whether companies or governments are more liable to take on very large tail risks. Not about whether governments are generally good or bad. It may be that governments just like starting larger projects than corporations. But in that case, I think the claim that a greater percentage of those end in catastrophe than similarly large projects started by corporations still looks good.


  1. I definitely don't credit slavery abolition to governments, at least in America, since that industry was largely made possible in the first place by governments subsidizing the cost of chasing down runaway slaves. I'd guess general decline of violence is more attributable to generally increasing affluence, which has a range of factors associated with it, than government intervention so directly. But I'm largely ignorant on that particular subject. The "mostly" here means "I acknowledge governments do some good things". ↩︎

kave:

that industry was largely made possible in the first place by governments subsidizing the cost of chasing down runaway slaves

Whoa! Source?

I don't know the exact article that convinced me, but I bet this summary of the history of economic thought on the subject is a good place to start, which I have skimmed, and seems to cover the main points with citations.

The transcript was tough to read, I got through it, but I don't think I followed everyone's points. Tons of typos and also sometimes it would put marginal words in the wrong speaker's mouth. I think paying someone to go through this and fix typos or having it done by a person originally (I'd guess this is machine-transcribed) would've been worth it, I would've chipped in $10 for that and I'm sure many others would've.

Thanks for the feedback! I wasn't sure how much effort to put into producing this transcript (this entire podcast thing is pretty experimental); good to know you were trying to read along.

It was machine transcribed via Descript, but then I did put in another ~90min cleaning it up a bit, removing filler words and correcting egregious mistranscriptions. I could have spent another hour or so to really clean it up, and perhaps will do so next time (or find some scalable way to handle it, eg outsource or LLM). I think that put it in an uncanny valley of "almost readable, but quite a bad experience".

Yeah, sadly AFAICT it just takes hours of human time to produce good transcripts.

+1 here, I've found this to be a major pain, and just didn't do it in my last one with Eli Lifland. 

+1 that the transcript is rough. Unfortunately it's just pretty expensive to create one that's decent to read - for $1.50/min (so like $95 for this episode) you can get an OK transcript from rev.com within 24 hours, and then if you want to actually eliminate typos, you just have to go over it yourself.

I'd also encourage you to not use the descript feature that cuts out all ums and ahs - it just makes it sound super disjointed (especially when watching the video).

Ah interesting, thanks for the tips.

I use filler a lot so thought the um/ah removal was helpful (it actually cut down the recording by something like 10 minutes overall). It's especially good for making the transcript readable, though perhaps I could just edit the transcript without changing the audio/video.

I think I care about the video being easier to watch more than I care about missing the ums and ahs? But maybe I'm not appreciating how much umming you do.

Note that this podcast was recorded May 22, before Kelsey Piper’s expose on NDAs

I don't understand this - [Kelsey's initial expose](https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release) was published on the 18th. Do you mean "before her second post" or something?

Yeah I meant her second post, the one that showed off the emails around the NDAs.

I'd like to listen to something like this in principle, but it has really unfortunate timing with the further information that's been revealed, making it somewhat less exciting. It would be interesting to hear how/whether the participants' beliefs change.

The second half deals with more timeless considerations, like whether OpenAI should be embedded in a larger organization which doesn't have its main reason for existence being creating AGI, like a large company or a government.