Chris Lehane, the inventor of the original term ‘vast right wing conspiracy’ back in the 1990s to dismiss the (true) allegations against Bill Clinton by Monica Lewinsky
This is inaccurate in a few ways.
Lehane did not invent the term "vast right wing conspiracy", AFAICT; Hillary Clinton was the first person to use that phrase in reference to criticisms of the Clintons, in a 1998 interview. Some sources (including Lehane's Wikipedia page) attribute the term to Lehane's 1995 memo Communication Stream of Conspiracy Commerce, but I searched the memo for that phrase and it does not appear there. Lehane's Wikipedia page cites (and apparently misreads) this SFGate article, which discusses Lehane's memo in connection with Clinton's quote but does not actually attribute the phrase to Lehane.
The memo's use of the term "conspiracy" was about how the right spread conspiracy theories about the Clintons, not about how the right was engaged in a conspiracy against the Clintons. Its primary example involved claims about Vince Foster which it (like present-day Wikipedia) described as "conspiracy theories" (as you can see by searching the memo for the string "conspirac").
Also, Lehane's memo was published in July 1995, which was before the Clinton-Lewinsky sexual relationship began (Nov 1995), and so obviously wasn't a response to allegations about that relationship.
Lehane's memo did include some negative stories about the Clintons that turned out to be accurate, such as the Gennifer Flowers allegations. So there is some legitimate criticism of Lehane's memo, including how it presented all of these negative stories as part of a pipeline for spreading unreliable allegations about the Clintons, and didn't take seriously the possibility that they might be accurate. But it doesn't look like his work was mainly focused on dismissing true allegations.
I wonder if seeking a general protective order banning OpenAI from further subpoenas of nonprofits without court review is warranted for the case - that seems like a good first step, and an appropriate precedent for the overwhelmingly likely later cases, given OpenAI's behavior.
A little over a month ago, I documented how OpenAI had descended into paranoia and bad faith lobbying surrounding California’s SB 53.
This included sending a deeply bad faith letter to Governor Newsom, which sadly is par for the course at this point.
It also included lawfare attacks against bill advocates, including Nathan Calvin and others, using Elon Musk’s unrelated lawsuits and vendetta against OpenAI as a pretext, accusing them of being in cahoots with Elon Musk.
Previous reporting of this did not reflect well on OpenAI, but it sounded like the demand was limited in scope to a supposed link with Elon Musk or Meta CEO Mark Zuckerberg, links which very clearly never existed.
Accusing essentially everyone who has ever done anything OpenAI dislikes of having united in a hallucinated ‘vast conspiracy’ is all classic behavior for OpenAI’s Chief Global Affairs Officer Chris Lehane, the inventor of the original term ‘vast right wing conspiracy’ back in the 1990s to dismiss the (true) allegations against Bill Clinton by Monica Lewinsky. It was presumably mostly or entirely an op, a trick. And if they somehow actually believe it, that’s way worse.
We thought that this was the extent of what happened.
Now that SB 53 has passed, Nathan Calvin is free to share the full story.
It turns out it was substantially worse than previously believed.
And then, in response, OpenAI CSO Jason Kwon doubled down on it.
What OpenAI Tried To Do To Nathan Calvin
Here is the key passage from the Chris Lehane statement Nathan quotes, which, shall we say, does not correspond to the reality of what happened (as I documented last time, Nathan's highlighted passage is bolded):
It Doesn’t Look Good
Let’s not get carried away. Elon Musk has been engaging in lawfare against OpenAI, where many (but importantly not all, the exception being his challenge to the conversion to a for-profit) of his lawsuits have lacked legal merit, and making various outlandish claims. OpenAI being a bad actor against third parties does not excuse that.
OpenAI and Sam Altman do a lot of very good things that are much better than I would expect from the baseline (replacement level) next company or next CEO up, such as a random member or CEO of the Mag-7.
They will need to keep doing this and further step up, if they remain the dominant AI lab, and we are to get through this. As Samuel Hammond says, OpenAI must be held to a higher standard, not only legally but across the board.
Alas, not only is that not a high enough standard for the unique circumstances history has thrust upon them, especially on alignment, but OpenAI and Sam Altman also do a lot of things that are highly not good, and in many cases actively worse than my expectations for replacement-level behavior. These actions are an example of that. And in this and several other key ways, especially in terms of public communications and lobbying, OpenAI’s and Altman’s behavior has been getting steadily worse.
OpenAI’s Jason Kwon Responds
Rather than an apology, this response is what we like to call ‘doubling down.’
Elon Musk has indeed repeatedly sued OpenAI, and many of those lawsuits are without legal merit, but if you think the primary purpose of him doing that is his own financial benefit, you clearly know nothing about Elon Musk.
No, it doesn’t, because this action is overdetermined once you know what the lawsuit is about. OpenAI is trying to pull off one of the greatest thefts in human history, the ‘conversion’ to a for-profit in which it will attempt to expropriate the bulk of its non-profit arm’s control rights as well as the bulk of its financial stake in the company. This would be very bad for AI safety, so AI safety organizations are trying to stop it, and thus support this particular Elon lawsuit against OpenAI, which the judge noted had quite a lot of legal merit, with the primary question being whether Musk has standing to sue.
This went well beyond that, and the judge admonished you for how far your discovery attempts overreached. It takes a lot to get judges to use such language.
Again, this does not at all line up with the requests being made.
You opposed SB 53. What are you even talking about? Have you seen the letter you sent to Newsom? Doubling down on this position, and drawing attention to this deeply bad faith lobbying by doing so, is absurd.
He provides PDFs; here is the transcription:
(He then shares a tweet about SB 1047, where OpenAI tells employees they are free to sign a petition in support of it, which raises questions answered by the tweet.)
Excellent. Thank you, sir, for the full request.
There is a community note:
A Brief Amateur Legal Analysis Of The Request
Before looking at others’ reactions to Kwon’s statement, here’s how I view each of the nine requests, with the help of OpenAI’s own GPT-5 Thinking (I like to only use ChatGPT when analyzing OpenAI in such situations, to ensure I’m being fully fair), but really the confirmed smoking gun is #7:
Given that Calvin quoted #7 as the problem and he’s confirming #7 as quoted, I don’t see how Kwon thought the full text would make it look better, but I always appreciate transparency.
Oh, also, there is another.
What OpenAI Tried To Do To Tyler Johnston
My model of Kwon’s response to this was that it would be ‘if you care so much about the restructuring, that means we suspect you’re involved with Musk,’ and thus that they’re entitled to ask for everything related to OpenAI.
We now have Jason Kwon’s actual response to the Johnston case, which is that Tyler ‘backed Elon’s opposition to OpenAI’s restructuring.’ So yes, nailed it.
Also, yep, he’s tripling down.
If you find yourself in a hole, sir, the typical advice is to stop digging.
He also helpfully shared the full subpoena given to Tyler Johnston. I won’t quote this one in full as it is mostly similar to the one given to Calvin. It includes (in addition to various clauses that aim more narrowly at relationships to Musk or Meta that don’t exist) a request for all funding sources of the Midas Project, all documents concerning the governance or organizational structure of OpenAI or any actual, contemplated, or potential change thereto, or concerning any potential investment by a for-profit entity in OpenAI or any affiliated entity, or any such funding relationship of any kind.
Nathan Compiles Responses to Kwon
Rather than respond himself to Kwon’s first response, Calvin instead quoted many people responding to the information similarly to how I did. This seems like a very one-sided situation. The response is damning, if anything substantially more damning than the original subpoena.
I’ll also throw in this one:
How unusual was this?
I think ‘scorched Earth tactics’ is pushing it, but I wouldn’t say it was extremely hyperbolic, and the claim of never having heard of a company behaving like this seems highly relevant.
The First Thing We Do
Lawyers will often do crazy escalations by default any time you’re not looking, and need to be held back. Insane demands can be, in an important sense, unintentional.
That’s still on you, especially if (as in the NDAs and threats over equity that Daniel Kokotajlo exposed) you have a track record of doing this. If it keeps happening on your watch, then you’re choosing to have that happen on your watch.
The other problem with this explanation is Kwon’s response.
If Kwon had responded with, essentially, “oh whoops, sorry, that was a bulldog lawyer mauling people, our bad, we should have been more careful,” then they still did it, and it was still not the first time it happened on their watch, but I’d have been willing to not make it that big a deal.
That is very much not what Kwon said. Kwon doubled down that this was reasonable, and that this was ‘a routine step.’
My understanding is that ‘send subpoenas at all’ is totally a routine step, but that the scope of these requests within the context of an amicus brief is quite the opposite.
Michael Page also strongly claims this is not normal.
It would be a real shame if, as a result of Kwon’s rhetoric, we shared these links a lot. If everyone who reads this were to, let’s say, familiarize themselves with what content got all these people at OpenAI so upset.
OpenAI Head of Mission Alignment Joshua Achiam Speaks Out
We all owe Joshua Achiam a large debt of gratitude for speaking out about this.
Well said. I have strong disagreements with Joshua Achiam about the expected future path of AI and difficulties we will face along the way, and the extent to which OpenAI has been a good faith actor fighting for good, but I believe these to be sincere disagreements, and this is what it looks like to call out the people you believe in, when you see them doing something wrong.
I agree with Charles on all these fronts.
If you could speak out this strongly against your employer, from Joshua’s position, with confidence that they wouldn’t hold it against you, that would be remarkable and rare. It would be especially surprising given what we already know about past OpenAI actions; very obviously Joshua is taking a risk here.
It Could Be Worse
At least OpenAI (and xAI) are (at least primarily) using the courts to engage in lawfare rather than actual warfare or other extralegal means, or any form of trying to leverage their control over their own AIs. Things could be so much worse.
In principle, if OpenAI is legally entitled to information, there is nothing wrong with taking actions whose primary goal is to extract that information. When we believed that the subpoenas were narrowly targeted at items directly related to Musk and Meta, I still felt this did not seem like info they were entitled to, and it seemed like some combination of intimidation (‘the process is the punishment’), paranoia and a fishing expedition, but if they did have that paranoia I could understand their perspective in a sympathetic way. Given the full details and extent, I can no longer do that.
Chris Lehane Is Who We Thought He Was
Wherever else and however deep the problems go, they include Chris Lehane. Chris Lehane is also the architect of a16z’s $100 million+ Super PAC dedicated to opposing any and all regulation of AI, of any kind, anywhere, for any reason.
If OpenAI wants to convince us that it wants to do better, it can fire Chris Lehane. Doing so would cause me to update substantially positively on OpenAI.
A Matter of Distrust
There have been various incidents that suggest we should distrust OpenAI, or that they are not being a good faith legal actor.
Joshua Achiam highlights one of those incidents. He points out one thing that is clearly to OpenAI’s credit in that case: Once Daniel Kokotajlo went public with what was going on with the NDAs and threats to confiscate OpenAI equity, OpenAI swiftly moved to do the right thing.
However much you do or do not buy their explanation for how things got so bad in that case, making it right once pointed out mitigated much of the damage.
In other major cases of damaging trust, OpenAI has simply stayed silent. They buried the investigation into everything related to Sam Altman being briefly fired, including Altman’s attempts to remove Helen Toner from the board. They don’t talk about the firings and departures of so many of their top AI safety researchers, or of Leopold. They buried most mention of existential risk or even major downsides or life changes from AI in public communications. They don’t talk about their lobbying efforts (as most companies do not, for similar and obvious reasons). They don’t really attempt to justify the terms of their attempted conversion to a for-profit, which would largely de facto disempower the non-profit and be one of the biggest thefts in human history.
Silence is par for the course in such situations. It’s the default. It’s expected.
Here Jason Kwon is, in what seems like an official capacity, not only not apologizing or fixing the issue, but repeatedly doing the opposite of what they did in the NDA case and doubling down on OpenAI’s actions. He is actively defending OpenAI’s actions as appropriate, justified and normal, and continuing to misrepresent what OpenAI did regarding SB 53 and to imply that anyone opposing them should be suspected of being in league with Elon Musk, or, worse, Mark Zuckerberg.
OpenAI, via Jason Kwon, has said, yes, this was the right thing to do. One is left with the assumption this will be standard operating procedure going forward.
There was a clear opportunity, and to some extent still is an opportunity, to say ‘upon review we find that our bulldog lawyers overstepped in this case, we should have prevented this and we are sorry about that. We are taking steps to ensure this does not happen again.’
If they had taken that approach, this incident would still have damaged trust, especially since it is part of a pattern, but far less so than what happened here. If that happens soon after this post, and it comes from Altman, from that alone I’d be something like 50% less concerned about this incident going forward, even if they retain Chris Lehane.