These investors were Dustin Moskovitz, Jaan Tallinn and Sam Bankman-Fried
nitpick: SBF/FTX did not participate in the initial round - they bought $500M worth of non-voting shares later, after the company was well on its way.
more importantly, i often get the criticism that "if you're concerned with AI then why do you invest in it". even though the critics usually (and incorrectly) imply that the AI would not happen (at least not nearly as fast) if i did not invest, i acknowledge that this is a fair criticism from the FDT perspective (as witnessed by wei dai's recent comment how he declined the opportunity to invest in anthropic).
i'm open to improving my policy (which is - empirically - also correlated with the respective policies of dustin as well as FLI) of - roughly - "invest in AI and spend the proceeds on AI safety" -- but the improvements need to take into account that a) prominent AI founders have no trouble raising funds (in most of the alternative worlds anthropic is VC funded from the start, like several other openAI offshoots), b) the volume of my philanthropy is correlated with my net worth, and c) my philanthropy is more needed in the worlds where AI progresses faster.
EDIT: i appreciate the post otherwise -- upvoted!
i acknowledge that this is a fair criticism from the FDT perspective (as witnessed by wei dai's recent comment how he declined the opportunity to invest in anthropic).
To clarify a possible confusion, I do not endorse using "FDT" (or UDT or LDT) here, because the state of decision theory research is such that I am very confused about how to apply these decision theories in practice, and personally mostly rely on a mix of other views about rationality and morality, including standard CDT-based game theory and common sense ethics.
(My current best guess is that there is minimal "logical correlation" between humans so LDT becomes CDT-like when applied to humans, and standard game theory seems to work well enough in practice or is the best tool that we currently have when it comes to multiplayer situations. Efforts to ground human moral/ethical intuitions on FDT-style reasoning do not seem very convincing to me so far, so I'm just going to stick with the intuitions themselves for now.)
In this particular case, I mainly wanted to avoid signaling approval of Anthropic's plans and safety views or getting personally involved in activities that increase x-risk in my judgement. Avoiding conflicts of interest (becoming biased in favor of Anthropic in my thoughts and speech) was also an important consideration.
My suspicion is that if we were to work out the math behind FDT (and it's up in the air right now whether this is even possible) and apply it to humans, the appropriate reference class for a typical human decision would be tiny, basically just copies of oneself in other possible universes.
One reason for suspecting this is that humans aren't running clean decision theories, but have all kinds of other considerations and influences impinging on their decisions. For example psychological differences between us around risk tolerance and spending/donating money, different credences for various ethical ideas/constraints, different intuitions about AI safety and other people's intentions, etc., probably make it wrong to think of us as belonging to the same reference class.
indeed, illiquidity is a big constraint to my philanthropy, so in very short timelines my “invest (in startups) and redistribute” policy does not work too well.
There is a question about whether the safety efforts your money supported at or around the companies ended up compensating for the developments
yes. more generally, sign uncertainty sucks (and is a recurring discussion topic in SFF round debates).
It seems that if Dustin and you had not funded Series A of Anthropic, they would have had a harder time starting up.
they certainly would not have had a harder time setting up the company, nor getting the equivalent level of funding (perhaps even at a better valuation). it’s plausible that pointing to “aligned” investors helped with initial recruiting — but that’s unclear to me. my model of dario/founders just did not want the VC profit-motive to play a big part in the initial strategy.
Does this have to do with liquidity issues or something else?
yup, liquidity (also see the comments below), crypto prices, and about half of my philanthropy not being listed on that page. also SFF s-process works with aggregated marginal value functions, so there is no hard cutoff (hence the “evaluators could not make grants that they wanted to” sentence makes less sense than in traditional “chunky and discretionary” philanthropic context).
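For readers unfamiliar with the s-process mentioned above, here is a minimal sketch of why allocation from aggregated marginal value functions produces no hard cutoff — my own toy illustration of the idea, not the actual SFF implementation:

```python
# Toy illustration (NOT the actual SFF s-process code): each organisation
# gets a diminishing marginal-value function, and dollars are allocated
# greedily to whichever org currently values the next dollar most. Funding
# tapers off smoothly instead of hitting an all-or-nothing cutoff.

def allocate(marginal_values, budget, step=1.0):
    """Greedily allocate `budget` in increments of `step`.

    marginal_values: dict mapping org -> f(amount_so_far), the marginal
    value of the next dollar (assumed decreasing in amount_so_far).
    """
    allocation = {org: 0.0 for org in marginal_values}
    spent = 0.0
    while spent < budget:
        # Pick the org whose next increment is most valuable right now.
        best = max(marginal_values, key=lambda o: marginal_values[o](allocation[o]))
        if marginal_values[best](allocation[best]) <= 0:
            break  # no org values further funding
        allocation[best] += step
        spent += step
    return allocation

# Two hypothetical orgs with different diminishing returns:
mv = {
    "org_a": lambda x: 10.0 / (1.0 + x),
    "org_b": lambda x: 6.0 / (1.0 + 0.5 * x),
}
result = allocate(mv, budget=100.0)
# Both orgs end up partially funded; neither faces a hard cutoff.
```

Under this kind of scheme, whether a grant "happens" is a matter of degree, which is why the "evaluators could not make grants that they wanted to" framing fits awkwardly.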
1. i agree. as wei explicitly mentions, signalling approval was a big reason why he did not invest, and it definitely gave me pause, too (i had a call with nate & eliezer on this topic around that time). still, if i try to imagine a world where i declined to invest, i don't see it being obviously better (ofc it's possible that the difference is still yet to reveal itself).
concerns about startups being net negative are extremely rare (outside of AI, i can't remember any other case -- though it's possible that i'm forgetting some). i believe this is the main reason why VCs and SV technologists tend to be AI xrisk deniers (another being that it's harder to fundraise as a VC/technologist if you have sign uncertainty) -- their prior is too strong to consider AI an exception. a couple of years ago i was at an event in SF where top tech CEOs talked about wanting to create "lots of externalities", implying that externalities can only be positive.
2. yeah, the priorities page is now more than a year old and in bad need of an update. thanks for the criticism -- fwded to the people drafting the update.
I joined Anthropic in 2021 because I thought it was an extraordinarily good way to help make AI go well for humanity, and I have continued to think so. If that changed, or if any of my written lines were crossed, I'd quit.
I think many of the factual claims in this essay are wrong (for example, neither Karen Hao nor Max Tegmark are in my experience reliable sources on Anthropic); we also seem to disagree on more basic questions like "has Anthropic published any important safety and interpretability research", and whether commercial success could be part of a good AI Safety strategy. Overall this essay feels sufficiently one-sided and uncharitable that I don't really have much to say beyond "I strongly disagree, and would have quit and spoken out years ago otherwise".
I regret that I don't have the time or energy for a more detailed response, but thought it was worth noting the bare fact that I have detailed views on these issues (including a lot of non-public information) and still strongly disagree.
if any of my written lines were crossed, I'd quit.
Just out of curiosity, what are your written lines? I am not sure whether this was intended as a reference to lines you wrote yourself internally, or something you feel comfortable sharing. No worries if not, I would just find it helpful for orienting to things.
These are personal commitments which I wrote down before I joined, or when the topic (e.g. RSP and LTBT) arose later. Some are 'hard' lines (if $event happens); others are 'soft' (if in my best judgement ...) and may say something about the basis for that judgement - most obviously that I won't count my pay or pledged donations as a reason to avoid leaving or speaking out.
I'm not comfortable giving a full or exact list (cf), but a sample of things that would lead me to quit:
[Feel free not to respond / react with the "not worth getting into" emoji"]
Severe or willful violation of our RSP, or misleading the public about it.
Should this be read as "[Severe or willful violation of our RSP] or [misleading the public about the RSP]" or should this be read as "[Severe or willful violation of our RSP] or [Severe or willful misleading the public about the RSP]".
In my views/experience, I'd say there are instances where the public and (perhaps more strongly) Anthropic employees have been misled about the RSP somewhat willfully (e.g., there's an obvious well known important misconception that is convenient for Anthropic leadership and that wasn't corrected), though I guess I wouldn't consider this to be a severe violation.
If the LTBT was abolished without a good replacement.
Curious how you'd relate to the claim that "the LTBT isn't applying any meaningful oversight, Anthropic leadership has strong control over board appointments, and this isn't on track to change (and Anthropic leadership isn't really trying to change this)". I'd say this is the current status quo. This is kind of a tricky thing to do well, but it doesn't really seem from the outside like Anthropic is actually trying on this. (Which is maybe a reasonable choice, because idk if the LTBT was ever really that important given realistic constraints, but you seem to think it is important.)
I certainly agree that the LTBT has de jure control (or as you say "formal control").
By "strong control" I meant more precisely something like: "lots of influence in practice, e.g. the influence of Anthropic leadership over appointments is comparable to the influence that the LTBT itself is exerting in practice, or comparable (though probably less) to the influence that (e.g.) Sam Altman has had over recent board appointments at OpenAI". Perhaps "it seems like they have a bunch of control" would have been a more accurate way to put things.
I think it would be totally unsurprising for the LTBT to have de jure power but not that much de facto power (given the influence of Anthropic leadership) and from the outside it sure looks like this is the case at the moment.
See this Time article, which was presumably explicitly sought out by Anthropic to reassure investors in the aftermath of the OpenAI board crisis, in which Brian Israel (at the time general counsel at Anthropic) is paraphrased as repeatedly saying (to investors) "what happened at OpenAI can't happen to us". The article (again, likely explicitly sought out by Anthropic as far as I can tell) also says "it also means that the LTBT...
The Time article is materially wrong about a bunch of stuff
Agreed which is why I noted this in my comment.[1] I think it's a bad sign that Anthropic seemingly actively sought out an article that ended up being wrong/misleading in a way which was convenient for Anthropic at the time and then didn't correct it.
I really don't want to get into pedantic details, but there's no "supposed to" time for LTBT board appointments, I think you're counting from the first day they were legally able to appoint someone. Also https://www.anthropic.com/company lists five board members out of five seats, and four Trustees out of a maximum five. IMO it's fine to take a few months to make sure you've found the right person!
First, I agree that there isn't a "supposed to" time, my wording here was sloppy, sorry about that.
My understanding was that there was a long delay (e.g. much longer than a few months) between the LTBT being able to appoint a board member and actually appointing such a member, and a long time where the LTBT only had 3 members. I think this long of a delay is somewhat concerning.
My understanding is that the LTBT could still decide one more seat (so that it determines a majority...
Severe or willful violation of our RSP, or misleading the public about it.
I'm guessing you don't consider the v3 updates to be a willful violation of the RSP, but, I don't really know why.
I can tell a story that's like "well, it is still following a process for changing the RSP that was laid out in advance". But I'd be pretty surprised if you thought "go through the motions of changing it in advance" is load-bearing for whether Anthropic is good, if the RSP was predictably going to get changed when it became too costly.
But, if you don't consider the v3 updates a spiritual violation I don't really know what could count as a violation that is more specific than "you think overall Anthropic is net bad for the world", at which point, I don't know what work "the RSP" would be doing in your personal accounting.
I know it's hard to articulate exact details for things like this, and I know you're tired[1] of these convos, but, I would appreciate some kind of example of what would actually count as a willful RSP violation in your book, that is particularly different from "the leadership not being trustworthy or having bad judgment."
I'm sympathetic about people complaining a bunch without understanding...
Sure, thanks for noting it.
You're always welcome to point out any factual claim that was incorrect. I had to mostly go off public information. So I can imagine that some things I included are straight-up wrong, or lack important context, etc.
Periodically I've considered writing a post similar to this. One piece I think this post doesn't fully dive into is "did Anthropic have a commitment not to push the capability frontier?".
I had once written a doc aimed at Anthropic employees, during SB 1047 Era, when I felt like Anthropic was advocating for changes to the law that were hard to interpret un-cynically.[1] I've had a vague intention to rewrite this into a more public-facing thing, but, for now I'm just going to lift out the section talking about the "pushing the capability frontier" thing.
...When I chatted with several anthropic employees at the happy hour a
couple months~year ago, at some point I brought up the “Dustin Moskovitz’s earnest belief was that Anthropic had an explicit policy of not advancing the AI frontier” thing. Some employees have said something like “that was never an explicit commitment. It might have been a thing we were generally trying to do a couple years ago, but that was more like ‘our de facto strategic priorities at the time’, not ‘an explicit policy or commitment.’” When I brought it up, the vibe in the discussion-circle was “yeah, that is kinda weird, I don’t know what happened there.”
I keep checking back here to see if people have responded to this seemingly cut and dry breach of promise by the leadership, but the lack of commentary is somewhat worrying.
I am in the camp that thinks that it is very good for people concerned about AI risk to be working at the frontier of development. I think it's good to criticize frontier labs who care and pressure them but I really wish it wasn't made with the unhelpful and untrue assertion that it would be better if Anthropic hadn't been founded or supported.
The problem, as I argued in this post, is that people way overvalue accelerating timelines and seem willing to make tremendous sacrifices just to slow things down a small amount. If you advocate that people concerned about AI risk avoid working on AI capabilities, the first order effect of this is filtering AI capability researchers so that they care less about AI risk. Slowing progress down is a smaller, second order effect. But many people seem to take it for granted that completely ceding frontier AI work to people who don't care about AI risk would be preferable because it would slow down timelines! This seems insane to me. How much time would possibly need to be saved for that to be worth it?
To try to get to our crux: I've found that caring significantly about accelerating timelines seems to hinge on a very particular view of alignment w...
DeepMind was funded by Jaan Tallinn and Peter Thiel
i did not participate in DM's first round (series A) -- my investment fund invested in series B and series C, and ended up with about 1% stake in the company. this sentence is therefore moderately misleading.
Wow. There's a very "room where it happens" vibe about this post. Lots of consequential people mentioned, and showing up in the comments. And it's making me feel like...
Like, there was this discussion club online, ok? Full of people who seemed to talk about interesting things. So I started posting there too, did a little bit of math, got invited to one or two events. There was a bit of money floating around too. But I always stayed a bit at arms length, was a bit less sharp than the central folks, less smart, less jumping on opportunities.
And now that folks from the same circle essentially ended up doing this huge consequential thing - the whole AI thing I mean, not just Anthropic - and many got rich in the process... the main feeling in my mind isn't envy, but relief. That my being a bit dull, lazy and distant saved me from being part of something very ugly. This huge wheel of history crushing the human form, and I almost ended up pushing it along, but didn't.
Or as Mike Monteiro put it:
...Tech, which has always made progress in astounding leaps and bounds, is just speedrunning the cycle faster than any industry we’ve seen before. It’s gone from good vibes, to a real thing, to unico
Oh, damn. I feel so... Sad. For everyone. For the people who they once were, before moloch ate their brains. For us, now staring into the maw of the monster. For the world.
Thanks for this.
A minor comment and a major one:
The minor comment: the section on the Israeli military's use of AI against Hamas could use some tightening to avoid getting bogged down in the particularities of the Palestine situation. The line "some of the surveillance tactics Israeli settlers tested in Palestine" (my emphasis) to me suggests the interpretation that all Israelis are "settlers," which is not the conventional use of that term. The conventional use of "settlers" applies only to those Israelis living over the Green Line, and particularly those doing so with the ideological intent of expanding Israel's de facto borders. Similarly but separately, the discussion of Microsoft's response seemed to me to take as facts what I believe to still be only allegations.
The major comment: I feel you could go farther to connect the dots between the "enshittification" of Anthropic and the issues you raise about the potential of AI to help enshittify democratic regimes. The idea that there are "exogenously" good and bad guys, with the former being trustworthy to develop A(G)I and the latter being the ones "we" want to stop from winning the race, is really central to AI discourse. You've pointed out the pattern in which participating in the race turns the "good" guys into bad guys (or at least untrustworthy ones).
This is an excellent write-up. I'm pretty new to the AI safety space, and as I've been learning more (especially with regards to the key players involved), I have found myself wondering why more people do not view Dario with a more critical lens. As you detailed, it seems like he was one of the key engines behind scaling, and I wonder if AI progress would have advanced as quickly as it did if he had not championed it. I'm curious to know if you have any plans to write up an essay about OpenPhil and the funding landscape. I know you mentioned Holden's investments into Anthropic, but another thing I've noticed as a newcomer is just how many safety organizations OpenPhil has helped to fund. Anecdotally, I have heard a few people in the community complain that they feel that OpenPhil has made it more difficult to publicly advocate for AI safety policies because they are afraid of how it might negatively affect Anthropic.
Despite the shift, 80,000 Hours continues to recommend talented engineers to join Anthropic.
FWIW, it looks to me like they restrict their linked roles to things that are vaguely related to safety or alignment. (I think that the 80,000 Hours job board does include some roles that don't have a plausible mechanism for improving AI outcomes except via the route of making Anthropic more powerful, e.g. the alignment fine-tuning role.)
Wait, why did this get moved to personal blog?
Just surprised because this is actually a long essay I tried to carefully argue through. And the topic is something we can be rational about.
Thanks for sharing openly. I want to respect your choice here as moderator.
Given that you think this was not obvious, could you maybe take another moment to consider?
This seems a topic that is actually important to discuss. I have tried to focus as much as possible on arguing based on background information.
(Another mod leaned in the other direction, and I do think there's like, this is pretty factual and timeless, and Dario is more of a public figure than an inside-baseball lesswrong community member, so, seemed okay to err in the other direction. But still flagging it as an edge case for people trying to intuit the rules)
Title is confusing and maybe misleading, when I see "accelerationists" I think either e/acc or the idea that we should hasten the collapse of society in order to bring about a communist, white supremacist, or other extremist utopia. This is different from accelerating AI progress and, as far as I know, not the motivation of most people at Anthropic.
(Cross-posted from EA Forum): I think you could have strengthened your argument here further by talking about how even in Dario's op-ed opposing the ban on state-level regulation of AI, he specifically says that regulation should be "narrowly focused on transparency and not overly prescriptive or burdensome". That seems to indicate opposition to virtually any regulations that would actually directly require doing anything at all to make models themselves safer. It's demanding that regulations be more minimal than even the watered-down version of SB 1047 that Anthropic publicly claimed to support.
Insightful, thanks. Minor point:
The rationale of reducing hardware overhang is flawed: [...] It assumes that ‘AGI’ is inevitable and/or desirable. Yet new technologies can be banned (especially when still unprofitable and not depended on by society).
Not enamored with the reducing-hardware-overhang argument either, but to essentially imply we'd have much evidence that advances in AI were preventable in today's current econ & geopolitical environment seems rather bogus to me - and the linked paper certainly does not provide much evidence to support t...
In 2021, a circle of researchers left OpenAI, after a bitter dispute with their executives. They started a competing company, Anthropic, stating that they wanted to put safety first. The safety community responded with broad support. Thought leaders recommended engineers to apply, and allied billionaires invested.[1]
Anthropic’s focus has shifted – from internal-only research and cautious demos of model safety and capabilities, toward commercialising models for Amazon and the military.
Despite the shift, 80,000 Hours continues to recommend talented engineers to join Anthropic.[2] On the LessWrong forum, many authors continue to support safety work at Anthropic, but I also see side-conversations where people raise concerns about premature model releases and policy overreaches. So, a bunch of seemingly conflicting opinions about work by different Anthropic staff, and no overview. But the bigger problem is that we are not evaluating Anthropic on its original justification for existence.
Did early researchers put safety first? And did their work set the right example to follow, raising the prospect of a ‘race to the top’? If yes, we should keep supporting Anthropic. Unfortunately, I argue, it’s a strong no.
From the get-go, these researchers acted in effect as moderate accelerationists. They picked courses of action that significantly sped up and/or locked in AI developments, while offering flawed rationales of improving safety.
Some limitations of this post:
I skip many nuances. The conclusion seems roughly right though, because of overdetermination. Two courses of action – scaling GPT rapidly under a safety guise, starting a ‘safety-first’ competitor that actually competed on capabilities – each shortened timelines so much that no other actions taken could compensate. Later actions at Anthropic were less bad but still worsened the damage.[3]
I skip details of technical safety agendas because these carry little to no weight. As far as I see, there was no groundbreaking safety progress at or before Anthropic that can justify the speed-up that their researchers caused. I also think their minimum necessary aim is intractable (controlling ‘AGI’ enough, in time or ever, to stay safe[4]).
I fail to mention other positive contributions made by Anthropic folks to the world.[5] This feels unfair. If you joined Anthropic later, this post is likely not even about your work, though consider whether you're okay with following your higher-ups.
→ If you disagree with this perspective, then section 4 and 5 are less useful for you.
Let's dig into five courses of action:
1. Scaled GPT before founding Anthropic
Dario Amodei co-led the OpenAI team that developed GPT-2 and GPT-3. He, Tom Brown, Jared Kaplan, Benjamin Mann, and Paul Christiano were part of a small cohort of technical researchers responsible for enabling OpenAI to release ChatGPT.
This is covered in a fact-checked book by the journalist Karen Hao. I was surprised by how large the role of Dario was, whom for years I had seen as a safety researcher. His scaling of GPT was instrumental, not only in setting Dario up for founding Anthropic in 2021, but also in setting off the boom after ChatGPT.
So I’ll excerpt from the book, to provide the historical context for the rest of this post:
Dario Amodei insisted on scaling fast, even as others suggested a more gradual approach. It’s more than that – his circle actively promoted it. Dario’s collaborator and close friend, Jared Kaplan, led a project to investigate the scaling of data, compute, and model size.
In January 2020, Jared and Dario published the Scaling Laws paper along with Tom Brown and Sam McCandlish (later CTO at Anthropic). Meaning that a majority of Anthropic's founding team of seven people were on this one paper.
None of this is an infohazard, but it does pull the attention – including of competitors – toward the idea of scaling faster. This seems reckless – if you want to have more gradual development so you have time to work on safety, then what is the point? There is a scientific interest here, but so was there in scaling the rate of fission reactions. If you go ahead publishing anyway, you’re acting as a capability researcher, not a safety researcher.
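To see why publishing it pulls attention toward scaling: the paper's headline result is that test loss falls off smoothly and predictably as a power law in model size, data, and compute. A schematic sketch of the functional form (the exponent and constant below are illustrative stand-ins, not the paper's fitted values):

```python
# Schematic of the power-law relationship reported in scaling-laws work:
# loss decreases smoothly as a power law of the resource being scaled.
# The constant and exponent here are illustrative, not fitted values.

def power_law_loss(n_params, n_c=1e13, alpha=0.08):
    # L(N) = (N_c / N) ** alpha, for parameter count N.
    return (n_c / n_params) ** alpha

# Each 10x jump in parameters shaves loss at a predictable rate -- which is
# exactly what makes "just scale more" an attractive bet for a competitor:
losses = [power_law_loss(n) for n in (1e8, 1e9, 1e10)]
```

A curve this smooth is, in effect, an investment thesis: it tells any reader with capital roughly what they will get for spending more.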
This was not the first time.
In June 2017, Paul Christiano, who later joined Anthropic as trustee, published about a technique he invented, reinforcement learning from human feedback. His co-authors include Dario and Tom – as well as Jan Leike, who joined Anthropic later.
Here is the opening text:
The authors here emphasised making agents act usefully by solving tasks cheaply enough.
Recall that Dario joined forces on developing GPT because he wanted to apply RLHF to non-toy-environments. This allowed Dario and Paul to make GPT usable in superficially safe ways and, as a result, commercialisable. Paul later gave justifications why inventing RLHF and applying this technique to improving model functionality had low downside. There are reasons to be skeptical.
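For context on the technique itself: at the core of the 2017 RLHF paper is a reward model trained on human preference comparisons between pairs of behavior samples. A minimal sketch of that comparison objective (toy scalar rewards stand in for the learned reward model; this is my simplified rendering, not the paper's code):

```python
import math

# Minimal sketch of the preference-comparison objective underlying RLHF:
# the reward model is trained so that the segment a human preferred gets a
# higher score, via a Bradley-Terry style negative log-likelihood.

def preference_loss(r_preferred, r_rejected):
    # -log P(preferred wins), where P is a logistic function of the
    # difference in predicted rewards.
    return -math.log(1.0 / (1.0 + math.exp(-(r_preferred - r_rejected))))

# The loss shrinks as the model ranks the preferred segment higher:
high_margin = preference_loss(2.0, -1.0)
low_margin = preference_loss(0.1, 0.0)
```

The policy is then optimised against the learned reward, which is precisely what lets models be tuned for the kind of surface-level acceptability that made GPT commercialisable.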
In December 2020, Dario’s team published the paper that introduced GPT-3. Tom is the first author of the paper, followed by Benjamin Mann, another Anthropic founder.
Here is the opening text:
To me, this reads like the start of a recipe for improving capabilities. If your goal is actually to prevent competitors from accelerating capabilities, why tell them the way?
But by that point, the harm had already been done, as covered in Karen Hao’s book:
So GPT-3 – as scaled by Dario’s team and linked up to an API – had woken up capability researchers at other labs, even though their executives were not yet budging on strategy.
Others were alarmed and advocated internally against scaling large language models. These, however, were not AGI safety researchers but critical AI researchers, like Dr. Timnit Gebru.
Timnit Gebru had collaborated on a paper that led to her expulsion by Google leaders (namely Jeff Dean); it was published in March 2021. Notice the contrast to earlier quoted opening texts:
This matches how I guess careful safety researchers write. Cover the past architectural innovations but try not to push for more. Focus on risks and paths to mitigate those risks.
Instead, Dario's circle acted as capability researchers at OpenAI. At the time, at least three rationales were given for why scaling capabilities is a responsible thing to do:
Rationale #1: ‘AI progress is inevitable’
Dario’s team expected that if they did not scale GPT, this direction of development would have happened soon enough at another company anyway. This is questionable.
Even the originator of transformers, Google, refrained from training on copyrighted text. Training on a library-sized corpus was unheard of. Even after the release of GPT-3, Jeff Dean, head of AI research at the time, failed to convince Google executives to ramp up investment into LLMs. Only after ChatGPT was released did Google toggle to ‘code red’.
Chinese companies would not have started what OpenAI did, Karen Hao argues:
Only Dario and collaborators were massively scaling transformers on texts scraped from pirated books and webpages. If the safety folks had refrained, scaling would have been slower. And OpenAI may have run out of compute – since if it had not scaled so fast to GPT-2+, Microsoft might not have made the $1 billion investment, and OpenAI would not have been able to spend most of it on discounted Azure compute to scale to GPT-3.
Karen Hao covers what happened at the time:
No other company was prepared to train transformer models on text at this scale. And it’s unclear whether OpenAI would have gotten to a ChatGPT-like product without the efforts of Dario and others in his safety circle. It’s not implausible that OpenAI would have caved in.[6] It was a nonprofit that was bleeding cash on retaining researchers who were some of the most in-demand in the industry, but kept exploring various unprofitable directions.
The existence of OpenAI shortened the time to a ChatGPT-like product by, I guess, at least a few years. It was Dario’s circle racing to scale to GPT-2 and GPT-3 – and then racing to compete at Anthropic – that removed most of the bottlenecks to getting there.
What if upon seeing GPT-1, they had reacted “Hell no. The future is too precious to gamble on capability scaling”? What if they looked for allies, and used any tactic on the books to prevent dangerous scaling? They didn't seem motivated to. If they had, they would have been forced to leave earlier, as Timnit Gebru was. But our communities would now be in a better position to make choices, than where they actually left us.
Rationale #2: ‘we scale first so we can make it safe’
Recall this earlier excerpt:
Dario thought that by getting ahead, his research circle could then take the time to make the most capable models safe before (commercial) release. The alternative in his eyes was allowing reckless competitors to get there first and deploy faster.
While Dario’s circle cared particularly for safety in the existential sense, in retrospect it seems misguided to justify actual accelerated development with speculative notions of maybe experimentally reaching otherwise unobtained safety milestones. What they ended up doing was use RLHF to finetune models for relatively superficial safety aspects.
Counterfactually, any company first on the scene here would likely have finetuned their models anyway for many of the same safety aspects, forced by the demands of consumers and government enforcement agencies. Microsoft's staff did so, after its rushed Sydney release of GPT-4 triggered intense reactions from the public.
Maybe though RLHF enabled interesting work on complex alignment proposals. But is this significant progress on the actual hard problem? Can any such proposal be built into something comprehensive enough to keep fully autonomous learning systems safe?
Dario’s rationale further relied on his expectation that OpenAI's leaders would delay releasing the models his team had scaled up, and that this would stem a capability race.
But OpenAI positioned itself as ‘open’. And its researchers participated in an academic community where promoting your progress in papers and conferences is the norm. Every release of a GPT codebase, demo, or paper alerted other interested competing researchers. Connor Leahy could just use Dario’s team’s prior published descriptions of methods to train his own version. Jack Clark, who was policy director of OpenAI and now is at Anthropic, ended up delaying the release of GPT-2’s code by around 9 months.
Worse, GPT-3 was quickly packaged into a commercial release through Microsoft. This was not Dario’s intent; he apparently felt Sam Altman had misled him. Dario did not discern that he was being manipulated by a tech leader with a track record of being manipulative.
By scaling unscoped models that hide all kinds of bad functionality, and can be misused at scale (e.g. to spread scams or propaganda), Dario’s circle made society less safe. By simultaneously implying they could or were making these inscrutable models safe, they were in effect safety-washing.
Chris Olah’s work on visualising circuits and mechanistic interpretability made for flashy articles promoted on OpenAI’s homepage. In 2021, I saw an upsurge of mechinterp teams joining AI Safety Camp, which I supported, seeing it as cool research. It nerd-sniped many, but progress in mechinterp has remained stuck around mapping the localised features of neurons and the localised functions of larger circuits, under artificially constrained input distributions. This is true even of later work at Anthropic, which Chris went on to co-found.
Some researchers now dispute that mapping mechanistic functionality is a tractable aim. The actual functioning of a deployed LLM is complex, since it not only depends on how shifting inputs received from the world are computed into outputs, but also how those outputs get used or propagated in the world.
Traction is limited in terms of the subset of input-to-output mappings that get reliably interpreted, even in a static neural network. Even where computations of inputs to outputs are deterministically mapped, this misses how outputs end up corresponding to effects in the noisy physical world (and how effects feed back into model inputs/training).
Interpretability could be used for specific safety applications, or for AI ‘gain of function’ research. I’m not necessarily against Chris’ research. What's bad is how it got promoted.
Researchers in Chris’ circle promoted interpretability as a solution to an actual problem (inscrutable models) that they were making much worse (by scaling the models). They implied the safety work to be tractable in a way that would catch up with the capability work that they were doing. Liron Shapira has a nice term for this: tractability-washing.
Tractability-washing corrupts. It keeps our community from acting with integrity to prevent reckless scaling. If accelerationists at Meta, instead of Dario’s team, had taken over GPT training, we would at least have known where we stood. Clearly then, it was reckless to scale data by 100x, parameters by 1000x, and compute by 10000x – over just three years.
But safety researchers did this, making it hard to orient. Was it okay to support trusted folks in safety to get to the point that they could develop their own trillion-parameter models? Or was it bad to keep supporting people who kept on scaling capabilities?
Rationale #3: ‘we reduce the hardware overhang now to prevent disruption later’
Paul sums this up well:
Sam Altman also wrote about this in 'Planning for AGI and Beyond':
It’s unclear what “many of us” means, and I do not want to presume that Sam accurately represented the views of his employees. But the draft was reviewed by “Paul Christiano, Jack Clark, Holden Karnofsky” – all of whom were already collaborating with Dario.
The rationale of reducing hardware overhang is flawed:
It is a justification that can be made just as well by someone racing to the bottom. Sam Altman not only tried to use the hardware overhang. Once chips got scarce, Sam pitched the UAE to massively invest in new chip manufacturing. And Tom Brown, just before leaving for Anthropic, was in late-stage discussions with Fathom Radiant to get cheap access to their new fibre-optic-connected supercomputer.[8]
2. Founded an 'AGI' development company and started competing on capabilities
Karen Hao reports on the run-up to Dario’s circle leaving OpenAI:
There is a repeating pattern here:
Founders of an AGI start-up air their concerns about ‘safety’, and recruit safety-concerned engineers and raise initial funding that way. The culture sours under controlling leaders, as the company grows dependent on Big Tech's compute and billion-dollar investments.
This pattern has roughly repeated three times:
We are dealing with a gnarly situation.
One take on this is a brutal realist stance: That’s just how business gets done. They convince us to part with our time and money and drop us when we’re no longer needed, they gather their loyal lackeys and climb to the top, and then they just keep playing this game of extraction until they’ve won.
It is true that’s how business gets done. But I don’t think any of us here are just in it for the business. Safety researchers went to work at Anthropic because they care. I wouldn’t want us to tune out our values – but it’s important to discern where Anthropic’s leaders are losing integrity with the values we shared.
The safety community started with much trust in and willingness to support Anthropic. That sentiment seems to be waning. We are seeing leaders starting to break some commitments and enter into shady deals like OpenAI leaders did – allowing them to gain relevance in circles of influence, and to keep themselves and their company on top.
Something like this happened before, so discernment is needed. It would suck if we support another ‘safety-focussed’ start-up that ends up competing on capabilities.
I’ll share my impression of how Anthropic staff presented their commitments to safety in the early days, and how this seemed in increasing tension with how the company acted.
Early commitments
In March 2023, Anthropic published its 'Core Views on AI Safety':
The general impression I came away with was that Anthropic was going to be careful not to release models with capabilities that significantly exceeded those of ChatGPT and other competing products. Instead, Anthropic would compete on having a reliable and safe product, and try to pull competitors into doing the same.
Dario has repeatedly called for a race to the top on safety, such as in this Time piece.
Degrading commitments
After safety-allied billionaires invested in Series A and B, Anthropic’s leaders moved on to pitch investors outside of the safety community.
In April 2023, TechCrunch leaked a summary of the Series C pitch deck:
Some people in the safety community commented with concerns. Anthropic’s leaders seemed to act like racing on capabilities was necessary. It felt egregious compared to the expectations that I and friends in safety had formed of Anthropic. Worse, the leaders had kept these new plans hidden from the safety community – it took a journalist to leak them.
From there, Anthropic started releasing models with capabilities that ChatGPT lacked:
None of these are major advancements beyond state of the art. You could argue that Anthropic stuck to original commitments here, either deliberately or because they lacked anything substantially more capable than OpenAI to release. Nonetheless, they were competing on capabilities, and the direction of those capabilities is concerning.
If a decade ago, safety researchers had come up with a list of engineering projects to warn about, I guess it would include ‘don’t rush to build agents’, and ‘don’t connect the agent up to the internet’ and ‘don’t build an agent to code by itself’. While the notion of current large language models actually working as autonomous agents is way overhyped, Anthropic engineers are developing models in directions that would have scared early AGI safety researchers. Even from a system safety perspective, it’s risky to build an unscoped system that can modify surrounding infrastructure in unexpected ways (by editing code, clicking through browsers, etc).
Anthropic has definitely been less reckless than OpenAI in terms of model releases.
I just think that ‘less reckless’ is not a good metric. ‘Less reckless’ is still reckless.
Another way to look at this is that Dario, like other AI leaders before him, does not think he is acting recklessly, because he thinks things likely go well anyway – as he kept saying:
Declining safety governance
The most we can hope for is oversight by their board, or by the trust set up to elect new board members. But the board’s most recent addition is Reed Hastings, known for scaling a film subscription company, not for a safe engineering culture. Indeed, the reason given is that Reed “brings extensive experience from founding and scaling Netflix into a global entertainment powerhouse”. Before that, trustees elected Jay Kreps, giving a similar reason: his “extensive experience in building and scaling highly successful tech companies will play an important role as Anthropic prepares for the next phase of growth”. Before that, Yasmin Razavi from Spark Capital joined, for making the biggest investment in the Series C round.
The board lacks any independent safety oversight. It is presided over by Daniela Amodei, who along with Dario Amodei has remained on it since founding Anthropic. For the rest, three tech leaders joined, prized for their ability to scale companies. There used to be one independent-ish safety researcher, Luke Muehlhauser, but he left a year ago.
The trust itself cannot be trusted. It was supposed to “elect a majority of the board” for the sake of long-term interests such as “to carefully evaluate future models for catastrophic risks”. Instead, trustees brought in two tech guys who are good at scaling tech companies. The trust was also meant to be run by five trustees, but it’s been under that count for almost two years – they failed to replace trustees after two left (Edit: there are actually now 4 trustees, named at the bottom of this page; all lack expertise in ensuring safety).
3. Lobbied for policies that minimised Anthropic’s accountability for safety
Jack Clark has been the policy director at Anthropic ever since he left the same role at OpenAI. Under Jack, some of the policy advocacy tended to reduce Anthropic’s accountability. There was a tendency to minimise Anthropic having to abide by any hard or comprehensive safety commitments.
Much of this policy work is behind closed doors. But I rely on just some online materials I’ve read.
I’ll focus on two policy initiatives discussed at length in the safety community:
Minimal ‘Responsible Scaling Policies’
In September 2023, Anthropic announced its ‘Responsible Scaling Policy’.
Anthropic’s RSPs are well known in the safety community. I’ll just point to the case made by Paul Christiano, a month after he joined Anthropic’s Long-Term Benefit Trust:
While Paul did not wholeheartedly endorse RSPs, and included some reservations, the thrust of it is that he encouraged the safety community to support Anthropic’s internal and external policy work on RSPs.[9]
A key issue with RSPs is how they're presented as 'good enough for now'. If companies adopt RSPs voluntarily, the argument goes, it'd lay the groundwork for regulations later.
Several authors on the forum argued that this was misleading.
– Siméon Campos:
– Oliver Habryka:
– Remmelt Ellen (me):
At the time, Anthropic’s policy team was actively lobbying for RSPs in US and UK government circles. This bore fruit. Ahead of the UK AI Safety Summit, leading AI companies were asked to outline their responsible capability scaling policies. Both OpenAI and DeepMind soon released their own policies on ‘responsible’ scaling.
Some policy folks I knew were so concerned that they went on trips to advocate against RSPs. Organisers put out a treaty petition as a watered-down version of the original treaty, because they wanted to get as many signatories from leading figures as possible, in part to counter Anthropic’s advocacy for self-regulation through RSPs.
Opinions here differ. I think that Anthropic advocated for companies to adopt overly minimal policies that put off accountability for releasing models that violate already established safe engineering practices. I'm going to quote some technical researchers who are experienced in working with and/or advising on these established practices:
Siméon Campos wrote on existing risk management frameworks:
Heidy Khlaaf wrote on scoped risk assessments before joining UK's AI Safety Institute:
Timnit Gebru replied on industry shortcomings to the National Academy of Engineering:
Once, a senior safety engineer working on medical devices messaged me, alarmed after the release of ChatGPT. It boggled her that such an unscoped product could just be released to the public. In her industry, medical products have to be designed for a clearly defined scope (setting, purpose, users) and tested for safety within that scope. This all has to be documented in book-sized volumes of paperwork, and the FDA gets the final say.
Other established industries also have lengthy premarket approval processes. New cars and planes too must undergo audits, before a US government department decides to deny or approve the product’s release to market.
The AI industry, however, is an outgrowth of the software industry, which has a notorious disregard of safety. Start-ups sprint to code up a product and rush through release stages.
XKCD put it well:
At least programmers at start-ups write out code blocks with somewhat interpretable functions. Auto-encoded weights of LLMs, on the other hand, are close to inscrutable.
So that’s the context Anthropic is operating in.
Safety practices in the AI industry are often appalling. Companies like Anthropic ‘scale’ by automatically encoding a model to learn hidden functionality from terabytes of undocumented data, and then marketing it as a product that can be used everywhere.
Releasing unscoped automated systems like this is a set-up for insidious and eventually critical failures. Anthropic can't evaluate Claude comprehensively for such safety issues.
Staff do not openly admit that they are acting way outside the bounds of established safety practices. Instead, they expect us to trust them to uphold some minimal responsibilities for scaling models. Rather than a race to the top, Anthropic cemented a race to the bottom.
I don’t deny the researchers’ commitment – they want to make general AI generally safe. But if the problem turns out too complex to adequately solve for, or they don’t follow through, we’re stuffed.
Unfortunately, their leaders recently backpedalled on one internal policy commitment:
Anthropic staked out its responsibility for designing models to be safe, which is minimal. It can change internal policy any time. We cannot trust its board to keep leaders in line.
This leaves external regulation. As Paul wrote:
“I don’t think voluntary implementation of responsible scaling policies is a substitute for regulation. Voluntary commitments are unlikely to be universally adopted or to have adequate oversight, and I think the public should demand a higher degree of safety.”
Unfortunately, Anthropic has lobbied to cut down regulations that were widely supported by the public. The clearest case of this is California’s safety bill SB 1047.
Lobbied against provisions in SB 1047
The bill’s demands were light, mandating ‘reasonable care’ in training future models to prevent critical harms. It did not even apply to the model that Anthropic pitched to investors as requiring 10²⁵ FLOP for training. The bill only kicked in above a training compute of 10²⁶ FLOP.
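To give a rough sense of the scale gap, here is a minimal Python sketch of the threshold arithmetic. It uses the common 6 × parameters × tokens rule of thumb for training compute; the parameter and token counts below are illustrative assumptions, not Anthropic’s actual figures.

```python
# SB 1047 only applied above this amount of training compute.
SB1047_THRESHOLD_FLOP = 1e26

def training_flop(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the ~6ND approximation
    (FLOP ≈ 6 * parameter count * training tokens)."""
    return 6 * params * tokens

# A hypothetical model trained with ~10^25 FLOP (e.g. ~170B params
# on ~10T tokens) falls an order of magnitude below the threshold:
estimate = training_flop(params=1.7e11, tokens=1e13)  # ≈ 1.02e25 FLOP
covered = estimate > SB1047_THRESHOLD_FLOP
print(f"{estimate:.2e} FLOP, covered by SB 1047: {covered}")
```

So a model at the compute scale Anthropic pitched would have needed roughly 10x more training compute before the bill’s requirements applied at all.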
Yet Anthropic lobbied against the bill:
Anthropic did not want to be burdened by having to follow government-mandated requirements before critical harms occurred. Specifically, it tried to cut pre-harm enforcement, an approach reminiscent of the more stringent premarket approval process:
Anthropic's justification was that best practices did not exist yet, and that new practices had to be invented from scratch:
It is the flipside of a common critique of RSPs – comprehensive risk mitigation practices do already exist, but Anthropic ignored them and invented a minimal policy from scratch.
Effectively, Anthropic's position was that established safe engineering practices could not be applied to AI. Rather, it wanted AI companies to implement new safety practices that those doing "original scientific research" would come up with (with a preference for RSPs).
Anthropic did not want an independent state agency that could define, audit, and enforce requirements. The justification was again that the field lacked "established best practices". Thus, an independent agency lacking "firsthand experience developing frontier models" could not be relied on to prevent developers from causing critical harms in society. Instead, such an agency "might end up harming not just frontier model developers but the startup ecosystem or independent developers, or impeding innovation in general."
Anthropic rejected the notion of preventative enforcement of robust safety requirements. It instead recommended holding companies liable for causing a catastrophe after the fact.
It's legit to worry about cementing bad practices, especially if some entity comes up with new practices by itself. But even if an independent agency had managed to get off the ground in California, its ability to impose requirements could get neutralised through relentless lobbying and obfuscation by the richest, most influential companies in that state. This manoeuvring to put off actual enforcement seems to be the greater concern.
Given how minimal the requirements were intended to be – focussing just on mandating developers to act with 'reasonable care' on 'critical harms' – in an industry that severely lacks safety enforcement, Anthropic's lobbying against was actively detrimental to safety.
Anthropic further acted in ways that increased the chance that the bill would be killed:
Only after amendments did Anthropic weakly support the bill. At this stage, Dario wrote a letter to Gavin Newsom. Supporters of the bill saw this as one of the most significant positive developments at the time. But it was too little, too late. Newsom canned the bill.
Similarly this year, Dario wrote against the 10 year moratorium on state laws – three weeks after the stipulation was introduced.[10] On one hand, it was a major contribution for a leading AI company to speak out against the moratorium as stipulated. On the other hand, Dario started advocating himself for minimal regulation. He recommended mandating a transparency standard along the lines of RSPs, adding that state laws "should also be narrowly focused on transparency and not overly prescriptive or burdensome".[11] Given that Anthropic had originally described SB 1047's requirements as 'prescriptive' and 'burdensome', Dario was effectively arguing for the federal government to prevent any state from passing any law that was as demanding as SB 1047.
All of this raises a question: how much does Dario’s circle actually want to be held to account on safety, over being free to innovate how they want?
Let me end with some commentary by Jack Clark:
4. Built ties with AI weapons contractors and the US military
If an AI company's leaders are committed to safety, one red line to not cross is building systems used to kill people. People dying because of your AI system is a breach of safety.
Once you pursue getting paid billions of dollars to set up AI for the military, you open up bad potential directions for yourself and other companies. A next step could be to get paid to optimise AI to be useful for the military, which in wartime means optimise for killing.
For large language models, there is particular concern around their ISTAR capabilities: intelligence, surveillance, target acquisition, and reconnaissance. Commercial LLMs can be used to investigate individual persons to target, because those LLMs are trained on lots of personal data scraped from everywhere on the internet, as well as private chats. The Israeli military already uses LLMs to identify maybe-Hamas-operatives – to track them down and bomb the entire apartment buildings they live in. Its main strategic ally is the US military-industrial-intelligence complex, which offers explosive ammunition and cloud services to the Israeli military, and is adopting some of the surveillance tactics that Israel has tested in Palestine, though with much less deadly consequences for now.
So that's some political context. What does any of this have to do with Anthropic?
Anthropic's intel-defence partnership
Unlike OpenAI, Anthropic never bound itself to a prohibition on offering models for military or warfare uses. Even before OpenAI broke its own prohibition, Anthropic had already gone ahead, without as much public backlash.
In November 2024, Anthropic partnered with Palantir and Amazon to “provide U.S. intelligence and defense agencies access to the Claude 3 and 3.5 family...on AWS":
Later that month, Amazon invested another $4 billion in Anthropic, which raises a conflict of interest. If Anthropic hadn’t agreed to hosting models for the military using AWS, would Amazon still have invested? Why did Anthropic go ahead with partnering with Palantir, a notorious mass surveillance and autonomous warfare contractor?
No matter how ‘responsible’ Anthropic presents itself to be, it is concerning how its operations are starting to get tied to the US military-industrial-intelligence complex.
In June 2025, Anthropic launched Claude Gov models for US national security clients. Along with OpenAI, it got a $200 million defence contract: “As part of the agreement, Anthropic will prototype frontier AI capabilities that advance U.S. national security.”
I don’t know about you, but prototyping “frontier AI capabilities” for the military seems to swerve away from their commitment to being careful about “capability demonstrations”. I guess Anthropic's leaders would push for improving model security and preventing adversarial uses, and avoid the use of their models for military target acquisition. Yet Anthropic can still end up contributing to the automation of kill chains.
For one, Anthropic’s leaders will know little about what US military and intelligence agencies actually use Claude for, since “access to these models is limited to those who operate in such classified environments”. From a business perspective, though, it is a plus to run their models on secret Amazon servers, since Anthropic can distance itself from any mass atrocity committed by their military client. Like Microsoft recently did.
Anthropic's earlier ties
Amazon is a cloud provider for the US military, raising conflicts of interest for Anthropic. But Anthropic’s leaders had ties to military-AI circles even before. Anthropic received a private investment from Eric Schmidt. Eric is about the best-connected guy in AI warfare. Since 2017, Eric has illegally lobbied the Pentagon, chaired two military-AI committees, and then spun out his own military-innovation thinktank styled after Henry Kissinger’s during the Vietnam War. Eric invests in drone-swarm start-ups and frequently talks about making network-centric autonomous warfare really cheap and fast.
Eric in turn is an old colleague of Jason Matheny. Jason used to be a trustee of Anthropic and still heads the military-strategy thinktank RAND Corporation. Before that, Jason founded the thinktank Georgetown CSET to advise on national security concerns around AI, receiving a $55 million grant through Holden. Holden in turn is the husband of Daniela and used to be roommates with Dario.
One angle on all this: Dario’s circle acted prudently to gain seats at the military-AI table.
Another angle: Anthropic stretched the meaning of ‘safety first’ to keep up its cashflow.
5. Promoted band-aid fixes to speculative risks over existing dangers that are costly to address
In a private memo to employees, Dario wrote he had decided to solicit investments from Gulf State sovereign funds tied to dictators, despite his own misgivings.
A journalist leaked a summary of the memo:
Dario was at pains to ensure that authoritarian governments in the Middle East and China would not gain a technological edge:
But here's the rub. Anthropic never raised the issue of tech-accelerated authoritarianism in the US. This is a clear risk. Through AI-targeted surveillance and personalised propaganda, US society too can get locked into a totalitarian state, beyond anything Orwell imagined.
Talking about that issue would cost Anthropic. It would force its leaders to reckon with the fact that they are themselves partnering with a mass surveillance contractor to provide automated data analysis to the intelligence arms of an increasingly authoritarian US government. Ending this partnership with its main investor, Amazon, would just be out of the question.
Anthropic also does not campaign about the risk of runaway autonomous warfare, which it is gradually getting tied into by partnering with Palantir to contract for the US military.
Instead, it justifies itself as helping a democratic regime combat overseas authoritarians.
Dario does keep warning of the risk of mass automation. Why pick that? Mass job loss is bad, but surely not worse than US totalitarianism or autonomous US-China wars. It can't be just that job loss is a popular topic Dario gets asked about. Many citizens are alarmed too – on the left and on the right – about how 'Big Tech' enables 'Deep State' surveillance.
The simplest fitting explanation I found is that warning about future mass automation benefits Anthropic, but warning about how the tech is accelerating US authoritarianism or how the US military is developing a hidden kill cloud is costly to Anthropic.
I’m not super confident about this explanation. But it roughly fits the advocacy I’ve seen.
Anthropic can pitch mass automation to its investors – and it did – going so far as to provide a list of targetable jobs. But Dario is not alerting the public to automation now. He is not warning how genAI already automates away the fulfilling jobs of writers and artists, driving some to suicide. He is certainly not offering to compensate creatives for billions of dollars in lost income. Recently, Anthropic’s lawyers warned it may go bankrupt if it had to pay for having pirated books. That is a risk Anthropic worries about.
Cheap fixes for risks that are still speculative
Actually, much of Anthropic's campaigning is on speculative risks that get little coverage in society.
Recently it campaigned on model welfare, which is a speculative matter, to put it lightly. Here the current solution is cheap – a feature where Claude can end ‘abusive’ chats.
Anthropic has also campaigned about models generating advice that removes bottlenecks for producing bioweapons. So far, this is not a huge issue – Claude could regenerate guidance found on webpages that Anthropic scraped, but those pages can easily be found again by any actor resourceful enough to follow through. Down the line, this could well turn into a dangerous threat. But focussing on this risk now has a side-benefit: it mostly does not cost Anthropic, nor hinder it from continuing to scale and release models.
To deal with the bioweapons risk now, Anthropic uses cheap fixes. Anthropic did not thoroughly document its scraped datasets, which would have helped prevent its models from regenerating various toxic or dangerous materials. That would be sound engineering practice, but too costly. Instead, researchers came up with a classifier that filters out some (but not all) of the materials, depending on the assumptions the researchers made in designing it.[12] And they trained models to block answers to bio-engineering-related questions.
None of this is to say that individual Anthropic researchers are not serious about 'model welfare' or 'AI-enabled bioterrorism'. But their direction of work and the costs they can incur are supervised and okayed by the executives. Those executives are running a company that is bleeding cash and needs to turn a profit to keep existing and to satisfy investors.
The company tends to campaign on risks that it can put off or offer cheap fixes for, while swerving around or downplaying already widespread issues that would be costly to address.
When it comes to current issues, staff focus on distant frames that do not implicate their company – Chinese authoritarianism but not the authoritarianism growing in the US; future automation of white-collar work but not how authors lose their incomes now.
Example of an existing problem that is costly to address
Anthropic is incentivised to downplay gnarly risks that are already showing up and require high (opportunity) costs to mitigate.
So if you catch Dario inaccurately downplaying existing issues that would cost a lot to address, this provides some signal for how he will respond to future larger problems.
This is more than a question of what to prioritise.
You may not care about climate change, yet still be curious to see how Dario deals with the issue, since he professes to care but it is costly to address.
If Dario is honest about the carbon emissions from Anthropic scaling up computation inside data centers, the implication from the viewpoint of environmental groups would be that Anthropic must stop scaling. Or at least, pay to clean up that pollution.
Instead, Dario makes a vaguely agnostic statement, claiming to be unsure whether their models accelerate climate change or not:
Multiple researchers have pointed out issues especially with Amazon (Anthropic’s provider) trying to game carbon offsets. From a counterfactual perspective, the offsets needed are grossly underestimated. The low-hanging fruit will mostly be captured anyway, meaning that further extra carbon emissions by Amazon belong to the back of the queue of offset interventions. Also, Amazon’s data center offsets are only meant to compensate for carbon-based gas emissions at the end of the supply chain – not for all pollution across the entire hardware supply/operation chain.
Such ‘could be either’ thinking overcomplicates the issue. Before, a human did the task. Now this human still consumes energy to live, plus they or their company use energy-intensive Claude models. On net, this results in more energy usage.
Dario does not think he needs to act, because he hasn’t concluded yet that Anthropic's scaling contributes to pollution. He might not be aware of it, but this line of thinking is similar to tactics used by Big Oil to sow public doubt and thus delay having to restrict production.
Again, you might not prioritise climate change. You might not worry like me about an auto-scaling dynamic, where if AI corps get profitable, they keep reinvesting profits into increasingly automated and expanding toxic infrastructure that extracts more profit.
What is still concerning is that Dario did not take a scout mindset here. He did not seek out the truth about an issue that, by his own account, we should worry about if it turns out badly.
Conclusion
Dario’s circle started by scaling up GPT’s capabilities at OpenAI, and then moved on to compete with Claude at Anthropic.
They offered sophisticated justifications, and many turned out flawed in important ways. Researchers thought they'd make significant progress on safety that competitors would fail to make, promoted the desirability or inevitability of scaling to AGI while downplaying complex intractable risks, believed their company would act responsibly to delay the release of more capable models, and/or trusted leaders to stick to commitments made to the safety community once their company moved on to larger investors.
Looking back, it was a mistake in my view to support Dario’s circle to start up Anthropic. At the time, it felt like it opened up a path for ensuring that people serious about safety would work on making the most capable models safe. But despite well-intentioned work, Anthropic contributed to a race to the bottom, by accelerating model development and lobbying for minimal voluntary safety policies that we cannot trust its board to enforce.
The main question I want to leave you with: how can the safety community do radically better at discerning company directions and coordinating to hold leaders accountable?
These investors were Dustin Moskovitz and Jaan Tallinn in Series A, and Sam Bankman-Fried about a year later in Series B.
Dustin was advised to invest by Holden Karnofsky. Sam invested $500 million through FTX, by far the largest investment, though it was in non-voting shares.
Buck points out that the 80K job board restricts Anthropic positions to those 'vaguely related to safety or alignment'. I forgot to mention this.
Though Anthropic advertises research and engineering positions as safety-focussed, people joining Anthropic can still be directed to work on capabilities or product improvements. I previously wrote a short list of related concerns here.
As a metaphor, say a conservator just broke off the handles of a few rare Roman vases. Pausing for a moment, he pitches me his project to ‘put vase safety first’. He gives sophisticated reasons for his approach – he believes he can only learn enough about vase safety by breaking parts of vases, and that if he doesn’t do it now, the vandals will eventually get ahead of him. After cashing my check, he resumes chipping away at the vase pieces. I walk off. What just happened, I wonder? After reflecting on it, I turn back. It’s not about the technical details, I say. I disagree with the premise – that there is no choice but to damage Roman vases in order to prevent the destruction of all Roman vases. I will stop supporting people, no matter how sophisticated their technical reasoning, who keep encouraging others to cause more damage.
Defined more precisely here.
As a small example, it is considerate that Anthropic researchers cautioned educators against grading students using Claude. Recently, Anthropic also intervened against hackers who used Claude's code generation to commit large-scale theft. There are many well-intentioned efforts coming from people at Anthropic, but I have to zoom out and look at the overall thrust of where things have gone.
Especially if OpenAI had not received a $30 million grant advised by Holden Karnofsky, who was co-living with the Amodei siblings at the time. This decision not only kept OpenAI afloat after its biggest backer, Elon Musk, left soon afterwards; it also legitimised OpenAI to safety researchers who were sceptical at the time.
This resembles another argument by Paul for why it wasn’t bad to develop RLHF:
While still at OpenAI, Tom Brown started advising Fathom Radiant, a start-up that was building a supercomputer with faster interconnect using fibre-optic cables. Their discussions centred on Fathom Radiant offering discounted compute to researchers, to support differential progress on ‘alignment’.
This is based on my one-on-ones with co-CEO Michael Andregg. At the time, Michael was recruiting engineers from the safety community, turning up at EA Global conferences, in the EA Operations Slack, and so on. Michael told me that OpenAI researchers had told him Fathom Radiant was the most technically advanced of all the start-ups they had looked at. Just after safety researchers left OpenAI, Michael told me that he had been planning to support OpenAI by giving them compute for their alignment work, but that they had now decided not to.
My educated guess is that Fathom Radiant moved on to offering low-price-tier computing services to Anthropic. A policy researcher I know in the AI safety community told me he also thinks it’s likely. He told me that Michael was roughly looking for anything that could justify what he was doing as good.
It’s worth noting how strong Fathom Radiant’s ties were with people who later ended up in Anthropic C-suite. Virginia Blanton served as the Head of Legal and Operations at Fathom Radiant, and then left to be Director of Operations at Anthropic (I don’t know what she does now; her LinkedIn profile is deleted).
I kept checking online for news over the years. It now appears that Fathom Radiant's supercomputer project has failed. Fathomradiant.co now redirects to a minimal company website for “Atomos Systems”, which builds “general-purpose super-humanoid robots designed to operate and transform the physical world”. Michael Andregg is also no longer listed on the team – only his brother William Andregg.
It’s also notable that Paul was by then tentatively advocating for a pause on hardware development and production, after having justified reducing the hardware overhang in years prior.
Maybe Anthropic’s policy team was already advocating against the 10-year moratorium in private conversations with US politicians? In that case, kudos to them.
Update: Yes, Anthropic staff did talk with US senators and some affiliated groups, someone in the know told me.
Kudos to David Mathers for pointing me to this.
This reminds me of how LAION failed to filter out thousands of images of child sexual abuse from their popular image dataset, and then just fixed it inadequately with a classifier.