Anthropic Faces Potentially “Business-Ending” Copyright Lawsuit

by garrison
25th Jul 2025
Linkpost from www.obsolete.pub
11 min read

15 comments, sorted by top scoring
[-] the gears to ascension · 2mo

If Anthropic goes to trial and loses on appeal, the resulting precedent could drag Meta, OpenAI, and possibly even Google into similar liability.

do that one then. either destroy the industry or don't, but don't destroy only anthropic.

[-] O O · 2mo

Afaict this case has been generally good for the industry but especially bad for Anthropic.


Edit: overall win, you can use books in training. You just can’t use pirated books.

[-] Thane Ruthenis · 2mo

Clearly the heroic thing to do would be to go to trial and then deliberately mess it up very badly in a calculated fashion that sets an awful precedent for the other AGI companies. You might say, "but China!", but if the US cripples itself, then suddenly the USG would be much more interested in reaching some sort of international-AGI-ban deal with China, so it all works out.

(Only half-serious.)

[-] [anonymous] · 2mo

Responding to the serious half only, sandbagging doesn't work in general in the legal system, and in particular it wouldn't work here. That's because you have so much outside attention on the case and (presumably) so many amici briefs describing all the most powerful arguments in the AI companies' favor. If the judge sees that you are a $61 billion market cap company hiring the greatest lawyers in the world, but you're not putting forth your best legal foot when you have lawyers from other companies writing briefs outlining their own defense arguments, the consequences for you and your lawyers will be severe and any notion of "precedent" will be poisoned for all of time.

[-] Thane Ruthenis · 2mo

Yeah, I figured.

If the judge sees that you are a $61 billion market cap company hiring the greatest lawyers in the world, but you're not putting forth your best legal foot when you have lawyers from other companies writing briefs outlining their own defense arguments, the consequences for you and your lawyers will be severe

What would be the actual wrongdoing here, legally speaking?

[-] [anonymous] · 2mo

Federal lawsuits must satisfy the case or controversy requirement of Article III of the Constitution.

A failure to do so (if there is no genuine adversity between the parties in practice because they collude on the result) renders the lawsuit dead on the spot, because the federal court cannot constitutionally exercise jurisdiction over the parties, so there can be no decision on the merits. It also exposes the lawyers and parties to punishment if they tried to conceal this from, or directly lie to, the judge: lying to a judicial officer in signed affidavits is a disbarrable offense, and it would waste the court's already undersupplied resources.

[-] Noosphere89 · 2mo

I will go further than what you are arguing here: if the maximalist interpretations are upheld, this would fundamentally break the viability of LLMs as a paradigm for AGI, because their data dependence, plus an unusual amount of memorization that leads to pretty extreme generalization failures, pretty much necessitates copyright violations on an extensive scale.

And notably, any future paradigm that wouldn't violate extensive/maximalist interpretations of copyright would have to be far more data-efficient than current models, and depending on how far copyright is upheld, this could potentially make AGI/ASI infeasible, straight up.

Yes, it's a bit of a long shot, but this is a case to watch: if it goes badly for Anthropic, the consequences could be very big for the AI industry as a whole, especially if the company has to delete its models or training sets entirely.

Unfortunately, the case where the copyright holders win out and fundamentally break the back of Anthropic/the AI industry is probably a bad thing from an existential risk perspective, because capabilities could keep increasing in a way that society won't react to, so most of my hope here is that the copyright holders don't get the maximalist interpretation of damages they seek.

[This comment is no longer endorsed by its author]
[-] Kaj_Sotala · 2mo

Training on copyrighted data wasn't ruled infringing by itself though, only pirating the books was. So even if the maximalist interpretation of damages was upheld, companies could still legally purchase books to train on.

[-] philh · 2mo

Trying to summarize the legal matters and where I'm confused on them:

  • This judge ruled that training on books you acquired legally is fair use.
  • But Anthropic didn't acquire the books they trained on legally. This judge also ruled that they're liable for the copyright infringement involved in getting them illegally.
  • The recent news is that this case has become a class action. That means any copyright holder of a book Anthropic trained on can join in easily.
    • Possibly even no need to join in before the conclusion? If Anthropic eventually has to pay out, I think they might need to have a fund set up such that anyone whose work they trained on can write in and say "you owe me money". (At least I think this sometimes happens, e.g. "company sold a defective product and lost class action suit, anyone who bought the product can claim even if they didn't hear of the suit in advance". Sometimes also "and company has to make efforts to reach out to all customers to let them know". Dunno if this sort of thing is the default.)
  • In another case ("the Meta case"), a judge ruled that "getting books illegally and training on them" was fine, actually.
    • This doesn't count for much, but my legal intuitions are saying "wait what". Is this more reasonable than it sounds?
    • Why doesn't it count as precedent in this case?
    • If Anthropic loses here, will that cause problems for Meta in their case?
  • Did OpenAI and DeepMind do the same?
[-] Kaj_Sotala · 2mo
  • In another case ("the Meta case"), a judge ruled that "getting books illegally and training on them" was fine, actually.
    • This doesn't count for much, but my legal intuitions are saying "wait what". Is this more reasonable than it sounds?
    • Why doesn't it count as precedent in this case?

The ruling in the Meta case stated that it's not saying that Meta's actions were fine in general, only that the authors who sued Meta failed to make a good argument that it's wrong in their case in particular.

Because the performance of a generative AI model depends on the amount and quality of data it absorbs as part of its training, companies have been unable to resist the temptation to feed copyright-protected materials into their models—without getting permission from the copyright holders or paying them for the right to use their works for this purpose. This case presents the question whether such conduct is illegal. Although the devil is in the details, in most cases the answer will likely be yes. [...]

... in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission. Which means that the companies, to avoid liability for copyright infringement, will generally need to pay copyright holders for the right to use their materials.

But that brings us to this particular case. The above discussion is based in significant part on this Court’s general understanding of generative AI models and their capabilities. Courts can’t decide cases based on general understandings. They must decide cases based on the evidence presented by the parties.

In this case, thirteen authors—mostly famous fiction writers—have sued Meta for downloading their books from online “shadow libraries” and using the books to train Meta’s generative AI models (specifically, its large language models, called Llama). The parties have filed cross-motions for partial summary judgment, with the plaintiffs arguing that Meta’s conduct cannot possibly be fair use, and with Meta responding that its conduct must be considered fair use as a matter of law. In connection with these fair use arguments, the plaintiffs offer two primary theories for how the markets for their works are affected by Meta’s copying. They contend that Llama is capable of reproducing small snippets of text from their books. And they contend that Meta, by using their works for training without permission, has diminished the authors’ ability to license their works for the purpose of training large language models. As explained below, both of these arguments are clear losers. Llama is not capable of generating enough text from the plaintiffs’ books to matter, and the plaintiffs are not entitled to the market for licensing their works as AI training data. As for the potentially winning argument—that Meta has copied their works to create a product that will likely flood the market with similar works, causing market dilution—the plaintiffs barely give this issue lip service, and they present no evidence about how the current or expected outputs from Meta’s models would dilute the market for their own works. 

Given the state of the record, the Court has no choice but to grant summary judgment to Meta on the plaintiffs’ claim that the company violated copyright law by training its models with their books. But in the grand scheme of things, the consequences of this ruling are limited. This is not a class action, so the ruling only affects the rights of these thirteen authors—not the countless others whose works Meta used to train its models. And, as should now be clear, this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful. It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.

[-] philh · 2mo

Ah, so this is only talking about whether training on the material was fair use. I was surprised by the "getting books illegally and then training on them" thing.

Skimming the pdf (mostly ctrl+f "shadow"), it sounds like "downloading a book might not be illegal depending on what you do with it", which I hadn't realized. p. 36:

as already discussed, even though that downloading is a separate use, it must be considered in light of its overall purpose. For instance, imagine a researcher who downloaded books from a shadow library in the process of writing an article on shadow libraries, and only did so for their research. That downloading would almost certainly be a fair use. Of course, in that example, the downloader has less ability to procure the books elsewhere than Meta did. But the point is that downloading from a shadow library, which the plaintiffs refer to as “unmitigated piracy,” must be viewed in light of its ultimate end.

(My previous understanding was that the rule was something like "if you already own something legally, you're allowed to download it; but if not, you're not".)

...but now I don't understand how this differs from the Anthropic case, which I summarized as

  • This judge ruled that training on books you acquired legally is fair use.

  • But Anthropic didn’t acquire the books they trained on legally. This judge also ruled that they’re liable for the copyright infringement involved in getting them illegally.

was that a bad summary?

[-] Kaj_Sotala · 2mo

Your summary seems correct to me. Apparently the Meta judge disagrees with the reasoning in the Anthropic case; the Meta ruling has a brief comment on it:

Speaking of which, in a recent ruling on this topic, Judge Alsup focused heavily on the transformative nature of generative AI while brushing aside concerns about the harm it can inflict on the market for the works it gets trained on. Such harm would be no different, he reasoned, than the harm caused by using the works for “training schoolchildren to write well,” which could “result in an explosion of competing works.” Order on Fair Use at 28, Bartz v. Anthropic PBC, No. 24-cv-5417 (N.D. Cal. June 23, 2025), Dkt. No. 231. According to Judge Alsup, this “is not the kind of competitive or creative displacement that concerns the Copyright Act.” Id. But when it comes to market effects, using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would otherwise take. This inapt analogy is not a basis for blowing off the most important factor in the fair use analysis.

I think the Anthropic case didn't establish precedent because they're both District Court judges, so allowed to disagree with each other's decisions. A decision by their Court of Appeals or the Supreme Court would establish binding precedent.

Claude's explanation

District court judges - even within the same district - are not bound by each other's decisions. Each district judge has independent authority to interpret the law, which explains why you saw the second judge cite the first ruling only to disagree with it.

Here's how precedent actually works in the US system:

Binding precedent comes from higher courts. California district courts must follow precedents set by the Ninth Circuit Court of Appeals (which covers California) and the Supreme Court. These are called "controlling authorities."

Persuasive precedent includes decisions from other district courts, even within the same district. A judge might consider these rulings, cite them, and explain why they agree or disagree - exactly what you witnessed. The second judge was essentially saying "I've looked at how my colleague handled this issue, but I think they got it wrong for these reasons."

This happens frequently in emerging legal areas like AI and copyright, where there's limited appellate guidance. District courts become testing grounds for different legal theories. Eventually, if these cases get appealed, the Ninth Circuit might resolve the split by establishing binding precedent for all California district courts.

The fair use question you mentioned is particularly ripe for this kind of disagreement since it involves a four-factor balancing test that different judges can reasonably weigh differently, especially when applying established doctrine to novel technology.

This disagreement between district courts actually serves a useful function - it creates a record of different approaches that appellate courts can consider when they eventually do establish binding precedent.

[-] robo · 2mo

$750 per book seems surprisingly reasonable to me as a royalty rate for a compulsory AI ingest license. Compulsory licenses are common in, e.g., the music industry: you must license your musical work for covers (and get a 12¢ royalty per distribution).

[-] dr_s · 2mo

Wait, I'm curious: what is the rationale for compulsory licensing? Something like covers seems pretty arbitrary. I'd see more of a case for it for something like pharmaceutical patents.

[-] philh · 1mo

Clarification: IIUC (which I may well not) this is an industry-level "must", not a legal "must". Along the lines of, there's a powerful union and if you join them they require you to license your work; if you don't join them then a lot of people won't work with you so you won't be able to do a bunch of other stuff you want to do.


A class action over pirated books exposes the 'responsible' AI company to penalties that could bankrupt it — and reshape the entire industry

This is the full text of a post first published on Obsolete, a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work.

This piece has been updated to add additional context and clarify some details. 

Anthropic, the AI startup that’s long presented itself as the industry’s safe and ethical choice, is now facing legal penalties that could bankrupt the company. Damages resulting from its mass use of pirated books would likely exceed a billion dollars, with the statutory maximum stretching into the hundreds of billions.

Last week, William Alsup, a federal judge in San Francisco, certified a class action lawsuit against Anthropic on behalf of nearly every US book author whose works were copied to build the company’s AI models. This is the first time a US court has allowed a class action of this kind to proceed in the context of generative AI training, putting Anthropic on a path toward paying damages that could ruin the company.

The judge ruled last month, in essence, that Anthropic's use of pirated books had violated copyright law, leaving it to a jury to decide how much the company owes for these violations. That number increases dramatically if the case proceeds as a class action, putting Anthropic on the hook for a vast number of books beyond those produced by the plaintiffs.

The class action decision came just one day after Bloomberg reported that Anthropic is fundraising at a valuation potentially north of $100 billion — nearly double the $61.5 billion investors pegged it at in March. According to Crunchbase, the company has raised $17.2 billion in total. However, much of that funding has come in the form of Amazon and Google cloud computing credits — not real money.

Santa Clara Law professor Ed Lee warned in a blog post that the ruling means “Anthropic faces at least the potential for business-ending liability.” 

He separately wrote that if Anthropic ultimately loses at trial and a final judgment is entered, the company would be required to post a surety bond for the full amount of damages in order to delay payment during any appeal, unless the judge grants an exception. 

In practice, this usually means arranging a bond backed by 100 percent collateral — not necessarily cash, but assets like cloud credits, investments, or other holdings — plus a 1-2 percent annual premium. The impact on Anthropic’s day-to-day operations would likely be limited at first, aside from potentially higher insurance costs, since the bond requirement would only kick in after a final judgment and the start of any appeals process.
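As a rough illustration of what that bond requirement could mean in practice, here is a minimal sketch, assuming (purely for illustration) a $1.5 billion judgment, which is the minimum-damages scenario discussed later in this piece, together with the 100 percent collateral and 1-2 percent premium figures cited above.

```python
# Back-of-the-envelope sketch of the appeal-bond cost described above.
# The $1.5B judgment is a hypothetical, borrowed from the minimum-damages
# scenario later in this piece; the full-collateral and 1-2% premium
# figures are the ones cited from Ed Lee's analysis.

judgment = 1.5e9  # hypothetical final damages award, in dollars

collateral = judgment             # bond must be backed by 100% collateral
premium_low = judgment * 0.01     # 1% annual premium
premium_high = judgment * 0.02    # 2% annual premium

print(f"Collateral to post: ${collateral / 1e9:.1f}B")
print(f"Annual bond premium: ${premium_low / 1e6:.0f}M to ${premium_high / 1e6:.0f}M")
```

Even in that comparatively mild scenario, Anthropic would need to tie up assets equal to the full judgment and pay tens of millions of dollars a year just to keep an appeal alive.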

Lee wrote in another post that Judge Alsup “has all but ruled that Anthropic’s downloading of pirated books is [copyright] infringement,” leaving “the real issue at trial… the jury’s calculation of statutory damages based on the number of copyrighted books/works in the class.” 

While the risk of a billion-dollar-plus jury verdict is real, it’s important to note that judges routinely slash massive statutory damages awards — sometimes by orders of magnitude. Federal judges, in particular, tend to be skeptical of letting jury awards reach levels that would bankrupt a major company. As a matter of practice (and sometimes doctrine), judges rarely issue rulings that would outright force a company out of business, and are generally sympathetic to arguments about practical business consequences. So while the jury’s damages calculation will be the headline risk, it almost certainly won’t be the last word.

On Thursday, the company filed a motion to stay — a request to essentially pause the case — in which it acknowledged the books covered likely number “in the millions.” Anthropic’s lawyers also warned of “the specter of unprecedented and potentially business-threatening statutory damages against the smallest one of the many companies developing [large language models] with the same books data” (though it’s worth noting they have an incentive to amplify the stakes in the case to the judge).

The company could settle, but doing so could still cost billions given the scope of potential penalties.

Anthropic, for its part, told Obsolete it “respectfully disagrees” with the decision, arguing the court “failed to properly account for the significant challenges and inefficiencies of having to establish valid ownership millions of times over in a single lawsuit,” and said it is “exploring all avenues for review.”

The plaintiffs’ lawyers did not reply to a request for comment.

From “fair use” win to catastrophic liability

Just a month ago, Anthropic and the rest of the industry were celebrating what looked like a landmark victory. Alsup had ruled that using copyrighted books to train an AI model — so long as the books were lawfully acquired — was protected as “fair use.” This was the legal shield the AI industry has been banking on, and it would have let Anthropic, OpenAI, and others off the hook for the core act of model training.

But Alsup split a very fine hair. In the same ruling, he found that Anthropic’s wholesale downloading and storage of millions of pirated books — via infamous “pirate libraries” like LibGen and PiLiMi — was not covered by fair use at all. In other words: training on lawfully acquired books is one thing, but stockpiling a central library of stolen copies is classic copyright infringement.

Thanks to Alsup’s ruling and subsequent class certification, Anthropic is now on the hook for a class action encompassing five to seven million books — although only works with registered US copyrights are eligible for statutory damages, and the precise number remains uncertain. A significant portion of these datasets consists of non-English titles, many of which were likely never published in the US and may fall outside the reach of US copyright law. For example, an analysis of LibGen’s holdings suggests that only about two-thirds are in English.

Assuming that only two-fifths of the five million books are covered and the jury awards the statutory minimum of $750 per work, you still end up with $1.5 billion in damages. And as we saw, the company’s own lawyers just said the number is probably in the millions. 

The statutory maximum, with five million books covered? $150,000 per work, or $750 billion total — a figure Anthropic’s lawyers have called “ruinous.” No jury will award that, but it gives you a sense of the range.
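For readers who want to check the arithmetic, here is a minimal Python sketch of the damages range, using only the figures quoted above; the five-million-book count, the two-fifths coverage assumption, and the statutory bounds are all this article's estimates, not settled facts.

```python
# Reproduces the damages arithmetic above. All inputs are the article's
# own estimates and assumptions, not established facts.

books_in_class = 5_000_000   # low end of the 5-7 million estimate
covered_fraction = 2 / 5     # assumption: ~two-fifths eligible for statutory damages
statutory_min = 750          # dollars per infringed work (statutory minimum)
statutory_max = 150_000      # dollars per work (willful-infringement maximum)

covered_works = int(books_in_class * covered_fraction)  # 2,000,000 works
floor = covered_works * statutory_min                   # $1.5 billion
ceiling = books_in_class * statutory_max                # $750 billion

print(f"Covered works: {covered_works:,}")
print(f"Statutory-minimum scenario: ${floor / 1e9:.1f}B")
print(f"Statutory-maximum ceiling:  ${ceiling / 1e9:.0f}B")
```

Small changes to the coverage assumption or the per-work award shift the total by orders of magnitude, which is why the eventual jury number is so hard to predict.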

The previous record for a case like this was set in 2019, when a federal jury found Cox Communications liable for $1 billion after the nation’s biggest music labels accused the company of turning a blind eye to rampant piracy by its internet customers. That verdict was overturned on appeal years later and is now under review by the Supreme Court.

But even that historic sum could soon be eclipsed if Anthropic loses at trial.

The decision to treat AI training as fair use was widely covered as a win for the industry — and, to be fair, it was. But the existential threat Anthropic now faces has received barely a mention. Outside of the legal and publishing press, only Reuters and The Verge have covered the class certification ruling, and neither discussed the fact that this case could spell the end for Anthropic.

Update: early Friday morning, the LA Times ran a column discussing the potential for a trillion-dollar judgment.

Respecting copyright is “not doable”

The legal uncertainty now facing the company comes as the industry continues an aggressive push in Washington to reshape the rules in their favor. In comments submitted earlier this year to the White House’s “AI Action Plan,” Meta, Google, and OpenAI all urged the administration to protect AI companies’ access to vast training datasets — including copyrighted materials — by clarifying that model training is unequivocally “fair use.” Ironically, Anthropic was the only leading AI company to not mention copyright in its White House submission.

At the Wednesday launch of the AI Action Plan, President Trump dismissed the idea that AI firms should pay to use every book or article in their training data, calling strict copyright enforcement “not doable” and insisting that “China’s not doing it.” Still, the administration’s plan is conspicuously silent on copyright — perhaps a reflection of the fact that any meaningful change would require Congress to amend the Copyright Act. The federal Copyright Office can issue guidance but ultimately has no power to settle the matter. Administration officials told the press the issue should be left to the courts.

Anthropic made some mistakes

Anthropic isn’t just unlucky to be up first. The judge described this case as the “classic” candidate for a class action: a single company downloading millions of books in bulk, all at once, using file hashes and ISBNs to identify the works. The lawyers suing Anthropic are top-tier, and the judge has signaled he won’t let technicalities slow things down. A single trial will determine how much Anthropic owes; a jury could choose any number between the statutory minimum and maximum.

The order reiterates a basic tenet of copyright law: every time a pirated book is downloaded, it constitutes a separate violation — regardless of whether Anthropic later purchased a print copy or only used a portion of the book for training. While this may seem harsh given the scale, it’s a straightforward application of existing precedent, not a new legal interpretation.

And the company’s handling of the data after the piracy isn’t winning it any sympathy.

As detailed in the court order, Anthropic didn’t just download millions of pirated books; it kept them accessible to its engineers, sometimes in multiple copies, and apparently used the trove for various internal tasks long after training. Even when pirate sites started getting taken down, Anthropic scrambled to torrent fresh copies. After a company co-founder discovered a mirror of “Z-Library,” a database shuttered by the FBI, he messaged his colleagues: “[J]ust in time.” One replied, “zlibrary my beloved.”

That made it much easier for the judge to say: this is “Napster” for the AI age, and the copyright law is clear.

Anthropic is separately facing a major copyright lawsuit from the world’s biggest music publishers, who allege that the company’s chatbot Claude reproduced copyrighted lyrics without permission — a case that could expose the firm to similar per-work penalties from thousands to potentially millions of songs.

Ironically, Anthropic appears to have tried harder than some better-resourced competitors to avoid using copyrighted materials without any compensation. Starting in 2024, the company spent millions buying books, often in used condition — cutting them apart, scanning them in-house, and pulping the originals — to feed its chatbot Claude, a step no rival has publicly matched.

Meta, despite its far deeper pockets, skipped the buy-and-scan stage altogether — damning internal messages show engineers calling LibGen “obviously pirated” data and revealing that the approach was approved by Mark Zuckerberg.

Why the other companies should be nervous

If Anthropic settles, it could end up as the only AI company forced to pay for mass copyright infringement — especially if judges in other cases follow Meta’s preferred approach and treat downloading and training as a single act that qualifies as fair use.

For now, Anthropic’s best shot is to win on appeal and convince a higher court to reject Judge Alsup’s reasoning in favor of the more company-friendly approach taken in the Meta case, which treats the act of training as fair use and effectively rolls the infringing downloads into that single use.

But appeals usually have to wait until after a jury trial — so the company faces a brutal choice: settle for potentially billions, or risk a catastrophic damages award and years of uncertainty. If Anthropic goes to trial and loses on appeal, the resulting precedent could drag Meta, OpenAI, and possibly even Google into similar liability.

OpenAI and Microsoft now face 12 consolidated copyright suits — a mix of proposed class actions by book authors and cases brought by news organizations (including The New York Times) — in the Southern District of New York before Judge Sidney Stein.

If Stein were to certify an authors’ class and adopt an approach similar to Alsup’s ruling against Anthropic, OpenAI’s potential liability could be far greater, given the number of potential covered works.

What’s next

A trial is tentatively set for December 1st. If Anthropic fails to pull off an appellate victory before then, the industry is about to get a lesson in just how expensive “move fast and break things” can be when the thing you’ve broken is copyright law — a few-million times over.

A multibillion dollar settlement or jury award would be a death-knell for almost any four-year-old company, but the AI industry is different. The cost to compete is enormous, and the leading firms are already raising multibillion dollar rounds multiple times a year.

That said, Anthropic has access to less capital than its rivals at the frontier — OpenAI, Google DeepMind, and, now, xAI. Overall, company-killing penalties may be unlikely, but they’re still possible, and Anthropic faces the greatest risk at the moment. And given how fiercely competitive the AI industry is, a multibillion dollar setback could seriously affect the company’s ability to stay in the race. 

And some competitors seem to have functionally unlimited capital. To build out its new superintelligence team, Meta has been poaching rival AI researchers with nine-figure pay packages, and Zuckerberg recently said his company would invest “hundreds of billions of dollars” into its efforts.

To keep up with its peers, Anthropic recently decided to accept money from autocratic regimes, despite earlier misgivings. On Sunday, CEO Dario Amodei issued a memo to staff saying the firm will seek investment from Gulf states, including the UAE and Qatar. The memo, which was obtained and reported on by Kylie Robison at WIRED, admitted the decision would probably enrich “dictators” — something Amodei called a “real downside.” But, he wrote, the company can’t afford to ignore “a truly giant amount of capital in the Middle East, easily $100B or more.”

Amodei apparently acknowledged the perceived hypocrisy of the decision, given that his October essay/manifesto “Machines of Loving Grace” extolled the importance of democracies winning the AI race.

In the memo, Amodei wrote, “Unfortunately, I think ‘No bad person should ever benefit from our success’ is a pretty difficult principle to run a business on.”

The timing is striking: the note to staff went out only days after the class action certification suddenly presented Anthropic with potentially existential legal risk.


The question of whether generative AI training can lawfully proceed without permission from rights-holders has become a defining test for the entire industry.

OpenAI and Meta may still wriggle out of similar exposure, depending on how their judges rule and whether they can argue that the core act of AI training is protected by fair use. But for now, it’s Anthropic — not OpenAI or Meta — that’s been forced onto the front lines, while the rest of the industry holds its breath.

Edited by Sid Mahanta and Ian MacDougall, with inspiration and review from my friend Vivian.

If you enjoyed this post, please subscribe to Obsolete. 

Mentioned in: Anthropic's leading researchers acted as moderate accelerationists