Sep 2025

The Anthropic Settlement Rejection: What Content Creators and Media Companies Need to Know


Recently, a federal judge rejected a $1.5 billion copyright settlement between Anthropic and a class of authors and publishers. The dollar figure made headlines, but the real significance lies elsewhere. After two decades of counseling media companies, broadcasters, and content creators through digital disruption, I see this case as revealing something more important than settlement amounts: it exposes precisely where the legal boundaries sit when AI companies train their models on copyrighted material.

How Anthropic Ended Up in Federal Court

Anthropic, the Claude AI assistant developer, faced litigation over its training data sources. The case differed from typical AI copyright disputes because Anthropic admitted to downloading hundreds of thousands of books from piracy networks, specifically Library Genesis and Pirate Library Mirror. This was not a gray area about fair use. This was using stolen content.

The court's ruling drew a bright line:

Training on lawfully purchased books: The court found this constituted fair use. The process was transformative, and AI models do not serve as market substitutes for the original works.

Training on pirated content: This was straightforward copyright infringement. No fair use defense applied.

The proposed settlement offered $3,000 per pirated work across an estimated 500,000 titles. The court rejected it. The court’s concerns centered on fairness to class members, the risk of setting premature legal precedent, and whether $3,000 adequately compensated rightsholders when statutory damages for willful infringement can reach $150,000 per work.

Why This Matters to Your Business

I have watched clients navigate every major shift in content distribution over the past twenty years, from the Napster litigation to the streaming wars. The AI revolution represents something different: it is moving faster than any technological shift before it. Unlike earlier disruptions that unfolded over years, today's AI companies are ingesting massive amounts of human-created content in real time, in a legal environment that is still struggling to catch up.

Source Matters More Than Use

Anthropic lost not because it used copyrighted material, but because it used pirated copyrighted material. This distinction matters enormously for anyone creating or licensing content today. Many AI companies work with third-party datasets or access "research libraries" without carefully auditing where that content originated. When I review vendor agreements for clients, I routinely find that technology companies cannot or will not provide clear documentation of their data sourcing.

If you license content to AI companies or work with AI vendors, you need explicit contractual representations that all training data was legally acquired. Standard indemnification clauses may not adequately protect you if your content ends up in a model trained partly on pirated materials.

Your Content Is Already in Play

Music, film scripts, podcasts, articles, video content, and other creative works are potential AI training data. The question is not whether your content might be used, but how it is being used and whether you will have any say in the matter.

In my practice, I advise clients to regularly audit where their content appears online. This includes monitoring for unauthorized copies on piracy sites. Early detection matters. Once your work circulates widely on piracy networks, it becomes exponentially more likely to end up in AI training datasets.

The Math of Statutory Damages

Consider the numbers in the Anthropic case. With 500,000 allegedly infringed works, exposure at the statutory maximum of $150,000 per work totals $75 billion. Even at the minimum statutory rate of $750 per work, the figure reaches $375 million. These numbers exist because copyright law allows statutory damages without requiring proof of actual economic harm.
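The arithmetic above can be checked in a few lines. This sketch uses the figures cited in this article (the alleged work count and the statutory range), not findings from the court:

```python
# Exposure math for the figures cited in the Anthropic case discussion.
works = 500_000               # estimated allegedly infringed titles

settlement_per_work = 3_000   # proposed settlement rate per work
statutory_min = 750           # statutory minimum per work
statutory_max = 150_000       # statutory maximum per work (willful infringement)

proposed = works * settlement_per_work   # $1,500,000,000 (the headline $1.5B)
floor = works * statutory_min            # $375,000,000
ceiling = works * statutory_max          # $75,000,000,000

print(f"Proposed settlement: ${proposed:,}")
print(f"Statutory floor:     ${floor:,}")
print(f"Statutory ceiling:   ${ceiling:,}")
```

Note that the rejected $3,000-per-work figure sits only four times above the statutory floor, while the ceiling is fifty times the proposed rate, which illustrates why the court questioned whether the settlement adequately compensated rightsholders.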

This creates significant leverage, but only if you have registered your copyrights. Registration is a prerequisite for pursuing statutory damages and attorney fees in federal court. For a few hundred dollars per registration (or less through bulk programs), you can transform your legal position from seeking actual damages, which can be difficult to prove and modest in amount, to having access to statutory damages ranging from $750 to $150,000 per work.

In my experience, unregistered copyrights dramatically weaken your negotiating position. When we approach an infringer on behalf of a client with unregistered works, we can seek only actual damages and profits, which are difficult and expensive to prove. When the works are registered, we can credibly threaten six-figure statutory damages per work. The difference in settlement outcomes is substantial.

The Emerging Licensing Market

Legal uncertainty around AI training has created a new licensing market. Some content owners are waiting for courts to resolve fair use questions through litigation, while others are negotiating proactive licensing deals that compensate them for AI training uses.

I have seen this pattern before. Some rightsholders spent years in litigation when digital music distribution emerged (see Metallica), while others negotiated early licensing deals. The latter group, despite some criticism for “selling out,” often secured more favorable terms than the litigation settlements that came later.

If you control a substantial content library (extensive media archives, music catalogs, or specialized content collections), you should evaluate whether proactive licensing makes strategic sense. Structure these deals carefully. Include audit rights so you can verify usage. Include restrictions on sublicensing and derivative uses. Consider tiered compensation based on model size or commercial success.

What the Settlement Rejection Tells Us

The court's decision to reject the $3,000-per-work settlement reveals judicial reluctance to cement compensation standards while AI copyright law remains in flux. This leaves everyone in a state of uncertainty. But uncertainty cuts both ways.

For AI companies, uncertainty means continued litigation risk and difficulty obtaining clear rights to training data. For content creators, uncertainty means leverage. You can demand transparency, insist on proper licensing terms, and negotiate compensation that reflects your content’s genuine value in an AI training context.

The Anthropic ruling suggests that lawful acquisition matters significantly in the fair use analysis. AI companies may have stronger fair use defenses when training on legitimately obtained content (purchased books, licensed materials, or properly acquired public content). This does not mean they can use anything freely, but the fair use calculus tilts more favorably when the initial acquisition was legal.

This creates an opening for content creators. Even if courts ultimately find some AI training uses fair, contractual restrictions can provide additional protection. When negotiating distribution agreements, consider including language that explicitly restricts AI training uses or requires additional compensation for such uses.

Practical Steps You Should Take Now

Based on what I have learned from advising clients through similar disputes, here are the actions that will best protect your interests:

Review your existing agreements. Pull your distribution contracts, licensing agreements, and vendor contracts. Look specifically for language addressing AI uses. Most agreements drafted before 2023 contain no such provisions. Older agreements may grant overly broad rights that could be interpreted to permit AI training. If your agreements include sweeping grants of rights “for all purposes” or “in all media now known or hereafter developed,” you may have inadvertently authorized AI training uses.

Implement systematic monitoring. Use digital monitoring services to track where your content appears online. Several vendors offer automated monitoring to detect unauthorized copies across websites, piracy networks, and file-sharing platforms. This is not about being litigious. This is about knowing what is happening with your content so you can make informed decisions about enforcement.

Register strategically. Prioritize copyright registration for your most valuable works. If you have an extensive catalog, investigate bulk registration programs. For published works, you can often register multiple works in a single application. The cost per work drops significantly with bulk registration, making it practical even for large libraries.

Update your contract templates. Every new agreement you enter should address AI training explicitly. I recommend including specific prohibition clauses unless you negotiate separate compensation for AI uses. Consider requiring approval rights before your content can be used for machine learning. At a minimum, include a representation from the other party that they will not use your content for AI training without express written permission.

Develop internal AI policies. If you use AI tools in your production workflow (and many of my clients do), review the terms of service carefully. Some AI platforms claim broad rights to any content you input. You could inadvertently give up rights to your creative work simply by using certain AI services and agreeing to their terms of use. This is particularly important for journalists, screenwriters, and others who might use AI tools during the creative process.

Evaluate proactive licensing. Rather than waiting for infringement disputes, assess whether licensing your content for AI training makes business sense. This requires analyzing the value of your specific content as training data, understanding the current market rates, and structuring deals that protect your long-term interests. Not every content library has significant AI training value, but some do.

The Broader Context

I began practicing entertainment law before streaming services existed, when the biggest digital rights question was whether a download constituted a sale or a license. Each technological shift brings new challenges and opportunities for those who act strategically rather than reactively.

The AI revolution represents the most significant shift in content rights since digital distribution emerged. The difference this time is that we have robust copyright law and decades of precedent to guide us. We are not writing on a blank slate.

The Anthropic settlement rejection leaves the industry without clear pricing standards. Still, it establishes something more valuable: a clear legal distinction between training on lawfully obtained content and training on pirated content. That distinction gives content creators a foundation for protecting their rights.

Do not wait for litigation to force the issue. Copyright infringement cases take years to resolve and cost hundreds of thousands of dollars in legal fees. By the time a court issues a final ruling, the AI landscape may have shifted dramatically. The clients who fare best in technological disruptions act early, negotiate proactively, and ensure their content is properly protected through registration and contract.

This is a pivotal moment. Your decisions now about protecting, monitoring, and licensing your content will determine whether you benefit from the AI revolution or are swept aside by it.

About the Author

Carrie Ward has twenty years of experience in entertainment, communications, and media law, representing content creators, media companies, broadcasters, and entertainment professionals. Learn more about Carrie here.