
Judge Dismisses Sarah Silverman’s AI Copyright Case Against Meta

Meta Prevails in Landmark AI Copyright Case, But Legal War Over Training Data Far From Over

In a closely watched ruling with far-reaching implications for the artificial intelligence industry, a federal judge has dismissed a copyright infringement lawsuit against Meta brought by authors including Sarah Silverman and Ta-Nehisi Coates. The decision marks a significant, though sharply limited, victory for tech companies seeking to train generative AI models on copyrighted materials without permission or compensation.

U.S. District Judge Vince Chhabria granted summary judgment to Meta on Wednesday, finding the plaintiffs “made the wrong arguments and failed to develop a record in support of the right one.” The authors had accused Meta of illegally using pirated versions of their books to train its large language model LLaMA, claiming the AI system could reproduce snippets of their protected works. But Chhabria determined they presented insufficient evidence that Meta’s actions would financially harm them or dilute the market for their books, a crucial element in copyright cases.

A Narrow Victory with Broad Caveats

Despite ruling for Meta, Chhabria delivered a sobering warning to the AI industry. “This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful,” he emphasized. He further suggested that in many circumstances, using copyrighted works without permission to train AI models would be illegal, stating companies “will generally need to pay copyright holders for the right to use their materials.”

The judge expressed particular concern about generative AI’s potential to “flood the market with endless images, songs, articles and books using a tiny fraction of the time and creativity” required for human creation. He bluntly dismissed Meta’s public interest defense as “nonsense,” noting that AI products are expected to generate billions, even trillions, of dollars for their developers.

Diverging Legal Pathways Emerge

The Meta ruling came just one day after another significant AI copyright decision in the same courthouse. U.S. District Judge William Alsup ruled that Anthropic’s use of copyrighted books to train its Claude AI qualified as “fair use” because it was “quintessentially transformative.” However, Anthropic must still face trial for allegedly pirating over 7 million books through shadow libraries.

Legal experts note that the judges diverged significantly in their interpretation of fair use. “Judge Chhabria disagreed sharply but respectfully with Judge Alsup on the market dilution theory,” observed James Grimmelmann, Professor of Digital and Internet Law at Cornell University. While Alsup focused primarily on the transformative nature of AI training, Chhabria stressed that “under the fair use doctrine, harm to the market for the copyrighted work is more important than the purpose for which the copies are made.”

The Battle Lines Solidify

The authors’ legal team expressed disappointment with the ruling despite acknowledging the judge’s critical statements about AI companies. “The court ruled that AI companies that ‘feed copyright-protected works into their models without getting permission…’ are generally violating the law,” noted attorneys from Boies Schiller Flexner. “Yet, despite the undisputed record of Meta’s historically unprecedented pirating of copyrighted works, the court ruled in Meta’s favor.”

Meta welcomed the decision, calling fair use a “vital legal framework for building this transformative technology.” The company maintained its position that LLaMA cannot output substantial portions of the authors’ books, stating: “No one can use Llama to read Sarah Silverman’s description of her childhood.”

What Comes Next?

Legal scholars suggest the ruling provides a roadmap for future plaintiffs. “We haven’t seen the last of this novel market dilution theory,” predicted Cardozo Law professor Jacob Noti-Victor. “That might change the game in other cases.” Chhabria specifically suggested authors could succeed by demonstrating that AI outputs closely resemble their works, particularly for nonfiction and newer fiction, and that rapid AI generation at scale harms their markets.

With dozens of similar cases pending against OpenAI, Microsoft, and other AI developers, and with the Supreme Court likely to weigh in eventually, this week’s rulings represent merely the opening salvos in a protracted legal war. As Randolph May, president of the Free State Foundation, cautioned: “I think it’s premature for Anthropic and others like it to be taking victory laps.”

The core tension remains unresolved: how to balance innovation against creators’ rights when AI systems fundamentally depend on consuming human creativity. In Chhabria’s pointed words: “No matter how transformative LLM training may be, it’s hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works.” For artists and writers watching closely, that statement may ultimately prove more significant than the dismissal itself.
