Last week, Anthropic, the AI company behind the Claude chatbot, settled a groundbreaking class action lawsuit for $1.5 billion. In the context of copyright cases, the amount is large, but it represents just a small fraction of Anthropic's estimated $183 billion valuation.
Authors and publishers, led by writers like Andrea Bartz and Charles Graeber, accused Anthropic of violating copyright law by illegally downloading millions of books from shadow libraries like Library Genesis to train Claude. The settlement compensates roughly 500,000 authors and publishers at about $3,000 per affected work. Anthropic did not admit liability; instead, it agreed to destroy the pirated files and pay the authors, avoiding a trial. The Authors Guild hailed the settlement as a precedent for content licensing in AI development.
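As a back-of-the-envelope check, the headline figures are consistent with each other. A minimal sketch (the per-work and valuation numbers are the approximations cited above, not exact court figures):

```python
# Back-of-the-envelope check of the settlement figures cited above.
# Assumptions: ~500,000 affected works at ~$3,000 each,
# and a ~$183 billion company valuation.
works = 500_000
payout_per_work = 3_000
settlement = works * payout_per_work

valuation = 183e9
share_of_valuation = settlement / valuation

print(f"Total payout: ${settlement / 1e9:.1f} billion")   # $1.5 billion
print(f"Share of valuation: {share_of_valuation:.1%}")    # under 1%
```

The second line is the point of the article's framing: record-setting by copyright standards, yet less than one percent of what the company is reportedly worth.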
This case raises questions about intellectual property rights in the era of large language models (LLMs). The court had held that recombining existing text into new output could qualify as fair use, but Anthropic's liability rested on the piracy of the books themselves, not on the training process. What should the law say about a model indirectly fueled by stolen work? The answer could shape not only fairness but the future quality of AI.
The term "AI slop" increasingly describes low-quality, machine-generated text produced with minimal human oversight. If human writing stops being a viable career because of inadequate compensation, will LLMs lose access to fresh, high-quality training data? Could that create a feedback loop in which AI models are trained on degraded output and stagnate? This dilemma reflects the classic "access versus incentive" debate in intellectual property law. Access to a rich corpus of human-created text today lets entrepreneurs build powerful, affordable LLMs. But without an incentive for human authors to keep producing, the well of quality training data could run dry.
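The feedback-loop worry can be sketched as a toy model. All parameters below are illustrative assumptions, not empirical estimates: each training generation mixes fresh human text with AI output whose quality is slightly below the previous corpus average.

```python
# Toy model of the "AI slop" feedback loop described above.
# All parameters are hypothetical, chosen only to illustrate the dynamic.
def corpus_quality(generations: int,
                   human_quality: float = 1.0,
                   ai_share: float = 0.5,
                   degradation: float = 0.9) -> float:
    """Average corpus quality after repeatedly mixing in AI text.

    Each generation, a fraction `ai_share` of new text is AI output
    whose quality is `degradation` times the previous corpus average;
    the rest is fresh human writing of constant quality.
    """
    quality = human_quality
    for _ in range(generations):
        quality = (1 - ai_share) * human_quality + ai_share * degradation * quality
    return quality

for g in (0, 1, 5, 20):
    print(g, round(corpus_quality(g), 3))
```

The sketch makes the "access versus incentive" point concrete: as long as fresh human text keeps flowing in, quality settles at a floor above zero; but if authors exit and `ai_share` approaches 1, quality decays geometrically toward zero with each generation.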
This case also blurs the traditional divide between copyright and patent. Copyrighted material was once a static end product, but it now drives "follow-on" innovation from the original work, much as new technologies were once built on patented inventions. In other words, the "access versus incentive" tradeoff that theorists applied to patents now applies to copyright. The Anthropic settlement shows that intellectual property law, which lags the rapid evolution of AI, must adapt. Authors may need compensation, but halting AI progress while legal disputes are resolved would curtail innovation.
At $1.5 billion, the size of the settlement sends a clear message: bypassing legal channels can be costly. That will shape how AI companies decide to enter the market, especially with similar lawsuits pending against other firms. Developers may turn to licensing deals or public-domain data, raising costs and concentrating the AI industry among deep-pocketed players like Anthropic, backed by billions in funding. Small startups may struggle because they can afford neither licensing nor litigation; this is a case of regulatory barriers favoring incumbents. Could Anthropic's willingness to pay such a large sum, by discouraging startups, reflect a strategic move to deepen the moat around well-capitalized AI companies?
In a 2024 post, I speculated that AI companies, flush with cash, might strategically hire authors to replenish the commons of high-quality text. In that post I wrote:
"AI companies have money. Could OpenAI usher in a world where they have well-paid staff authors? Replenishing the commons is inexpensive, if done strategically, relative to the money being collected by AI companies."
The Anthropic settlement partially vindicates this idea. In an AI arms race where Mark Zuckerberg lures engineers away from OpenAI with pay packages in the millions, $1.5 billion seems like a modest price for staying among the established players.
For now, the Anthropic case represents a pivotal moment. It highlights the need for a balanced approach and sets the stage for how AI and intellectual property law will coexist in an era of unprecedented technological change.
However, at some point, LLMs could reach a takeoff point. That is a horizon I cannot see beyond.
Joy Buchanan is an associate professor of economics at Samford University. She blogs at Economist Writing Every Day.