The Courts Just Made Our Libraries Sitting Ducks For AI Plundering
Aron Solomon on the Uses and Abuses of “Fair Use”
The law may call it fair use. But in the age of AI, it’s starting to look like a free lunch.
A California federal judge ruled this week that Meta Platforms Inc. did not violate copyright law when it trained its LLaMA large language models on the published works of authors including Sarah Silverman and Michael Chabon. The authors’ argument that this amounted to wholesale theft of their intellectual property was dismissed as “half-hearted.” The judge concluded that Meta’s use was protected under the doctrine of fair use.
That might be legally correct. But if you squint at it long enough, it starts to look like the law just handed Silicon Valley a blank permission slip to raid the library—no library card required.
Let’s call this what it is: a case about borrowed books and a legal system struggling to reckon with machines that never ask before they take.
The doctrine of fair use was crafted in an era of ink and printing presses. It was designed to allow limited, socially beneficial reuses of copyrighted material: parody, commentary, teaching. It protected the ability to quote, to remix, to criticize—all human acts with a clearly observable “transformative” purpose. It was never built to handle models that devour gigabytes of creative work, crunch it into statistical patterns, and use that substrate to generate eerily familiar prose.
Yet in the Meta case, the court accepted the idea that feeding thousands of copyrighted books into a model qualifies as a “transformative” use. Never mind that the transformation happens deep inside a neural network, invisible to the human eye, and that the output can imitate the tone, syntax, and even structure of the input. Because the end product doesn’t reproduce full chapters verbatim—or so the defense claimed—it counts as new enough.
This interpretation stretches the concept of transformation beyond recognition. When the Supreme Court first embraced it in Campbell v. Acuff-Rose in 1994, it did so to defend 2 Live Crew’s raunchy parody of Roy Orbison’s “Oh, Pretty Woman.” That was transformation with intent—commentary through mimicry. Meta’s training of LLaMA on copyrighted texts isn’t commentary. It’s consumption. It’s replication at scale for commercial gain.
There’s a difference between writing a parody and feeding someone’s work into a billion-parameter model so it can sound convincingly like a novelist without hiring one.
The deeper problem here isn’t necessarily the judge’s reasoning—it’s that the law offers no better framework. Our copyright statutes were written for a world of photocopiers and cassette tapes, not cloud infrastructure and self-learning algorithms. They never contemplated a scenario in which the entire back catalog of 20th-century literature could be vacuumed up and repurposed without a single author knowing it happened.
AI developers argue that this process is necessary. That it enables innovation. That no one’s being harmed because the model isn’t regurgitating full books. But innovation doesn’t excuse everything. And just because a machine creates something “new” doesn’t mean it did so fairly. If anything, the Meta ruling confirms a worrying truth: that the law’s current conception of fair use isn’t built for AI—it’s being bent around it.
The real danger here isn’t that LLaMA can write a Silverman-style joke. It’s that this logic can be extended infinitely. If it’s “fair” to train on copyrighted books, why not copyrighted music? Visual art? Medical records? Personal emails? Once the principle is accepted—that ingestion at scale for model training is protected—there’s very little to stop the tech giants from scraping everything that isn’t nailed down.
That’s why this moment matters. Not just for authors, but for the entire architecture of intellectual property. A ruling like this doesn’t just resolve a narrow dispute. It helps construct the norms of a new era—an era in which human creativity might become just another feedstock for machines.
To be clear: this isn’t a call to halt progress. AI has the potential to be astonishingly useful. But progress doesn’t require plunder. It’s possible to build advanced models without exploiting the unpaid labor of writers and artists. What’s missing isn’t technology—it’s rules.
Congress has so far failed to act. While the European Union is moving forward with AI regulations that require transparency and opt-outs for rights holders, US lawmakers have largely left the courts to sort it out. But courts can only work with the statutes they have. And in this case, the statute doesn’t even recognize the question being asked.
So the authors who sued Meta may not have won. But their case reveals a much bigger issue: that the legal scaffolding protecting creative work is buckling under the weight of artificial intelligence. The system still sees books as static things—something you read, not something a machine devours to become “smarter.” And in that blind spot, a generation of AI systems is being trained for free, on the backs of people who never agreed to teach them.
What we need isn’t just a new ruling. We need a new understanding. One that asks not just what’s “legal,” but what’s just. One that recognizes that feeding bots on borrowed books without consent or compensation is not innovation—it’s appropriation with better branding.
Until then, the machines will keep reading. And the rest of us may find we’re running out of stories that haven’t already been consumed.