Meta has won against Kadrey et al — the authors’ case against Meta training their Llama LLM on the authors’ books, including on pirate copies. [Order, PDF; case docket] Both sides brought motions f…
@dgerard I was pretty bummed out about this, but the judge in the Meta case seems confident that future lawsuits will find that training on copyrighted works *isn’t* fair use, it’s just that these particular plaintiffs made bad arguments.
https://arstechnica.com/tech-policy/2025/06/book-authors-made-the-wrong-arguments-in-meta-ai-training-case-judge-says/
At the very least, there is currently some legal uncertainty for all the AI companies that behave in this way, which is a good thing, I guess.
As somewhat of an author I fucking can’t understand how.
To win, they’d need to demonstrate specific harms (from a specific infringer, to a specific book); “Amazon is full of slop” won’t do.
It’s like someone makes a movie without licensing from the book author, and then the judge says that authors must argue that movies harm book sales.
edit: except much, much worse, because good luck pointing at specific instances of slop and connecting them to a specific AI and its training on a specific work. At least with a movie you can point at the movie and at the book it’s made from.
edit: frankly the whole thing just sounds like both judges had to sound neutral, and because Meta’s conduct was more egregious, that judge had to write weirder stuff to pull it off.
Anthropic’s judge can simply slap them with (likely insignificant) fines in light of their display of “good faith” in buying a bunch of books legally. Meta’s judge had to invent a whole new theory of unfair use that plaintiffs’ lawyers can’t possibly support with evidence.
@diz I’m not a legal expert, but I think it’s more straightforward than your analogy. It’s literally books used to make books to be sold in the same book market. The derivative work is clearly supplanting the original, which fair use law is supposed to prevent. See point 4 in this link: I think what the judge is arguing is that the courts will find that AI’s disruption of the markets its training data comes from will render it not fair use of the training material.
https://www.law.cornell.edu/uscode/text/17/107
But he’s saying that plaintiffs need to demonstrate said disruption, to even get to the jury.
He said:
In cases involving uses like Meta’s, it seems like the plaintiffs will often win, at least where those cases have better-developed records on the market effects of the defendant’s use
And what are those records supposed to look like? Harm has to be specific; it always has been. How do you ever demonstrate that a specific AI harmed the market for a specific book?
I honestly think both judges had to try to appear neutral, and Meta’s had to work harder at it because Meta’s conduct was worse, hence the more bizarre argument. Misanthropic can just be slapped with a small fine.
@diz That’s fair, and I don’t know, I’ve pretty much reached the limit of my knowledge here. I guess I just can’t bring myself to cede the victory to the AI companies and say “training IS fair use” if the judge in one of the cases in question thinks there’s a good chance it generally isn’t. Maybe in the future it will be settled, but I don’t think we’re there yet.
Thought about it some more; the most charitable reading I can come up with is that Meta’s judge thinks someone else could win the case if they have a specific book that was torrented, and then they point at the general situation with AI slop in bookstores and argue that AI harms book sales.
I can not imagine that working. At all. So the AI is producing slop of infinitesimally higher quality because it was trained on a pirated copy of your book in particular. Clearly the extra harm to specifically your business due to piracy of specifically your books would be rather small, as this very judge would immediately point out. In fact the AI slop is so shit that people only buy it by mistake, so its quality doesn’t really matter.
Maybe news companies could sometimes win lawsuits like this, but book authors, no way.
I think it is just pure copium to see this ruling in any kind of positive light. Alsup (misanthropic’s judge) at least was willing to ding an AI company for pirating books (although he was probably only willing to ding them for that because it wouldn’t be fatal to them the way it would be to Meta). This guy wouldn’t even do that bare minimum.
And the whole approach is insane. You can’t make a movie without getting a movie-rights contract with the author. A movie adaptation of a book is far more transformative than anything AI does. Especially the “training”, which is just fucking gradient descent: you nudge a bunch of numbers towards replicating the works, over and over again, in a purely mechanical process.
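For the non-ML folks, here’s roughly what that “nudging” looks like, as a minimal toy sketch (pure Python, character-level bigram model; every name and number here is illustrative, not anyone’s actual training pipeline):

```python
# Toy sketch of "training": nudge weights by gradient descent until the model
# gets good at predicting (i.e. reproducing) the text it was trained on.
# Purely illustrative; real systems do the same move at vastly larger scale.
import math, random

text = "the cat sat on the mat. "          # stand-in for a copyrighted work
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

# The model's "numbers": one weight per (previous char, next char) pair.
W = [[random.uniform(-0.1, 0.1) for _ in range(V)] for _ in range(V)]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

lr = 0.5
for step in range(500):
    grad = [[0.0] * V for _ in range(V)]
    for a, b in zip(text, text[1:]):       # every adjacent character pair
        i, j = idx[a], idx[b]
        p = softmax(W[i])
        for k in range(V):                 # gradient of the cross-entropy loss
            grad[i][k] += p[k] - (1.0 if k == j else 0.0)
    n = len(text) - 1
    for i in range(V):                     # the "nudge", over and over again
        for k in range(V):
            W[i][k] -= lr * grad[i][k] / n

# After enough nudges, the most likely character after 'c' is 'a', after 'a'
# is 't', and so on: the weights have moved towards replicating the text.
```

An LLM swaps the lookup table for a transformer with billions of weights, but the loop is the same: predict the next token, measure the error against the original text, nudge the weights, repeat.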
Nobody ever had to successfully argue that movie sales harm book sales just to treat movie adaptations as derivative works.
@diz I admit, it would be challenging. I don’t think it’s cope, though, because I think there are practical reasons not to cede victory. I think once “AI is fair use” becomes a meme, many will assume it to mean “AI is ethical”, and belief that there are no open legal questions will increase adoption.
Like, the literal fact of the matter is that the courts haven’t decided this categorically. Why get ahead of ourselves and pretend they have, just because it seems inevitable? What’s the benefit?
It’s not about ceding victory; it’s about whether we accept shit-talking the plaintiffs’ lawyers as an adequate substitute for even a slap on the wrist. Clearly the judge wants to appear impartial.
The plaintiffs made a perfectly good argument that Meta downloaded the books illegally, and that this downloading wasn’t necessary to enable a (fair or not) use. A human critic does not get a blanket license to pirate any work he might want to criticize, even though critique is fair use.
@diz If I pirated a book and wrote a review of it, would that make the review copyright infringement? How is that relevant to the case? The plaintiffs essentially argue that the market for selling licences to review their books (i.e., to train LLMs on them) was disrupted, when they should have been arguing that the market for the books themselves was.
I think the judgement on whether Meta distributed works while torrenting, as Anthropic did, hasn’t happened yet; see the last paragraph of the ruling.
https://fingfx.thomsonreuters.com/gfx/legaldocs/zgvozmrynpd/META%20AI%20COPYRIGHT%20LAWSUIT%20ruling.pdf