I’m conflicted on a lot of this. At the end of the day, these LLMs seem to be simulating human behavior to an extent: exposure to content, then generating similar content from it. Could Sarah Silverman be sued by the comedians who influenced her comedy style and routines? Generally, no. I do understand the risk of letting these ‘AI’ run rampant and displacing a huge portion of the creative space, which is bad, but where should the line be drawn? Is the objection only that they were trained on material they don’t own? What recourse will people have when an LLM is trained on wholly owned IP?
She’s suing for copyright infringement, basically, not for the LLM emulating her style.
The LLMs read books by her and many, many others that they didn’t buy, because unauthorized copies had been uploaded to the web (this happens to every popular book).
Honestly, I don’t know if she has a case. Going after the people who illegally uploaded her book would be the proper route, but that’s always nearly impossible.
Long and short of it: the LLMs benefited from illegal copies.
I see a lot of people claim the training data included copyrighted works, particularly books, because the model can provide a summary of them. But it can provide a summary of visual media too, and no one is claiming it’s sitting there watching films.
If the argument is that it has quite detailed knowledge of the book, that’s not convincing either. All it needs is a summary; it can fill in the blanks and get close enough that we can’t tell the difference. Nothing is original.
If you upload an illegal copy of a book and I download it, not realizing or caring that it’s pirated, and then I re-upload it elsewhere, you and I have both committed copyright infringement. This feels like the same thing.
I suspect the case will depend largely on whether the way the models were trained on her works qualifies as fair use.
Your example is faulty. If you upload an illegal copy of a book and I read it and then tell people all about it, I am not committing copyright infringement.