At the crux of the authors’ lawsuit is the argument that OpenAI is ruthlessly mining their material to create “derivative works” that will “replace the very writings it copied.”
The authors shoot down OpenAI’s excuse that “substantial similarity is a mandatory feature of all copyright-infringement claims,” calling it “flat wrong.”
Goodbye Star Wars, Avatar, Tarantino’s entire filmography, every slasher film since 1974…
Uh, yeah, a massive corporation sucking up all intellectual property to milk it is not the own you think it is.
But this is literally people trying to strengthen copyright and its scope. The corporation is, out of pure convenience, using copyright as it currently exists, with the same freedoms artists already have.
Listen, it’s pretty simple. Copyright was made to protect creators on initial introduction to market. In modern times it’s good if an artist has one lifetime (i.e. their own lifetime) of royalties, so that they can at least make a little something, because for the small artist that little something means food on their plate.
But a company, sitting on a Smaug’s hill worth of intellectual property, “forever less a day”? Now that’s bonkers.
But you, scraping my artwork to resell for pennies on the dollar via some stock material portal? Can I maybe crawl up your colon with sharp objects and kindling to set up a fire? Pretty please? Oh pretty please!
Also, if your AI copies my writing style, I will personally find you, rip open your skull AND EAT YOUR BRAINS WITH A SPOON!!! Got it, devboy?
You won’t be Mr Hotshot with pointy objects and a fire up your ass, as well as less than half a brain… even though I just took a couple of bites.
Chew on that one.
EDIT: the creative writer is doomed, I tells ya! DOOOOOOMED!
AI training isn’t only for mega-corporations. We can already train our own open source models, so we shouldn’t applaud someone trying to erode our rights and let people put up barriers that will keep out all but the ultra-wealthy. We need to be careful not to weaken fair use and hand corporations a monopoly on a public technology by making it prohibitively expensive for regular people to keep developing our own models. Mega-corporations already have their own datasets, and the money to buy more. They can also make users sign predatory ToS granting them exclusive access to user data, effectively selling our own data back to us. Regular people, who could have had access to a corporate-independent tool for creativity, education, entertainment, and social mobility, would instead be left worse off, with fewer rights than where they started.
This actually reminds me of a sci-fi series I read where, in the future, they use an AI to scan any new work to see which intellectual property owned by the big corporations may have been used as an influence, in order to halt the production of any new media not tied to a pre-existing IP, including 100% of independent and fan-made works.
Which is one of the contributing factors to the apocalypse. So 500 years later, after the apocalypse has been reversed and human colonies are enjoying post-scarcity, one of the biggest fads is rediscovering the 20th century, now that all the copyrights have expired and people can datamine the ruins of Earth to find all the media that couldn’t be properly preserved heading into Armageddon, thanks to copyright trolling.
It’s referred to in-universe as “Twencen.”
The series is called FreeRIDErs, if anyone is curious. Unfortunately the series may never have a conclusion (untimely death of a co-creator), but most of its story arcs were finished, so there’s still a good chunk of meat to chew through, and I highly recommend it.
OpenAI is trying to argue that the whole work has to be similar to infringe, but that’s never been true. You can write a novel and infringe on page 302, and that’s a copyright infringement. OpenAI is trying to change the meaning of copyright because otherwise the output of their model is oozing with various infringements.
I can quote work that’s already been published; that’s allowable, and I don’t have to get the author’s consent to do it. I don’t have to get consent because I’m not passing the work off as my own, I am quoting it with reference.
So if I ask the AI to produce something in the style of Stephen King no copyright is violated because it’s all original work.
If I ask the AI to quote Stephen King (and it actually does it) then it’s a quote and it’s not claiming the work is its own.
Under the current interpretation of copyright law (and current law is broken beyond belief, but that’s a completely different issue) a copyright breach has not occurred in either scenario.
The only argument I can see working is that if the AI can actually quote Stephen King, that proves it has the works of Stephen King in its data set. But that doesn’t really prove anything other than that the works of Stephen King are in its data set. It doesn’t definitively prove OpenAI didn’t pay for the works.
You can quote a work under fair use, and whether it’s legal depends on your intent. You have to be quoting it for such uses as “commentary, criticism, news reporting, and scholarly reports.”
There is no cheat code here. There is no loophole that LLMs can slide on through. The output of LLMs is illegal. The training of LLMs without consent is probably illegal.
The industry knows that its activity is illegal, and its strategy is not to win but rather to make litigation expensive, complex, and slow through such tactics as:
- Diffusion of responsibility: note that the companies compiling the list of training works, gathering those works, training on those works, and prompting the generation of output are all intentionally different entities. The strategy is that each entity can claim “I was only doing X; the actual infringement is when that guy over there did Y.”
- Diffusion of infringement: so many works are being infringed that it becomes difficult, especially on the output side, to say who has been infringed and who has standing. What’s more, even in clear-cut cases, for instance when I give an LLM a prompt and it regurgitates some nontrivial recognizable copyrighted work, the LLM trainer will say you caused the infringement with your prompt! (see point 1)
- Pretending to be academic in nature so they could wrap themselves in the thick blanket of affirmative defense that fair use doctrine affords the academy, and then after the training portion of the infringement has occurred (insisting that was fair use because it was being used in an academic context) “whoopseeing” it into a commercial product.
- Just being super cagey about the details of the training sets that were actually used and how they were used. This kind of stuff is discoverable but you have to get to discovery first.
- And finally, magic brain box arguments. These are typically some variation of “all artists have influences.” It’s a rhetorical argument that would be blown right past in court, but it muddies the public discussion and is useful to them in that way.
Their purpose is not to win. It’s to slow everything down, and limit the number of people who are being infringed who have the resources to pursue them. The goal is that if they can get LLMs to “take over” quickly then they can become, you know, too big and too powerful to be shut down even after the inevitable adverse rulings. It’s classic “ask for forgiveness, not permission” silicon valley strategy.
Sam Altman’s goal in creeping around Washington is to try to get laws changed to carve out exceptions for exactly the types of stuff he is already doing. And it is just the same thing SBF was doing when he was creeping around Washington trying to get a law that would declare his securitized Ponzi tokens to be commodities.
It doesn’t definitively prove openAI didn’t pay for the works.
But since they are a business/org that has all of those works and is using them for profit, it kind of would be provable whether OpenAI did or didn’t pay the correct licenses, as they and/or the publisher/Stephen King (if he were to handle those agreements directly) would have a receipt/license document of some kind to show it. I don’t agree with how copyrights are done and agree that things should enter the public domain much sooner. But a for-profit thing like OpenAI shouldn’t just be allowed all these exceptions that avoid needing any level of permission, or avoid paying for the works that require it. At least not while us regular people, who aren’t using these sources for profit/business, also aren’t allowed to just use whatever we want.
The only way that (I at least) see such open use of everything, at the level of all that data/information, being fine is in a socialist/communist system of some kind. The main concern with keeping stuff like entertainment/information/art/etc. at a creator level is having money to live in a modern society where basic and crucial needs (food/housing/healthcare/etc.) cost money. So for the average author/writer/artist/inventor, a for-profit company just being able to take their shit much more directly impacts their ability to live.
It is a highly predatory level of capitalism and should not have exceptions. It is just setting up a different version of the shit that also needs to be stopped in the entertainment/technology industries, where the actual creators/performers/etc. are fucked over by the studios/labels/corps: not paid anywhere near the value they bring in, and often without control over their own work. All of these companies and the capitalist system are why a private entity/business/org shouldn’t just be allowed to pull this shit.
Speaking of slasher films, does anybody know of any movies that have terrible everything except a really good plot?