Did anyone expect them to go “oh, okay, that makes sense after all”?
At the crux of the authors’ lawsuit is the argument that OpenAI is ruthlessly mining their material to create “derivative works” that will “replace the very writings it copied.”
The authors shoot down OpenAI’s excuse that “substantial similarity is a mandatory feature of all copyright-infringement claims,” calling it “flat wrong.”
Goodbye Star Wars, Avatar, Tarantino’s entire filmography, every slasher film since 1974…
Uh, yeah, a massive corporation sucking up all intellectual property to milk it is not the own you think it is.
But this is literally people trying to strengthen copyright and its scope. The corporation is, out of pure convenience, using copyright as it currently exists, with the current freedoms applied to artists.
Listen, it’s pretty simple. Copyright was made to protect creators on initial introduction to market. In modern times it’s good if an artist has one lifetime, i.e. their lifetime of royalties, so that they can at least make a little something, because for the small artist that little something means food on their plate.
But a company, sitting on a Smaug’s hill worth of intellectual property, “forever less a day”? Now that’s bonkers.
But you, scraping my artwork to resell for pennies on the dollar via some stock material portal? Can I maybe crawl up your colon with sharp objects and kindling to set up a fire? Pretty please? Oh pretty please!
Also, if your AI copies my writing style, I will personally find you, rip open your skull AND EAT YOUR BRAINS WITH A SPOON!!! Got it, devboy?
You won’t be Mr Hotshot with a pointy object and a fire up your ass, as well as less than half a brain… even though I just took a couple of bites.
Chew on that one.
EDIT: the creative writer is doomed, I tells ya! DOOOOOOMED!
AI training isn’t only for mega-corporations. We can already train our own open source models, so we shouldn’t applaud someone trying to erode our rights and let people put up barriers that will keep out all but the ultra-wealthy. We need to be careful not to weaken fair use and hand corporations a monopoly on a public technology by making it prohibitively expensive for regular people to keep developing our own models. Mega-corporations already have their own datasets, and the money to buy more. They can also make users sign predatory ToS allowing them exclusive access to user data, effectively selling our own data back to us. Regular people, who could have had access to a corporate-independent tool for creativity, education, entertainment, and social mobility, would instead be left worse off with fewer rights than where they started.
This actually reminds me of a sci-fi series I read where, in the future, they use an AI to scan any new work to see which of the big corporations’ intellectual property may have been used as an influence, in order to halt the production of any new media not tied to a pre-existing IP, including 100% of independent and fan-made works.
Which is one of the contributing factors towards the apocalypse. So 500 years later, after the apocalypse has been reversed and human colonies are enjoying post-scarcity, one of the biggest fads is rediscovering the 20th century, now that all the copyrights have expired and people can datamine the ruins of Earth to find all the media that couldn’t be properly preserved heading into Armageddon thanks to copyright trolling.
It’s referred to in universe as “Twencen”
The series is called FreeRIDErs if anyone is curious. Unfortunately the series may never have a conclusion (untimely death of a co-creator), but most of its story arcs were finished, so there’s still a good chunk of meat to chew through, and I highly recommend it.
OpenAI is trying to argue that the whole work has to be similar to infringe, but that’s never been true. You can write a novel and infringe on page 302, and that’s a copyright infringement. OpenAI is trying to change the meaning of copyright because otherwise the output of their model is oozing with various infringements.
I can quote work that’s already been published; that’s allowable, and I don’t have to get the author’s consent to do that. I don’t have to get consent because I’m not passing the work off as my own, I am quoting it with reference.
So if I ask the AI to produce something in the style of Stephen King, no copyright is violated because it’s all original work.
If I ask the AI to quote Stephen King (and it actually does it) then it’s a quote and it’s not claiming the work is its own.
Under the current interpretation of copyright law (and current law is broken beyond belief, but that’s a completely different issue) a copyright breach has not occurred in either scenario.
The only argument I can see working is that if the AI actually can quote Stephen King, that will prove that it has the works of Stephen King in its data set. But that doesn’t really prove anything other than that the works of Stephen King are in its data set. It doesn’t definitively prove OpenAI didn’t pay for the works.
You can quote a work under fair use, and whether it’s legal depends on your intent. You have to be quoting it for such uses as “commentary, criticism, news reporting, and scholarly reports.”
There is no cheat code here. There is no loophole that LLMs can slide on through. The output of LLMs is illegal. The training of LLMs without consent is probably illegal.
The industry knows that its activity is illegal and its strategy is not to win but rather to make litigation expensive, complex, and slow through such tactics as:
- Diffusion of responsibility: note that the companies compiling the list of training works, gathering those works, training on those works, and prompting the generation of output are all, intentionally, different entities. The strategy is that each entity can claim “I was only doing X, the actual infringement is when that guy over there did Y.”
- Diffusion of infringement: so many works are being infringed that it becomes difficult, especially on the output side, to say who has been infringed and who has standing. What’s more, even in clear-cut cases, for instance when I give an LLM a prompt and it regurgitates some nontrivial, recognizable copyrighted work, the LLM trainer will say you caused the infringement with your prompt! (see point 1)
- Pretending to be academic in nature so they could wrap themselves in the thick blanket of affirmative defense that fair use doctrine affords the academy, and then, after the training portion of the infringement has occurred (insisting that was fair use because it was being used in an academic context), “whoopseeing” it into a commercial product.
- Just being super cagey about the details of the training sets that were actually used and how they were used. This kind of stuff is discoverable but you have to get to discovery first.
- And finally, magic brain box arguments. These are typically some variation of “all artists have influences.” It is a rhetorical argument that would be blown right past in court, but it muddies the public discussion and is useful to them in that way.
Their purpose is not to win. It’s to slow everything down, and limit the number of people who are being infringed who have the resources to pursue them. The goal is that if they can get LLMs to “take over” quickly then they can become, you know, too big and too powerful to be shut down even after the inevitable adverse rulings. It’s classic “ask for forgiveness, not permission” silicon valley strategy.
Sam Altman’s goal in creeping around Washington is to try to get laws changed to carve out exceptions for exactly the types of stuff he is already doing. And it is just the same thing SBF was doing when he was creeping around Washington trying to get a law that would declare his securitized ponzi tokens to be commodities.
It doesn’t definitively prove OpenAI didn’t pay for the works.
But since they are a business/org that has all of those works and is using them for profit, it kind of would be provable whether OpenAI did or didn’t pay the correct licenses, as they and/or the publisher/Stephen King (if he were to handle those agreements directly) would have a receipt/license document of some kind to show it. I don’t agree with how copyrights are done and agree that things should be public domain much sooner. But a for-profit thing like OpenAI shouldn’t just be allowed all these exceptions that avoid needing any level of permission, and avoid paying for the works whose creators ask for it. At least not while us regular people, who aren’t using these sources for profit/business, also aren’t allowed to just use whatever we want.
The only way that I, at least, see such open use of everything, at the level of all this data/information, being fine is in a socialist/communist system of some kind. The main concern with keeping stuff like entertainment/information/art/etc. at a creator level is having money to live in a modern society where basic and crucial needs (food/housing/healthcare/etc.) cost money. So for the average author/writer/artist/inventor, a for-profit company just being able to take their shit much more directly impacts their ability to live.
It is a highly predatory level of capitalism and should not have exceptions. It is just setting up a different version of the shit that also needs to be stopped in the entertainment/technology industries, where the actual creators/performers/etc. are fucked by the studios/labs/corps by not being paid anywhere near the value being brought in, and may not have control over it. So all of the companies and the capitalist system are why a private entity/business/org shouldn’t just be allowed to pull this shit.
Speaking of slasher films, does anybody know of any movies that have terrible everything except a really good plot?
I don’t care what works a neural network gets trained on. How else are we supposed to make one?
Should I care more about modern eternal copyright bullshit? I’d feel more nuance if everything a few decades old was public-domain, like it’s fucking supposed to be. Then there’d be plenty of slightly-outdated content to shovel into these statistical analysis engines. But there’s not. So fuck it: show the model absolutely everything, and the impact of each work becomes vanishingly small.
Models don’t get bigger as you add more stuff. Training only twiddles the numbers in each layer. There are two-gigabyte networks that have been trained on hundreds of millions of images. If the network were storing those images verbatim, each one would get barely a dozen bytes. And the network gets better as that number goes down.
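A quick back-of-the-envelope check, as a sketch in Python (the 2 GB and 170 million figures are illustrative assumptions standing in for “two-gigabyte networks” and “hundreds of millions of images”):

```python
# Back-of-the-envelope: how much network "space" does each training image get?
# Both figures below are illustrative assumptions, not measurements.
model_size_bytes = 2 * 10**9    # a ~2 GB network
training_images = 170 * 10**6   # "hundreds of millions" of images

bytes_per_image = model_size_bytes / training_images
print(f"{bytes_per_image:.1f} bytes of network per training image")  # ~11.8

# For comparison, one modest 512x512 RGB image stored verbatim:
verbatim_bytes = 512 * 512 * 3  # 786,432 bytes uncompressed
print(f"storing it verbatim would take ~{verbatim_bytes / bytes_per_image:,.0f}x that")
```

Even at the low end of “hundreds of millions,” the per-image share is tiny, and it only shrinks as the dataset grows.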
The entire point is to force the distillation of high-level concepts from raw data. We’ve tried doing it the smart way and we suck at it. “AI winter” and “good old-fashioned AI” were half a century of fumbling toward the acceptance that we don’t understand how intelligence works. This brute-force approach isn’t chosen for cost or ease or simplicity. This is the only approach that works.
Models don’t get bigger as you add more stuff.
They will get less coherent and/or “forget” the earlier data if you don’t increase the parameters with the training set.
There are two-gigabyte networks that have been trained on hundreds of millions of images
You can take a huge TIFF of an image, put it through JPEG with the quality cranked all the way down, and get a tiny file out the other side, which is still a recognizable derivative of the original. LLMs are extremely lossy compression of their training set.
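For illustration, a minimal sketch of that JPEG experiment using Pillow (the file names are placeholders, not anything from this thread):

```python
# Minimal sketch: crush an image with maximum-lossy JPEG using Pillow.
# "input.tiff" and "crushed.jpg" are placeholder file names.
import os
from PIL import Image

img = Image.open("input.tiff").convert("RGB")  # baseline JPEG can't store alpha
img.save("crushed.jpg", "JPEG", quality=1)     # quality=1: as lossy as it gets

print(os.path.getsize("input.tiff"), "->", os.path.getsize("crushed.jpg"), "bytes")
```

Even at quality 1, though, the output for a big image is still tens of kilobytes, orders of magnitude more than the dozen-ish bytes per image from the arithmetic above, which is the gap the reply below points at.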
which is still a recognizable derivative of the original
Not in twelve bytes.
Deep models are a statistical distillation of a metric shitload of data. Smaller models with more training on more data don’t get worse, they get more abstract - and in adversarial uses they often kick big networks’ asses.
seethe
Very concerning word use from you.
The issue art faces isn’t that there’s not enough throughput, but rather that there’s not enough time, both to make it and to enjoy it.
That’s always been the case, though, imo. People had to make time for art. They had to go to galleries, see plays and listen to music. To me it’s about the fair promotion of art, and the ability for the art enjoyer to find art that they themselves enjoy rather than what some business model requires of them, and the ability for art creators to find a niche and to be able to work on their art as much as they would want to.
Headline is stupid.
Millennial journalism has fucking got to stop with these clown word choices…
Copyright is already just a band-aid for what is really an issue of resource allocation.
If writers and artists weren’t at risk of losing their means of living, we wouldn’t need to concern ourselves with the threat of an advanced tool supplanting them. Never mind how the tool is created; it is clearly very valuable (otherwise it would not represent such a large threat to writers) and should be made as broadly available (and jointly owned and controlled) as possible. By expanding copyright like this, all we’re doing is gatekeeping the creation of AI models to the largest of tech companies, and making them prohibitively expensive to train for smaller applications.
If LLMs are truly the start of a “fourth industrial revolution,” as some have claimed, then we need to consider the possibility that our economic arrangement is ill-suited for the kind of productivity it is said AI will bring. Private ownership (over creative works, over AI models, and over data) is getting in the way of what could be a beautiful technological advancement that benefits everyone.
Instead, we’re left squabbling over who gets to own what and how.
“fourth industrial revolution” as some have claimed
The people claiming this are often the shareholders themselves.
prohibitively expensive to train for smaller applications.
There is so much work out there for free, with no copyright. The biggest cost in training is most likely the hardware, and I see no added value in having AI train on Stephen King ☠️
Copyright is already just a band-aid for what is really an issue of resource allocation.
God damn right, but I want our government to put a band-aid on capitalists just stealing whatever the fuck they want, “move fast and break things” style. It’s yet another test for my confidence in the state. Every issue, a litmus test for how our society deals with the problems that arise.
There is so much work out there for free, with no copyright
There’s actually a lot less than you’d think (since copyright lasts for so long), and even less now that online and digitized sources are being locked down and charged for by the domain owners. But even if it were abundant, it would likely not satisfy the true concern here. If there were enough data to produce an LLM of similar quality without using copyrighted data, it would still threaten the security of those writers. What’s to say a user couldn’t provide a sample of Stephen King’s writing to the LLM and have it still produce derivative work without having been trained on copyrighted data? If the user had paid for that work, are they allowed to use the LLM in the same way? If they aren’t, who is really at fault, the user or the owner of the LLM?
The law can’t address the complaints of these writers because interpreting the law to that standard is simply too restrictive and sets an impossible standard. The best way to address the complaint is to simply reform copyright law (or regulate LLMs through some other mechanism). Frankly, I do not buy that LLMs are a competing product to the copyrighted works.
The biggest cost in training is most likely the hardware
That’s right for large models like the ones owned by OpenAI and Google, but with the amount of data needed to effectively train and fine-tune these models, if that data suddenly became scarce and expensive it could easily overtake hardware cost. To say nothing of small consumer models that run on consumer hardware.
capitalists just stealing whatever the fuck they want “move fast and break things”
I understand this sentiment, but keep in mind that copyright ownership is just another form of capital.
Thanks for this reply. You’ve shown this issue has depth that I’ve ignored because I like very few of the advocates for the AI we’ve got.
So one thing that trips me up is I thought copyright is about use. As a consumer rather than a creator this makes complete sense - you can read it, if you own it or borrowed it, and do not distribute it in any way. But there are also gentleman’s agreements built into how we use books and digital prints.
Unintuitively, copying is also very important. Artists copy to learn, for example. Musicians have the right to cover anyone’s music. Engineers will deconstruct and reverse-engineer another’s solution. And businesses cheat off of one another all the time. Even when it has been proven to be wrong, the incentive is high.
So is taking the text of the book, no matter how you got it, and using it as part of a new technology okay?
Clearly the distribution isn’t wrong. You’re not distributing the book, you’ve made a derivative.
The ownership isn’t there; I mean, the works were pirated. We’ve been taught that simply having something that was gotten through online copying is not only against the “rightsholder” but “piracy” and “stealing.” I have a really simplistic view of this: I just want creators paid for their work, and to have autonomy (rights) over what is done with their work. This is rarely the case; we live in a world with publishers.
So it’s that first action. Is that use of the text in another work legal?
My basic understanding is that fair use is when you add to a work. You critique or reuse that work. Your work is about the other work, but it’s also something new that stands on its own, like an essay or a collage, rather than a collection.
I am so confused. Text-based AI is run by capitalists. And we only have it FOSS because Meta can afford to lose money in order to remove OpenAI from the competition. Image-based AI is almost certainly wrong; it copied and plugged in all of this other work, and now tons of people are suing. Getty Images is leveraging their rights management to make an AI that follows the rules we are living with. My gut reaction is a lot of people deserve royalties.
But on the other hand, it sounds like AI did not work until they gave it the entire internet’s worth of data to train on. Training on smaller, legal sets was a failure? Or maybe it was because they took the tech approach of training the AI on every Google image of dogs, or cats, etc., without any real variation. Because they’re engineers, not artists. And not even good engineers, if their best work is just scraping other people’s work and giving it to this weird computer program.
This is all just stealing, right? But stealing is a lot more legal than I thought, especially when it comes to digitally published works of art, or physically published art that’s popular enough to be shared online.