In its submission to the Australian government's review of the regulatory framework around AI, Google said that copyright law should be altered to allow for generative AI systems to scrape the internet.
I think the key problem with a lot of the models right now is that they were developed for "research", without the rights holders having the option to opt out when the models were switched to for-profit use. The portfolio and gallery websites from which the bulk of the artwork came didn't even have opt-out options until a couple of months ago. Artists were therefore considered to have opted in to their work being used commercially because they were never presented with the option to opt out.
So at the bare minimum, a mechanism needs to be provided for retroactively removing works that would have been opted out of commercial usage if the option had been available and the rights holders had been informed about the commercial intentions of the project. I would favour a complete rebuild of the models, drawing only from works that are either in the public domain or whose rights holders have explicitly opted in to their work being used in commercial models.
Basically, you can't deny rights holders the ability to opt out, and then say "hey, it's not our fault that you didn't opt out, now we can use your stuff to profit ourselves".
Common sense would surely say that becoming a for-profit company or whatever they did would mean they've breached that law. I assume they figured out a way around it, or I've misunderstood something, though.
I think they just blatantly ignored the law, to be honest. The UK's copyright law is similar: "fair dealing" allows use for research purposes (so the data scrapes were legal when they were for research), but fair dealing explicitly does not apply when the purpose is commercial in nature and intended to compete with the rights holder. The common-sense interpretation is that as soon as the AI models became commercial and were being promoted as a replacement for human-made work, they were intended as for-profit competition to the rights holders.
If we get to a point where opt-outs have full legal weight, I still expect the AI companies to use the data "for research" and then ship the model as a commercial enterprise without any attempt to strip out the works that were only valid to use for research.
> So at the bare minimum, a mechanism needs to be provided for retroactively removing works that would have been opted out of commercial usage if the option had been available and the rights holders had been informed about the commercial intentions of the project.
If you do this, you limit access to AI tools exclusively to big companies. They already employ enough artists to create a useful AI generator; they'll simply add a clause to the employment contract saying the artist agrees to their work being used in training. After a while, the only people with access to reasonably good AI are those major corporations, and they'll leverage that to depress wages and control employees.
The WGA's idea that the direct output of an AI is uncopyrightable doesn't distort things so heavily in favor of Disney and Hasbro. It's also more legally actionable. You don't name Microsoft Word as the editor of a novel because you used spell check, even if it corrected the spelling and grammar of every word. Naturally you don't name generative AI as an author or creator.
Though the above argument only really applies when you have strong unions willing to fight for workers, and with how gutted they are in the US, I don't think that will be the standard.
The solution to the problem of big companies monopolising AI through their in-house artists isn't to deny all artists globally any ability to control their work, though. If all works can be scraped and added to commercial AI models without any payment to artists, you completely obliterate all artists except for the small handful working for Disney, Hasbro, and the like.
AI models actually require a constant input of new human-made artworks, because they cannot create anything new or unique themselves, and feeding an AI content produced by AI degrades the results pretty quickly (the "model collapse" problem; see the toy sketch below). So it's simply not viable to expect the 99% of artists who don't work for big companies to continuously provide new works for AI models, for free, so that others can profit from them. Therefore, artists need either the ability to opt out or they need to be paid.
(The word "artist" here is used to refer to everyone in the creative industries. Writing and music are art just like paintings and drawings are.)
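To make the degradation concrete, here's a toy sketch of that feedback loop (assuming NumPy; a Gaussian fit stands in for a generative model, which is obviously a huge simplification): fit a distribution, sample from it, refit on those samples, and repeat. With no fresh human-made data anchoring each generation, the learned parameters drift.

```python
# Toy sketch of recursive training on model output ("model collapse").
# A Gaussian fit stands in for a generative model; this illustrates the
# feedback loop, it is not a claim about any specific AI system.
import numpy as np

rng = np.random.default_rng(0)
originals = rng.normal(loc=0.0, scale=1.0, size=1000)  # "human-made" data

mu, sigma = originals.mean(), originals.std()
for generation in range(1, 11):
    synthetic = rng.normal(mu, sigma, size=1000)   # train only on model output
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on synthetic data
    print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
# With no new human-made samples, the estimates random-walk away from the
# original distribution; the std in particular tends to shrink over time.
```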
Unfortunately, copyright protection doesn't extend that far. AI training is almost certainly fair use, if it involves copying at all. Styles and the like cannot be copyrighted, so even if an AI creates a work in the style of someone else, it is extremely unlikely that the output would be so similar as to be in violation of copyright. Though I do feel that it is unethical to intentionally try to reproduce someone's style, especially if you're doing it for commercial gain. But that is not illegal unless you try to say that you are that artist.
https://www.eff.org/deeplinks/2023/04/how-we-think-about-copyright-and-ai-art-0
Practically, you would have to separate the model architecture from the weights. The weights would be licensed for research use only, while the architecture is the actual scientific contribution, perhaps along with some instructions on how best to train the model.
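As a sketch of what that split could look like in practice (PyTorch-style; the class and file names here are hypothetical): the architecture ships as open code, while the research-only weights are a separate, licence-gated artefact.

```python
# Sketch: architecture published as open code, weights gated behind a
# research-only licence. Names and files are made up for illustration.
import torch
import torch.nn as nn

class TinyBlock(nn.Module):
    """Stand-in for a published model architecture."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.net(x)

model = TinyBlock()  # default: random initialisation, free to use

RESEARCH_USE = False  # set True only if the research licence covers your use
if RESEARCH_USE:
    # The weights file is distributed separately, under its own licence.
    model.load_state_dict(torch.load("weights_research_only.pt"))
else:
    # Commercial users keep the random init and must train from scratch
    # on data they are actually licensed to use.
    pass
```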
The only problem is that you can't really prove whether someone fine-tuned the research weights or trained from scratch using randomised weights. Certain alterations to the architecture are also possible, e.g. reusing only the "headless" backbone of a model.
I think there's some research into detecting retraining, but I can imagine it's not foolproof.
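For what it's worth, a crude version of such a check might compare a suspect model's weights against the released checkpoint: fine-tuned copies usually stay close in weight space, while independently trained models (different random init) land far away. A sketch, assuming two matching PyTorch state dicts; as the comment above says, a determined evader can defeat heuristics like this.

```python
# Naive retraining-detection heuristic: mean relative L2 distance between
# matching parameter tensors of two checkpoints. Illustrative only.
import torch

def relative_weight_distance(state_a: dict, state_b: dict) -> float:
    dists = []
    for name, a in state_a.items():
        b = state_b.get(name)
        if b is None or a.shape != b.shape:
            continue  # altered or missing layers (e.g. a swapped-out head)
        dists.append((a - b).norm() / (a.norm() + 1e-12))
    return torch.stack(dists).mean().item()

# Hypothetical usage:
# released = torch.load("research_checkpoint.pt")
# suspect = torch.load("suspect_checkpoint.pt")
# print(relative_weight_distance(released, suspect))
# Values near 0 suggest fine-tuning from the release; two independent
# random initialisations typically give distances around 1 or more.
```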
I think that, as proofs of concept, the AI models are kind of interesting. I don't like the content they produce much, because it is just so utterly same-y, so I haven't yet seen anything that made me go "wow, that's amazing". But the actual architecture behind them is pretty cool.
But at this point, they've gone beyond researching an interesting idea into full-on commercial enterprises. If we don't have an effective means of retraining the existing models to remove the data that isn't licensed for commercial use (which is most of it), then it seems the only ethical way to move forward would be to start again with more selective training data, including only what is commercially licensed. Now that the research into how to create these models has been done, it should be quicker to build new ones with more ethically sourced training data.
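The selection step itself is easy to express; the hard part is having trustworthy provenance metadata in the first place. A minimal sketch of filtering a corpus down to commercially usable records (the field names and licence labels here are hypothetical):

```python
# Sketch: keep only records whose licence explicitly permits commercial
# training. The schema and licence labels are made up for illustration.
ALLOWED_LICENSES = {"public-domain", "cc0", "commercial-opt-in"}

def commercially_licensed(record: dict) -> bool:
    return record.get("license") in ALLOWED_LICENSES

corpus = [
    {"id": 1, "license": "cc0"},
    {"id": 2, "license": "all-rights-reserved"},  # stays out of the set
    {"id": 3, "license": "commercial-opt-in"},
]

training_set = [r for r in corpus if commercially_licensed(r)]
print([r["id"] for r in training_set])  # -> [1, 3]
```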