Greg Rutkowski, a digital artist known for his surreal style, opposes AI art but his name and style have been frequently used by AI art generators without his consent. In response, Stable Diffusion removed his work from their dataset in version 2.0. However, the community has now created a tool to emulate Rutkowski’s style against his wishes using a LoRA model. While some argue this is unethical, others justify it since Rutkowski’s art has already been widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.
At the point of training, it was viewing images that the artists had published in a public gallery. Nothing was pirated at that point either. They don’t need “permission” to do that; the images are on display.
Learning from art is one of the previously-established purposes you speak of. No “derivative work” is made when an AI trains a model; the model does not contain any copyrightable part of the imagery it is trained on.
Being publicly viewable doesn’t make them public domain. Being able to see something doesn’t give you the right to use it for literally any other purpose.
Full stop.
My gods, you’re such an insufferable bootlicking fanboy of bullshit code jockeys. Make a good faith effort to actually understand why people dislike these exploitative assholes who are looking to make a buck off of other people’s work for once, instead of just reflexively calling them all philistines who “just don’t understand”.
Some of us work on machine learning systems for a living. We know what they are and how they work, and they’re fucking regurgitation machines. And people deserve to have control over whether we use their works in our regurgitation machines.
Of course they need permission to process images. No computer system can merely “view” an image without at least creating a copy for temporary use, and the purposes for which that can be done are strictly defined. Doing whatever you want just because you have access to the image is often copyright infringement.
People have the right to learn from images available publicly for personal viewing. AI is not yet people. Your whole argument relies on anthropomorphizing a tool, but it wouldn’t even be able to select images to train its model without human intervention, which is done with the intent to replicate the artist’s work.
I’m not usually one to go to bat for copyright, but the disregard AI proponents have for artists’ rights and livelihoods has gone long past what’s acceptable, as the article shows.
If I run an image from the web through a program that generates a histogram of how bright its pixels are, am I suddenly a dirty pirate?
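For what it’s worth, the “histogram” operation I mean is about as mundane as image processing gets: reduce the pixels to aggregate brightness counts, with nothing of the original picture surviving in the output. A minimal sketch (the tiny hand-made pixel list stands in for a real image, which you’d normally load with a library like Pillow):

```python
def brightness_histogram(pixels, bins=16):
    """Bucket pixels by perceptual brightness (Rec. 601 luma, 0-255).

    Only aggregate counts come out; no pixel data is retained.
    """
    counts = [0] * bins
    for r, g, b in pixels:
        luma = 0.299 * r + 0.587 * g + 0.114 * b
        idx = min(int(luma * bins / 256), bins - 1)
        counts[idx] += 1
    return counts

# Tiny stand-in "image": two dark pixels, two bright ones.
image = [(0, 0, 0), (10, 10, 10), (250, 250, 250), (255, 255, 255)]
print(brightness_histogram(image, bins=4))  # -> [2, 0, 0, 2]
```

The point of the example is that the image is copied into memory to compute this, exactly as it is for any other viewing or analysis.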
If you run someone’s artwork through a filter, is it suddenly fine and new just because the output is not exactly like the input and the input is deleted after processing?
There is a discussion to be had, in good faith, about where the line lies: what ought to be the rights of the audience, the rights of the artists, the rights of platforms, and the limits of AI. To be fair, that’s a difficult question to settle, because in many respects copyright is already too overbearing. Legally, many pieces of fan art and even memes are copyright infringement. But on the flip side, automating art away goes too far in the other direction. The reason copyright exists at all, at least ideally, is so that the rights and livelihoods of artists are protected and they are incentivized to keep creating.
Let’s not pretend this is just analysis for the sake of academic understanding. A large number of people are feeding artists’ works into AI with the express purpose of getting artworks in their style without compensating them, something many artists have made clear they are not okay with. While they can’t tell people not to practice styles like theirs, they can definitely tell people not to use their works in ways they do not allow.