Greg Rutkowski, a digital artist known for his epic fantasy style, opposes AI art, but his name and style have been frequently used by AI art generators without his consent. In response, Stability AI removed his work from the training dataset for Stable Diffusion 2.0. However, the community has now created a tool to emulate Rutkowski’s style against his wishes using a LoRA model. While some argue this is unethical, others justify it since Rutkowski’s art was already widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.
Not at the point of generation, but at the point of training it was. One of the sticking points of AI for artists is that their developers didn’t even bother to seek permission. They simply said it was too much work and crawled artists’ galleries.
Even publicly displayed art can only be used for certain previously-established purposes. By default you can’t use them for derivative works.
At the point of training it was viewing images that the artists had published in a public gallery. Nothing pirated at that point either. They don’t need “permission” to do that, the images are on display.
Learning from art is one of the previously-established purposes you speak of. No “derivative work” is made when an AI trains a model, the model does not contain any copyrightable part of the imagery it is trained on.
Of course they need permission to process images. No computer system can merely “view” an image without at least creating a copy for temporary use, and the purposes for which that can be done are strictly defined. Doing whatever you want just because you have access to the image is often copyright infringement.
People have the right to learn from images available publicly for personal viewing. AI is not yet people. Your whole argument relies on anthropomorphizing a tool, but it wouldn’t even be able to select images to train its model without human intervention, which is done with the intent to replicate the artist’s work.
I’m not one to usually bat for copyright but the disregard AI proponents have for artists’ rights and their livelihood has gone long past what’s acceptable, like the article shows.
If I run an image from the web through a program that generates a histogram of how bright its pixels are, am I suddenly a dirty pirate?
Being publicly viewable doesn’t make them public domain. Being able to see something doesn’t give you the right to use it for literally any other purpose.
Full stop.
My gods, you’re such an insufferable bootlicking fanboy of bullshit code jockeys. Make a good faith effort to actually understand why people dislike these exploitative assholes who are looking to make a buck off of other people’s work for once, instead of just reflexively calling them all philistines who “just don’t understand”.
Some of us work on machine learning systems for a living. We know what they are and how they work, and they’re fucking regurgitation machines. And people deserve to have control over whether we use their works in our regurgitation machines.
They were not used for derivative works. The AI’s model produced by the training does not contain any copyrighted material.
If you click this link and view the images there then you are just as much a “pirate” as the AI trainers.
The models themselves are the derivative works. Those artists’ works were copied and processed to create that model. There is a difference between a person viewing a piece of work and putting that work through a system to be processed. Under copyright as it’s defined, being allowed to view a work is not the same as being allowed to use it in any way you see fit. It’s also inaccurate to speak of AIs as if they have the same abilities and rights as people.