Greg Rutkowski, a digital artist known for his fantasy style, opposes AI art, but his name and style have been frequently used by AI art generators without his consent. In response, Stability AI removed his work from the training dataset for Stable Diffusion 2.0. However, the community has now created a LoRA model that emulates Rutkowski’s style against his wishes. While some argue this is unethical, others justify it on the grounds that Rutkowski’s art was already widely used to train Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.
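For readers unfamiliar with how such style models are applied, here is a minimal sketch of loading a community LoRA on top of Stable Diffusion 1.5 with Hugging Face’s diffusers library. The repository id for the LoRA and the prompt are placeholders, not the actual Rutkowski-style model discussed in the thread.

```python
# Minimal sketch: applying a community style LoRA on top of Stable Diffusion 1.5
# using Hugging Face's diffusers library. The LoRA repo id is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

# Load the base Stable Diffusion 1.5 checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the LoRA weights on top of the base model (placeholder repo id).
pipe.load_lora_weights("someuser/example-style-lora")

# Generate an image; the LoRA biases the output toward the trained style.
image = pipe(
    "a castle on a cliff at sunset, epic fantasy painting",
    num_inference_steps=30,
).images[0]
image.save("output.png")
```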

13 points

pirating photoshop is a well-understood part of many people’s workflows. that doesn’t make it legal or condoned by adobe

11 points

I don’t know what this has to do with anything. Nothing was “pirated”, either.

1 point

His work was used in a publicly available product without license or compensation. Including his work in the training dataset was, in the online vernacular sense of the word, piracy.

They violated his copyright when they used his work to make their shit.

3 points

The product does not contain his work. So no copying was done, therefore no “piracy.”

2 points

Was he paid for his art to be included?

5 points

His art wasn’t used at the point of generation, but it was at the point of training. One of the sticking points of AI for artists is that the developers didn’t even bother to seek permission. They simply said it was too much work and crawled artists’ galleries.

Even publicly displayed art can only be used for certain previously established purposes. By default you can’t use it for derivative works.

4 points

At the point of training, it was viewing images that the artists had published in a public gallery. Nothing was pirated at that point either. They don’t need “permission” to do that; the images are on display.

Learning from art is one of the previously established purposes you speak of. No “derivative work” is made when an AI trains a model; the model does not contain any copyrightable part of the imagery it is trained on.

3 points

They were not used for derivative works. The AI’s model produced by the training does not contain any copyrighted material.

If you click this link and view the images there, then you are just as much a “pirate” as the AI trainers.

5 points

i’m not making a moral comment on anything, including piracy. i’m saying “but it’s part of my established workflow” is not an excuse for something morally wrong.

only click here if you understand analogy and hyperbole

if i say “i can’t write without kicking a few babies first”, it’s not an excuse to keep kicking babies. i just have to stop writing, or maybe find another workflow

4 points

The difference is that kicking babies is illegal whereas training and running an AI is not. Kind of a big difference.

