Setting aside the usual arguments in the anti- versus pro-AI art debate and the nature of creativity itself, perhaps the negative reaction the Redditor encountered is part of a sea change in opinion among the many people who think corporate AI platforms are exploitative and extractive in nature because their datasets rely on copyrighted material used without the original artists’ permission. And that’s without getting into AI’s drag on the environment.

7 points

People talk about A.I. art threatening artists’ jobs, but everything I’ve seen created by A.I. tools is the most absolute dogshit art ever made, counting the stuff they found in Saddam Hussein’s mansions.

So, I would think the theft of IP for training models is the larger objection. No one thinks a Baldur’s Gate 3 fan was gonna commission an artist to make a drawing for them. They’re pissed their work was used without permission.

-1 points

I think AI art looks neat

0 points

That won’t get you into art school but it also won’t get you kicked out.

*adjusts horn-rims* yesyes very neat do you have anything else to say but that you’re whimsical?

-1 points

There are so many possibilities for AI art; to say it’s all bad is painting it all with one brush.

4 points

That isn’t art. That’s just commodity.

-1 points

Have you just woken up from a year-long coma? AI can create stunning pictures now.

8 points

stunning but uncreative af.

that still depends on the operator.

3 points

I mean, just like any other tool.

3 points

Yeah, and there are tons of angles and gestures for human subjects that AI still just can’t figure out. Any time I’ve seen a “stunning” AI render, it’s some giant FOV painting with no real subject, or the subject takes up a twelfth of the canvas.

1 point

You should check out this article by Kit Walsh, a senior staff attorney at the EFF, and this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries.

Using things “without permission” forms the bedrock on which artistic expression and free speech as a whole are built. I am glad to see that the law aligns with these principles and protects our ability to engage openly and without fear of reprisal, which is crucial for fostering a healthy society.

I find myself at odds with the polarized argumentation about AI. If you don’t like it, that’s understandable, but don’t make it so that anyone who uses AI has to defend themselves against accusations of exploiting labor and the environment. Those accusations are often incorrect or made without substantial evidence.

I’m open to that conversation, as long as we can keep it respectful and productive. Drop a reply if you want; it’s way better than unexplained downvoting.

8 points

Yes, using existing works as reference is obviously something that real human artists do all the time, there’s no arguing that is the case. That’s how people learn to create art to begin with.

But, the fact is, generative AI is not creative, nor does it understand what creativity is, nor will it ever. Because all it is doing is performing complex data statistical analysis algorithms to generate a matrix of pixels or a string of words.

I’m sorry, but the person entering in the prompt to instruct the algorithm is also not doing anything creative either. Do you think it is art to go through a fast food drive through and place an order? That’s what people are objecting to - people calling themselves artists because they put some nonsense word salad together and then think what they get out of it is some unique thing that they feel they created and take ownership of. If not for the AI model they are using and the creative works it was trained on, they could not have created it or likely even imagined it without it.

People are actively losing their livelihoods because AI tech is being oversold and overhyped as something that it’s not. Execs are all jumping on the bandwagon and, because they see AI as something that will save them a bunch of money, they are laying off people they think aren’t needed anymore. So, just try to incorporate that sentiment into your understanding of why people are also upset about AI. You may not be personally affected, but there are countless that are. In fact, over the next two years, as many as 203,000 entertainment workers in the US alone could be affected.

Generative AI Impact Study

You want to have fun creating fancy kitbashed images based off of other people’s work, go right ahead. Just don’t call it art and call yourself an artist, unless you could actually make it yourself using practical skills.

Also, good luck trying to copyright it because guess what, you can’t.

https://crsreports.congress.gov/product/pdf/LSB/LSB10922

0 points

> Yes, using existing works as reference is obviously something that real human artists do all the time, there’s no arguing that is the case. That’s how people learn to create art to begin with.
>
> But, the fact is, generative AI is not creative, nor does it understand what creativity is, nor will it ever. Because all it is doing is performing complex data statistical analysis algorithms to generate a matrix of pixels or a string of words.
>
> I’m sorry, but the person entering in the prompt to instruct the algorithm is also not doing anything creative either. Do you think it is art to go through a fast food drive through and place an order? That’s what people are objecting to - people calling themselves artists because they put some nonsense word salad together and then think what they get out of it is some unique thing that they feel they created and take ownership of. If not for the AI model they are using and the creative works it was trained on, they could not have created it or likely even imagined it without it.

I’d like to ask what experience you have with generative art, because I’d like to explain a bit of what I know.

There’s also a spectrum of involvement depending on what tool you’re using. Web-based interfaces don’t allow for a lot of freedom, since their operators want to keep users from generating things outside their terms of use, but with open-source models based on Stable Diffusion you can get a lot more involved and have a lot more freedom. We’re in a completely different world from March 2023 as far as generative tools go. Take a quick look at how things work.

Let’s take these generation parameters for instance: sarasf, 1girl, solo, robe, long sleeves, white footwear, smile, wide sleeves, closed mouth, blush, looking at viewer, sitting, tree stump, forest, tree, sky, traditional media, 1990s \(style\), <lora:sarasf_V2-10:0.7>

Negative prompt: (worst quality, low quality:1.4), FastNegativeV2

Steps: 21, VAE: kl-f8-anime2.ckpt, Size: 512x768, Seed: 2303584416, Model: Based64mix-V3-Pruned, Version: v1.6.0, Sampler: DPM++ 2M Karras, VAE hash: df3c506e51, CFG scale: 6, Clip skip: 2, Model hash: 98a1428d4c, Hires steps: 16, "sarasf_V2-10: 1ca692d73fb1", Hires upscale: 2, Hires upscaler: 4x_foolhardy_Remacri, "FastNegativeV2: a7465e7cc2a2",

ADetailer model: face_yolov8n.pt, ADetailer version: 23.11.1, Denoising strength: 0.38, ADetailer mask blur: 4, ADetailer model 2nd: Eyes.pt, ADetailer confidence: 0.3, ADetailer dilate erode: 4, ADetailer mask blur 2nd: 4, ADetailer confidence 2nd: 0.3, ADetailer inpaint padding: 32, ADetailer dilate erode 2nd: 4, ADetailer denoising strength: 0.42, ADetailer inpaint only masked: True, ADetailer inpaint padding 2nd: 32, ADetailer denoising strength 2nd: 0.43, ADetailer inpaint only masked 2nd: True

To break down a bit of what’s going on here: sarasf is the activation token for the LoRA of the character in this image, and <lora:sarasf_V2-10:0.7> loads the character LoRA for Sarah from Shining Force II. LoRA are supplementary models you use on top of a base model to capture a style or concept, like a patch. Some LoRA don’t have activation tokens, and some that do can be used without their token to get different results.

The 0.7 in <lora:sarasf_V2-10:0.7> is the strength at which the weights from the LoRA are applied to the output; lowering the number makes the concept manifest more weakly. You can blend styles this way, with just the base model or with multiple LoRA at different strengths at the same time. Furthermore, you can set the UNet and Text Encoder strengths separately by adding another colon, like so: <lora:sarasf_V2-10:1:0.7>, for even more varied results. Doing this lets you separate the “idea” from the “look” of the LoRA. You can even use a monochrome LoRA and take the weight into the negative to get some crazy colors.
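To make the strength idea concrete, here’s a toy sketch of how a low-rank LoRA delta gets blended into a base weight matrix. This is an illustration, not the real implementation: the shapes and names are made up, and real models apply this per layer.

```python
import numpy as np

rng = np.random.default_rng(0)
W_base = rng.standard_normal((4, 4))   # a base-model weight matrix (toy size)
A = rng.standard_normal((4, 2))        # the LoRA's low-rank factors
B = rng.standard_normal((2, 4))
delta = A @ B                          # the learned adjustment the LoRA carries

def apply_lora(W, delta, strength):
    """Blend a LoRA delta into base weights, like <lora:name:strength>."""
    return W + strength * delta

W_patched = apply_lora(W_base, delta, 0.7)  # the 0.7 from the prompt above
# strength 0 leaves the base model untouched; a negative strength pushes
# the weights *away* from the concept, as with the monochrome-LoRA trick
assert np.allclose(apply_lora(W_base, delta, 0.0), W_base)
```

Blending multiple LoRA is just applying several deltas, each at its own strength.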

The Negative Prompt is where you include things you don’t want in your image. Here, (worst quality, low quality:1.4) are quality tags with their attention set to 1.4. Attention is sort of like weight, but for tokens: LoRA bring their own weights to add onto the model, whereas attention on tokens works entirely within the weights the model already has. FastNegativeV2 in this negative prompt is an embedding, known as a Textual Inversion. It’s sort of like a crystallized collection of tokens that tells the model something precise you want without you having to enter the tokens yourself or mess with the attention manually. Embeddings you put in the negative prompt are known as Negative Embeddings.
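The (token:1.4) attention syntax can be sketched with a tiny parser. This is a simplified illustration of the idea only; the actual UI grammar supports more (nesting, square brackets, and so on):

```python
import re

def parse_attention(prompt):
    """Map '(a, b:1.4)' spans to (token, weight) pairs; bare tokens get 1.0."""
    pairs = []
    # parenthesised groups with an explicit ':weight'
    for group, weight in re.findall(r"\(([^():]+):([\d.]+)\)", prompt):
        pairs += [(t.strip(), float(weight)) for t in group.split(",")]
    # everything outside parentheses keeps the default attention of 1.0
    for token in re.sub(r"\([^()]*\)", "", prompt).split(","):
        if token.strip():
            pairs.append((token.strip(), 1.0))
    return pairs

print(parse_attention("(worst quality, low quality:1.4), FastNegativeV2"))
# → [('worst quality', 1.4), ('low quality', 1.4), ('FastNegativeV2', 1.0)]
```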

In the next part, Steps is how many steps you want the model to take to resolve the starting noise into an image; more steps take longer. VAE is the name of the Variational Autoencoder used in this generation, which decodes the model’s latent output into the final pixels. A mismatch between VAE and model can yield blurry, desaturated images, so some models opt to have their VAE baked in. Size is the dimensions in pixels the image will be generated at. Seed is the numeric representation of the starting noise for the image; you need it to reproduce a specific image.
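Seed reproducibility is easy to illustrate: the seed deterministically fixes the starting noise, so the same seed with the same settings walks the model to the same image. A toy sketch (the only detail borrowed from real pipelines is that latent noise is drawn at 1/8 of the pixel resolution):

```python
import numpy as np

def starting_noise(seed, width, height, channels=4):
    """Deterministic latent noise a diffusion model would start denoising from."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((channels, height // 8, width // 8))

a = starting_noise(2303584416, 512, 768)  # the Seed from the parameters above
b = starting_noise(2303584416, 512, 768)
c = starting_noise(12345, 512, 768)
assert np.array_equal(a, b)       # same seed -> identical noise -> same image
assert not np.array_equal(a, c)   # different seed -> a different image
```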

Model is the name of the model used, and Sampler is the algorithm that resolves the noise into an image. There are quite a few samplers, also known as schedulers, each with its own trade-offs in speed, quality, and memory usage. CFG scale is basically how closely you want the model to follow your prompt; some models can’t handle high CFG values and flip out, giving over-exposed or nonsense output. Hires steps is the number of steps to take on the second pass that upscales the output, which is necessary to get higher-resolution images without visual artifacts. Hires upscaler is the model used during that upscaling step, and again there are a ton of those, each with its own trade-offs and use cases.
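The CFG idea itself is one line of math: at each step the model makes two predictions, one with the prompt and one without, and the CFG scale sets how far the result is pushed toward the prompted one. A minimal sketch with made-up numbers:

```python
import numpy as np

def cfg_combine(uncond, cond, cfg_scale):
    """Classifier-free guidance: push the prediction toward the prompt.
    cfg_scale=1 reduces to the prompted prediction alone; large values
    over-amplify the prompt, which is why some models flip out at high CFG."""
    return uncond + cfg_scale * (cond - uncond)

uncond = np.array([0.0, 0.0])  # prediction with an empty prompt (toy values)
cond = np.array([1.0, -1.0])   # prediction with the prompt
guided = cfg_combine(uncond, cond, 6.0)  # CFG scale 6, as in the parameters
assert np.allclose(guided, [6.0, -6.0])
```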

After ADetailer come the parameters for ADetailer, an extension that does a post-processing pass to fix things like broken anatomy, faces, and hands. We’ll just leave it at that, because I don’t feel like explaining all the different settings found there.

https://youtu.be/-JQDtzSaAuA?t=97

https://youtu.be/1d_jns4W1cM

https://www.youtube.com/watch?v=HtbEuERXSqk

Not all selfies are art, but you can make art with cameras. I think the same applies here.

> People are actively losing their livelihoods because AI tech is being oversold and overhyped as something that it’s not. Execs are all jumping on the bandwagon and, because they see AI as something that will save them a bunch of money, they are laying off people they think aren’t needed anymore. So, just try to incorporate that sentiment into your understanding of why people are also upset about AI. You may not be personally affected, but there are countless that are. In fact, over the next two years, as many as 203,000 entertainment workers in the US alone could be affected.

This EFF article by Katharine Trendacosta and Cory Doctorow touches on this. I think it’s worth a read.

> You want to have fun creating fancy kitbashed images based off of other people’s work, go right ahead.

This is misinformation, and not how the technology works. Here’s a quick video explanation.

> Just don’t call it art and call yourself an artist, unless you could actually make it yourself using practical skills.

This is just snobbery that people have always used to devalue the efforts of others. Punching down and gatekeeping won’t solve your problems; the people you’re really mad at are above you.

Art is about bringing your ideas into the world; anything beyond that is fetish. Spending hundreds of hours learning a skill isn’t art, it’s work. While I believe the effort invested in a work can contribute to its depth and meaning, that doesn’t make high-effort works better than those made with less.

cont.

-2 points

Part 2

> Also, good luck trying to copyright it because guess what, you can’t.
>
> https://crsreports.congress.gov/product/pdf/LSB/LSB10922

This looks like it’s set to change. The US Copyright Office is proactively exploring and evolving its understanding of this topic and is actively seeking expert and public feedback. You shouldn’t expect this to be its final word on the subject.

It’s also important to remember that Copyright Office guidance isn’t law. It reflects only the office’s interpretation based on its experience; it isn’t binding on courts or other parties. Guidance from the office is not a substitute for legal advice, and it does not create any rights or obligations for anyone. The office is the lowest rung on the ladder for deciding what the law means.

Let’s keep it civil and productive, even if we disagree. Jeering, dismissive language like “Also, good luck trying to copyright it because guess what, you can’t.” isn’t helping your argument; it’s just mean-spirited. I’m open to keeping the conversation going, but I will stop replying if you continue being disrespectful.

37 points

The problem is artists often make their actual living doing basic boilerplate stuff that gets forgotten quickly.

In graphics it’s company logos, advertising, basic graphics for businesses.

In writing it’s copy for websites, it’s short articles, it’s basic stuff.

Very few artists want to do these things; they want to create the original work that might not make money at all. That work is potentially a winning lottery ticket, but more often it’s an act of self-expression that doesn’t turn into a payday.

Unfortunately, AI is taking work away from artists. It can’t seem to make very good art yet, but it can prevent artists who could make good art from ever getting to the point of making it.

It’s starving out the top end of the creative market by limiting the easy work artists could previously rely on to pay the bills whilst working on the big ideas.

21 points

The problem is that most artists make money from commercial clients and most clients don’t want “good”.

They want “good enough” and “cheap”.

And that’s why it is taking artists jobs.

43 points

It’s not replacing artists who make beautiful art; it’s going to replace artists who work for a living. Doesn’t matter if the quality is bad when it costs nothing.


Technology

!technology@lemmy.world
