86 points

This is tough. If it was just a sicko who generated the images for himself locally… that is the definition of a victimless crime, no? And it might actually dissuade him from seeking out real CSAM…

BUT, iirc he was actually distributing the material, and even contacted minors, so… yeah he definitely needed to be arrested.

But, I’m still torn on the first scenario…

66 points

But, I’m still torn on the first scenario…

To me it comes down to a single question:

“Does exposure and availability to CSAM for pedophiles correlate with increased or decreased likelihood of harming a child?”

If there’s a reduction effect by providing an outlet for arousal that isn’t actually harming anyone - that sounds like a pretty big win.

If there’s a force multiplier effect where exposure and availability means it’s even more of an obsession and focus such that there’s increased likelihood to harm children, then society should make the AI generated version illegal too.

52 points

Hoooooly hell, good luck getting that study going. No ethical concerns there!

13 points

How they’ve done it in the past is by tracking the criminal history of people caught with CSAM, arrested for abuse, or some combination thereof, or by tracking the outcomes of people seeking therapy for pedophilia.

It’s not perfect due to the sample biases, and the results are quite inconsistent, even amongst similar populations.

19 points

I’m willing to bet it’ll differ from person to person, to complicate matters further.

15 points

I think the general consensus is that availability of CSAM is bad, because it desensitizes and makes harming of actual children more likely. But I must admit that I only remember reading about that and don’t have a scientific source.

12 points

What is the AI trained on?

53 points

Image-generating AI is capable of generating images that are not like anything that was in its training set.

0 points

In that case, probably the strongest argument is that if it were legal, many people would get off charges of real CSAM because the prosecutor can’t prove that it wasn’t AI generated.

-26 points

AI can compose novel-looking things from components it has been trained on - it can’t imagine new concepts. If CSAM is being generated, it’s because it was included in its training set, which is highly suspected, as we know the common corpus had CSAM in it: https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse

1 point

Very, very good point. Depending on the answer, I retract the “victimless” narrative.

7 points

I’m fine with it just being illegal, but realistically you could just ban the transmission and distribution of it and then you cover enforceable scenarios. You can police someone sending or posting that stuff, it’s probably next to impossible to police someone generating it at home.

3 points

Agreed. And props for making a point that isn’t palatable. The first one is complicated. Not many folk I talk to can set aside their revulsion and consider the situation logically. I wish we didn’t have to in the first place.

1 point

It’s interesting you bring this up. Not long ago I was having basically this exact same discussion with my brother. Barring your second point, I honestly don’t know how I feel.

On the one hand - if it’s strictly images for himself and it DOES dissuade seeking out real CSAM (I’m not convinced of this) then I don’t really see the issue.

On the other hand - I feel like it could be a gateway to something more (your second point). Kinda like a drug, right? You need a heavier and heavier hit to keep the same high. Seems like it wouldn’t be a stretch to go from AI generated imagery to actual CSAM.

But yeah, I don’t know. We live in an odd time for sure.

15 points

On the other hand - I feel like it could be a gateway to something more

You mean like marijuana and violent video games?

6 points

Except in the case of pornography, it’s an open question whether viewing it produces a net increase or decrease in sexual desire.
With legal pornography, it’s typically correlated with higher sexual desire. This tracks intuitively, since the existence of pornography does not typically seem to line up with a drop in people looking for romantic partners.

There’s little reason to believe it works the other way around for people attracted to children.
What’s unknown is if that desire is enough to outweigh the legal consequences they’re aware of, or any social or ethical boundaries present.
Studies have been done, but finding people outside of the legal system who abuse children is exceptionally difficult, even before the ethical obligation to report them to the police would trash the study.
So the studies end up focusing either on people actively seeking treatment for unwanted impulses (less likely to show a correlation), or people engaged with the legal system in some capacity (more likely to show correlation).

-5 points

Holy strawman, Batman! Just because someone uses the term “gateway” doesn’t mean they think that games and weed are going to turn all people and frogs gay and violent.

13 points

First off, this is obviously a sticky topic. Every conversation is controversial and speculative.

Second, I don’t really see a lot of legitimacy to the “gateway” concept. The vast majority of people use some variety of drug (caffeine, alcohol, nicotine), and that doesn’t reliably predict “harder” drug use. Lots of people use marijuana, and that doesn’t reliably predict hard drug use. Obviously, the people who use heroin and meth have probably used cocaine and ketamine, and weed before that, and alcohol/caffeine/nicotine before that, but that’s not really a “gateway” pipeline so much as passing through finer and finer filters. As far as I know, the concept has fallen pretty heavily out of favor with serious researchers.

In light of that perspective, I think you have to consider the goal. Is your goal to punish people, or to reduce the number and severity of victims? Mine is the latter. Personally, I think this sort of thing peels off many more low-level offenders to low-effort outlets than it emboldens to higher-severity outlets. I think this is ultimately a mental-health problem, and zero-tolerance mandatory reporting (while well-meaning) does more harm than good.

I’d rather that those with these kinds of mental issues have 1. the tools to take the edge off in victimless ways and 2. safe spaces to discuss these inclinations without fear of incarceration. I think blockading those avenues yields a net increase in the number and severity of victims.

This seems like a net benefit, reducing the overall number and severity of actual victims.

2 points

Thanks for being honest and well-meaning. Sorry you’re getting downvoted, we both said pretty much exactly the same thing! A difficult subject, but important to get right…

54 points

Fuck that guy first of all.

What makes me think is, what about all that cartoon porn showing cartoon kids? What about hentai showing younger kids? What’s the difference if all are fake and being distributed online as well?

Not defending him.

30 points

I think there’s certainly an argument here. What if the hentai was more lifelike? What if the AI stuff was less realistic? Where’s the line?

At least in the US, courts have been pretty shitty at defining things like “obscenity”. This AI stuff might force them to delineate more clearly.

14 points

What if someone draws their own CSAM and they’re terrible at drawing but it’s still recognizable as CSAM?

17 points

Ethically is one question, but the law is written pretty narrowly: it covers only photograph-style visual depictions that, in the view of a reasonable person, are virtually indistinguishable from an actual child engaged in explicit conduct, and that lack any other artistic or cultural significance.
Or in short: if it looks like an actual image of actual children being actually explicit, then it’s illegal.

3 points

Makes sense.

2 points

So it’s all good as long as they have elf ears or that counts as realistic too?

5 points

Two things:

So long as an ordinary person would know that it’s not a real child being abused, or a real child being depicted (placing a real child’s face on a compromising photo), it’s protected, albeit extremely unpleasant, speech.

-8 points

While I think hentai showing that stuff is disgusting, AI is worse because you need to get the training material from somewhere, so it’s far from victimless. Edit: I just learned that it does not have to be in the dataset, though there should be regulations that force the companies to open-source the dataset.

1 point

https://commoncrawl.org/

https://laion.ai/

It’s not what’s used by all of them, but it’s pretty popular.

37 points

One thing to consider: if this turned out to be accepted, it would make it much harder to prosecute actual CSAM, since they could claim “AI generated” for actual images.

22 points

I get this position, truly, but I struggle to reconcile it with the feeling that artwork of something and photos of it aren’t equal. In a binary way they are, but with more precision they’re pretty far apart. But I’m not arguing against it, I’m just not super clear how I feel about it yet.

3 points

I’m a professional artist and have no issue banning ai generated CSAM. People can call it self expression if they want, but that doesn’t change the real world consequences of it.

Allowing AI generated CSAM basically creates camouflage for real CSAM. As AI gets more advanced, it will become harder to tell the difference. The scum making real CSAM will be emboldened to make even more, because they can hide it amongst the increasing amounts of AI generated versions, or simply tag it as AI generated. Now authorities will have to sift through all of it trying to decipher what’s artificial and what isn’t.

Identifying, tracing, and convicting child abusers will become even more difficult as more and more of that material is generated and uploaded to various sites with real CSAM mixed in.

Even with hyper realistic paintings you can still tell it’s a painting. Anime loli stuff can never be mistaken for real CSAM. Do I find that sort of art distasteful? Yep. But it’s not creating an environment where real abusers can distribute CSAM and have a higher possibility of getting away with it.

1 point

I guess my question is, why would anyone continue to “consume” – or create – real csam? If fake and real are both illegal, but one involves minimal risk and 0 children, the only reason to create real csam is for the cruelty – and while I’m sure there’s a market for that, it’s got to be a much smaller market. My guess is the vast majority of “consumers” of this content would opt for the fake stuff if it took some of the risk off the table.

I can’t imagine a world where we didn’t ban AI generated CSAM. Like, imagine being a politician and explaining that policy to your constituents. It’s just not happening. And I get the core point of that kind of legislation – the whole concept of CSAM needs the aura of prosecution to keep it from being normalized – and normalization would embolden worse crimes. But imagine if AI made real CSAM too much trouble to produce.

AI generated csam could put real csam out of business. If possession of fake csam had a lesser penalty than the real thing, the real stuff would be much harder to share, much less monetize. I don’t think we have the data to confirm this but my guess is that most pedophiles aren’t sociopaths and recognize their desires are wrong, and if you gave them a way to deal with it that didn’t actually hurt chicken, that would be huge. And you could seriously throw the book at anyone still going after the real thing when ai content exists.

Obviously that was supposed to be children not chicken but my phone preferred chicken and I’m leaving it.

3 points

So long as the generation is done without actual model examples that are actual minors, there’s nothing technically illegal about having sexual material of what appears to be a child. There would then be a mens rea question and a content question: what actually defines a child, in a visual sense? Could those same things equally define a person of smaller stature? And finally, could someone like Tiny Texie be charged with producing CSAM, as she by all appearances, out of context, looks to be a child?


The problem is that the only way to train an AI model is on real images, so the model can’t exist without crimes and suffering having been committed.

-1 points

It is illegal in Canada to have sexual depictions of a child, whether it’s a real image or you’ve just sat down and drawn it yourself. The rationale being that the behavior escalates, and looking at images leads to wanting more.

It borders on thought crime, which I feel kind of uneasy about, but only pedophiles suffer, which I feel great about. There’s no legitimate reason to have sexualized images of a child, whether computer-generated, hand-drawn, or whatever.

-2 points

It’s not a difficult test. If a person can’t reasonably distinguish it from an actual child, then it’s CSAM.

9 points

Just to play devil’s advocate:

What about hentai where little girls get fondled by tentacles? (Please please please don’t make this be my most up voted post)

2 points

What he probably means is that for a “photo”, an actual act of photography must be performed, while “artwork” can be fully digital. Legal definition aside, the two acts are indeed different even if the resulting “image” is bit-by-bit equivalent: a computer could output something akin to a photograph even though no actual act of photography has taken place. I set the legal definition aside because I know it only looks at the resulting image. Just trying to convey the commenter’s words better.

Edit to clarify a few things.

0 points

This would also outlaw “teen” porn, as performers there are explicitly trying to look more childlike, as well as models who only appear to be minors.

I get the reason people think it’s a good thing, but all censorship has to be narrowly tailored to content, lest it be too vague or overly broad.

25 points

I hate the future

12 points

that was just the present

24 points

I find it interesting that the relabeling of CP to CSAM weakens their argument here. “CP generated by AI is still CP” makes sense, but if there’s no abusee, it’s just CSM. Makes me wonder if they would not have rebranded had they known about the proliferation of AI pornography.

31 points

The problem is that it abets the distribution of legitimate CSAM more easily. If a government declares “these types of images are okay if they’re fake”, you’ve given plausible deniability to real CSAM distributors, who can now claim that the material is AI generated, placing the burden on the legal system to prove the contrary. The end result will be a lot of real material flying under the radar because of weak evidence, and continued abuse of children.

Better to just blanket ban the entire concept and save us all the trouble, in my opinion. Back before it was so easy to generate photorealistic images, it was easier to overlook victimless CP because illustrations are easy to tell apart from reality, but times have changed, and so should the laws.

12 points

Not necessarily. There’s been a lot of advances in watermarking AI outputs.

As well, there’s the opposite argument.

Right now, pedophile rings have very high price points to access CSAM or require users to upload original CSAM content, adding a significant motivator to actually harm children.

The same way rule 34 artists were very upset with AI being able to create what they were getting commissions to create, AI generated CSAM would be a significant dilution of the market.

Is the average user really going to risk prison, pay a huge amount of money or harm a child with an even greater prison risk when effectively identical material is available for free?

Pretty much overnight the CSAM dark markets would lose the vast majority of their market value and the only remaining offerings would be ones that could demonstrate they weren’t artificial to justify the higher price point, which would undermine the notion of plausible deniability.

Legalization of AI generated CSAM would decimate the existing CSAM markets.

That said, the real question that needs to be answered from a social responsibility perspective is what net effect CSAM access has on pedophiles’ proclivity to offend. If there’s a negative effect, then it’s an open and shut case that it should be legalized. If it’s a positive effect, then we should probably keep it very much illegal, even if that continues to enable dark markets for the real thing.

5 points

Not necessarily. There’s been a lot of advances in watermarking AI outputs.

That presumes that the image generation is being done by some corporation or government entity that adds the watermarks to AI outputs and doesn’t add them to non-AI outputs. I’m not thrilled that AI of this sort exists at all, but given that it does, I’d rather not have it controlled by such entities. We’re heading towards a world where we can all run that stuff on our own computers and control the watermarks ourselves. Is that good or bad? Probably bad, but having it under the exclusive control of megacorps has to be even worse.

2 points

Is the average user really going to risk prison, pay a huge amount of money or harm a child with an even greater prison risk when effectively identical material is available for free?

Average users aren’t pedophiles, and it would appear that yes, they would, considering he did exactly that. He had access to tools that generated the material for free, which he then used to entice boys.

5 points

I agree, just the linguistics are interesting.

4 points

Better to just blanket ban the entire concept and save us all the trouble, in my opinion.

That’s the issue though, blindly banning things that can be victimless crimes never ends, like prohibition.

0 points

Well, you don’t hear many people decrying the places that already have. Canada, many US states, and parts of Europe have outlawed sexual imagery of children, real or fake.

I am just proposing that that should be the standard approach going forward, for the sole fact that the fake stuff is identical to the real stuff and real stuff can be used to make more convincing “fake” stuff.

3 points

placing the burden on the legal system to prove it to the contrary.

That’s how it should be. Everyone is innocent until proven otherwise.

-1 points

Right, but what I am suggesting is that laws should be worded to criminalize any sexualized depiction of children, not just ones with a real victim. It is no longer as simple to prove a photograph or video is actual CSAM with a real victim, making it easier for real abuse to avoid detection.

4 points

Have to agree, because I have no clue what CSAM is. My first glance at the title made me think it was CSPAN (the TV channel)… So CP is the better identifier, as I’d at least recognize the initialism.

If we could stop turning everything, and especially important things, into acronyms and initialisms that’d be great.

4 points

Who’s even in charge of that lol

-12 points

A generative AI could not generate CSAM without access to CSAM training data. Abuse was a necessary step in the generation.

