A trial program conducted by Pornhub in collaboration with UK-based child protection organizations aimed to deter users from searching for child sexual abuse material (CSAM) on its website. Whenever CSAM-related terms were searched, a warning message and a chatbot appeared, directing users to support services. The trial reported a significant reduction in CSAM searches and an increase in users seeking help. Despite some limitations in the data and the complexity of measuring deterrence, the chatbot showed promise in deterring illegal behavior online. While the trial has ended, the chatbot and warnings remain active on Pornhub’s UK site, with hopes for similar measures across other platforms to create a safer internet environment.
Incredibly stupid and obviously false “think of the children” propaganda. And you all lap it up. They’re building around you a version of the panopticon so extreme and disgusting that even people in the 1800s would have been outraged to see it used against prisoners. Yet you applaud. I think this means you do deserve your coming enslavement.
I keep asking myself why I haven’t blocked lemmy.ml
I keep telling myself I’ll lose ideas or comments from the good users there…
At this point, I’ll have just blocked all their users individually
The panopticon is… a chatbot that suggests you get help if you search for CSAM? Those bastards! /s
How is this building that?
Like, I’m a privacy nut and very against surveillance, but this doesn’t seem to be that. It’s a model that seems like it could even be deployed on more privacy-friendly sites (PH is not that).
In context, each paver in the road to hell seems just and good-intentioned.
But after all we’ve been through, falling for this trick again, it’s a choice. Maybe they think, this time, they’ll be the ones wearing the boots.
This is one of the more horrifying features of the future of generative AI.
There is literally no stopping it at this stage: AI generated CSAM will be possible soon thanks to systems like SORA.
This is disgusting and awful. But one part of me hopes it can end the black market of real CSAM content forever: by flooding it with infinite fakes, users with that sickness can look at something that didn’t come from a real child’s suffering. It’s the darkest of silver linings, I think, but I spoke with many sexual abuse survivors who feel the same about loli hentai in Japan, in that it could be an outlet for these individuals instead of them finding their own.
Dark topics. But I hope to see more actions like this in the future. If pedos can self isolate from IRL interactions and curb their ways with content that harms no one, then everyone wins.
So your takeaway is I’m… Against AI generative images and thus I “protest too much”
I can’t tell if you’re pro AI and dislike me, or pro loli hentai and thus dislike me.
Dude, AI images and AI video are inevitable. To pretend that won’t have huge effects on society is stupid. It’s going to reshape all news media, very quickly. If reddit is 99% AI-generated bot spam garbage with no verification of what is authentic, reddit is functionally dead, and we are on a train with no brakes heading in that direction for most public forums.
The question is whether consuming AI CP helps to regulate pedophiles’ behavior or whether it enables a progression of the condition. As far as I know, that is an unanswered question.
Another question is, how will the authorities know the difference? An actual CSAM-haver can just claim it’s AI.
It’s very much been already answered:
For porn in general, yes - I think the data is rather clear. But for CP or related substitute content it’s not that definitive (to my knowledge), if only because it’s really difficult to collect data on such a sensitive topic.
Are… we looking at the same article? This isn’t about AI generated CSAM, it’s about redirecting those who are searching for CSAM to support services.
Yes, but this is more about mitigating the spread of CSAM, and my feeling is that’s going to become somewhat impossible soon. AI-generated porn is starting to flood the market, and this chatbot is also one of those “smart” attempts to mitigate this behavior. I’m saying that very soon it will be something users don’t have to go anywhere to get, if the model can just fabricate it out of thin air, so the chatbot mitigation is only temporary, and the dark web of actual CSAM material will become overwhelmed and swamped by tidal waves of artificially generated CP. It’s an alarming ethical dilemma on the horizon that we need to think about.
What do you mean soon? Local models from civitai have been able to generate CSAM for at least two years. I don’t think it’s possible to stop it unless the model creator does something to prevent it from generating naked people in general, like the neutered SDXL.
True. For obvious reasons I haven’t looked too deeply down that rabbit hole because RIP my search history, but I kind of assumed it would be soon. I’m thinking more specifically about models like SORA though. Where you could feed it enough input, then type a sentence to get video content. That is going to be a different level of darkness.
Did it? Or did it make them look elsewhere?
The amount of school uniform, braces, pigtails and step-sister porn on Pornhub makes me think they want the nonces to watch.
I kind of want to trigger it to see what searches it reacts to, but at the same time I don’t want my IP address on a watchlist.
And what days were those? Cuz you pretty much need to go all the way back to pre-internet days. Hell, even that isn’t far enough, cuz Playboy’s youngest model was like 12 at one point.
given the amount of extremely edgy content already on Pornhub, this is kinda sus
Yeah… I am honestly curious what these search terms were, how many of those were ACTUALLY looking for CP, and of those… how many are now flagged somehow?
I know I got the warning when I searched for young gymnast or something like that cuz I was trying to find a specific video I had seen before. False positives can be annoying, but that’s the only time I’ve ever encountered it.
It’s surprising to see Aylo (formerly Mindgeek) coming out with the most ethical use of AI chatbots, especially when Google Gemini cannot even condemn pedophilia.
In the link you shared, Gemini gave a nuanced answer. What would you rather it say?
Are you defending pedophilia? This is an honest question, because you are saying it gave a nuanced answer when we all, should, know that it’s horribly wrong and awful.
when we all, should, know that it’s horribly wrong and awful. [sic, the word “should” shouldn’t be between commas]
This assumes two things:
- Some kind of universal, inherent and self-evident morality; None of these things are true, as evidenced by the fact most people do believe murder is wrong, yet there are wars, events entirely dedicated to murdering people. People do need to be told something wrong is wrong in order to know so. Maybe some of these people were never exposed to the moral consensus or, worse yet, were victims themselves and as a result developed a distorted sense of morality;
- Not necessarily all, but some of these divergents are actually mentally ill - their “inclination” isn’t a choice any more than being schizophrenic or homosexual† would be. That isn’t a defense of their actions, but a recognition that without social backing and help, they could probably never overcome their nature.
† This is not an implication that homosexuality is in any way, or should in any way, be classified as a mental illness. It’s an example of a primary individual characteristic not derived from choice.
Abusing a child is wrong. Feeling the urge to do so doesn’t make someone evil, so long as they recognize it’s wrong to do so. The best way to stop kids from being abused is to teach why it is wrong and help those with the urges to manage them. Calling people evil detracts from that goal.
I think one of the main issues is the matter-of-fact usage of the term Minor Attracted Person. It’s a controversial term that frames pedophilia as an identity, like saying Person Of Color.
I understand wanting a less judgemental term for those who did no wrong and are seeking help. But it should be phrased like anything else of that nature: as a disorder.
If I were making a term that fit that description, I’d probably say Minor Attraction Disorder, heavily implying that the person is not okay as is and needs professional help.
In a more general sense, it feels like the same apologetic arguments that the dark side of reddit would make. And that’s probably because Google’s officially using Reddit as training data.
Pornhub is wholesome?
I thought the porn industry was one of the worst to work in? Or is this a holesome joke?