A sex offender convicted of making more than 1,000 indecent images of children has been banned from using any “AI creating tools” for the next five years in the first known case of its kind.
Anthony Dover, 48, was ordered by a UK court “not to use, visit or access” artificial intelligence generation tools without the prior permission of police as a condition of a sexual harm prevention order imposed in February.
The ban prohibits him from using tools such as text-to-image generators, which can make lifelike pictures based on a written command, and “nudifying” websites used to make explicit “deepfakes”.
Dover, who was given a community order and £200 fine, has also been explicitly ordered not to use Stable Diffusion software, which has reportedly been exploited by paedophiles to create hyper-realistic child sexual abuse material, according to records from a sentencing hearing at Poole magistrates court.
Just a note: CSAM has been found in model training sets: https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
OK? A few hundred images of anything isn’t necessarily going to train a model built on billions of images. Have you ever tried to get Stable Diffusion to draw a bow and arrow? Just because a model has seen something doesn’t mean it has learned it, nor, more importantly, that that is how it learned it, since we can see it infer many concepts from related concepts: pregnant old women, Asian Nazis, Black George Washingtons (none of which have ever actually existed or been photographed)… are unclothed children really more of a leap than any of those?
It is, yes. A Black George Washington is one known visual motif (a George Washington costume) combined with another known visual motif. A naked prepubescent child isn’t just the combination of “naked adult” and “child”; naked children don’t look like naked adults simply scaled down.
AI can’t tell us what something we’ve never seen looks like… a kid who knows what George Washington and a Black woman look like can imagine a Black George Washington. That’s probably a helpful analogy: AI can combine simple concepts, but it can’t innovate. It can dream, but it can’t know something we haven’t told it about.
It isn’t misinformation, though; generative AI needs a basis for its generation.
The misinformation you’re spreading is about how it works. A generative AI system will (without prompting away from it) create people with three heads, eight fingers on each hand, and legs that merge into one another. Do you think it was trained on that? This argument of “it can generate it, therefore it was trained on it” is ridiculous. You clearly don’t understand how it works.
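For what it’s worth, “prompting away from it” is a concrete, everyday technique. Here’s a minimal sketch using the Hugging Face diffusers library (the model ID and prompt text are just illustrative, and it assumes a CUDA GPU):

    # Minimal sketch: steering Stable Diffusion away from anatomy artifacts
    # with a negative prompt. Model ID and prompts are illustrative.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Left alone, the model routinely produces extra fingers and limbs,
    # which it was obviously never "trained on".
    baseline = pipe("a person waving at the camera").images[0]

    # The standard fix is to prompt away from those failure modes:
    steered = pipe(
        "a person waving at the camera",
        negative_prompt="extra fingers, extra limbs, deformed hands",
    ).images[0]

    baseline.save("baseline.png")
    steered.save("steered.png")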
You’re extremely correct when it comes to combining different aspects of existing works to generate something new, but AI can’t generate something it doesn’t know about. If a generative model knows what a prepubescent naked body looks like, it has been exposed to one before. The most generous excuse is that medical diagrams exist and supplied the majority of the inputs for such prompts to work off. A much more realistic view is that some CSAM made it into the training set.
I don’t disagree with any of your assessments, but if you ask for a Van Gogh painting of a Glorp from Omicron Persei 8, you’ll get out… something. Because the model has no reference for Glorps, though, it’ll be hallucination: guesses based on whatever related terms it can find.
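You can run that exact experiment yourself; a minimal sketch, again assuming the Hugging Face diffusers library and a CUDA GPU (the model ID is illustrative):

    # Minimal sketch: prompting for a concept the model has no reference for.
    # "Glorp" is deliberately outside any plausible training set.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The text encoder falls back on whatever nearby concepts it associates
    # with the unknown tokens ("Van Gogh", "alien planet", etc.).
    image = pipe("a Van Gogh painting of a Glorp from Omicron Persei 8").images[0]
    image.save("glorp.png")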
To be clear, I’m coming at this as someone who has trained and evaluated models at a company that’s used them for the better part of a decade.
I understand I’m going up against your earnestly held belief, but I’ve seen behind the curtain on a lot of this stuff, and hopefully, in time, the way it works becomes demystified for more people.