Study shows AI image-generators being trained on explicit photos of children: Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built.

-1 points
Removed by mod

All of our "protect the children" legislation is typically about inhibiting technology that might be used to cause harm, not about ensuring children have access to places of safety, adequate food and comfort, time with and access to parents, and the freedom to live and play.

Y’know, all those things that help make kids resilient to bullies and the challenges of growing up. Once again, we leave our kids cold and hungry in poverty while blaming the next new thing for their misery.

So I call shenanigans. Again.

9 points

It’s still abhorrent, but if AI-generated images prevent an actual child from being abused…

It’s a nuanced topic for sure.

-1 points
Removed by mod
5 points

We need to better understand what causes pedophilic tendencies, so that the environmental, social, and genetic factors can someday be removed.

Otherwise children will always be at risk from people who have perverse intentions, whether or not those people are responsible for having them.

2 points

I don’t think it’ll ever be gotten rid of. At its core, pedophilia is a fetish, not functionally different from being into feet. And like some fetishes, having it doesn’t mean a person will ever act on it.

I’m sure that many of them hate the fact that they are wired wrong. What really needs to happen is for them to be able to seek professional help without worrying about legal repercussions.

6 points

Honest question, why is this a problem rather than a solution?

If these kids don’t exist, and those fake pictures make some people content, what’s the harm?

Kinda reminds me of furries getting horny over wolf drawings, who cares?

2 points

I agree with you in instances where it’s not generating a real person. But there are cases where people use tools like this to generate realistic-looking but fake images of actual, specific real-life children. This is of course abusive to that child. And it’s still bad when it’s done to adults too, it’s sort of a form of defamation.

I really do hope legislation around this issue is narrowly tailored to actual abuse like what I described above, but given the “protect the children” nonsense constantly invoked against just about every technology, including end-to-end encryption, I’m not very optimistic.

Another thing I wonder about is whether AI could get so realistic that it becomes impossible to prove beyond a reasonable doubt that anyone with actual CSAM (where the image/victim isn’t known, so they can’t prove it that way) is guilty, since any image could plausibly be fake. This is of course an issue far beyond just child abuse; it would probably discredit video footage for robberies and that sort of thing too. We really are venturing into the unknown, and government isn’t exactly known for adapting to technology…

But I think you’re mostly correct, because the moral outrage on social media seems to be about the entire concept of fake sexual depictions of minors existing at all, rather than only about abusive ones.

-4 points

Did you just compare furries to pedophiles? One of those is harmless, the other is not.

8 points

Ironically, I was giving them as an example of something OK. My point just went over your head.

-6 points

No, it didn’t, but it seems like mine went over yours. Furries are usually fine outside a few bad actors, but pedophiles are mentally ill and should not be allowed to generate AI CSAM just to satisfy themselves. They should be seeking help, not jerking it to fake kiddies.

-2 points

These kinds of things are unfortunately inevitable. AI is just a tool, and it really depends on how it’s used, just like a gun. I think the biggest issue here is that AI was adopted so quickly. It’s been used by tons of companies that don’t actually understand what it is or does, so it’s the wild west right now, without any regulations. I’m not entirely sure what kind of regulations you can even put on AI, if any.

34 points

Let me guess: it’s a minuscule percentage that was included by mistake. And it’s been used as justification to prevent you from running your own model, while it’s safe for large corporations to run theirs?


Technology

!technology@lemmy.world