Not a good look for Mastodon - what can be done to automate the removal of CSAM?
Is this the Blahaj.zone admin’s “child abuse material” or actual child abuse material?
Or maybe it’s better to err on the side of caution when it comes to what is maybe one of the worst legal offences you can commit?
I’m tired of people harping on this decision when it’s a perfectly legitimate one from a legal standpoint. There’s a reason tons of places are very iffy about NSFW content.
The article points out that the strength of the Fediverse is also its downside. Federated moderation makes it challenging to consistently moderate CSAM.
We have seen it even here with the challenges of Lemmynsfw. In fact, they have taken the stance that CSAM-like images featuring of-age models made to look underage are fine as long as there is some dodgy ‘age verification’.
The idea is that abusive instances would get defederated, but I think we are going to find that inadequate to keep up without some sort of centralized reporting escalation and AI auto-screening.
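For what it’s worth, the screening half of that usually isn’t a classifier at all but hash matching against a vetted database (PhotoDNA, Meta’s open-source PDQ, the NCMEC lists). Here’s a minimal Python sketch of the idea; the `average_hash` is a toy stand-in for a real perceptual hash, and `KNOWN_BAD_HASHES` is a made-up placeholder, since a shared clearinghouse is exactly the part the Fediverse currently lacks:

```python
# Sketch of hash-based auto-screening. Real systems use robust perceptual
# hashes (PhotoDNA, PDQ) matched against vetted databases; the average-hash
# and the blocklist below are illustrative stand-ins only.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """64-bit average hash: grayscale, downscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical blocklist of known-bad hashes, distributed by a trusted
# clearinghouse (placeholder value, not a real hash).
KNOWN_BAD_HASHES = {0x9F3B_2C41_88D0_17AE}

def screen_upload(path: str, max_distance: int = 5) -> bool:
    """Return True if the image should be escalated for human review."""
    h = average_hash(path)
    return any(hamming(h, bad) <= max_distance for bad in KNOWN_BAD_HASHES)
```

Real deployments put a human-review queue behind that boolean, which is where the false-positive fight described below actually happens.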
The problem with screening by AI is that there are going to be false positives, and fighting them is going to be extremely challenging and frustrating. Last month I got an automated letter for a speeding infraction: it was generated by a camera, the plate read in by OCR, and the letter I received (from “Seat Pleasant, Maryland,” lol) was supposedly signed off by a human police officer, but the image was so blurry that the plate was practically unreadable. Sure enough, the OCR got one of the letters wrong, and I got a speeding ticket from a town I’ve never been to and had never even heard of before that letter arrived.

The letter was full of helpful ways to pay for and dispense with the ticket, but to challenge it I had to do so in writing; there was no email address anywhere in the letter. I had to go to their website and sift through dozens of pages to find one that had any chance of being able to do something about it, and I made a couple of false steps along the way. THEN, after I called them up and explained the situation, they apologized and said they’d dismiss the charge, which they failed to do: I got another letter about it just TODAY saying a late fee had now been tacked on.
And this was mere OCR, which has been in use for multiple decades and is fairly stable now. This same pleasant process is coming to everything that uses AI as a judging mechanism.
Off topic, but a few years ago a town in Tennessee had their speed camera contractor screw up in this way. Unfortunately for them, they tagged an elderly couple whose son was a very good attorney. He sued the town for enough in civil liability to bankrupt it and force it to disincorporate.
Speed cameras are all but completely illegal in TN now.
When I lived in Clarksville, they had intersection cameras to ticket anyone who ran a red light. A couple of problems with it:
- Drivers started slamming on their brakes, causing more accidents.
- The city outsourced the cameras, so it received only pennies on the dollar from every ticket.
I think they eventually removed them, but I can’t recall. I visited last September to take a class for work, and I didn’t see any cameras, so they might be gone.
> THEN, after I called them up and explained the situation, they apologized and said they’d dismiss the charge, which they failed to do
That sounds about right. When I was in college I got a speeding ticket halfway between the college town and the city my parents lived in. I couldn’t afford the fine, being a poor college student, so I called the court and asked if an extension was possible. They told me absolutely, how long do you need, and I started saving up. Shortly before I had enough, I got a call from my Mom: she had received a letter saying there was a bench warrant for my arrest over the fine.
According to corporate news, everything outside of the corporate internet is pedophiles.
Well, terrorists became boring, and they still want the loony wing of the GOP’s clicks, so best to back off on Nazis and pro-Russians, leaving pedophiles as the safest bet.
Nazis not being the go-to target for a poisoning-the-well approach worries me on many different levels.
I’m not actually going to read all that, but I’m going to take a few guesses that I’m quite sure are going to be correct.
First, I don’t think Mastodon has a “massive child abuse material” problem at all. I think it has, at best, a “racy Japanese-style cartoon drawing” problem or, at worst, an “AI-generated smut meant to look underage” problem. I’m also quite sure there are monsters operating in the shadows, dogwhistling and hashtagging to each other to find like-minded people to set up private exchanges (or instances) for actual CSAM. This is no different than any other platform on the Internet, Mastodon or not. This is no different than the golden age of IRC. This is no different from Tor. This is no different than the USENET and BBS days. People use computers for nefarious shit.
All that having been said, I’m equally sure that this “research” claims that some algorithm has found “actual child porn” on Mastodon that has been verified by some “trusted third part(y|ies)” that may or may not be named. I’m also sure this “research” spends an inordinate amount of time pointing out the “shortcomings” of Mastodon (i.e. no built-in “features” that would allow corporations/governments to conduct what is essentially dragnet surveillance on traffic) and how this has to change “for the safety of the children.”
How right was I?
> The content in question is unfortunately something that has become very common in recent months: CSAM (child sexual abuse material), generally AI-generated.
AI is now apparently generating entire children, abusing them, and uploading video of it.
Or, they are counting “CSAM-like” images as CSAM.