Mastodon, an alternative social network to Twitter, has a serious problem with child sexual abuse material (CSAM), according to researchers from Stanford University. In just two days, the researchers found over 100 instances of known CSAM across more than 325,000 posts on Mastodon. They also found hundreds of posts containing CSAM-related hashtags and links pointing to off-site CSAM trading and grooming of minors. One Mastodon server was even taken down for a period of time because CSAM had been posted to it. The researchers suggest that decentralized networks like Mastodon need more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.

156 points
*

While the study itself is a good read and I agree with the conclusions—Mastodon and decentralized social media in general need better moderation tools—it’s hard not to read the Verge headline as misleading. One of the study’s authors gives more context here: https://hachyderm.io/@det/110769470058276368. Basically, most of the hits came from a large Japanese instance that no one federates with; the author even calls out that the blunt instrument most Mastodon admins use is to blanket-defederate from instances hosted in Japan, due to Japan’s laxer (compared to the US) laws around CSAM. But the headline seems to imply that there’s a giant seedy underbelly to places like mastodon.social[1] that are rife with abuse material. I suppose that’s a marketing problem of federated software in general.

  1. There is a seedy underbelly of mainstream Mastodon instances, but it’s mostly people telling you how you’re supposed to use Mastodon if you previously used Twitter.
37 points

In my opinion, the biggest issue the author points out is that cached materials are sometimes retained even after moderator action, which honestly sounds like a straight-up bug more than anything. Still, if I were running an instance, the feds showing up at my door with a warrant because I’d been accidentally distributing CSAM would be my nightmare scenario. And of course jurisdiction plays a part, too: an American user on a Canadian server might see drawn depictions of sexualized minors, think “weird but not illegal,” and now the Canadian admin has content that’s illegal in Canada on their Canadian server and has no idea.

IMO, the best solution to this is something similar to what Renaud Chaput (Mastodon’s resident infra boffin) described in his recent blog post: give admins a way to hand this off to pluggable third-party services. Admins who are worried about this sort of thing can then get some degree of safety via e.g. PhotoDNA, whereas others can take on additional risk and preserve additional privacy.
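To make that concrete, here’s a minimal sketch (in Python, with made-up names like `MediaScanner` and `handle_incoming_media`; this is not Mastodon’s actual code or Chaput’s exact proposal) of what such a pluggable scanning hook could look like:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ScanResult:
    flagged: bool       # True if the scanner matched known abusive material
    reason: str = ""    # e.g. which hash list produced the match


class MediaScanner(Protocol):
    """Interface an admin-chosen external service (e.g. a PhotoDNA relay) might implement."""
    def scan(self, media_bytes: bytes) -> ScanResult: ...


def handle_incoming_media(media_bytes: bytes, scanners: list[MediaScanner]) -> bool:
    """Return True if the media may be cached locally; hold it for review otherwise."""
    for scanner in scanners:
        result = scanner.scan(media_bytes)
        if result.flagged:
            # A real instance would open a moderation report here and keep the
            # file out of the public media cache, rather than just logging.
            print(f"media held for review: {result.reason}")
            return False
    return True
```

An admin who configures no scanners keeps today’s behavior; one who plugs in a hash-matching service trades some privacy for keeping this material out of their media cache.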

All that said: yeah the headline makes it sound like .social is some 8chan-esque hellhole, whereas in reality my feed is 99% German programmers sharing milquetoast political takes.

9 points
*
Deleted by creator
9 points

While I agree that CSAM needs to be addressed, it’s also worth pointing out that Cloudflare has some privacy issues.

24 points
*

The person outright rejects defederation as a solution when it IS the solution: if an instance is in favor of this kind of thing, you don’t want to federate with them, period.

I also find the number of calls for a “Fediverse police” in that thread worrying. Scanning every image that gets uploaded to your instance with a third-party tool is an issue too: on one side, you definitely don’t want this kind of shit to even touch your servers; on the other, you don’t want anybody dictating that, say, anti-union or similar memes get flagged and denounced, with the person who made them marked, targeted, and receiving a nice Pinkerton visit.

This is a complicated problem.

Edit: I see somebody suggested checking the observations against the common and well-used Mastodon blocklists, to see if the shit is contained on defederated instances, and the author said this was something they wanted to check, so I hope there’s a follow-up.

2 points
*

The person outright rejects defederation as a solution when it IS the solution

It’s the solution in the sense that it removes the material from view for users of the mainstream instances. It is not a solution to the overall problem of CSAM and the child abuse that creates such material. There is an argument to be made that that is the only responsibility of instance admins, and that anything past that is the responsibility of law enforcement. This is sensible, but it invites law enforcement to start overtly trawling the Fediverse for offending content and creates an uncomfortable situation for admins and users, as they will go after admins who simply do not have the tools to effectively monitor for CSAM.

Defederation also obviously does not prevent an instance’s users from posting CSAM in the first place. Admins who even unknowingly have CSAM on their instance can easily end up prosecuted and the instance taken down. Section 230 does not apply to material that is illegal on a federal level, and SESTA requires removal of material that violates even state-level sex trafficking laws.

71 points

Yeah, I recall that the Japanese instances have a big problem with that shit. As for the rest of us, Facebook actually open-sourced some efficient hashing algorithms for dealing with CSAM; Fediverse platforms could implement these, which would just leave the issue of getting an image-hash database to check against. All the big platforms could probably chip in to get access to one of those private databases and then release a public service for the ecosystem to use.
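For a rough idea of the matching side (Meta’s open-sourced PDQ produces 256-bit perceptual hashes that are compared by Hamming distance; the function names and the 31-bit threshold below are illustrative assumptions, not any platform’s real API):

```python
def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two equal-length perceptual hashes."""
    return bin(a ^ b).count("1")


def matches_known_hash(upload_hash: int, known_hashes: set[int], max_distance: int = 31) -> bool:
    """Return True if the upload is a near-duplicate of any hash in the shared database.

    Re-encoded or resized copies of an image differ from the original in only a
    few bits of their perceptual hash, so a small Hamming-distance threshold
    (31 is assumed here) still catches them, unlike exact cryptographic hashes.
    """
    return any(hamming_distance(upload_hash, h) <= max_distance for h in known_hashes)
```

The shared database the comment mentions would then just be a set of such hashes that participating instances query or mirror.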

14 points

That’d be useless though, because first, it’d probably be opt-in via configuration settings, and even if it wasn’t, people would just fork and modify the code base or simply switch to another ActivityPub implementation.

We’re not gonna fix society using tech unless we’re all hooked up to some all-knowing AI under government control.

19 points

That’d be useless though, because first, it’d probably be opt-in via configuration settings, and even if it wasn’t, people would just fork and modify the code base or simply switch to another ActivityPub implementation.

No, it wouldn’t, because it’d still be significantly easier for instances to deal with CSAM content with this functionality built into the platforms. And I highly doubt there’s going to be a mass migration away from any Fediverse platform that implements such a feature (though honestly, I’d be down to defederate from any instance that takes serious issue with this).

3 points

And the instances that want to engage with that material would all opt for the fork and be done with it. That’s all I meant.

12 points

That’s not the point. Yes, child porn sites can host child porn; other sites/instances can’t stop that. But what other instances can stop is redistributing said child porn, and for that purpose such technology would be useful.

5 points

researchers found 112 instances of known CSAM across 325,000 posts on the platform

So you’re willing to vacuum up the hashes of every image file uploaded on thousands of decentralized systems into a centralized system (one that is out of “our” control and comes with direct access for law enforcement and corporations) to prevent the distribution of the 0.034% of files that are CSAM, which could just as well be reported and deleted by admins and moderators? Remember how Snowden warned us about metadata?

If you think that’s a wise tradeoff, I guess, go ahead. But then I’d have to question the entire goal of being decentralized in the first place. If it’s all about “a billionaire can’t wreak havoc upon my social network,” then yeah, I guess decentralization helps a bit, but even that remains to be seen.

But if you’re actually willing to do that, you’d probably also be in favor of government backdoors into chat encryption (thus rendering the entire concept moot, because you can’t have backdoors that cannot be discovered by other nefarious actors), and into even more censorship-resistant systems like Tor, because evil people use them to exchange CSAM anonymously as well?

10 points

Facebook actually being good for once? Unheard of

9 points
*

As much as we can (and should) lambast Facebook/Meta’s C-Suite for terrible decisions, their engineers are generally pretty legit.

5 points

They actually contribute a lot of useful stuff to the web dev world, like React.js. It’s just all the other shit they do that’s awful.

52 points

Pedos that got banned from other platforms turn to a platform that hasn’t done it yet.

In other news: the sky is blue

2 points

While white knights propose ways to control everyone, everywhere, every time, in the name of catching the pedos, who will just hop to the next platform (or already have).

50 points
*

I’m not fully sure about the logic here, or the conclusions it perhaps hints at. The internet itself is a network with major CSAM problems (so maybe we shouldn’t use it?).

31 points
*

It doesn’t help to bring whataboutism into this discussion. This is a known problem with the open nature of federation. So are bigotry and hate speech. To address these problems, it’s important to first acknowledge that they exist.

Also, since the Fediverse is still in its early stages, now is the time to experiment with mechanisms to control these problems. Saying that the problem is innate to networks only sweeps it under the rug. At some point there will be a watershed event that forces these conversations anyway.

The challenge is in moderating such content without being ham-fisted. I must admit I have absolutely no idea how; this is just my read of the situation.

27 points

@mudeth @pglpm You really don’t, beyond our current tools and reporting to the authorities.

This is not a single monolithic platform; it’s like attributing the bad behavior of some websites to HTTP.

Our existing moderation tools are already remarkably robust, and defederating is absolutely how this is approached. If a server shares content that’s illegal in your country (or otherwise just objectionable) and has no interest in self-moderating, you stop federating with it.

Moderation is not about stamping out the existence of these things, it’s about protecting your users from them.

If they’re not willing to take action against this material on their servers, then the only thing further that can be done is reporting it to the authorities or the court of public opinion.

14 points
*

Maybe my comment wasn’t clear, or you misread it; it wasn’t meant to be sarcastic. Obviously there’s a problem, and we want (not just need) to do something about it. But it’s also important to be careful about how the problem is presented - and manipulated - and about how fingers are pointed. One can’t point a finger at “Mastodon” the same way one could point it at “Twitter”. Doing so is a bit like pointing a finger at the HTTP protocol.

Edit: see for instance the comment by @while1malloc0@beehaw.org to this post.

7 points

Understood, thanks. Yes I did misread it as sarcasm. Thanks for clearing that up :)

However, I disagree with @shiri@foggyminds.com in that Lemmy, and the Fediverse as a whole, are interfaced with as monolithic entities - not just by people from the outside, but even by their own users. There are people here saying how they love the community on Lemmy, for example. It’s just the way people group things, and no amount of technical explanation will prevent this semantic grouping.

For example, the person who was arrested for CSAM recently was running a Tor exit node, but that didn’t help his case. As shiri pointed out, defederation works for black-and-white cases. But what about cases where things are a bit more gray, like disagreements over hard political viewpoints? We’ve already seen the open internet devolve into bubbles with no productive discourse. Federation has a unique opportunity to solve that problem by starting from scratch and learning from previous mistakes. Defederation is not the solution; it isn’t granular enough to be one.

Another problem with defederation is that it is after-the-fact and depends on moderators and admins. There will inevitably be a backlog (as pointed out in the article). With enough community reports, could there be a holding-cell-style mechanism in federated networks? I think there is space to explore this more deeply, and the study does the useful job of pointing out liabilities in the current state of the art.

14 points

This is exactly what I thought. The story here is that the human race has a massive child abuse material problem.

1 point

The problem is even bigger: in some places (ahem, Reddit) you will get deplatformed for explaining and documenting why there is a problem.

(Even here, I’ll censor myself and hopefully restrict myself to content that’s not too hard to moderate.)

2 points

The internet itself is a network with major CSAM problems

Is it, though?

Over the last year, I’ve seen several reports on TV of IRL group abuse of children by other children… which left everyone scratching their heads as to what to do, since none of the perpetrators can be held criminally responsible.

During that same time, I’ve seen exactly 0 (zero) instances of CSAM on the Internet.

Sounds to me like IRL has a major CSAM, and general sexual abuse, problem.

45 points

“massive child abuse material problem”

“112 instances of known CSAM across 325,000 posts”

While any instance is unacceptable, does 112/325,000 constitute a “massive problem”?

0.0000034462% of posts are unacceptable! Massive problem!

37 points

You moved the period in the wrong direction. It’s 0.034462%.
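For reference, a quick way to check the corrected figure from the study’s numbers (112 matches out of 325,000 posts):

```python
print(f"{112 / 325_000:.6%}")  # prints 0.034462%
```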

9 points

That’s just the material they knew was CSAM from previous investigations.

There were also 713 uses of the top 20 CSAM-related hashtags across the Fediverse on posts that contained media, as well as 1,217 text-only posts that pointed to “off-site CSAM trading or grooming of minors.” The study notes that the open posting of CSAM is “disturbingly prevalent.”
