Mastodon, an alternative social network to Twitter, has a serious problem with child sexual abuse material (CSAM), according to researchers from Stanford University. In just two days, the researchers found over 100 instances of known CSAM across roughly 325,000 posts on Mastodon. They also found hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and the grooming of minors. One Mastodon server was even taken down for a period of time because of CSAM posted to it. The researchers suggest that decentralized networks like Mastodon need more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.
I’m not fully sure about the logic here, or the conclusions being hinted at. The internet itself is a network with major CSAM problems (so maybe we shouldn’t use it?).
"The internet itself is a network with major CSAM problems"
Is it, though?
Over the last year, I’ve seen several reports on TV of IRL group abuse of children, by other children… which left everyone scratching their heads as to what to do, since none of the perpetrators can be held criminally liable.
During that same time, I’ve seen exactly 0 (zero) instances of CSAM on the Internet.
Sounds to me like IRL has a major CSAM, and general sex abuse, problem.
It doesn’t help to bring whataboutism into this discussion. This is a known problem with the open nature of federation. So is bigotry and hate speech. To address these problems, it’s important to first acknowledge that they exist.
Also, since federation is still in its early stages, now is the time to experiment with mechanisms to control these problems. Saying that the problem is innate to networks only sweeps it under the rug. At some point there will be a watershed event that forces these conversations anyway.
The challenge is in moderating such content without being ham-fisted. I must admit I have absolutely no idea how; this is just my read of the situation.
@mudeth @pglpm You really can’t, beyond our current tools and reporting to the authorities.
This is not a single monolithic platform; blaming “Mastodon” as a whole is like attributing the bad behavior of some websites to HTTP.
Our existing moderation tools are already remarkably robust, and defederating is absolutely how this is approached. If a server shares content that’s illegal in your country (or otherwise just objectionable) and its admins have no interest in self-moderating, you stop federating with them (a rough sketch of what that can look like follows after this comment).
Moderation is not about stamping out the existence of these things, it’s about protecting your users from them.
If they’re not willing to take action against this material on their servers, then the only thing further that can be done is reporting it to the authorities or the court of public opinion.
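For what it’s worth, here is a rough sketch of what “stop federating with them” can look like from an admin’s side. It assumes a recent Mastodon version whose admin API exposes POST /api/v1/admin/domain_blocks and an access token with admin scopes; the instance name and token below are placeholders, and details may differ on your version, so treat it as illustrative rather than a recipe.

```python
# Hypothetical sketch: suspending (defederating from) a misbehaving remote server
# via Mastodon's admin API. Assumes a recent Mastodon release that exposes
# POST /api/v1/admin/domain_blocks and a token with admin:write:domain_blocks;
# check your instance's API docs before relying on this.
import requests

INSTANCE = "https://example.social"          # your own instance (placeholder)
TOKEN = "REPLACE_WITH_ADMIN_ACCESS_TOKEN"    # admin-scoped access token (placeholder)

def defederate(domain: str, comment: str) -> None:
    """Ask our instance to stop federating with `domain` entirely."""
    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "domain": domain,
            "severity": "suspend",      # full defederation, not just silencing
            "reject_media": True,       # also refuse to cache their media
            "public_comment": comment,  # shown on the instance's about page
        },
        timeout=30,
    )
    resp.raise_for_status()

defederate("bad-actor.example", "Refuses to moderate illegal content")
```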
Maybe my comment wasn’t clear or you misread it. It wasn’t meant to be sarcastic. Obviously there’s a problem and we want (not just need) to do something about it. But it’s also important to be careful about how the problem is presented - and manipulated - and about how fingers are pointed. One can’t point a finger at “Mastodon” the same way one could point it at “Twitter”. Doing so has some similarities to pointing a finger at the http protocol.
Edit: see for instance the comment by @while1malloc0@beehaw.org to this post.
Understood, thanks. Yes I did misread it as sarcasm. Thanks for clearing that up :)
However, I disagree with @shiri@foggyminds.com: Lemmy, and the Fediverse generally, are in fact interfaced with as monolithic entities, not just by people from the outside, but even by their own users. There are people here saying how they love the community on Lemmy, for example. It’s just the way people group things, and no amount of technical explanation will prevent this semantic grouping.
For example, the person who was arrested for CSAM recently was running a Tor exit node, but that didn’t help his case. As shiri pointed out, defederation works for black-and-white cases. But what about cases of disagreement, where things are a bit more gray, like hard political viewpoints? We’ve already seen the open internet devolve into bubbles with no productive discourse. Federation has a unique opportunity to solve that problem, starting from scratch and learning from previous mistakes. Defederation is not the solution; it isn’t granular enough to be one.
Another problem with defederation is that it is after-the-fact and depends on moderators and admins. There will inevitably be a backlog (as pointed out in the article). With enough community reports, could there be a holding-cell-style mechanism in federated networks, where flagged posts are withheld until reviewed (see the sketch below)? I think there is space to explore this more deeply, and the study does the useful job of pointing out liabilities in the current state of the art.
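To make the “holding cell” idea concrete, here is a purely hypothetical sketch: posts from federated servers that collect enough user reports get withheld from public timelines until a moderator looks at them. Nothing like this ships in Mastodon or Lemmy today; the threshold, class names, and flow are all invented for illustration.

```python
# Hypothetical "holding cell": incoming federated posts that accumulate enough
# user reports are hidden from public timelines until a moderator reviews them.
from dataclasses import dataclass, field

REPORT_THRESHOLD = 3  # reports needed before a post is quarantined (assumed value)

@dataclass
class FederatedPost:
    post_id: str
    content: str
    reports: set[str] = field(default_factory=set)  # IDs of users who reported it
    quarantined: bool = False

class HoldingCell:
    def __init__(self) -> None:
        self.pending: dict[str, FederatedPost] = {}

    def report(self, post: FederatedPost, reporter_id: str) -> None:
        """Record a report; quarantine the post once the threshold is hit."""
        post.reports.add(reporter_id)
        if not post.quarantined and len(post.reports) >= REPORT_THRESHOLD:
            post.quarantined = True
            self.pending[post.post_id] = post  # hidden until a mod decides

    def resolve(self, post_id: str, approve: bool) -> None:
        """Moderator decision: release the post or keep it hidden."""
        post = self.pending.pop(post_id)
        if approve:
            post.quarantined = False
        # if not approved, the post stays hidden; a real system would also
        # delete it and report it to the relevant authorities
```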
This is exactly what I thought. The story here is that the human race has a massive child abuse material problem.
I don’t trust Stanford not to work on behalf of the CIA or other three-letter agencies. They kind of turn a blind eye to CSA in churches, but a federated network? This sounds like a smear job.
Total tangent, but we kid ourselves if we think the fediverse is somehow censorship-immune in comparison to Reddit or Twitter.
There are more moderators and administrators across all the instances, who can federate/defederate at will and can delete posts and propagate that deletion through the network. At the same time, a government doesn’t need to negotiate with a large company; it only needs to hint that it could destroy one person’s livelihood to get undesirable content removed from the network. And to avoid the Streisand effect, instead of requesting the deletion of one specific piece of subversive content (which could backfire), it can just insinuate that some illegal material (CSAM being the most obvious, but anything goes, really) has been found, to force the shutdown or takeover of the whole instance.
The same goes for big companies instead of governments: if a large corporation launched its own Mastodon clone, the first thing it would reasonably fund is smear pieces by “journalists” and/or “scientists” hinting at the harm that will befall server owners who continue to host Mastodon instances.
I personally hate what crypto has become (if I wanted to destroy crypto, I’d have invented crypto bros as a psy-op), but the fediverse isn’t really federated enough to be resistant to influence by corporations and governments, and something blockchain-adjacent could have been the solution. For example: if the server admin and their hoster were totally unable to decrypt whatever is stored on their own server, and the network as a whole distributed all the content probabilistically across every federated server, the network would only get stronger and more censorship-resistant with each new instance. If a government forced you, for whatever reason, to take down your server, your content would not be gone but stored with all the other nodes.

If you were able to retrieve your key, you could even move to a new instance and authenticate as your old instance. (Don’t forget: you are not “sending” BTC from one wallet to another, you are only telling as many nodes as sensible that BTC on the chain now belongs to a new key; the same would go for content. Taking down one node that holds a “wallet” doesn’t change which wallet the BTC on the chain belongs to. I propose the same, just with content.)

If federation between instances worked in a way comparable to how it does now, this would additionally increase the odds of rooting out bad-faith actors trying to flood the whole network with illegal content, since their content would be stored on far fewer nodes in a pseudo-predictable way: as soon as every major instance defederated from them, their content would no longer be stored on those instances’ nodes or on unfederated third-party nodes.
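To illustrate the key/content idea above, here is a toy sketch: content is identified by the hash of its encrypted bytes, and “ownership” is tied to a key rather than to whichever server happens to host it. This is a simplification under my own assumptions (HMAC standing in for real public-key signatures, no actual replication protocol), not how any existing fediverse software works.

```python
# Toy sketch of the "content behaves like coins on a chain" idea: a post's
# identity is the hash of its (encrypted) bytes, and ownership is proven with a
# key rather than with the server that stores it. A real system would use
# asymmetric signatures so anyone can verify a claim; HMAC here is just a stand-in.
import hashlib
import hmac
import secrets

def content_id(encrypted_blob: bytes) -> str:
    """Content-addressed ID: any node can recompute it, no node can forge it."""
    return hashlib.sha256(encrypted_blob).hexdigest()

def claim_ownership(author_key: bytes, cid: str) -> str:
    """Stand-in for a signature binding the author's key to the content ID."""
    return hmac.new(author_key, cid.encode(), hashlib.sha256).hexdigest()

def verify_ownership(author_key: bytes, cid: str, claim: str) -> bool:
    return hmac.compare_digest(claim_ownership(author_key, cid), claim)

# The author keeps the key; the encrypted blob is replicated to many nodes.
key = secrets.token_bytes(32)
blob = b"...ciphertext the hosting admins cannot read..."
cid = content_id(blob)
claim = claim_ownership(key, cid)

# Even if the node that originally stored the blob is taken down, any replica
# yields the same cid, and the author can re-prove ownership from the key alone.
assert verify_ownership(key, cid, claim)
```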
“massive child abuse material problem”
“112 instances of known CSAM across 325,000 posts”
While any instance is unacceptable, does 112/325,000 constitute a “massive problem”?
Roughly 0.034% of posts are unacceptable! Massive problem!
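For what it’s worth, the proportion being argued about is easy to check from the 112-in-325,000 figures quoted above:

```python
# Quick check of the proportion: 112 known-CSAM matches out of roughly 325,000
# posts sampled in the study.
known_csam = 112
posts_sampled = 325_000
print(f"{known_csam / posts_sampled:.4%}")  # prints 0.0345%
```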
That’s just the material they knew was CSAM from previous investigations.
There were also 713 uses of the top 20 CSAM-related hashtags across the Fediverse on posts that contained media, as well as 1,217 text-only posts that pointed to “off-site CSAM trading or grooming of minors.” The study notes that the open posting of CSAM is “disturbingly prevalent.”
I for one am all for instances being forcibly taken down by police if they can’t moderate CSAM appropriately.
Moderation is a very real challenge. The internet at large aimed to solve it by centralizing everything to a few megacorps with AI moderation. The fediverse aims to solve it by keeping instances small and holding both mods and users accountable.
This is a whataboutist counterpoint at best. Universities and their researchers are not a monolith.
OP unironically linked an article referring to a 50k donation in 2004 to a physics department to show that the Stanford Cyber Policy Center somehow is not out for the wellbeing of kids? Imagine being this delusional.