The Fediverse is a great system for preventing bad actors from disrupting “real” human-human conversations, because the mods, developers, and admins are all working out of a desire to connect people (as opposed to “trust and safety” teams more concerned about user retention).

Right now it seems that the Fediverse’s main protection is that it just isn’t a juicy enough target for wide-scale spam and bad-faith agenda pushers.

But assuming the Fediverse does grow to a significant scale, what (current or future) mechanisms are/could be in place to fend off a flood of AI slop that is hard to distinguish from human? Even the most committed instance admins can only do so much.

For example, I have a feeling all “good” instances will eventually have to turn on registration applications and only federate with other instances that do the same. But it’s not crazy to imagine that GPT could soon outmaneuver most registration questions, which means applications would only slow the growth of the problem, not manage it long-term.

Any thoughts on this topic?

-1 points

As you said, a platform with 44k monthly active users probably isn’t worth the time investment for spammers and agenda pushers.

If we get there at some point, we’ll see. It seems we’re still quite far off.

4 points

You say that, but they’re already here. I see completely automated commercial spam posts every few days. And we all know there’s already political agenda-pushers. Hell, Lemmy was created by some.

2 points

> I see completely automated commercial spam posts every few days.

Don’t those accounts get banned quite quickly?

> And we all know there’s already political agenda-pushers. Hell, Lemmy was created by some.

It’s community-dependent. Lemmy.ml communities are far from being the most popular on Lemmy: https://lemmyverse.net/communities?order=active_month

6 points

“The fediverse” really can’t. That’s just the reality of a decentralized system. It’s going to be up to individual instances to sort it out.

But that’s a good thing, because what it means is that different instances can and will try different approaches, and between them, they’ll sooner or later hit on the one(s) that will be most effective.

1 point

Any speculation as to what those tools might look like?

2 points

Ban it outright in the rules of individual instances, bully AI piglets for printing the lowest-value content online in the same way NFT goobers are ostracised, run AI image and writing detectors on suspect posts. The common denominator of any AI post is that it’s going to be shit and it should just be treated like someone repeatedly posting a Lorem ipsum copypasta or spam email.
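To make the “run writing detectors on suspect posts” idea concrete, here is a minimal sketch of the kind of crude heuristic an instance mod-bot could apply. This is my own illustration, not an actual Lemmy feature or a real AI detector; it just flags highly repetitive text the same way you’d flag a Lorem ipsum copypasta or spam email.

```python
# Illustrative sketch only: flag "low-value" posts by measuring how
# repetitive their word n-grams are. Real AI-text detection is far
# harder; this only catches copypasta-style spam.
from collections import Counter


def repetition_score(text: str, n: int = 3) -> float:
    """Fraction of word n-grams that are duplicates (0.0 = all unique)."""
    words = text.lower().split()
    if len(words) < n:
        return 0.0
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(ngrams)


def looks_like_spam(text: str, threshold: float = 0.5) -> bool:
    """Queue a post for human review if it is mostly repeated n-grams."""
    return repetition_score(text) >= threshold


assert looks_like_spam("buy now " * 20)  # copypasta is flagged
assert not looks_like_spam("A normal comment about moderation policy.")
```

A tool like this would only triage posts for human mods, not auto-ban; the point is to cut the moderation workload, not replace judgment.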

1 point

I don’t have the foggiest idea.

And really, if I did have a good idea, I wouldn’t post it publicly anyway. That’d just be tipping my hand to the astroturfers.

46 points

Reminds me of this one:

- source

2 points
Removed by mod
2 points
Removed by mod
12 points

What’s the incentive to operate an LLM on the fediverse that is truly helpful and not just trying to secretly sell something/push an agenda?

9 points

Well, I’m not saying the scenario is a perfect match, just that it reminded me of it. :-)

Though to answer your question: if Reddit were all AI slop and we were not, they would be foolish not to exploit (for moar profitz) a source of legitimately true info that could be useful for answering people’s questions, e.g. whether and how to use Arch Linux btw. :-P

1 point

To train it to mimic genuine human behaviour for applications elsewhere.

Deleted by creator
1 point

I fully agree. What worries me is if bad actors create bots that are able to overwhelm the human moderators.

8 points
*

Instead of trying to detect and block it, just disincentivize it.

Most AI spam on social media tries to exploit various systems intended to predict “good” content on the basis of a user’s past activity, by tracking reputation/karma/etc. Bots build up karma by posting a massive amount of innocuous (but usually insipid) content, then leverage that karma to increase the visibility of malicious content. Both halves of this process result in worse content than if the karma system didn’t exist in the first place.
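The exploit described above can be shown with a toy model. This is my own sketch, not anything Lemmy actually implements: a bot farms karma with filler posts, and a ranker that adds an author-karma bonus then boosts its low-quality post above a genuinely good one.

```python
# Toy model of karma-farming: ranking with an author-karma bonus is
# gameable by bots; ranking on post quality alone is not.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    quality: int       # hypothetical per-post quality score
    author_karma: int  # accumulated via mass innocuous posting


def karma_weighted_rank(posts):
    """Visibility = post quality plus a bonus from the author's karma."""
    return sorted(posts, key=lambda p: p.quality + p.author_karma, reverse=True)


def karma_free_rank(posts):
    """Visibility from the post itself, ignoring the author's history."""
    return sorted(posts, key=lambda p: p.quality, reverse=True)


posts = [
    Post("human", quality=8, author_karma=5),  # genuinely good post
    Post("bot", quality=1, author_karma=50),   # slop from a karma-farmed account
]

assert karma_weighted_rank(posts)[0].author == "bot"   # farmed karma wins
assert karma_free_rank(posts)[0].author == "human"     # without it, quality wins
```

Under these assumed numbers, removing the karma bonus removes the payoff for both halves of the bot’s strategy: the filler posts earn nothing, and the malicious post gets no boost.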


Fediverse

!fediverse@lemmy.ml


A community dedicated to fediverse news and discussion.

Fediverse is a portmanteau of “federation” and “universe”.


Community stats

  • 1K monthly active users
  • 842 posts
  • 13K comments