The Fediverse is a great system for preventing bad actors from disrupting “real” human-to-human conversations, because the mods, developers, and admins are all working out of a desire to connect people (as opposed to “trust and safety” teams more concerned with user retention).

Right now it seems that the Fediverse’s main protection is that it just isn’t a juicy enough target for wide-scale spam and bad-faith agenda pushers.

But assuming the Fediverse does grow to a significant scale, what mechanisms (current or future) are or could be in place to fend off a flood of AI slop that is hard to distinguish from human output? Even the most committed instance admins can only do so much.

For example, I have a feeling all “good” instances will eventually have to turn on registration applications and only federate with other instances that do the same. But it’s not crazy to imagine that GPT could soon outmaneuver most registration questions, which means applications will only slow the growth of the problem, not manage it long-term.
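One existing mechanism along these lines: Lemmy instances can federate on an allowlist basis, so an admin only exchanges content with instances they explicitly trust. A hedged sketch of what that looks like in an instance’s `lemmy.hjson` config (the exact field names are from memory and version-dependent, the hostnames are placeholders, and newer Lemmy versions manage this from the admin settings UI instead):

```hjson
federation: {
  enabled: true
  # Federate ONLY with these instances; everything else is ignored.
  allowed_instances: ["trusted-instance.example", "another-instance.example"]
}
```

An allowlist doesn’t solve the registration problem by itself, but it does mean one lax instance can’t flood all the others.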

Any thoughts on this topic?

4 points

I don’t think there is any way to have a genuine “open forum” amongst complete strangers. There have always been human troll farms pushing narratives with sock-puppet accounts; AI is just enabling them to reach new scales.

I’m actually in favor of echo chambers when it comes to social media, but ones in which you only follow people you know or trust and ignore complete strangers. And make sure you get news and critical information from OUTSIDE social media, again from institutions you trust.

1 point

Yes, strong moderation by members of the community is sufficient to recognize and remove bad (human) actors. The question is one of volume and of overwhelming those human mods: GPT can create hundreds of bad-faith accounts.

4 points
Deleted by creator
1 point

I fully agree. What worries me is if bad actors create bots that are able to overwhelm the human moderators.

6 points

I think that being human scale is largely the appeal of the Fediverse. Each instance isn’t meant to grow to the size of a centralized platform, but to be a relatively small community of people with some shared interests. I look at it similarly to the way IRC channels worked back in the day. You tend to have a group of people whom you interact with frequently and that’s how you know they’re human. If some bot enters the community then it becomes obvious very quickly.

4 points

I have had similar thoughts. I think the answer ultimately lies in active mods who can really get to know a community and its users, and who can identify when users are pushing a narrative even if they can’t confirm whether those users are bots.

Also, as @dessalines@lemmy.ml pointed out, registration applications. On startrek.website we have a question that is easy for a Star Trek fan to answer but not easy for a bot (although, getting back to your concern, ChatGPT would probably have no problem with it).
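As a toy illustration of why a static question only buys time, here is a hypothetical triage script for such applications. Everything in it is made up for illustration (the function name, queue labels, and sample answers are not any real Lemmy API):

```python
# Hypothetical registration-application triage, assuming a community-specific
# question with a small set of known acceptable answers. Illustrative only.

ACCEPTED_ANSWERS = {"the doctor", "emh", "emergency medical hologram"}

def triage_application(answer: str) -> str:
    """Sort an application answer into a moderation queue."""
    normalized = answer.strip().lower().rstrip(".!?")
    if not normalized:
        return "reject"          # blank answers need no human time
    if normalized in ACCEPTED_ANSWERS:
        return "fast_queue"      # looks like a fan; quick human approval
    return "manual_review"       # a mod reads it properly
```

The weak point is exactly the one raised above: anything that can produce a plausible in-universe answer, human or LLM, lands in the fast queue, so a filter like this saves moderator time rather than providing security.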

1 point

What can be done? Smarter people can probably list plenty of things. But in the end, it’s a constant race to outcompete each other. And with LLMs/AI, you can literally train a model on the very system you want it to overcome, with that express purpose, let it work out the “how”, and you’re back to square one again.

I think it can best be put in song

Or put another way: how do you make a bear-proof trashcan that can defeat a bear but not the dumbest of humans?


Fediverse

!fediverse@lemmy.ml


A community dedicated to fediverse news and discussion.

Fediverse is a portmanteau of “federation” and “universe”.


Community stats

  • 1.4K monthly active users
  • 862 posts
  • 13K comments