It appears to me that the current state of Lemmy is similar to other platforms when they were smaller and more insular, and that insularity is somewhat protecting it.
I browse Lemmy, and it feels a bit like other platforms did back in 2009, before they became overwhelmed and enshittified.
If I understand it correctly, Lemmy has a similar “landed gentry” moderation scheme, where whoever creates a community first controls it. This was easily exploited on other platforms, particularly for astroturfing, censorship, and controlling a narrative.
If/when Lemmy starts to experience its own “eternal September”, what protections are in place to ensure we will not be overwhelmed and exploited?
Yep. People around here love to attribute magic powers to decentralization that it definitely does not have. The assumption that crappy behavior is somehow localized to a specific instance is bizarre; nothing is keeping people from spamming accounts on instances with free signups. If anything, decentralization makes it significantly harder to scale up moderation, on top of all the added costs of hosting volunteer social media servers.
That said, I’m not concerned at this point. There is nowhere near enough growth happening to make this a problem for a long time. Masto worried about it legitimately for like twenty minutes back in the first few exodus incidents, before all the normies got alienated and landed on Bluesky.
Don’t get me wrong, I like it here. It feels all retro, kinda like 90s forums. But “what if it gets so popular it’s swamped with bad actors” is VERY low on my list of priorities. We have like two spammers and they’ve become local mascots. Mass malicious engagement is NOT the concern at the moment.
The assumption that crappy behavior is somehow localized to a specific instance is bizarre; nothing is keeping people from spamming accounts on instances with free signups.
I disagree. If that is your primary concern, look at what Beehaw (another Lemmy instance) did. They closed their signups to keep bad-actor spam accounts off their own instance, and they defederated from instances that allow easy signups.
It’s extreme, yes. It limits conversation with the wider fediverse, yes. However, it does mitigate the exact problem you’re citing. I personally prefer to put up with the spammers in exchange for the wider audience, but I don’t fault Beehaw for their actions and choices.
No, see, you’re assuming that this is a problem for one instance. Which makes sense, because there’s nobody here and not much incentive to target the few people who are.
If you’re the size of a Twitter (and that’s a couple hundred million accounts) or a Facebook (about ten times that), then there are more than enough people to be targeted by more than enough bad actors to swamp EVERY instance with more spam signups than Beehaw ever had, legitimate or not.
And you have nothing to stop bad actors from spinning up entire instances, which you then have to moderate individually, too.
You can’t defederate from every instance that gets hostile accounts because the logical thing if you’re a malicious actor is to automate signups to ALL available instances. Spam is spam is spam. You do it at scale. And you can’t shut down all signups on all instances if you want to provide the service at scale.
There is no systemic solution to malicious use. If there were, commercial social media would have deployed it to save money, at least back when they were still keeping up the pretense that they moderate things to meet regulations. Moderation is hard and expensive, and there are no meaningful federation-wide tools in place to manage it. I don’t even know if there can be. The idea that defederation and closing signups will be enough at scale is clearly not accurate, and I don’t think most of the big players building federated apps would disagree. I think the hope is that the tools will grow as the need for them does. I’m not super sure how well that will go, but I’m also not sure things will get big enough for that to matter any time soon.
then there are more than enough people to be targeted by more than enough bad actors to swamp EVERY instance with more spam signups than Beehaw ever had, legitimate or not.
I don’t see how your statement applies to a Beehaw-type response. Who cares how many bad actors there are if you’re allowing zero signups on your own instance and defederating from the instances that do allow them? I don’t know the bowels of the Lemmy code well enough to know whether there is an “instance federation allowlist” opt-in as opposed to a “defederate from instance X” opt-out. If the former doesn’t exist yet, it would likely be added to the Lemmy code to combat the exact scenario you give of an infinite number of spam instances being spun up.
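For what it’s worth, I believe Lemmy already ships both modes. Older versions exposed them directly in lemmy.hjson (newer versions move the same settings into the admin UI and site-settings API). A minimal sketch from memory, so treat the exact field names as unverified rather than gospel:

```hjson
# lemmy.hjson (sketch; field names from memory, check your version's docs)
{
  federation: {
    enabled: true
    # Opt-in allowlist: if set, federate ONLY with these instances
    allowed_instances: ["beehaw.org", "lemmy.ml"]
    # Opt-out blocklist: defederate from specific known-bad instances
    blocked_instances: ["spam.example"]
  }
}
```

With an allowlist in place, a freshly spun-up spam instance is ignored by default instead of having to be blocked one at a time.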
Moderation is hard and expensive,
I agree with this.
and there are no meaningful federation-wide tools in place to manage it.
I’m arguing there doesn’t have to be a federation-wide tool. There are instance-level tools that give enough control, depending on how extreme a response the instance wants to enact.
There is no systemic solution to malicious use.
I agree. I argue a systemic solution isn’t a requirement. You’re looking for one thing that solves the problem for the entire fediverse. That’s a rather un-fediverse concept. The point of the fediverse is decentralization, allowing instances to enact their own rules that work for them.
I don’t know how old you are, but decades before giant social media existed, internet forums were a common community posting system. This is an old and known problem, and a number of approaches from those days apply to modern Lemmy instances (a couple are sketched below). Yes, many of them would require raising the walls of the garden, but the approaches exist. Is it perfect? No, but if that’s what it takes, then that will be the result, and the tools to do it already exist in Lemmy.
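For instance, assuming I’m remembering Lemmy’s configuration correctly, two of the classic forum-era defenses are already built in as instance-level settings. A hedged sketch in the older lemmy.hjson layout (newer versions expose the same knobs in the admin UI, and exact names may differ by version):

```hjson
# lemmy.hjson (sketch; settings from memory, verify against your version)
{
  # Classic forum defense #1: CAPTCHA on signup
  captcha: {
    enabled: true
    difficulty: medium
  }
  # Classic forum defense #2: per-IP rate limiting,
  # here at most 3 registrations per 3600-second window
  rate_limit: {
    register: 3
    register_per_second: 3600
  }
}
```

The more drastic end of the spectrum (application-based signups, closing registration outright, the federation allowlist mentioned upthread) covers the Beehaw-style response.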
Decentralization provides a lot of important benefits, such as protection against the whole system being made worse for profit, or against unpopular network-wide rules being imposed. I like it here; it’s fun in the way the old web was and the corporate web isn’t.
I think we’re in agreement that preventing moderators of popular communities from being assholes, and handling large-scale abuse as OP asked about, are not among those benefits.
Agreed, for sure. If anything, decentralization makes those things harder, I’d say. And also agreed that there are benefits to decentralization along the lines you mention. Those two things can be true at the same time.
I think it’d be cool to figure out what the toolset to handle those issues looks like before they become a problem. Or, honestly, just because figuring it out would be a meaningful challenge and might move the sorry state of social media in the right direction in general. That said, there are a LOT of overcomplacent assumptions, at least in the userbase, about decentralization being a magic bullet. I think the development side is a lot less… I don’t want to say naive, but a lot more realistic about the challenges, in any case.