The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.
Fighting fake accounts is hard, and most fediverse implementations don’t currently have an effective way of filtering them out. I’m sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.
Reddit had ways to automatically catch people trying to manipulate votes though, at least the obvious ones. A friend of mine posted a reddit link in our group for everyone to upvote and got temporarily suspended for vote manipulation like an hour later. I don’t know if something like that can be implemented in the Fediverse, but some people on GitHub suggested a way for instances to share with other instances how trusted/distrusted a user or instance is.
I got suspended multiple times because my partner and daughter were also in our city’s sub, and sometimes one of them would upvote my comments without realizing it was me. It got really fucking annoying, and of course there’s no way to talk to a real person at reddit to prove we’re different people. I’d appeal every time and they’d deny it every time. How reddit could have gotten so huge without realizing that multiple people can live in the same household is beyond me. In the end they both just stopped upvoting anything in the sub because it was too risky (for me).
An automated trust rating will be critical for Lemmy, longer term. It’s the same arms race email has had to fight. There should be a linked trust system of both instances and users. The instance ‘vouches’ for its users’ trust scores. However, if other instances collectively disagree, then the trust score of the instance is also hit. Other instances can then use this information to judge how much to allow from users on that instance.
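Just to make the idea concrete, here’s a minimal sketch (not anything Lemmy actually implements, all names and numbers are made up) of how a receiving instance could weight a user’s vouched score by how much it trusts the vouching instance:

```python
# Hypothetical sketch: each instance vouches for its users with a score in [0, 1];
# a receiving instance discounts that score by its own trust in the reporting instance.

instance_trust = {
    "lemmy.example": 0.9,    # long history, reports match what peers observe
    "botfarm.example": 0.2,  # reports frequently disagree with peer observations
}

# Self-reported ("vouched") trust scores, keyed by (user, home instance).
vouched_user_trust = {
    ("alice", "lemmy.example"): 0.8,
    ("bot123", "botfarm.example"): 0.95,  # the farm claims its bot is trustworthy
}

def effective_trust(user: str, home: str) -> float:
    """Discount a user's vouched score by how much we trust their home instance."""
    return vouched_user_trust.get((user, home), 0.5) * instance_trust.get(home, 0.5)

# The receiving instance could then ignore or down-weight votes below a threshold:
print(effective_trust("alice", "lemmy.example"))     # 0.72 -> count the vote
print(effective_trust("bot123", "botfarm.example"))  # 0.19 -> ignore or down-weight
```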
LLM bots have made this approach much less effective though. I can just leave my bots running for a few months or a year to build up reputation, automating them so that they’re completely indistinguishable from 200 natural-looking users, making my opinion carry 200x the weight. Mostly for free. A person with money could do so much more.
It’s the same game as email. An arms race between spam detection and spam detector evasion. The goal isn’t to catch every bot, but to clear out the low-hanging fruit.
In your case, if another server noticed a large number of accounts working in lockstep, then it’s fairly obvious bot-like behaviour. If their home server also noticed the pattern and reports it (lowering those users’ trust ratings), then the home instance won’t be dinged as harshly. If it reports that all is fine, then it’s assumed the instance itself might be involved.
If you control the instance, then you can make it lie, but this downgrades the instance’s score. If it’s someone else’s, then there’s an incentive for them not to become a bot farm, or at least to be honest in how they report to the rest.
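That feedback loop could be as simple as nudging an instance’s score down whenever its self-reports disagree with what peers observe. A rough sketch, with a made-up update rule and learning rate:

```python
# Hypothetical: penalize an instance in proportion to the gap between what it
# reports about its users and what peer instances observed.

def update_instance_trust(current: float, self_reported: float,
                          peer_observed: float, rate: float = 0.1) -> float:
    """Lower an instance's trust based on how much its self-reported user
    score disagrees with the peers' observation, clamped to [0, 1]."""
    disagreement = abs(self_reported - peer_observed)
    return max(0.0, min(1.0, current - rate * disagreement))

# Peers flag a voting ring (observed ~0.1) but the home instance insists the
# accounts are fine (reports 0.95): its own score takes a noticeable hit.
print(update_instance_trust(0.8, self_reported=0.95, peer_observed=0.1))  # 0.715

# If the home instance also reports the problem, the hit is negligible.
print(update_instance_trust(0.8, self_reported=0.15, peer_observed=0.1))  # 0.795
```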
This is basically what happens with email. It’s FAR from perfect, but a lot better than nothing. I believe 99+% of all emails sent are spam. Almost all get blocked. The spammers have to work to get them through.