WrittenInRed [any]
Sure, whatever you say. He might not technically be exonerated, but he might as well be. You had 4 fucking years to actually do something about Trump, but this is the system working as intended. The US keeps drifting further and further right with power further consolidating in the hands of billionaires and corporations, while the Democrats sit on their asses and act like they’re completely helpless to do anything meaningful to actually help ordinary people while in power. Then when they lose elections they get to blame minorities or leftists and use it as an excuse to drift further right.
I’ve been thinking recently about chain-of-trust algorithms and decentralized moderation, and I’m considering making a bot that functions a bit like fediseer but is designed more for individual users, where people can be vouched for by other users. Ideally you end up with a network where trust is generated pseudo-automatically from interactions between users, and reports could then be weighed against the trust level of the people making them vs the person getting reported to gauge whether a post should be removed (rough sketch below). It wouldn’t necessarily be a perfect system, but I feel like there would be a lot of upsides to it, and it could hopefully mean mods/admins only need to remove the most egregious stuff while anything more borderline gets handled via community consensus. (The main issue is lurkers would get ignored with this, but idk if there’s a great way to avoid that happening tbh)
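Something like this is what I mean by the report-weighing part, as a rough sketch. It assumes everyone already has some numeric trust score; the names, numbers, and threshold are all made up and not tied to the actual Lemmy API at all:

```python
# Hypothetical sketch: decide whether a report should remove a post by
# weighing the combined trust of the reporters against the author's trust.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    trust: float  # 0.0 = unknown, higher = more vouched-for

def should_remove(author: User, reporters: list[User], threshold: float = 1.5) -> bool:
    """Remove if the reporters' combined trust sufficiently outweighs the author's."""
    report_weight = sum(r.trust for r in reporters)
    # threshold > 1 means a single equally-trusted user can't remove something on their own
    return report_weight > author.trust * threshold

# Example: two moderately trusted users reporting a low-trust account
author = User("maybe_spammer", trust=0.4)
reporters = [User("alice", trust=0.9), User("bob", trust=0.7)]
print(should_remove(author, reporters))  # True: 1.6 > 0.4 * 1.5
```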
My main issue atm is how to do vouching without it being too annoying for people to keep up with. The obvious shortcut would be to just use votes, but not every instance enables downvotes, plus upvote/downvote totals in general aren’t necessarily reflective of someone’s trustworthiness. I’m thinking maybe it could be based on interactions, where replies to posts/comments get scored by a sentiment analysis model and that positive/negative number feeds into trust? I still don’t think that’s a perfect solution or anything, but it would probably be a decent starting point.
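To make that concrete, here’s a toy version of the interaction-based scoring. I’m just using VADER from NLTK as a stand-in sentiment model, and the update rule and learning rate are totally arbitrary:

```python
# Toy interaction-based trust: each reply someone receives nudges their trust
# up or down according to the reply's sentiment, weighted by how trusted the
# replier themselves is. Requires: nltk.download("vader_lexicon")
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()

def update_trust(current_trust: float, reply_text: str,
                 replier_trust: float, learning_rate: float = 0.05) -> float:
    """Nudge a user's trust based on the sentiment of a reply they received."""
    sentiment = sia.polarity_scores(reply_text)["compound"]  # ranges from -1.0 to 1.0
    return current_trust + learning_rate * sentiment * replier_trust

trust = 0.5
trust = update_trust(trust, "Great writeup, this was really helpful!", replier_trust=0.8)
trust = update_trust(trust, "This is complete garbage, stop posting.", replier_trust=0.3)
print(round(trust, 3))
```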
If trust decays over time as well, then it rewards more active members somewhat and makes it a lot harder to build up a bot swarm. If you wanted any significant number of accounts, you’d have to have them all posting at around the same time, which would be a much more obvious activity spike.
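The decay itself could be as simple as an exponential half-life on each user’s trust, something like this (the 90-day half-life is pulled out of thin air):

```python
import math

def decayed_trust(trust: float, days_inactive: float, half_life_days: float = 90.0) -> float:
    """Exponential decay: trust halves every `half_life_days` of inactivity,
    so a swarm of accounts can't sit dormant and keep their vouches."""
    return trust * math.exp(-math.log(2) * days_inactive / half_life_days)

print(decayed_trust(1.0, 90))   # ~0.5 after one half-life
print(decayed_trust(1.0, 365))  # ~0.06 after a year idle
```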
Idk, this was a wall of text lol, but it’s something I’ve been considering for a while and whenever this sort of drama pops up it makes me want to work on implementing something.