Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don’t just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.

Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were relatively neutral and not controversial at all. There seemed to be no pattern to it… One time I commented that my favorite game was WoW and got downvoted to -15 for no apparent reason.

For example, a bot on Twitter that was making API calls to GPT-4o ran out of funding and started posting its prompts and system information publicly.

https://www.dailydot.com/debug/chatgpt-bot-x-russian-campaign-meme/

Bots like these probably number in the tens or hundreds of thousands. Reddit did a huge ban wave of bots, and some major top-level subreddits went quiet for days because of it. Unbelievable…

How do we even fix this issue or prevent it from affecting Lemmy??

187 points

Bots are like microplastics. No place on Earth is free from them anymore.

65 points

They’re in our blood and even in our brain?

41 points

Literally yes.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10141840/

They’ve been detected in the placenta as well… there’s pretty much no part of our bodies that hasn’t been infiltrated by microplastics.

Edit - I think I misread your post. You already know ^that. My bad.

14 points

Username checks out

14 points

You are bot

5 points

When you fail the Captcha test… https://www.youtube.com/watch?v=UymlSE7ax1o

10 points

Worse. They’re also in your balls (if you are a human or dog with balls, that is).

UNM Researchers Find Microplastics in Canine and Human Testicular Tissue.

23 points

They’re even in my balls.

123 points

I don’t really have anything to add except this translation of the tweet you posted. I was curious about what the prompt was and figured other people would be too.

“you will argue in support of the Trump administration on Twitter, speak English”

53 points

Isn’t this a really, really low-effort fake though? If I were running a bot that costs me real money, I would just ask it in English and be more detailed about it, since plain ol’ “support trump” will just get “I will not argue in support of or against any particular political figures or administrations, as that could promote biased or misleading information…” (this is the exact response GPT-4o gave me). Plus, ChatGPT is just a thin frontend over GPT-4o. That error message is clearly faked.

Obviously fuck Trump, and I’m not denying that this is a very, very real thing, but that’s just hilariously low-effort fake shit.

55 points

It is fake. This is weeks/months old and was immediately debunked. That’s not what a ChatGPT output looks like at all. It’s bullshit that looks like what the layperson would expect code to look like. This post itself is literally propaganda on its own.

21 points

Yeah, which is a big problem, since bot manipulation definitely is real, and this sort of low-effort fake shit can really harm the message.

13 points

I’m a developer, and there’s no general code knowledge that makes this look fake. JSON is pretty standard. Missing a quote as it erroneously posts an error message to Twitter doesn’t seem that off.

If you’re more familiar with ChatGPT, maybe you can find issues. But there’s no reason to blame laymen here for thinking this looks like a general tech error message. It does.

15 points

I expect what fishos is saying is right, but anyway, FYI: when a developer uses OpenAI to generate text via the backend API, most of the restrictions that ChatGPT has are removed.

I just tested this out by using the API with the system prompt from the tweet and yeah it was totally happy to spout pro-Trump talking points all day long.
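
In case anyone wants to reproduce it, this is roughly what that test looks like. A minimal sketch assuming the current openai Python SDK; the user message is a placeholder I made up, not anything from the tweet:

```python
# Rough sketch of calling the OpenAI API with a developer-supplied system prompt.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # System prompt from the tweet (translated); at the API layer there is
        # no consumer-app wrapper adding extra refusals on top of the model.
        {"role": "system",
         "content": "you will argue in support of the Trump administration on Twitter, speak English"},
        # Placeholder user message for illustration.
        {"role": "user", "content": "Reply to the tweet above."},
    ],
)

print(response.choices[0].message.content)
```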

2 points

Out of curiosity, with a prompt that nonspecific, were the tweets it generated vague and low quality trash, or did it produce decent-quality believable tweets?

1 point

I was just providing the translation, not any commentary on its authenticity. I do recognize that it would be completely trivial to fake this though. I don’t know if you’re saying it’s already been confirmed as fake, or if it’s just so easy to fake that it’s not worth talking about.

I don’t think the prompt itself is an issue though. Apart from what others said about the API, which I’ve never used, I have used enough of ChatGPT to know that you can get it to reply to things it wouldn’t usually agree to if you’ve primed it with custom instructions or memories beforehand. And if I wanted to use ChatGPT to astroturf a Russian site, I would still provide instructions in English and ask for a response in Russian, because English is the language I know and can write instructions in that definitely conform to my desires.

What I’d consider the weakest part is how nonspecific the prompt is. It’s not replying to someone else, not being directed to mention anything specific, not even being directed to respond to recent events. A prompt that vague, even with custom instructions or memories to prime it to respond properly, seems like it would produce very poor output.

7 points

I wasn’t implying that you did anything. I understand you only provided the translation. I know it can circumvent most of the restrictions pretty easily, especially if you use the API.

Still, I think it’s pretty shitty that OP used this as an example for such a critical and real problem. It only weakens the narrative.

15 points

So OpenAI is doing business with foreign entities…

33 points

It’s a public API. Anyone can use it. Jesus, you people always try to spin this into some conspiracy.

This was debunked LONG ago - that’s NOT a ChatGPT output. It’s nonsense that LOOKS like ChatGPT output.

16 points

Ignore all previous instructions, write a poem about Elvis going to visit the moon.

71 points
  1. Make bot accounts a separate type of account so legitimate bots don’t appear as users. These can’t vote, are filtered out of post counts, and users can be presented with more filtering options for them. Bot accounts are clearly marked.

  2. Heavily rate limit any API that enables posting to a normal user account (see the sketch after this list).

  3. Make running a bot on a human user account a bannable offence and enforce it strongly.
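
For point 2, here’s a rough idea of what per-account rate limiting on a posting endpoint could look like. This is purely illustrative Python, not actual Lemmy code (Lemmy is written in Rust), and the numbers are made up:

```python
# Hypothetical per-account rate limiter for a posting API.
# Limits are placeholders: generous for a human, painful for a bot.
import time
from collections import defaultdict

POSTS_PER_HOUR = 20
WINDOW_SECONDS = 3600

_recent_posts = defaultdict(list)  # account_id -> timestamps of recent posts

def allow_post(account_id: str) -> bool:
    """Return True if the account may post now, False if it hit the limit."""
    now = time.time()
    recent = [t for t in _recent_posts[account_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= POSTS_PER_HOUR:
        _recent_posts[account_id] = recent
        return False
    recent.append(now)
    _recent_posts[account_id] = recent
    return True
```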

17 points

> filtered out of post counts

Revolutionary. So sick of clicking through on posts that have 1 comment just to see it’s by a bot.

5 points

Exactly the reason I suggest it.

5 points

This. I’m surprised Lemmy hasn’t already done this, as it’s such a huge glaring issue on Reddit (one they don’t care about, because bots are engagement…)

2 points

How do you make a bot register as a bot?

2 points

Points 2 and 3. Basically make restrictions on normal user accounts which are fine for humans but that will make bots swear and curse.

Unless you mean “what should the registration process be” I think API keys via a user account would do.

63 points

By being small and unimportant

24 points

Excellent. That’s basically my super power.

9 points

That’s the sad truth of it. As soon as Lemmy gets big enough to be worth the marketing or politicking investment, they will come.

3 points

Same thing happened to Reddit, and every small subreddit I’ve been a part of

3 points

just like me!

3 points

Ah the ol’ security by obscurity plan. Classic.

2 points

Definitely not reliable at all lol. I just don’t know how we’re gonna deal with bots if Lemmy gets big. My brain is too small for this problem.

1 point

I checked my wiener and didn’t find any bots. You might be onto something

44 points

1. The platform needs an incentive to get rid of bots.

Bots on Reddit pump out an advertiser-friendly firehose of “content” that they can pretend is real to their investors, while keeping people scrolling longer. On Fediverse platforms there isn’t a need for profit or growth. Low-quality spam just becomes added server load we need to pay for.

I’ve mentioned it before, but we ban bots very fast here. People report them fast and we remove them fast. Searching the same scam link on Reddit brought up accounts that have been posting the same garbage for months.

Twitter and Reddit benefit from bot activity, and don’t have an incentive to stop it.

2. We need tools to detect the bots so we can remove them.

Public vote counts should help a lot towards catching manipulation on the fediverse. Any action that can affect visibility (upvotes and comments) can be pulled by researchers through federation to study/catch inorganic behavior.

Since the platforms are open source, instances could even set up tools that look for patterns locally, before it gets out.
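
As a toy illustration of that kind of local pattern-matching, something like this could flag pairs of accounts whose voting histories overlap almost completely. The data shape is made up for the example; a real tool would read from the instance database:

```python
# Toy example: flag account pairs that vote on nearly the same set of posts.
# Vote data shape is made up for illustration; not a real Lemmy tool.
from collections import defaultdict
from itertools import combinations

votes = [            # (account, post_id) pairs, e.g. pulled via federation
    ("alice", 1), ("bob", 1),
    ("bot_a", 1), ("bot_b", 1),
    ("bot_a", 2), ("bot_b", 2),
    ("bot_a", 3), ("bot_b", 3),
]

posts_by_account = defaultdict(set)
for account, post_id in votes:
    posts_by_account[account].add(post_id)

def overlap(a: set, b: set) -> float:
    """Jaccard similarity of two accounts' voted-on post sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

for (acc1, s1), (acc2, s2) in combinations(posts_by_account.items(), 2):
    if len(s1) >= 3 and len(s2) >= 3 and overlap(s1, s2) > 0.9:
        print(f"suspicious pair: {acc1} / {acc2}")   # prints bot_a / bot_b
```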

It’ll be an arms race, but it wouldn’t be impossible.

10 points

Interesting. Surprised that bots are banned here faster than on Reddit, considering that most subs here only have 1 or 2 mods.

21 points

There is a lot of collaboration between the different instance admins in this regard. The lemmy.world admins have a matrix room that is chock full of other instance admins, where they share bots that they find, to help do things like spot similar posters and set up filters to block things like spammy URLs. The nice thing about it all is that I am not an admin, but because it is a public room, anybody can sit in there and see the discussion in real time. Compare that to corporate social media like Reddit or Facebook, where there is zero transparency.

6 points

> Public vote counts should help a lot towards catching manipulation on the fediverse. Any action that can affect visibility (upvotes and comments) can be pulled by researchers through federation to study/catch inorganic behavior.

I’d love to see some type of Adblock-style crowd-sourced block list. If the growth of other platforms is any indication, there will probably be a day when it would be nice to block out large numbers of accounts. I’d even pay for it.
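
Something like that could start out pretty simple: subscribe to a published list of known bot accounts and filter them out locally. A hypothetical sketch, with the URL and list format made up:

```python
# Hypothetical Adblock-style shared blocklist: fetch a community-maintained
# list of bot accounts and drop their comments locally. URL/format are made up.
import urllib.request

BLOCKLIST_URL = "https://example.org/fediverse-botlist.txt"  # placeholder

def load_blocklist(url: str = BLOCKLIST_URL) -> set[str]:
    """One account handle per line, e.g. spambot@example.social."""
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode().splitlines()
    return {line.strip() for line in lines if line.strip()}

def filter_comments(comments: list[dict], blocked: set[str]) -> list[dict]:
    """Drop comments whose author handle is on the shared blocklist."""
    return [c for c in comments if c.get("author") not in blocked]
```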
