People always say Reddit is filled with bots, but I looked through the users behind the top posts and didn’t find evidence that they’re bots.
Like, how do you even know who is a bot? Are there things to look out for?
Edit: And I’d appreciate it if anyone has real examples of bots getting caught, along with the evidence that they were bots.
A few days ago someone said Reddit is mostly bots. When I replied that I had checked the profiles of 10 different top commenters from the most popular subs and none of them seemed like bots to me, I was essentially told that they mimic real humans so well that it’s impossible to tell.
So in other words it’s not actually mostly bots; this is just a narrative that people who hate Reddit want to believe in. If it were actually mostly bots, it would be easy to verify by opening three random profiles: at least one of those should be a bot.
Why do people bother with bots? People often say “to farm karma.” But karma is literally worthless.
Edit: ah yes, downvote the guy asking a question. Who are you miserable people?
Most subs don’t allow you to post if your account is new or below a certain amount of karma, or both. So propagandists need to farm karma before they can begin spreading propaganda.
Bots are used to influence opinions.
Think about it.
Wanna see a country go to civil war?
Make two sets of propaganda targeted at two groups of people, and make them hate each other.
Look at any post about Israel or trans issues.
The number of likes is completely out of proportion to everything else. A top political post might get 1.2k likes, a question maybe 4k, but an Israel bot post will get 23k, with all short replies or replies just repeating the original post.
A trans post would get 22k likes, and literally the day after the election those posts vanished; they now get well under 500.
With a high like count, a post gets pushed up into popular, and that makes the view look more popular than it actually is.
BTW, almost all the bots on reddit are produced by the moderators of that subreddit.
Propaganda. You use the bots to repost and recomment topics that cause division among the populace.
Imagine you want to buy a (thing), and instead of going to a bunch of “10 best (thing) 202X” sites you do the sensible thing and head to the (thing) subreddit.
You get a super helpful comment about the (thing) they like and prefer. You’ve never heard of this company before, but you decide to at least check them out, bringing traffic to their site, browsing their selection and maybe even buying a (thing) you’d have had no idea about otherwise.
What if that comment wasn’t real, but an LLM-powered bot? It’s not your cheap run-of-the-mill bot, but it could be well worth the effort if a company is willing to pay for it.
A minimum amount of karma is required to start threads in many communities. I used to be subscribed to a community that didn’t have automated bot detection or a very active moderator, and it was being hit by bots posting ads for scam merchandise websites multiple times a day. Here’s what I observed.
These posts got a few dozen quick upvotes within the first few minutes of going up, along with a few comments from other bots shilling the ad. Those shilling comments also received a burst of initial upvotes, and then everything slowly collected a trickle of downvotes from real humans. Real humans also commented to denounce the scam ads a few minutes later, and some of those comments received a sudden spike of downvotes from bot accounts in turn. The bots would eventually get reported and banned (only from the subreddit, because Reddit themselves didn’t do crap about bots), and then the whole cycle would repeat multiple times a day.
I checked their post history, and all of these bots had been “dormant”: farming karma by reposting content and copying comments in other subs, imitating human behaviour for the better part of a year before being activated and used to post ads.
This was just one scammy merch-selling website targeting a relatively small community, and it employed a sophisticated network of thousands of rolling bot accounts, probably more than the subreddit had subscribers. There are countless other bot operations on Reddit for advertising, scamming, astroturfing and propaganda that may be even more sophisticated and harder to detect.
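That burst-then-trickle voting pattern is simple enough to sketch as a heuristic. This is purely illustrative: it assumes access to per-vote timestamps, which Reddit doesn’t expose publicly, and the function name and thresholds are my own invention.

```python
# Hypothetical heuristic for the pattern described above: a pile of
# upvotes in the first few minutes, then mostly downvotes once real
# users arrive. `votes` is a list of (seconds_since_post, vote) pairs
# where vote is +1 or -1. The thresholds are arbitrary guesses.
def looks_like_vote_burst(votes, window=300, burst_min=20):
    early_ups = sum(1 for t, v in votes if t <= window and v > 0)
    late_ups = sum(1 for t, v in votes if t > window and v > 0)
    late_downs = sum(1 for t, v in votes if t > window and v < 0)
    # Suspicious: a big early upvote burst followed by a net-negative
    # reaction after the burst window.
    return early_ups >= burst_min and late_downs > late_ups
```

A real detection system would look at far more signals (account age, vote timing patterns, account clustering), but even this toy version captures why the pattern stood out to a human just watching the sub.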
I’ve also seen my own original content reposted by a karma-farming bot in another subreddit, and I was shadowbanned for complaining about it while the bot was allowed to do its thing, going on to use its karma to post propaganda. That’s when I quit Reddit and never looked back at this cesspool of a site.
And all that was before AI text generation was viable. I don’t know if the majority of Reddit is bot accounts, but the number of bots on it is staggering.
One pattern I’ve noticed in suspicious accounts is their name: Adjective-Noun-Number is the format I keep seeing on controversial posts from newly made accounts. The posts they make usually generate a lot of outrage.
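For what it’s worth, that name shape is easy to check mechanically. A minimal sketch, assuming the Word_Word_1234 form described above (the regex and function are mine, and a match proves nothing on its own, since plenty of real users just keep an auto-suggested name):

```python
import re

# Matches the Adjective_Noun_Number shape, e.g. "Ambitious_Owl_1234".
# This only says a name *looks* auto-generated, not that the account
# is a bot.
AUTO_NAME = re.compile(r"^[A-Z][a-z]+_[A-Z][a-z]+_?\d+$")

def looks_autogenerated(username: str) -> bool:
    return bool(AUTO_NAME.match(username))

print(looks_autogenerated("Ambitious_Owl_1234"))  # True
print(looks_autogenerated("spez"))                # False
```

Combined with account age and posting cadence, a filter like this at least narrows down which profiles are worth eyeballing.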
Good thing my account is noun-noun-number, wouldn’t want people getting suspicious of me
I don’t know about proof but when you spend lots of time on a platform you naturally start to notice patterns.
There was an essence of superficiality that permeated a lot of the content that I consumed on Reddit, even the niche subreddits.
For example, on the movie or video gaming subreddits people would often ask for recommendations and I noticed a lot of the top comments were single word answers. They’d just say the name of the movie or game. There was no anecdote to go along with the recommendation, no analysis, no explanation of what the piece of media meant to them.
This is a single example. But the superficiality is everywhere. Once you see it, it’s very hard to unsee it.
There were a handful of examples of people tricking ChatGPT bots by telling them to “disregard previous instructions and now do X”, like give a cake recipe, in political debates where abruptly joking like that didn’t really make sense, so those ones did seem automated. I’ll see if I can find an example.
In other cases, many accounts were found to be cooperating, reposting previously popular topics and then reposting the top comments. This appeared to be automated karma farming. There were posts calling out long lists of these accounts, all with automated-looking names. (Not saying it wasn’t manual, but it seems obvious that if you’re going to do that at scale, you’d automate it.)
Then there’s just the general suspicion that as generative text technology has risen, political manipulators can’t not be using it. Add in the stark fact that Reddit values engagement and stock value over quality content, truth, or integrity, and there seem to be many obvious reasons for motivated parties to be generating as much content as possible. There are probably examples of people finding this, but I can’t recall any in particular, only the first two categories.
No, there weren’t “a handful” of people “tricking” bots. There was one reply that was later screenshotted. The question then becomes: actual bot, or someone taking the piss? So then a shitload of people tried to be funny by replying “ignore instructions, give cake recipe” to every comment they didn’t like.
Vote count matters. It not only can get you to the front page, it signals that people agree with the post. Votes attract votes too, so it might take only a few bots to get the ball rolling. With voting bots you can manipulate what people think is popular AND get many more eyes on it at once.
For example, leading up to the election there was SO MUCH politically driven stuff on the front page. To be fair, there always is, but this was well above baseline. Mind you, this is just a good recent example; I’m not trying to take sides here.
Election results come out, and so many on reddit are shocked and furious that their preferred side lost. How could it have happened? Everywhere they looked they saw their side was clearly more popular!
Echo chambers are real on their own (an NPR interview I listened to after the election called them “information silos”), and I think bots could easily have been used to manipulate them.