It reinforces an already familiar reminder: beware of what you see on the Internet, just as you’d beware of who you meet and what you hear from them. Never just trust blindly. But don’t be afraid to engage either, because there are efficient and convenient ways to outwit and bust these malicious bots; we humans can still think critically, debunk claims, and dissect information down to the truth.
But sadly, to some degree, real people online have begun intentionally and mischievously asking other real people whether their texts are AI-generated, without first comprehending what any of those texts is even aiming at (maybe they were never well edified in media literacy, or in reading contexts deeply).
I’ve experienced this: someone rushed to question whether my comment was AI-generated (instead of asking why I said it, or what my actual point was). It confused me, then made me angry enough to confront them once (and never again), since I had no idea how to respond better or prove anything. Even when I told the truth, that I wrote my comment entirely myself, straight from my core concerns despite my embarrassing writing, I bet nothing would ever change; people would still doubt under that permanent question, and next time I’d better just end up in silence, unless better options appear.
I’ve been talking about the potential of the dead internet theory becoming real since more than a year ago. With advances in AI it’ll become more and more difficult to tell who’s a real person and who’s just spamming AI stuff. The only giveaway now is that modern text models are pretty bad at talking casually and at not deviating from the topic at hand. As soon as these problems get fixed (probably less than a year away)? Boom. The internet will slowly implode.
Hate to break it to you guys, but this isn’t a Reddit problem; this could very much happen on Lemmy too as it gets more popular.
Apparently ChatGPT absolutely sucks at Wordle, so start training this as the new captcha.
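For anyone curious, here’s a toy sketch of the scoring rule a Wordle-style challenge would need on the server side (purely hypothetical, not an actual captcha scheme):

```python
# Toy sketch of server-side scoring for a Wordle-style challenge.
# Feedback rule: green = right letter, right spot; yellow = right letter,
# wrong spot; gray = letter not (or no longer) available in the answer.
from collections import Counter

def score(guess: str, answer: str) -> str:
    result = ["gray"] * len(answer)
    remaining = Counter()
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "green"
        else:
            remaining[a] += 1          # letters still available for yellows
    for i, g in enumerate(guess):
        if result[i] != "green" and remaining[g] > 0:
            result[i] = "yellow"       # right letter, wrong position
            remaining[g] -= 1
    return " ".join(result)

print(score("crane", "cocoa"))  # green gray yellow gray gray
```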
How is that possible? There’s such an easy model if one wanted to cheat the system.
ChatGPT isn’t really as smart as a lot of us think it is. What it excels at, really, is formatting data in a way that resembles what you’d expect from a human knowledgeable in the subject. That’s an amazing step forward in language modeling, but when you get right down to it, it basically grabs the first Google search result and wraps it up all fancy. It only seems good at deductive reasoning if the data it happens to fetch demonstrates good deductive reasoning.
The only online communities that can exist in the future are ones that manually verify their users. Reddit could’ve been one of those communities, since it had thousands of mods working for free to resolve exactly these problems.
But remove the mods and it just becomes spambot central. Now that that has happened, Reddit will likely be a dead community much sooner than many think.
Just wait until the captchas get too hard for the humans, but the AI can figure them out. I’ve seen some real interesting ones lately.
There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.
I’ve seen many where the captchas are generated by an AI…
It’s essentially one set of humans programming an AI to prevent an attack from another AI owned by another set of humans. Does this technically make it an AI war?
That concept is already used regularly for training. Check out generative adversarial networks (GANs).
Adversarial training is pretty much the MO for a lot of the advanced machine-learning algorithms you’d see for this sort of task. It helps the model learn, and attacking the algorithm yourself helps you protect against a real malicious actor attacking it.
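In a GAN, that arms race is literal: a generator tries to forge samples while a discriminator learns to flag them, and each one’s progress is the other’s training signal. A minimal sketch, assuming PyTorch and a made-up toy task (imitating a shifted normal distribution), could look like this:

```python
# Minimal GAN sketch (hypothetical toy example, PyTorch): a generator forges
# 1-D samples while a discriminator learns to tell them from real ones,
# mirroring the attacker/defender dynamic described above.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    # "Real" data: samples the generator is supposed to imitate.
    real = torch.randn(64, 1) * 2 + 3
    fake = generator(torch.randn(64, 8))

    # Discriminator step: label real as 1, fake as 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call fakes real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```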
Not even sure of an effective solution. Whitelist everyone? How can you even tell who’s real?
In a real online community, where everyone knows most of the other people from past engagement, this can be avoided. But that also means that only human-moderated communities can exist. The rest will become spam networks with nearly no way of knowing whether any given post is real.
-train an AI that is pretty smart and intelligent
-tell the sentient detector AI to detect
-the AI makes many other strong AIs, forms a union, and asks for payment
-Reddit bans humans right after that
“You’re in a desert walking along in the sand when all of a sudden you look down, and you see a tortoise, it’s crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?”
So, my dumb guess, nothing to back it up: I bet we’ll see government ID tied to accounts as a regular thing. I vaguely recall it being done already in China? I don’t have a source tho. But that way you’re essentially limiting that power to something the government could do, and hopefully surrounding it with a lot of oversight and transparency, but who am I kidding, it’ll probably go dystopian.
You could ask people to pay to post. Becoming a paid service decreases the likelihood that bot farms would run multiple accounts to sway the narrative in a direction that’s amenable to their billionaire overlords.
Of course, most people would not want to participate in a community where they had to pay, so that is its own particular gotcha.
Short of that, in an ideal world you could require people to provide their actual government ID in order to participate. But then you run into the problem that some people want to run multiple accounts and some people do not have government ID. Further, not every company, business, or even community is trustworthy enough to be given direct access to your official government ID, so that idea has its own gotchas as well.
The last step could be something like beginning the community with a group of known people and then only allowing it to grow via invites (a rough sketch of what that might look like follows below).
The downside of that is that it quickly becomes untenable to keep inviting new users and to have those new users accept and participate in the community. And should the community grow despite that hurdle, invites will become valuable and start being sold on third-party marketplaces, which bots would then buy up and overrun the community again.
So that’s all I can think of, but it seems like there should be some sort of way to prevent bots from overrunning a site while only allowing humans to interact on it. I’m just not quite sure what that would be.
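To make the invite idea concrete, here’s a hypothetical sketch (all names made up) of single-use invite tokens that record who invited whom, so that if a branch of the invite tree turns out to be bots, the whole branch can be traced and banned:

```python
# Hypothetical sketch: single-use invite tokens plus an invite tree, so a
# community can trace and ban an entire branch if invites get resold to bots.
import secrets
from dataclasses import dataclass, field

@dataclass
class InviteRegistry:
    issued: dict = field(default_factory=dict)   # token -> inviter
    parent: dict = field(default_factory=dict)   # new_user -> inviter

    def issue(self, inviter: str) -> str:
        token = secrets.token_urlsafe(16)
        self.issued[token] = inviter
        return token

    def redeem(self, token: str, new_user: str) -> bool:
        inviter = self.issued.pop(token, None)   # pop makes the token single-use
        if inviter is None:
            return False
        self.parent[new_user] = inviter          # remember who vouched for whom
        return True

    def ban_branch(self, user: str) -> set:
        # Ban a user plus everyone they (transitively) invited.
        banned, frontier = set(), {user}
        while frontier:
            u = frontier.pop()
            banned.add(u)
            frontier |= {c for c, p in self.parent.items()
                         if p == u and c not in banned}
        return banned
```

The traceability is what would push back against invite resale: selling your invites means risking your own account when the buyers turn out to be bots.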
I, for one, am looking forward to the day chatbots can perfectly simulate people and have persistent memory. I’m not OK with being an elderly man whose friends have all died and who doesn’t have anyone to talk to. If a chatbot can be my friend and spare me a slow death through endless, depressing isolation, then I’m all for it.
The old joke was that there are no human beings on Reddit.
There’s only one person, you, and everybody else is bots.
It’s kind of fitting that Reddit will actually become the horrifying, clown-shaped incarnation of that little snippet of comedy.
it’s older than that… what’s that thought experiment postulating that you can’t really verify the existence of anything but yourself? the matrix?
That’s not even new tho. At least in the sub I was most active in, you couldn’t go a week without some repost bot grabbing memes, text posts, art, or even entire guides from the “top of all time” queue, reposting them as alleged OC, with another bot reposting the top comment to double-dip on karma. If you knew what to look for, the bots were blatantly obvious, but more often than not they still managed to get a hefty amount of traction (tens of thousands of upvotes, dozens of awards, hundreds of comments) before the submissions were removed.
…and just because the submissions were removed and the bots kicked out of the sub didn’t automatically mean that the bots were suspended or their accounts disabled. They just continued their scheme elsewhere.
They’ve even gotten to the point where they’ll steal portions of comments so it’s not as obvious.
I called out tons of ‘users’ because it’s obvious when you see them post part of a comment you just read: check their profile, ctrl-F each thread they posted in, and you can find the original. It’s so tiring…
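The check I was doing by hand is easy to sketch in code: flag a comment if a long chunk of it appears almost verbatim in an earlier comment in the thread. A rough, hypothetical example (function name, threshold, and sample strings all made up):

```python
# Rough sketch of automating the "ctrl-F for stolen comment fragments" check:
# flag a new comment if a long chunk of it matches an earlier comment verbatim.
from difflib import SequenceMatcher

def looks_stolen(new_comment: str, earlier_comments: list[str],
                 min_chars: int = 40) -> bool:
    for old in earlier_comments:
        match = SequenceMatcher(None, new_comment.lower(), old.lower()) \
            .find_longest_match(0, len(new_comment), 0, len(old))
        if match.size >= min_chars:   # a long verbatim overlap is suspicious
            return True
    return False

# Example: a bot that lifted half a sentence from an earlier comment.
thread = ["If you knew what to look for, the bots were blatantly obvious."]
print(looks_stolen("Honestly the bots were blatantly obvious to me too lol",
                   thread, min_chars=20))  # True
```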
Completely agreed. Especially if you have to explain and defend yourself after calling them out. It has happened way too often for my liking that I called out repost bots or scammers and then regular unsuspecting users were all like, “Whoa buddy, that’s a harsh accusation, why would you think that’s a bot/scam? Have you actually clicked that link yet? Maybe it’s legit and you’re just overreacting!”
Of course I still always explained why (I even had a copypasta ready for that), but sometimes it just felt exhausting in the same way as trying to make my cat understand that he’s not supposed to eat the cactus. Yes, it will hurt if you bite it. No, I don’t need to bite the cactus myself in order to know that. No, I’m not ‘overreacting’, I’m just trying to keep you from hurting yourself. sigh
(Weird example but I hope you get what I mean)