ChatGPT has meltdown and starts sending alarming messages to users::AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them
They’re only getting redditors’ comment data, not CoD multiplayer transcripts.
I’ve found the sexism on Reddit to be on par with the racism. Goodness help you if you’re a woman of color, unless you’ve been working the same job for decades or don’t want kids, in which case you’ll be an inspiration to that community.
Reddit is, alas, not the only forum exhibiting such hate.
… sure … but you don’t prepare a kid for racism with a sheltered upbringing in a pretend world where discrimination doesn’t exist. You point out bad behaviour and tell them why it’s not OK.
My son is three years old and has two close friends: one is from an ethnic minority (you could live in my city for an entire year without walking past a single person of their ethnic background on the street); the other is a girl. My kid is already witnessing (but not understanding) discrimination against both of his closest friends in the playground, and we’re doing what we can to help him navigate that. Things like “I don’t like him, he looks funny” and “she’s a girl, she can’t ride a bicycle”.
Large Language Model training is exactly the same: you need to include discrimination in your training set. That’s a necessary step towards training a model that doesn’t discriminate. Reddit has more overt discrimination than many other places, and for this purpose that’s a good thing.
The worst behaviour is easier to recognise, and it can help you learn to spot more subtle discrimination such as “I don’t want to play with that kid”, which is not an obviously discriminatory statement but definitely could be one (and you should probably investigate before agreeing with the speaker).
Yes, you need to include ideology/prejudice (two sides of the same coin) when training a new mind, BUT:
- you must segregate the “thinking this way is good” training data from the “thinking this way is wrong” training data, AND
- doing that takes work, which is why I doubt it’s being done as actually required by any AI company, anywhere.
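The segregation step described above could be sketched roughly like this; it is a minimal illustration only, and the labels, field names, and example texts are all hypothetical, not anyone’s actual pipeline:

```python
# Hypothetical sketch: keep discriminatory text in the corpus, but segregate it
# by how the model should treat it, instead of mixing everything together.

examples = [
    {"text": "she's a girl, she can't ride a bicycle", "label": "discriminatory"},
    {"text": "anyone can learn to ride a bicycle", "label": "acceptable"},
    {"text": "I don't like him, he looks funny", "label": "discriminatory"},
]

def segregate(examples):
    """Split labelled examples into a negative pool (behaviour the model
    should learn to recognise and reject) and a positive pool (behaviour
    it should imitate)."""
    negative = [e for e in examples if e["label"] == "discriminatory"]
    positive = [e for e in examples if e["label"] == "acceptable"]
    return negative, positive

negative, positive = segregate(examples)
```

The point of the commenter’s argument is the cost hiding in that `label` field: someone (or some other model) has to assign it to every example, which is exactly the classification work being described as too expensive to do properly.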
As Musk said about the training data for their mythological self-driving neural net: classification was too costly, so they created an AI to do it for them…
“I wonder” why it is that their full-self-driving never got reliable enough for release…
_ /\ _
Reminds me of Tay, the Microsoft chat bot that learned from Twitter and became racist in a day https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist