ChatGPT has meltdown and starts sending alarming messages to users::AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

40 points

When it starts to become very racist, we’ll know.

23 points

I’ve found the sexism on Reddit to be on par with the racism. Goodness help you if you’re a woman of color; unless you’ve been working the same job for multiple decades or don’t want kids, only then will you be an inspiration to that community.

Reddit is, alas, not the only forum exhibiting such hate.

-6 points

… sure … but you don’t prepare a kid for racism with a sheltered upbringing in a pretend world where discrimination doesn’t exist. You point out bad behaviour and tell them why it’s not OK.

My son is three years old and has two close friends: one is from an ethnic minority (you could live an entire year in my city without walking past a single person of their ethnic background on the street); the other is a girl. My kid is already witnessing (but not understanding) discrimination against both of his closest friends in the playground, and we’re doing what we can to help him navigate it. Things like “I don’t like him, he looks funny” and “she’s a girl, she can’t ride a bicycle”.

Large Language Model training is exactly the same: you need to include discrimination in your training set. That’s a necessary step to train a model that doesn’t discriminate. Reddit has worse discrimination than some other places, and for training purposes that’s a good thing.

The worst behaviour is easier to recognise, and it can help you learn to spot more subtle discrimination such as “I don’t want to play with that kid”, which is not an obviously discriminatory statement but definitely could be discrimination (and you should probably investigate before agreeing with the person).

6 points

Yes, you need to include ideology/prejudice (two sides of the same coin) when training a new mind, BUT

  • you must segregate the “thinking this way is good” training-data from the “thinking this way is wrong” training-data, AND

  • doing that takes work, which is why I doubt it’s being done as actually required, by any AI company, anywhere.
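The segregation step described in the bullets above can be sketched very roughly. This is a hypothetical, minimal illustration (the labels, examples, and `segregate` helper are all made up for this sketch, not any AI company’s actual pipeline): the point is that discriminatory examples stay in the corpus but carry an explicit label, so a model can learn them as things to reject rather than imitate.

```python
# Hypothetical sketch: keep bad examples in the training set, but labeled,
# instead of training on the raw mixed corpus. All data here is illustrative.

LABELED_EXAMPLES = [
    ("she's a girl, she can't ride a bicycle", "discriminatory"),
    ("I don't like him, he looks funny", "discriminatory"),
    ("anyone can learn to ride a bicycle", "acceptable"),
    ("let's all play together", "acceptable"),
]

def segregate(examples):
    """Split a mixed corpus into per-label buckets."""
    buckets = {}
    for text, label in examples:
        buckets.setdefault(label, []).append(text)
    return buckets

buckets = segregate(LABELED_EXAMPLES)
# The discriminatory examples are retained (not filtered out), but flagged,
# which is the labour-intensive classification work the comment refers to.
print(sorted(buckets))
```

The expensive part, as the comment notes, is producing those labels at scale in the first place.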

As Musk said about the training data for their mythological self-driving neural net: classification was too costly, so they created an AI to do it for them…

“I wonder” why their full self-driving never got reliable enough for release…

_ /\ _

10 points

Reminds me of Tay, the Microsoft chatbot that learned from Twitter and became racist within a day: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

7 points

They’re only getting redditors’ comment data, not CoD multiplayer transcripts.

3 points

I know.


Technology

!technology@lemmy.world
