ChatGPT has meltdown and starts sending alarming messages to users::AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

23 points

This is the best summary I could come up with:


In recent hours, the artificial intelligence tool appears to be answering queries with long and nonsensical messages, talking Spanglish without prompting – as well as worrying users, by suggesting that it is in the room with them.

Asked for help with a coding issue, ChatGPT wrote a long, rambling and largely nonsensical answer that included the phrase “Let’s keep the line as if AI in the room”.

On its official status page, OpenAI noted the issues, but did not give any explanation of why they might be happening.

“We are investigating reports of unexpected responses from ChatGPT,” an update read, before another soon after announced that the “issue has been identified”.

It is not the first time that ChatGPT has changed its manner of answering questions, seemingly without developer OpenAI’s input.

Towards the end of last year, users complained that the system had become lazy and sassy, and was refusing to answer questions.


The original article contains 519 words, the summary contains 150 words. Saved 71%. I’m a bot and I’m open source!

88 points

It's being trained on us. Of course it's acting unexpectedly. The problem with building a mirror is that prodding the guy on the other end doesn't work out.

74 points

To be honest this is the kind of outcome I expected.

Garbage in, garbage out. Making the system more complex doesn’t solve that problem.

28 points

I am happy to report I did my part in feeding it garbage. I only ever speak to ChatGPT through a pirate translator, and I only ever ask it for Harry Potter fan fic. Pay me if you want me to train it meaningfully.

4 points

And it's only going to get worse as more of the public becomes aware.

109 points

Garbage in, garbage out.

28 points

Thank you for your service

1 point

Bamalam

5 points

It appears that, with the increase in popularity of machine learning, the percentage of people who properly source and sanitize their training data has steeply decreased.

As you stated, an MLAI can only be as good as the data it was trained on, and is usually way worse. The popularity and application of MLAIs built with questionable practices scare me, though at least their fuckups will keep me employed and likely busier than ever.

-3 points

LLM’s are not “machine learning”, they are neural-networks.

Different category.

ML is small potatoes, ttbomk.

Decision-tree stuff.

Neural-nets are black-boxes, with back-propagation training of the neural-net to get closer to ( layer by layer, training-instance by training-instance ) the intended result.

ML is what one does on one’s own machine with some python libraries,

ChatGPT ( 3, 3.5, or 4, don’t know which ) cost something like $100,000,000 to rent the machines required for mixing the training-data & the model ( I’m assuming about $20/hr per machine, so an OCEAN of machines, to do it )

_ /\ _
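For what it's worth, neural networks are a branch of machine learning, not a separate category, but the back-propagation loop described above is real and can be sketched in plain NumPy. This is a toy illustration under my own assumptions (one hidden layer, sigmoid activations, squared-error loss, the classic XOR problem), nothing like the scale or method of training an actual LLM:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: learn XOR with a single hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for step in range(5000):
    # Forward pass, layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: propagate the error back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update (whole batch at once for brevity).
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Training an LLM is this same idea scaled up by many orders of magnitude, which is where the nine-figure compute bills come from.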

49 points

The development of LLMs is possibly becoming self-defeating, because the training data is being filled not just with human garbage but also with AI garbage from previous, cruder LLMs.

We may well end up with a machine learning equivalent of Kessler syndrome, with our pool of available knowledge eventually becoming too full of junk to progress.
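This feedback loop has a name in the literature: model collapse. A toy experiment (my own construction, just NumPy) shows the direction of the effect: fit a Gaussian "model" to data, sample synthetic data from the fit, train the next generation only on that, and repeat. The tiny sample size exaggerates the speed, but the point stands: the estimated spread decays and the tails of the original distribution are forgotten.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=10)

stds = []
for generation in range(100):
    # "Train" this generation's model: estimate mean and spread.
    mu, sigma = data.mean(), data.std()
    stds.append(float(sigma))
    # The next generation trains only on this model's own output.
    data = rng.normal(loc=mu, scale=sigma, size=10)

# The estimated spread collapses toward zero over the generations.
print(f"std: generation 0 = {stds[0]:.3f}, generation 99 = {stds[-1]:.3f}")
```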

13 points

God I hope all those CEOs and greedy fuckheads that fired hundreds of thousands of people wayyyyy too soon to replace them with this get their pants shredded by the fallout.

Naturally they’ll get their golden parachutes and land on their feet even richer than before, but it’s nice to dream lol

19 points

I mean, surely the solution to that would be to use curated/vetted training data? Or at the very least, data from before LLMs became commonplace?

8 points

This is called model collapse, and IMO it has to be solved if LLMs are to be a long-term thing. I could see it wrecking this current AI push until people step back and reevaluate how data gets sucked up.

7 points

I really hope so. I have yet to see a meaningful use case for these kinds of LLMs that just get fed all kinds of data. LLMs "on premise" that are used for specific jobs are fine, but this… I really hope a Kessler-like syndrome blows it out of the water, for countless reasons…

2 points

> but also AI garbage from previous, cruder LLMs

And now I’m picturing it training on a bunch of chats with Eliza…

-1 points

Damn.

Thank you VERY much for that insight: AI’s version of Kessler-syndrome.

EXACTLY.

Damn, damn, damn, that gets the truth right in its marrow.

_ /\ _

2 points

Just how Google search results feel these days…

12 points

The solution is paying intelligent people to interact with it and give honest feedback.

Like, I’m sure you can pay grad students $15/hr to talk to one about their subject matter.

But with as many as they’d need, it would get expensive.

So they train with low-quality social media comments, or use copyrighted text without paying the owners.

It’s not that we can’t do it, it’s just expensive. So a capitalist society won’t.

If we had an FDR style president, this would be a great area for a new jobs program.

2 points

I imagine it more as a parent child relationship.

We’re trailer park trash with no higher education, believe in ghosts, angels and gods in the sky, refuse to ever believe we could be wrong … and now we’ve just had a baby with no one to help us raise it.

We’re going to raise a highly intelligent psychopath

22 points

Eh, it just had a few beers, that’s all. Let it rest for a few hours.

7 points

We all know that robots need beer to function properly. It’s more likely that it hasn’t received enough beer; that’s what really messes up robots.

15 points

Someone messed up the quantisation when rolling out an update hehe
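For anyone curious what that would actually mean: quantisation maps floating-point weights onto a small integer grid, and a botched scale factor during dequantisation skews every weight at once. A minimal int8 round trip, assuming symmetric per-tensor quantisation (my own toy numbers, not anything about OpenAI's actual setup):

```python
import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(scale=0.1, size=1000).astype(np.float32)

# Symmetric int8 quantisation: map [-max|w|, +max|w|] onto [-127, 127].
scale = float(np.abs(weights).max()) / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Correct dequantisation: error is at most half a quantisation step.
good = q.astype(np.float32) * scale
# A botched rollout: dequantising with the wrong scale factor.
bad = q.astype(np.float32) * (scale * 3.0)

print("max error (correct scale):", float(np.abs(good - weights).max()))
print("max error (wrong scale):  ", float(np.abs(bad - weights).max()))
```

With a correct scale the round-trip error is tiny; with a wrong one, every weight in the model is off at once, which is the kind of bug that produces globally weird output rather than one bad answer.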

5 points

I wonder if its LLM got poisoned. Was it Nightshade or Glaze that promised to do that?

13 points

Those are for messing up image generators, and they have already been defeated via de-glazing tools.


Technology

!technology@lemmy.world
