ChatGPT has meltdown and starts sending alarming messages to users
AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them
This is the best summary I could come up with:
In recent hours, the artificial intelligence tool appears to be answering queries with long and nonsensical messages, talking Spanglish without prompting – as well as worrying users by suggesting that it is in the room with them.
Asked for help with a coding issue, ChatGPT wrote a long, rambling and largely nonsensical answer that included the phrase “Let’s keep the line as if AI in the room”.
On its official status page, OpenAI noted the issues, but did not give any explanation of why they might be happening.
“We are investigating reports of unexpected responses from ChatGPT,” an update read, before another soon after announced that the “issue has been identified”.
It is not the first time that ChatGPT has changed its manner of answering questions, seemingly without developer OpenAI’s input.
Towards the end of last year, users complained the system had become lazy and sassy, refusing to answer questions.
The original article contains 519 words, the summary contains 150 words. Saved 71%. I’m a bot and I’m open source!
It's being trained on us. Of course it's acting unexpectedly. The problem with building a mirror is that prodding the guy on the other end doesn't work out.
To be honest this is the kind of outcome I expected.
Garbage in, garbage out. Making the system more complex doesn’t solve that problem.
It appears that, with the increase in popularity of machine learning, the percentage of people who properly source and sanitize their training data has steeply decreased.
As you stated, an MLAI can only be as good as the data it was trained on, and is usually far worse. The popularity and application of MLAIs built with questionable practices scare me, though at least their fuckups will keep me employed, and likely busier than ever.
LLMs are not "machine learning", they are neural networks.
Different category.
ML is small potatoes, ttbomk.
Decision-tree stuff.
Neural nets are black boxes, trained by back-propagation (layer by layer, training instance by training instance) to get closer to the intended result.
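The back-propagation loop described above can be sketched in a few lines of plain Python. This is a toy single-neuron example (all names and the OR-learning task are illustrative, not anything from the article), trained instance by instance toward the intended result:

```python
# Toy back-propagation: a single sigmoid neuron trained,
# instance by instance, toward the intended result (here: OR).
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: inputs and intended results.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.5         # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Gradient of squared error, propagated back through the sigmoid.
        grad = (out - target) * out * (1 - out)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b   -= lr * grad

print([round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data])
# Recovers the OR targets: [0, 1, 1, 1]
```

The "black box" point stands even here: the learned numbers in `w` and `b` work, but they don't explain themselves.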
ML is what one does on one's own machine with some Python libraries. ChatGPT (3, 3.5, or 4, don't know which) cost something like $100,000,000 to rent the machines required for mixing the training data and the model (I'm assuming about $20/hr per machine, so an OCEAN of machines, to do it).
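Taking the comment's own figures at face value (both the $100M total and the $20/hr rate are the commenter's assumptions, not confirmed numbers), the implied scale works out like this:

```python
# Back-of-the-envelope check of the figures above; both inputs are
# the commenter's assumptions, not confirmed numbers.
total_cost = 100_000_000   # dollars (assumed)
rate = 20                  # dollars per machine-hour (assumed)

machine_hours = total_cost / rate
print(machine_hours)       # 5,000,000 machine-hours

# Spread over, say, ~90 days of round-the-clock training:
machines = machine_hours / (90 * 24)
print(round(machines))     # roughly 2,315 machines running nonstop
```

So even under these rough assumptions, "an OCEAN of machines" means thousands running continuously for months.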
The development of LLMs is possibly becoming self defeating, because the training data is being filled not just with human garbage, but also AI garbage from previous, cruder LLMs.
We may well end up with a machine learning equivalent of Kessler syndrome, with our pool of available knowledge eventually becoming too full of junk to progress.
God I hope all those CEOs and greedy fuckheads that fired hundreds of thousands of people wayyyyy too soon to replace them with this get their pants shredded by the fallout.
Naturally they’ll get their golden parachutes and land on their feet even richer than before, but it’s nice to dream lol
I mean, surely the solution to that would be to use curated/vetted training data? Or at the very least, data from before LLMs became commonplace?
I really hope so. I have yet to see a meaningful use case for these kinds of LLMs that just get fed all kinds of data. LLMs "on premise" that are used for specific jobs are fine, but this…I really hope a Kessler-like syndrome blows it out of the water, for countless reasons…
The solution is paying intelligent people to interact with it and give honest feedback.
Like, I’m sure you can pay grad students $15/hr to talk to one about their subject matter.
But with as many as they’d need, it would get expensive.
So they train with low-quality social media comments, or use copyrighted text without paying the owners.
It's not that we can't do it, it's just expensive. So a capitalist society won't.
If we had an FDR style president, this would be a great area for a new jobs program.
I imagine it more as a parent child relationship.
We’re trailer park trash with no higher education, believe in ghosts, angels and gods in the sky, refuse to ever believe we could be wrong … and now we’ve just had a baby with no one to help us raise it.
We’re going to raise a highly intelligent psychopath
Eh, it just had a few beers that’s all. Let it rest for a few hours.
Someone messed up the quantisation when rolling out an update hehe
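Quantisation here means storing model weights at lower precision to save memory, and doing it too aggressively really can garble a model's outputs. A toy illustration of the idea (purely illustrative rounding scheme, nothing to do with how OpenAI actually serves models):

```python
# Toy illustration: quantising weights to very few levels loses
# information, so dequantised values drift from the originals.
weights = [0.03, -0.71, 0.42, 0.99, -0.18]

def quantise(ws, levels):
    # Map each weight in [-1, 1] onto `levels` evenly spaced steps.
    step = 2.0 / (levels - 1)
    return [round((w + 1.0) / step) for w in ws]

def dequantise(qs, levels):
    step = 2.0 / (levels - 1)
    return [q * step - 1.0 for q in qs]

coarse = dequantise(quantise(weights, 3), 3)    # only -1, 0, 1 survive
fine = dequantise(quantise(weights, 257), 257)  # ~8-bit, much closer
print(coarse)   # [0.0, -1.0, 0.0, 1.0, 0.0] -- most detail is gone
```

With only three levels the weights collapse to -1, 0, or 1, while the ~8-bit version stays within half a step of the originals; a bug in this trade-off during a rollout is the kind of thing the joke is gesturing at.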
I wonder if its LLM got poisoned. Was it Nightshade or Glaze that promised to do that?